Parallax Occlusion Mapping in GLSL [repost]
Parallax mapping is a technique that shifts texture coordinates so that a flat surface appears to have 3D relief. The effect is computed in the fragment shader for each visible fragment of the object. Look at the following image. Level 0.0 represents the absence of holes; level 1.0 represents holes of maximum depth. The real geometry of the object is unchanged and always lies at level 0.0. The curve represents the values stored in the heightmap and how those values are interpreted.
Suppose the view ray hits the surface at texture coordinates T0, where the heightmap stores H(T0) = 0.55. The value isn't equal to 0.0, so the fragment doesn't lie on the surface: there is a hole below it. You therefore have to extend vector V to its closest intersection with the surface defined by the heightmap. That intersection lies at depth H(T1) and texture coordinates T1. T1 is then used to sample the diffuse texture and normal map.
The parallax mapping techniques below differ in how precisely they calculate the intersection point between camera vector V and the surface defined by the heightmap.
All calculations are performed in tangent space, so the vectors to the light (L) and to the camera (V) must be transformed into tangent space. After the new texture coordinates are calculated by the parallax mapping technique, you can use them to compute self-shadowing, to fetch the fragment color from the diffuse texture, and for normal mapping.
In the shaders below, parallax mapping is implemented in the shader function parallaxMapping(), self-shadowing in parallaxSoftShadowMultiplier(), and Blinn-Phong lighting with normal mapping in normalMappingLighting(). The following vertex and fragment shaders may be used as a base for parallax mapping and self-shadowing. The vertex shader transforms the vectors to the light and to the camera into tangent space. The fragment shader runs the parallax mapping technique, then the self-shadowing calculation, and finally the lighting calculation:
// Basic vertex shader for parallax mapping
#version 330
// attributes
layout(location = 0) in vec3 i_position; // xyz - position
layout(location = 1) in vec3 i_normal; // xyz - normal
layout(location = 2) in vec2 i_texcoord0; // xy - texture coords
layout(location = 3) in vec4 i_tangent; // xyz - tangent, w - handedness
// uniforms
uniform mat4 u_model_mat;
uniform mat4 u_view_mat;
uniform mat4 u_proj_mat;
uniform mat3 u_normal_mat;
uniform vec3 u_light_position;
uniform vec3 u_camera_position;
// data for fragment shader
out vec2 o_texcoords;
out vec3 o_toLightInTangentSpace;
out vec3 o_toCameraInTangentSpace;
///////////////////////////////////////////////////////////////////
void main(void)
{
    // transform to world space
    vec4 worldPosition = u_model_mat * vec4(i_position, 1);
    vec3 worldNormal = normalize(u_normal_mat * i_normal);
    vec3 worldTangent = normalize(u_normal_mat * i_tangent.xyz);
    // calculate vectors to the camera and to the light
    vec3 worldDirectionToLight = normalize(u_light_position - worldPosition.xyz);
    vec3 worldDirectionToCamera = normalize(u_camera_position - worldPosition.xyz);
    // calculate bitangent from normal and tangent (w stores handedness)
    vec3 worldBitangent = cross(worldNormal, worldTangent) * i_tangent.w;
    // transform direction to the light to tangent space
    o_toLightInTangentSpace = vec3(
        dot(worldDirectionToLight, worldTangent),
        dot(worldDirectionToLight, worldBitangent),
        dot(worldDirectionToLight, worldNormal)
    );
    // transform direction to the camera to tangent space
    o_toCameraInTangentSpace = vec3(
        dot(worldDirectionToCamera, worldTangent),
        dot(worldDirectionToCamera, worldBitangent),
        dot(worldDirectionToCamera, worldNormal)
    );
    // pass texture coordinates to fragment shader
    o_texcoords = i_texcoord0;
    // calculate clip space position of the vertex
    gl_Position = u_proj_mat * u_view_mat * worldPosition;
}
// basic fragment shader for Parallax Mapping
#version 330
// data from vertex shader
in vec2 o_texcoords;
in vec3 o_toLightInTangentSpace;
in vec3 o_toCameraInTangentSpace;
// textures
// (explicit uniform locations require GLSL 4.30; with #version 330
//  bind the samplers to texture units via glUniform1i instead)
uniform sampler2D u_diffuseTexture; // texture unit 0
uniform sampler2D u_heightTexture;  // texture unit 1
uniform sampler2D u_normalTexture;  // texture unit 2
// color output to the framebuffer
out vec4 resultingColor;
////////////////////////////////////////
// scale for size of Parallax Mapping effect
uniform float parallaxScale; // ~0.1
//////////////////////////////////////////////////////
// Implements Parallax Mapping technique
// Returns modified texture coordinates, and last used depth
vec2 parallaxMapping(in vec3 V, in vec2 T, out float parallaxHeight)
{
// ...
}
//////////////////////////////////////////////////////
// Implements self-shadowing technique - hard or soft shadows
// Returns shadow factor
float parallaxSoftShadowMultiplier(in vec3 L, in vec2 initialTexCoord,
in float initialHeight)
{
// ...
}
//////////////////////////////////////////////////////
// Calculates lighting by Blinn-Phong model and Normal Mapping
// Returns color of the fragment
vec4 normalMappingLighting(in vec2 T, in vec3 L, in vec3 V, float shadowMultiplier)
{
    // restore normal from normal map
    vec3 N = normalize(texture(u_normalTexture, T).xyz * 2.0 - 1.0);
    vec3 D = texture(u_diffuseTexture, T).rgb;
    // ambient lighting
    float iamb = 0.2;
    // diffuse lighting
    float idiff = clamp(dot(N, L), 0.0, 1.0);
    // specular lighting
    float ispec = 0.0;
    if(dot(N, L) > 0.2)
    {
        vec3 R = reflect(-L, N);
        ispec = pow(max(dot(R, V), 0.0), 32.0) / 1.5;
    }
    vec4 resColor;
    resColor.rgb = D * (iamb + (idiff + ispec) * pow(shadowMultiplier, 4.0));
    resColor.a = 1.0;
    return resColor;
}
/////////////////////////////////////////////
// Entry point for Parallax Mapping shader
void main(void)
{
    // normalize vectors after interpolation from the vertex shader
    vec3 V = normalize(o_toCameraInTangentSpace);
    vec3 L = normalize(o_toLightInTangentSpace);
    // get new texture coordinates from Parallax Mapping
    float parallaxHeight;
    vec2 T = parallaxMapping(V, o_texcoords, parallaxHeight);
    // get self-shadowing factor for elements of parallax
    float shadowMultiplier = parallaxSoftShadowMultiplier(L, T, parallaxHeight - 0.05);
    // calculate lighting
    resultingColor = normalMappingLighting(T, L, V, shadowMultiplier);
}
The simplest technique uses a rough approximation of the new texture coordinates derived from the original ones; it is simply called Parallax Mapping. Parallax Mapping gives more or less valid results only when the heightmap is smooth and doesn't contain a lot of small details. Otherwise, at large angles between the vector to the camera (V) and the normal (N), the parallax effect breaks down. The main idea of the Parallax Mapping approximation is as follows:
- Get height H(T0) from the heightmap at the original texture coordinates T0.
- Offset the original texture coordinates, taking into account the vector to the camera V and the height H(T0) at the initial texture coordinates.
The vector to the camera V is in tangent space, and tangent space is built along the gradient of the texture coordinates, so the x and y components of V can be used without any transformation as the direction of the texture-coordinate offset. The z component of V is the normal component, perpendicular to the surface. You can divide the x and y components by the z component: this is the original texture-coordinate calculation of the Parallax Mapping technique. Or you can leave x and y as they are; that implementation is called Parallax Mapping with Offset Limiting, which reduces the number of weird results when the angle between the vector to the camera (V) and the normal (N) is high. So if you add the x and y components of vector V to the original texture coordinates, you get new texture coordinates shifted along vector V.
You can control the strength of the parallax effect with a scale variable that multiplies V.xy. The most useful values of scale range from just above 0 to about 0.5; with higher values the Parallax Mapping approximation is wrong in most cases (as in the image). You can also make scale negative, in which case you have to invert the z components of the normals from the normal map. So the final formula for the shifted texture coordinates TP (with the subtraction convention used by the shader below) is: TP = T0 - parallaxScale * V.xy / V.z * H(T0).
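To sanity-check the formula, here is a small CPU-side sketch in Python rather than GLSL; the view vector, heightmap value, and scale are made-up illustrative numbers, not values from the article's images:

```python
# CPU-side sketch of the Parallax Mapping offset (illustrative values only).
# V is the normalized view vector in tangent space, T0 the original UVs.

def parallax_offset(V, T0, height, scale, offset_limiting=False):
    """Return texture coordinates shifted along V.xy, as in the shader."""
    vx, vy, vz = V
    if offset_limiting:
        # Parallax Mapping with Offset Limiting: no division by V.z
        dx, dy = scale * vx * height, scale * vy * height
    else:
        # classic Parallax Mapping: offset grows at grazing angles
        dx, dy = scale * vx / vz * height, scale * vy / vz * height
    # the shader subtracts the offset from the original coordinates
    return (T0[0] - dx, T0[1] - dy)

V = (0.6, 0.0, 0.8)   # exactly unit length
T0 = (0.5, 0.5)
Tp = parallax_offset(V, T0, height=0.55, scale=0.1)
print(Tp)             # coordinates shifted along V.xy
```

Note how the classic variant produces a larger shift than offset limiting whenever V.z < 1, i.e. at any non-perpendicular viewing angle.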
The result may look wrong, but Parallax Mapping is only an approximation that isn't intended to find the exact intersection of vector V and the surface. The technique requires only one additional sample of the heightmap, which gives the GLSL shader great performance. Here is an implementation of the shader function for simple Parallax Mapping:
vec2 parallaxMapping(in vec3 V, in vec2 T, out float parallaxHeight)
{
    // get depth for this fragment
    float initialHeight = texture(u_heightTexture, T).r;
    // calculate amount of offset for Parallax Mapping
    vec2 texCoordOffset = parallaxScale * V.xy / V.z * initialHeight;
    // for Parallax Mapping with Offset Limiting use this variant instead:
    // vec2 texCoordOffset = parallaxScale * V.xy * initialHeight;
    // depth used for this fragment
    parallaxHeight = initialHeight;
    // return modified texture coordinates
    return T - texCoordOffset;
}
Steep Parallax Mapping doesn't simply offset texture coordinates without checks for validity and relevance; it checks whether the result is close to a valid value. The main idea of this method is to divide the depth of the surface into a number of layers of equal height. Then, starting from the topmost layer, you sample the heightmap, each time shifting the texture coordinates along view vector V. If the point is under the surface (the depth of the current layer is greater than the depth sampled from the texture), stop the search and use the last texture coordinates as the result of Steep Parallax Mapping.
In the example below, depth is divided into 8 layers, each 0.125 high. The shift of the texture coordinates for each layer equals V.xy / V.z * scale / numLayers. The checks start from the topmost layer, where the fragment is located (yellow square). Here are the manual calculations:
- The depth of the first layer is 0. Depth H(T0) is ~0.75. The depth from the heightmap is greater than the depth of the layer (the point is above the surface), so start the next iteration.
- Shift texture coordinates along vector V. Select the next layer, with depth 0.125. Depth H(T1) is ~0.625, still greater than the layer depth (the point is above the surface), so start the next iteration.
- Shift texture coordinates along vector V. Select the next layer, with depth 0.25. Depth H(T2) is ~0.4, still greater than the layer depth (the point is above the surface), so start the next iteration.
- Shift texture coordinates along vector V. Select the next layer, with depth 0.375. Depth H(T3) is ~0.2, which is less than the layer depth, so the current point on vector V lies below the surface. We have found texture coordinates Tp = T3, close to the real intersection point.
The resulting texture coordinates may still be quite far from the true intersection of vector V and the surface, but they are closer to valid than the results of simple Parallax Mapping. Increase the number of layers if you want more precise results.
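The stepping loop can be mimicked on the CPU. This Python sketch is an illustration, not shader code; the heightmap function is an invented stand-in for the height texture, reduced to one dimension:

```python
# CPU-side sketch of Steep Parallax Mapping over a toy 1-D heightmap.

def steep_parallax(height_at, u0, V, scale=0.1, num_layers=8):
    """Step down through depth layers until the point falls below the surface."""
    layer_h = 1.0 / num_layers
    cur_layer = 0.0
    # texture-coordinate shift per layer: V.x / V.z * scale / numLayers
    du = scale * V[0] / V[2] / num_layers
    u = u0
    h = height_at(u)
    while h > cur_layer:          # point still above the surface
        cur_layer += layer_h      # go to the next layer
        u -= du                   # shift along V
        h = height_at(u)
    return u, cur_layer

# toy heightmap: depth shrinks linearly toward smaller u (clamped to [0, 1])
heightmap = lambda u: max(0.0, min(1.0, 2.0 * u))
V = (0.6, 0.0, 0.8)               # unit-length view vector
coords, depth = steep_parallax(heightmap, 0.4, V)
print(coords, depth)
```

With these made-up numbers the loop stops at layer depth 0.75, the first layer whose depth exceeds the sampled height, just like the fourth step of the manual walkthrough.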
The main disadvantage of Steep Parallax Mapping is that it divides depth into a finite number of layers. If the number of layers is large, performance suffers; if it is too small, you will notice aliasing (visible steps), as in the image to the right. You can determine the number of layers dynamically by interpolating between a minimum and maximum count based on the angle between vector V and the polygon normal. The performance/aliasing trade-off can be improved with Relief Parallax Mapping or Parallax Occlusion Mapping (POM), covered in the following parts of the tutorial. Here is an implementation of Steep Parallax Mapping:
vec2 parallaxMapping(in vec3 V, in vec2 T, out float parallaxHeight)
{
    // determine number of layers from the angle between V and N
    const float minLayers = 5;
    const float maxLayers = 15;
    float numLayers = mix(maxLayers, minLayers, abs(dot(vec3(0, 0, 1), V)));
    // height of each layer
    float layerHeight = 1.0 / numLayers;
    // depth of current layer
    float currentLayerHeight = 0;
    // shift of texture coordinates for each iteration
    vec2 dtex = parallaxScale * V.xy / V.z / numLayers;
    // current texture coordinates
    vec2 currentTextureCoords = T;
    // get first depth from heightmap
    float heightFromTexture = texture(u_heightTexture, currentTextureCoords).r;
    // while point is above the surface
    while(heightFromTexture > currentLayerHeight)
    {
        // to the next layer
        currentLayerHeight += layerHeight;
        // shift texture coordinates along vector V
        currentTextureCoords -= dtex;
        // get new depth from heightmap
        heightFromTexture = texture(u_heightTexture, currentTextureCoords).r;
    }
    // return results
    parallaxHeight = currentLayerHeight;
    return currentTextureCoords;
}
Relief Parallax Mapping uses a binary search in the GLSL shader to find the new texture coordinates more precisely. First you run Steep Parallax Mapping. After that the shader knows the depths of the two layers between which the intersection of vector V and the surface lies. On the following image those layers are at texture coordinates T3 and T2. Now you can improve the result with a binary search: each iteration doubles the precision of the result.
- After Steep Parallax Mapping we know texture coordinates T2 and T3 between which the intersection of vector V and the surface is located. The real intersection point is marked with a green dot.
- Divide the current shift of texture coordinates and the current layer height by two.
- Shift texture coordinates T3 in the direction opposite to vector V (toward T2) by the current shift. Decrease the layer depth by the current layer height.
- (*) Sample the heightmap. Divide the current shift of texture coordinates and the current layer height by two.
- If the depth from the texture is greater than the depth of the layer, increase the layer depth by the current layer height and shift texture coordinates along vector V by the current shift.
- If the depth from the texture is less than the depth of the layer, decrease the layer depth by the current layer height and shift texture coordinates in the direction opposite to vector V by the current shift.
- Repeat the binary search from step (*) a specified number of times.
- The texture coordinates from the last search step are the result of Relief Parallax Mapping.
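As an illustration of the steps above, here is a one-dimensional CPU-side sketch in Python; the flat test surface and all numeric values are invented, and in the real shader the same steps operate on texture samples:

```python
# CPU-side 1-D sketch of the Relief Parallax Mapping binary search.

def relief_refine(height_at, u, layer_depth, du, layer_h, steps=5):
    """Refine the Steep Parallax result: u is where stepping stopped,
    du / layer_h are the per-layer texture shift and layer height."""
    delta_u, delta_h = du / 2.0, layer_h / 2.0
    u += delta_u                  # return to the midpoint of the previous layer
    layer_depth -= delta_h
    for _ in range(steps):
        delta_u /= 2.0
        delta_h /= 2.0
        if height_at(u) > layer_depth:   # point above the surface: go deeper
            u -= delta_u
            layer_depth += delta_h
        else:                            # point below the surface: back up
            u += delta_u
            layer_depth -= delta_h
    return u, layer_depth

height_at = lambda u: 0.6         # flat test surface at constant depth 0.6
# pretend Steep Parallax stopped at layer depth 0.625 (layer height 0.125), u = 0.25
u, depth = relief_refine(height_at, 0.25, 0.625, du=0.05, layer_h=0.125)
print(round(depth, 4))
```

Because the test surface sits at depth 0.6, five halving steps bring the refined layer depth to within a fraction of a layer of the true value, which is exactly what the binary search buys over plain layer stepping.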
vec2 parallaxMapping(in vec3 V, in vec2 T, out float parallaxHeight)
{
    // determine required number of layers
    const float minLayers = 10;
    const float maxLayers = 15;
    float numLayers = mix(maxLayers, minLayers, abs(dot(vec3(0, 0, 1), V)));
    // height of each layer
    float layerHeight = 1.0 / numLayers;
    // depth of current layer
    float currentLayerHeight = 0;
    // shift of texture coordinates for each iteration
    vec2 dtex = parallaxScale * V.xy / V.z / numLayers;
    // current texture coordinates
    vec2 currentTextureCoords = T;
    // depth from heightmap
    float heightFromTexture = texture(u_heightTexture, currentTextureCoords).r;
    // while point is above the surface
    while(heightFromTexture > currentLayerHeight)
    {
        // go to the next layer
        currentLayerHeight += layerHeight;
        // shift texture coordinates along V
        currentTextureCoords -= dtex;
        // new depth from heightmap
        heightFromTexture = texture(u_heightTexture, currentTextureCoords).r;
    }

    ///////////////////////////////////////////////////////////
    // Start of Relief Parallax Mapping

    // decrease shift and height of layer by half
    vec2 deltaTexCoord = dtex / 2;
    float deltaHeight = layerHeight / 2;
    // return to the mid point of previous layer
    currentTextureCoords += deltaTexCoord;
    currentLayerHeight -= deltaHeight;
    // binary search to increase precision of Steep Parallax Mapping
    const int numSearches = 5;
    for(int i = 0; i < numSearches; i++)
    {
        // decrease shift and height of layer by half
        deltaTexCoord /= 2;
        deltaHeight /= 2;
        // new depth from heightmap
        heightFromTexture = texture(u_heightTexture, currentTextureCoords).r;
        // shift along or against vector V
        if(heightFromTexture > currentLayerHeight) // above the surface: go deeper
        {
            currentTextureCoords -= deltaTexCoord;
            currentLayerHeight += deltaHeight;
        }
        else // below the surface: step back
        {
            currentTextureCoords += deltaTexCoord;
            currentLayerHeight -= deltaHeight;
        }
    }
    // return results
    parallaxHeight = currentLayerHeight;
    return currentTextureCoords;
}
Parallax Occlusion Mapping (POM) is an improvement of Steep Parallax Mapping. Relief Parallax Mapping uses a binary search to improve its result, but that search costs performance. Parallax Occlusion Mapping aims for better performance than Relief Parallax Mapping while giving better results than Steep Parallax Mapping, though POM's results are slightly worse than Relief Parallax Mapping's.
Instead of a binary search, POM linearly interpolates between the last two results of Steep Parallax Mapping. Look at the following image. For the interpolation POM uses the depth of the layer after the intersection (0.375, where Steep Parallax Mapping stopped) and the previous H(T2) and next H(T3) depths from the heightmap. As you can see from the image, the interpolated result of Parallax Occlusion Mapping lies on the intersection of view vector V with the line between heights H(T3) and H(T2). That intersection is close enough to the real intersection point (marked in green).
- nextHeight = H(T3) - currentLayerHeight
- prevHeight = H(T2) - (currentLayerHeight - layerHeight)
- weight = nextHeight / (nextHeight - prevHeight)
- TP = T2 * weight + T3 * (1.0 - weight)
POM produces good results with a relatively small number of samples from the heightmap. But Parallax Occlusion Mapping may skip small details of the heightmap more often than Relief Parallax Mapping and can produce incorrect results where values in the heightmap change abruptly.
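The interpolation formulas can be checked with the numbers from the walkthrough (layer depth 0.375, layer height 0.125, H(T2) = 0.4, H(T3) = 0.2). The two texture coordinates below are made-up one-dimensional stand-ins, since only the weight comes from the text:

```python
# POM's final interpolation step, using the example depths from the text.
layer_height = 0.125
current_layer = 0.375          # layer where Steep Parallax Mapping stopped
h_t3 = 0.2                     # depth sampled at T3 (after the intersection)
h_t2 = 0.4                     # depth sampled at T2 (before the intersection)

next_h = h_t3 - current_layer                   # negative: point below layer
prev_h = h_t2 - (current_layer - layer_height)  # positive: point above layer
weight = next_h / (next_h - prev_h)

t2, t3 = 0.30, 0.28            # invented 1-D texture-coordinate stand-ins
tp = t2 * weight + t3 * (1.0 - weight)
print(round(weight, 4), round(tp, 4))
```

The weight lands strictly between 0 and 1, so TP always falls between T2 and T3, which is why POM cannot overshoot the bracket found by the layer stepping.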
vec2 parallaxMapping(in vec3 V, in vec2 T, out float parallaxHeight)
{
    // determine optimal number of layers
    const float minLayers = 10;
    const float maxLayers = 15;
    float numLayers = mix(maxLayers, minLayers, abs(dot(vec3(0, 0, 1), V)));
    // height of each layer
    float layerHeight = 1.0 / numLayers;
    // current depth of the layer
    float curLayerHeight = 0;
    // shift of texture coordinates for each layer
    vec2 dtex = parallaxScale * V.xy / V.z / numLayers;
    // current texture coordinates
    vec2 currentTextureCoords = T;
    // depth from heightmap
    float heightFromTexture = texture(u_heightTexture, currentTextureCoords).r;
    // while point is above the surface
    while(heightFromTexture > curLayerHeight)
    {
        // to the next layer
        curLayerHeight += layerHeight;
        // shift of texture coordinates
        currentTextureCoords -= dtex;
        // new depth from heightmap
        heightFromTexture = texture(u_heightTexture, currentTextureCoords).r;
    }

    ///////////////////////////////////////////////////////////

    // previous texture coordinates
    vec2 prevTCoords = currentTextureCoords + dtex;
    // heights for linear interpolation
    float nextH = heightFromTexture - curLayerHeight;
    float prevH = texture(u_heightTexture, prevTCoords).r
                  - curLayerHeight + layerHeight;
    // proportions for linear interpolation
    float weight = nextH / (nextH - prevH);
    // interpolation of texture coordinates
    vec2 finalTexCoords = prevTCoords * weight + currentTextureCoords * (1.0 - weight);
    // interpolation of depth values
    parallaxHeight = curLayerHeight + prevH * weight + nextH * (1.0 - weight);
    // return result
    return finalTexCoords;
}
Self-shadowing is computed with an algorithm very similar to Steep Parallax Mapping. You have to search not inside the surface (down) but outside it (up), and the texture-coordinate shifts go along the vector from the fragment to the light (L), not along view vector V. The vector to the light L is in tangent space, like V, and can be used directly as the shift direction for texture coordinates. The result of the self-shadowing calculation is a shadowing factor, a value in the [0, 1] range, used later to modulate the intensity of diffuse and specular lighting.
For hard shadows you sample heightmap values along light vector L until you find the first point under the surface. If a point is under the surface, the shadowing factor is 0; otherwise it is 1. For example, on the next image, H(TL1) is less than the depth of layer Ha, so the point is under the surface and the shadowing factor is 0. If there are no points under the surface before light vector L rises above level 0.0, the fragment is lit and the shadowing factor equals 1. The quality of shadows depends greatly on the number of layers, on the value of the scale modifier, and on the angle between light vector L and the polygon normal. With wrong settings shadows suffer from aliasing or worse.
Soft shadows take only points under the surface into consideration. A partial shadowing factor is calculated as the difference between the depth of the current layer and the depth from the texture. You also have to take into account the distance from the point to the fragment, so the partial factor is multiplied by (1.0 - stepIndex/numberOfSteps). The final shadowing factor is the partial shadow factor with the maximum value. So here is the algorithm for calculating the shadowing factor for soft shadows:
- Set the shadow factor to 0 and the number of steps to 4.
- Make a step along L to Ha. H(TL1) is less than Ha, so the point is under the surface. Calculate the partial shadowing factor as Ha - H(TL1). This is the first of 4 checks, so to account for the distance to the fragment, multiply the partial factor by (1.0 - 1.0/4.0) and save it.
- Make a step along L to Hb. H(TL2) is less than Hb, so the point is under the surface. Calculate the partial shadowing factor as Hb - H(TL2), multiply it by (1.0 - 2.0/4.0) (second of 4 checks), and save it.
- Make a step along L. The point is above the surface.
- Make another step along L. The point is above the surface.
- The point has risen above level 0.0, so stop moving along vector L.
- Select the maximum of the partial shadow factors as the final shadow factor.
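The four checks can be reproduced numerically. In this Python sketch the layer depths Ha, Hb and the sampled heights H(TL1), H(TL2) are invented values consistent with the description, not numbers from the article's image:

```python
# Soft-shadow partial factors for a worked example (illustrative numbers).
num_steps = 4
# (layer depth along L, depth sampled from the heightmap) for each step
samples = [
    (0.45, 0.30),   # Ha vs H(TL1): texture depth < layer depth -> under surface
    (0.30, 0.20),   # Hb vs H(TL2): under the surface again
    (0.15, 0.40),   # above the surface
    (0.05, 0.50),   # above the surface
]

partials = []
for i, (layer_depth, tex_depth) in enumerate(samples, start=1):
    if tex_depth < layer_depth:                          # point under the surface
        factor = (layer_depth - tex_depth) * (1.0 - i / num_steps)
        partials.append(factor)

# lit if nothing was under the surface, else darken by the deepest occluder
shadow = 1.0 - max(partials) if partials else 1.0
print(round(shadow, 4))
```

Deeper occlusions near the fragment dominate: the first sample contributes 0.15 x 0.75, the second only 0.10 x 0.5, so the first one sets the shadow.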
float parallaxSoftShadowMultiplier(in vec3 L, in vec2 initialTexCoord,
                                   in float initialHeight)
{
    float shadowMultiplier = 1;
    const float minLayers = 15;
    const float maxLayers = 30;
    // calculate shadowing only for surfaces oriented toward the light source
    if(dot(vec3(0, 0, 1), L) > 0)
    {
        // calculate initial parameters
        float numSamplesUnderSurface = 0;
        shadowMultiplier = 0;
        float numLayers = mix(maxLayers, minLayers, abs(dot(vec3(0, 0, 1), L)));
        float layerHeight = initialHeight / numLayers;
        vec2 texStep = parallaxScale * L.xy / L.z / numLayers;
        // current parameters
        float currentLayerHeight = initialHeight - layerHeight;
        vec2 currentTextureCoords = initialTexCoord + texStep;
        float heightFromTexture = texture(u_heightTexture, currentTextureCoords).r;
        int stepIndex = 1;
        // while point is below depth 0.0
        while(currentLayerHeight > 0)
        {
            // if point is under the surface
            if(heightFromTexture < currentLayerHeight)
            {
                // calculate partial shadowing factor
                numSamplesUnderSurface += 1;
                float newShadowMultiplier = (currentLayerHeight - heightFromTexture) *
                                            (1.0 - stepIndex / numLayers);
                shadowMultiplier = max(shadowMultiplier, newShadowMultiplier);
            }
            // offset to the next layer
            stepIndex += 1;
            currentLayerHeight -= layerHeight;
            currentTextureCoords += texStep;
            heightFromTexture = texture(u_heightTexture, currentTextureCoords).r;
        }
        // shadowing factor should be 1 if there were no points under the surface
        if(numSamplesUnderSurface < 1)
        {
            shadowMultiplier = 1;
        }
        else
        {
            shadowMultiplier = 1.0 - shadowMultiplier;
        }
    }
    return shadowMultiplier;
}