Parallax Occlusion Mapping in GLSL [repost]
Parallax Mapping modifies texture coordinates in such a way that a plain surface appears to have 3D relief. The effect
is calculated in the fragment shader for each visible fragment of the
object. Look at the following image. Level 0.0 represents the absence of
holes; level 1.0 represents holes of maximum depth. The real geometry of the
object is unchanged and always lies at level 0.0. The curve represents the
values that are stored in the heightmap and how these values are
interpreted.
Suppose the camera looks along vector V at a fragment whose heightmap value is
H(T0) = 0.55. The value isn't equal to 0.0, so the fragment doesn't lie on the
surface: there is a hole below the fragment. So you have to extend
vector V to the closest intersection with the surface defined by the
heightmap. That intersection is at depth H(T1) and at texture coordinates T1. T1 is then used to sample the diffuse texture and the normal map.
So the purpose of all Parallax Mapping techniques is to calculate, as precisely as possible, the intersection point between camera vector V and the
surface defined by the heightmap.
All calculations for Parallax Mapping and for the lighting are
performed in tangent space, so the vectors to the light (L) and to the
camera (V) must first be transformed into tangent space. After the new texture
coordinates are calculated by a Parallax Mapping technique, you can use
those texture coordinates to calculate self-shadowing, to get the color of
the fragment from the diffuse texture, and for Normal Mapping.
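To make the tangent-space transform concrete, here is a small CPU-side Python sketch (not the shader itself): transforming a world-space vector into tangent space amounts to taking its dot products with the tangent, bitangent, and normal, exactly as the vertex shader below does.

```python
def to_tangent_space(v, tangent, bitangent, normal):
    """Project world-space vector v onto the tangent/bitangent/normal basis."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    return (dot(v, tangent), dot(v, bitangent), dot(v, normal))

# With an axis-aligned TBN basis, the transform is just a component reordering.
v = (0.0, 0.6, 0.8)
T, B, N = (1, 0, 0), (0, 1, 0), (0, 0, 1)
print(to_tangent_space(v, T, B, N))  # → (0.0, 0.6, 0.8)
```

The same three dot products appear verbatim in the vertex shader when it builds `o_toLightInTangentSpace` and `o_toCameraInTangentSpace`.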
In the following shaders, Parallax Mapping is implemented in the shader
function parallaxMapping(), self-shadowing is in the shader function
parallaxSoftShadowMultiplier(), and lighting by the Blinn-Phong model with
Normal Mapping is in the normalMappingLighting() shader function. The following
vertex and fragment shaders may be used as a base for Parallax Mapping and
self-shadowing. The vertex shader transforms the vectors to the light and to
the camera into tangent space. The fragment shader calls the Parallax Mapping
technique, then the calculation of the self-shadowing factor, and finally the
calculation of lighting:
// Basic vertex shader for parallax mapping
#version 330
// attributes
layout(location = 0) in vec3 i_position; // xyz - position
layout(location = 1) in vec3 i_normal; // xyz - normal
layout(location = 2) in vec2 i_texcoord0; // xy - texture coords
layout(location = 3) in vec4 i_tangent; // xyz - tangent, w - handedness
// uniforms
uniform mat4 u_model_mat;
uniform mat4 u_view_mat;
uniform mat4 u_proj_mat;
uniform mat3 u_normal_mat;
uniform vec3 u_light_position;
uniform vec3 u_camera_position;
// data for fragment shader
out vec2 o_texcoords;
out vec3 o_toLightInTangentSpace;
out vec3 o_toCameraInTangentSpace;
///////////////////////////////////////////////////////////////////
void main(void)
{
    // transform to world space
    vec4 worldPosition = u_model_mat * vec4(i_position, 1);
    vec3 worldNormal = normalize(u_normal_mat * i_normal);
    vec3 worldTangent = normalize(u_normal_mat * i_tangent.xyz);
    // calculate vectors to the camera and to the light
    vec3 worldDirectionToLight = normalize(u_light_position - worldPosition.xyz);
    vec3 worldDirectionToCamera = normalize(u_camera_position - worldPosition.xyz);
    // calculate bitangent from normal and tangent (w stores handedness)
    vec3 worldBitangent = cross(worldNormal, worldTangent) * i_tangent.w;
    // transform direction to the light to tangent space
    o_toLightInTangentSpace = vec3(
        dot(worldDirectionToLight, worldTangent),
        dot(worldDirectionToLight, worldBitangent),
        dot(worldDirectionToLight, worldNormal)
    );
    // transform direction to the camera to tangent space
    o_toCameraInTangentSpace = vec3(
        dot(worldDirectionToCamera, worldTangent),
        dot(worldDirectionToCamera, worldBitangent),
        dot(worldDirectionToCamera, worldNormal)
    );
    // pass texture coordinates to fragment shader
    o_texcoords = i_texcoord0;
    // calculate clip space position of the vertex
    gl_Position = u_proj_mat * u_view_mat * worldPosition;
}
// basic fragment shader for Parallax Mapping
#version 330
// data from vertex shader
in vec2 o_texcoords;
in vec3 o_toLightInTangentSpace;
in vec3 o_toCameraInTangentSpace;
// textures
// note: layout(location = N) on uniforms requires GLSL 4.30 or
// ARB_explicit_uniform_location; with #version 330, bind the sampler
// units from the application instead (glUniform1i)
uniform sampler2D u_diffuseTexture;
uniform sampler2D u_heightTexture;
uniform sampler2D u_normalTexture;
// color output to the framebuffer
out vec4 resultingColor;
////////////////////////////////////////
// scale for size of Parallax Mapping effect
uniform float parallaxScale; // ~0.1
//////////////////////////////////////////////////////
// Implements Parallax Mapping technique
// Returns modified texture coordinates, and last used depth
vec2 parallaxMapping(in vec3 V, in vec2 T, out float parallaxHeight)
{
// ...
}
//////////////////////////////////////////////////////
// Implements self-shadowing technique - hard or soft shadows
// Returns shadow factor
float parallaxSoftShadowMultiplier(in vec3 L, in vec2 initialTexCoord,
in float initialHeight)
{
// ...
}
//////////////////////////////////////////////////////
// Calculates lighting by Blinn-Phong model and Normal Mapping
// Returns color of the fragment
vec4 normalMappingLighting(in vec2 T, in vec3 L, in vec3 V, float shadowMultiplier)
{
    // restore normal from normal map
    vec3 N = normalize(texture(u_normalTexture, T).xyz * 2 - 1);
    vec3 D = texture(u_diffuseTexture, T).rgb;
    // ambient lighting
    float iamb = 0.2;
    // diffuse lighting
    float idiff = clamp(dot(N, L), 0, 1);
    // specular lighting (Blinn-Phong half vector)
    float ispec = 0;
    if(dot(N, L) > 0.2)
    {
        vec3 H = normalize(L + V);
        ispec = pow(max(dot(N, H), 0), 32) / 1.5;
    }
    vec4 resColor;
    resColor.rgb = D * (iamb + (idiff + ispec) * pow(shadowMultiplier, 4));
    resColor.a = 1;
    return resColor;
}
/////////////////////////////////////////////
// Entry point for Parallax Mapping shader
void main(void)
{
    // normalize vectors after interpolation from the vertex shader
    vec3 V = normalize(o_toCameraInTangentSpace);
    vec3 L = normalize(o_toLightInTangentSpace);
    // get new texture coordinates from Parallax Mapping
    float parallaxHeight;
    vec2 T = parallaxMapping(V, o_texcoords, parallaxHeight);
    // get self-shadowing factor for elements of parallax
    float shadowMultiplier = parallaxSoftShadowMultiplier(L, T, parallaxHeight - 0.05);
    // calculate lighting
    resultingColor = normalMappingLighting(T, L, V, shadowMultiplier);
}
The simplest variation calculates an approximation of the new texture coordinates from the original texture
coordinates; this variation is simply called Parallax Mapping. Parallax
Mapping gives more or less valid results only when the heightmap is smooth
and doesn't contain a lot of small details. Otherwise, at large
angles between the vector to the camera (V) and the normal (N), the parallax effect
won't be valid. The main idea of the Parallax Mapping approximation is the
following:
- get height H(T0) from the heightmap at the original texture coordinates T0;
- offset the original texture coordinates, taking into account the vector to the camera V and the height H(T0) at the initial texture coordinates.
The vector to the camera V is in tangent space, and tangent space is built
along the gradient of the texture coordinates, so the x and y components of
vector V can be used without any transformation as the direction for the offset
of the texture coordinates along vector V. The z component of vector V is the
normal component, perpendicular to the surface. You can divide the
x and y components by the z component; this is the original calculation of
texture coordinates in the Parallax Mapping technique. Or you can leave the
x and y components as they are; that implementation is called
Parallax Mapping with Offset Limiting. Offset
Limiting reduces the number of weird results when the angle between the
vector to the camera (V) and the normal (N) is large. So if you add the x
and y components of vector V to the original texture coordinates, you get new
texture coordinates that are shifted along vector V.
You can control the strength of the Parallax Mapping effect with the scale
variable, which multiplies V.xy. The most useful values of scale are from just above 0 to about 0.5. With a larger scale the results of the Parallax Mapping approximation are wrong in most cases (as on the image). You can also make scale negative; in that case you have to invert the z components of the normals from the normal map. So here is the final formula for the shifted texture coordinates TP (V points from the fragment toward the camera, hence the subtraction, as in the shader below):

TP = T0 - parallaxScale * V.xy / V.z * H(T0)

and, for Parallax Mapping with Offset Limiting:

TP = T0 - parallaxScale * V.xy * H(T0)
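As a quick numeric check, here is a CPU-side Python sketch of the two offset variants (the view vector, heightmap value, and scale are example values, not data from the tutorial's images):

```python
def parallax_offset(T, V, height, scale=0.1, offset_limiting=False):
    """Shift texture coordinates along the view vector's xy projection.

    T: (u, v) original texture coordinates; V: normalized tangent-space view
    vector pointing toward the camera; height: depth sampled at T.
    """
    vx, vy, vz = V
    if offset_limiting:
        du, dv = scale * vx * height, scale * vy * height       # no division by z
    else:
        du, dv = scale * vx / vz * height, scale * vy / vz * height
    return (T[0] - du, T[1] - dv)

# V has no y component, so only u shifts
print(parallax_offset((0.5, 0.5), (0.6, 0.0, 0.8), height=0.55))
print(parallax_offset((0.5, 0.5), (0.6, 0.0, 0.8), height=0.55, offset_limiting=True))
```

The offset-limited variant produces a smaller shift whenever V.z < 1, which is exactly why it behaves better at grazing angles.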
In many cases the new texture coordinates are slightly wrong, as Parallax Mapping is only an approximation that isn't intended
to find the exact intersection of vector V and the surface. In exchange, the technique requires only one
additional sample of the heightmap, which gives the GLSL shader great performance. Here is an implementation of the shader function for simple
Parallax Mapping:
vec2 parallaxMapping(in vec3 V, in vec2 T, out float parallaxHeight)
{
    // get depth for this fragment
    float initialHeight = texture(u_heightTexture, T).r;
    // calculate amount of offset for Parallax Mapping
    vec2 texCoordOffset = parallaxScale * V.xy / V.z * initialHeight;
    // ... or, for Parallax Mapping with Offset Limiting, use instead:
    // vec2 texCoordOffset = parallaxScale * V.xy * initialHeight;
    parallaxHeight = initialHeight;
    // return modified texture coordinates
    return T - texCoordOffset;
}
Steep Parallax Mapping does not simply offset texture coordinates without checks for validity and
relevance; it checks whether the result is close to a valid value. The main
idea of this method is to divide the depth of the surface into a number of
layers of the same height. Then, starting from the topmost layer, you
sample the heightmap, each time shifting the texture coordinates along view
vector V. If the point is under the surface (the depth of the current layer is
greater than the depth sampled from the texture), stop the checks and use the last
texture coordinates as the result of Steep Parallax Mapping.
In the following example, the depth is divided into 8 layers, each 0.125 high. The shift of
the texture coordinates with each layer is equal to
V.xy/V.z*scale/numLayers. The checks start from the topmost layer,
where the fragment is located (yellow square). Here are the manual
calculations:
- The depth of the layer is 0. Depth H(T0) is ~0.75. The depth from the heightmap is greater than the depth of the layer (the point is above the surface), so start the next iteration.
- Shift the texture coordinates along vector V and select the next layer, with depth 0.125. Depth H(T1) is ~0.625. The depth from the heightmap is greater than the depth of the layer (the point is above the surface), so start the next iteration.
- Shift the texture coordinates along vector V and select the next layer, with depth 0.25. Depth H(T2) is ~0.4. The depth from the heightmap is greater than the depth of the layer (the point is above the surface), so start the next iteration.
- Shift the texture coordinates along vector V and select the next layer, with depth 0.375. Depth H(T3) is ~0.2. The depth from the heightmap is less than the depth of the layer, so the current point on vector V lies below the surface. We have found texture coordinates Tp = T3, close to the real intersection point.
The resulting texture coordinates may still be quite far from the intersection point of vector V and the
surface, but they are closer to valid than the results
of simple Parallax Mapping. Increase the number of layers if you want more precise
results.
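The layered march above can be sketched on the CPU (a Python sketch; the `sample_height` function is a hypothetical stand-in for the heightmap texture, here a simple ramp):

```python
def steep_parallax(T, V, sample_height, scale=0.1, num_layers=8):
    """March from the top layer downward until the layer depth reaches the
    sampled depth; return the last texture coordinates and layer depth."""
    layer_h = 1.0 / num_layers
    du = scale * V[0] / V[2] / num_layers
    dv = scale * V[1] / V[2] / num_layers
    u, v, depth = T[0], T[1], 0.0
    # the extra depth bound guards against heightfields that never intersect
    while depth <= 1.0 and sample_height(u, v) > depth:
        depth += layer_h      # go to the next layer
        u -= du               # shift texture coordinates along V
        v -= dv
    return (u, v), depth

# hypothetical heightmap: a linear ramp in u (deeper toward u = 0)
coords, depth = steep_parallax((0.5, 0.5), (0.6, 0.0, 0.8), lambda u, v: 1.0 - u)
print(coords, depth)
```

Raising `num_layers` makes the stop depth approach the true ray/surface intersection, at a linear cost in heightmap samples.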
The main disadvantage of Steep Parallax Mapping is that it divides the
depth into a finite number of layers. If the number of layers is large, performance is low; if the number of layers is too small, you will notice an aliasing effect (steps), as on the image to the right. You can dynamically determine the number of layers by interpolating between a minimum and maximum count based on the angle between vector V and the polygon normal. The performance/aliasing trade-off can be improved with Relief Parallax Mapping or Parallax Occlusion Mapping (POM), which are covered in the following parts of the tutorial.
vec2 parallaxMapping(in vec3 V, in vec2 T, out float parallaxHeight)
{
    // determine number of layers from the angle between V and the surface normal
    const float minLayers = 5;
    const float maxLayers = 15;
    float numLayers = mix(maxLayers, minLayers, abs(dot(vec3(0, 0, 1), V)));
    // height of each layer
    float layerHeight = 1.0 / numLayers;
    // depth of current layer
    float currentLayerHeight = 0;
    // shift of texture coordinates for each iteration
    vec2 dtex = parallaxScale * V.xy / V.z / numLayers;
    // current texture coordinates
    vec2 currentTextureCoords = T;
    // get first depth from heightmap
    float heightFromTexture = texture(u_heightTexture, currentTextureCoords).r;
    // while point is above the surface
    while(heightFromTexture > currentLayerHeight)
    {
        // to the next layer
        currentLayerHeight += layerHeight;
        // shift texture coordinates along vector V
        currentTextureCoords -= dtex;
        // get new depth from heightmap
        heightFromTexture = texture(u_heightTexture, currentTextureCoords).r;
    }
    // return results
    parallaxHeight = currentLayerHeight;
    return currentTextureCoords;
}
Relief Parallax Mapping allows the GLSL shader to find the new texture coordinates more precisely. First you
run Steep Parallax Mapping. After that, the shader has the depths
of the two layers between which the intersection of vector V and the
surface is located. On the following image such layers are at texture
coordinates T3 and T2. Now you can improve the result with a binary search, which doubles the precision of the result with each iteration.
- After Steep Parallax Mapping we know texture coordinates T2 and T3 between which the intersection of vector V and the surface is located. The real intersection point is marked with a green dot.
- Divide the current shift of texture coordinates and the current layer height by two.
- Shift texture coordinates T3 in the direction opposite to vector V (toward T2) by the current shift. Decrease the layer depth by the current layer height.
- (*) Sample the heightmap. Divide the current shift of texture coordinates and the current layer height by two.
- If the depth from the texture is greater than the depth of the layer, increase the layer depth by the current layer height and shift the texture coordinates along vector V by the current shift.
- If the depth from the texture is less than the depth of the layer, decrease the layer depth by the current layer height and shift the texture coordinates in the direction opposite to vector V by the current shift.
- Repeat the binary search from step (*) for the specified number of iterations.
- The texture coordinates from the last search step are the result of Relief Parallax Mapping.
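The steps above can be sketched as a CPU-side Python function (the starting values and the ramp heightfield are assumptions standing in for a steep-parallax march and the heightmap texture):

```python
def relief_refine(coords, depth, step, layer_h, sample_height, num_searches=5):
    """Binary-search refinement after a steep-parallax style march.

    coords/depth: first point found under the surface; step: (du, dv) texture
    shift per layer; layer_h: height of one layer; sample_height(u, v): depth
    stored in the heightmap.
    """
    du, dv, dh = step[0] / 2, step[1] / 2, layer_h / 2
    u, v = coords[0] + du, coords[1] + dv    # step back to the layer midpoint
    depth -= dh
    for _ in range(num_searches):
        du, dv, dh = du / 2, dv / 2, dh / 2
        if sample_height(u, v) > depth:      # point is above the surface
            u, v, depth = u - du, v - dv, depth + dh   # go deeper, along V
        else:                                # point is below the surface
            u, v, depth = u + du, v + dv, depth - dh   # step back, against V
    return (u, v), depth

# hypothetical ramp heightfield; start values as a steep march would leave them
ramp = lambda u, v: 1.0 - u
(u, v), d = relief_refine((0.453125, 0.5), 0.625, (0.009375, 0.0), 0.125, ramp)
print(abs(ramp(u, v) - d))  # small residual: the search converged
```

Each iteration halves the remaining uncertainty, so five searches shrink one layer's worth of error by a factor of 32.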
vec2 parallaxMapping(in vec3 V, in vec2 T, out float parallaxHeight)
{
    // determine required number of layers
    const float minLayers = 10;
    const float maxLayers = 15;
    float numLayers = mix(maxLayers, minLayers, abs(dot(vec3(0, 0, 1), V)));
    // height of each layer
    float layerHeight = 1.0 / numLayers;
    // depth of current layer
    float currentLayerHeight = 0;
    // shift of texture coordinates for each iteration
    vec2 dtex = parallaxScale * V.xy / V.z / numLayers;
    // current texture coordinates
    vec2 currentTextureCoords = T;
    // depth from heightmap
    float heightFromTexture = texture(u_heightTexture, currentTextureCoords).r;
    // while point is above the surface
    while(heightFromTexture > currentLayerHeight)
    {
        // go to the next layer
        currentLayerHeight += layerHeight;
        // shift texture coordinates along V
        currentTextureCoords -= dtex;
        // new depth from heightmap
        heightFromTexture = texture(u_heightTexture, currentTextureCoords).r;
    }
    ///////////////////////////////////////////////////////////
    // Start of Relief Parallax Mapping
    // decrease shift and height of layer by half
    vec2 deltaTexCoord = dtex / 2;
    float deltaHeight = layerHeight / 2;
    // return to the mid point of previous layer
    currentTextureCoords += deltaTexCoord;
    currentLayerHeight -= deltaHeight;
    // binary search to increase precision of Steep Parallax Mapping
    const int numSearches = 5;
    for(int i = 0; i < numSearches; i++)
    {
        // decrease shift and height of layer by half
        deltaTexCoord /= 2;
        deltaHeight /= 2;
        // new depth from heightmap
        heightFromTexture = texture(u_heightTexture, currentTextureCoords).r;
        // shift along or against vector V
        if(heightFromTexture > currentLayerHeight) // still above the surface
        {
            // go deeper, along vector V
            currentTextureCoords -= deltaTexCoord;
            currentLayerHeight += deltaHeight;
        }
        else // below the surface
        {
            // step back, against vector V
            currentTextureCoords += deltaTexCoord;
            currentLayerHeight -= deltaHeight;
        }
    }
    // return results
    parallaxHeight = currentLayerHeight;
    return currentTextureCoords;
}
Parallax Occlusion Mapping (POM) is an improvement of Steep
Parallax Mapping. Relief Parallax Mapping uses a binary search to improve
results, but the search decreases performance. Parallax Occlusion Mapping
is intended to produce better performance than Relief Parallax
Mapping and better results than Steep Parallax Mapping. The
results of POM are, however, a bit worse than those of Relief Parallax Mapping.
Instead of a binary search, POM interpolates between the last two results of
Steep Parallax Mapping. Look at the following image. For the interpolation
POM uses the depth of the layer after the intersection (0.375, where Steep
Parallax Mapping has stopped) and the previous H(T2) and next H(T3)
depths from the heightmap. As you can see from the image, the result of the
Parallax Occlusion Mapping interpolation lies on the intersection of view
vector V with the line between heights H(T3) and H(T2). That intersection is close enough to the real intersection point (marked with green).
- nextHeight = H(T3) - currentLayerHeight
- prevHeight = H(T2) - (currentLayerHeight - layerHeight)
- weight = nextHeight / (nextHeight - prevHeight)
- TP = T2 * weight + T3 * (1.0 - weight)
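The interpolation can be checked numerically with a small Python sketch; the texture coordinates and the heights H(T2) ≈ 0.4 and H(T3) ≈ 0.2 are the example numbers from the walk-through above, not data from a real heightmap:

```python
def pom_interpolate(t_prev, t_cur, h_prev, h_cur, layer_depth, layer_h):
    """Linear interpolation between the last two steep-parallax samples.

    t_prev/t_cur: texture coords above/below the surface; h_prev/h_cur: the
    heightmap depths sampled there; layer_depth: depth of the layer where the
    march stopped; layer_h: height of one layer.
    """
    next_h = h_cur - layer_depth                # negative: point is below the surface
    prev_h = h_prev - (layer_depth - layer_h)   # positive: point is above the surface
    weight = next_h / (next_h - prev_h)
    u = t_prev[0] * weight + t_cur[0] * (1.0 - weight)
    v = t_prev[1] * weight + t_cur[1] * (1.0 - weight)
    return (u, v), weight

coords, w = pom_interpolate((0.48, 0.5), (0.47, 0.5), 0.4, 0.2, 0.375, 0.125)
print(coords, w)
```

Because `next_h` and `prev_h` have opposite signs, the weight always lands in (0, 1), so the result stays between the two sampled texture coordinates.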
So Parallax Occlusion Mapping gives good results with a relatively
small number of samples from the heightmap. But Parallax Occlusion Mapping
may skip small details of the heightmap more often than Relief Parallax Mapping,
and may produce incorrect results for abrupt changes of values in the
heightmap.
vec2 parallaxMapping(in vec3 V, in vec2 T, out float parallaxHeight)
{
    // determine optimal number of layers
    const float minLayers = 10;
    const float maxLayers = 15;
    float numLayers = mix(maxLayers, minLayers, abs(dot(vec3(0, 0, 1), V)));
    // height of each layer
    float layerHeight = 1.0 / numLayers;
    // current depth of the layer
    float curLayerHeight = 0;
    // shift of texture coordinates for each layer
    vec2 dtex = parallaxScale * V.xy / V.z / numLayers;
    // current texture coordinates
    vec2 currentTextureCoords = T;
    // depth from heightmap
    float heightFromTexture = texture(u_heightTexture, currentTextureCoords).r;
    // while point is above the surface
    while(heightFromTexture > curLayerHeight)
    {
        // to the next layer
        curLayerHeight += layerHeight;
        // shift of texture coordinates
        currentTextureCoords -= dtex;
        // new depth from heightmap
        heightFromTexture = texture(u_heightTexture, currentTextureCoords).r;
    }
    ///////////////////////////////////////////////////////////
    // previous texture coordinates (one step back, against V)
    vec2 prevTCoords = currentTextureCoords + dtex;
    // heights for linear interpolation
    float nextH = heightFromTexture - curLayerHeight;
    float prevH = texture(u_heightTexture, prevTCoords).r
                  - curLayerHeight + layerHeight;
    // proportions for linear interpolation
    float weight = nextH / (nextH - prevH);
    // interpolation of texture coordinates
    vec2 finalTexCoords = prevTCoords * weight + currentTextureCoords * (1.0 - weight);
    // interpolation of depth values
    parallaxHeight = curLayerHeight + prevH * weight + nextH * (1.0 - weight);
    // return result
    return finalTexCoords;
}
Self-shadowing uses an algorithm very similar to Steep Parallax Mapping, but you search
not inside the surface (down) but outside (up). Also, the shifts of the texture
coordinates go along the vector from the fragment to the light (L), not
along view vector V. Vector L must be in tangent space,
like vector V, and can then be used directly to specify the direction of the shift
of the texture coordinates. The result of the self-shadowing calculation is a
shadowing factor, a value in the [0, 1] range. This value is later used to
modulate the intensity of the diffuse and specular lighting.
For hard shadows you check the heightmap
values along light vector L until the first point under the surface. If a
point is under the surface, the shadowing factor is 0; otherwise the
shadowing factor is 1. For example, on the next image, H(TL1) is less than the height of layer Ha,
so the point is under the surface and the shadowing factor is 0. If there
are no points under the surface before light vector L rises above level 0.0,
the fragment is in the light and the shadowing factor is equal to 1.
The quality of the shadows depends greatly on the number of layers, on the value of the
scale modifier, and on the angle between light vector L and the normal of the
polygon. With wrong settings the shadows suffer from aliasing, or even
worse.
Soft shadows take multiple points along L into account, but only points under the surface contribute. A partial
shadowing factor is calculated as the difference between the depth of the current
layer and the depth from the texture. You also have to take into account the
distance from the point to the fragment, so the partial factor is multiplied
by (1.0 - stepIndex/numberOfSteps). To calculate the final shadowing factor,
you select the partial shadow factor with the
maximum value. So here is the procedure to calculate the shadowing factor for
soft shadows:
- Set the shadow factor to 0 and the number of steps to 4.
- Make a step along L to Ha. H(TL1) is less than Ha, so the point is under the surface. Calculate the partial shadowing factor as Ha - H(TL1). This is the first check, and the total number of checks is 4, so, taking the distance to the fragment into account, multiply the partial shadowing factor by (1.0 - 1.0/4.0). Save the partial shadowing factor.
- Make a step along L to Hb. H(TL2) is less than Hb, so the point is under the surface. Calculate the partial shadowing factor as Hb - H(TL2). This is the second check of 4, so multiply the partial shadowing factor by (1.0 - 2.0/4.0). Save the partial shadowing factor.
- Make a step along L. The point is above the surface.
- Make another step along L. The point is above the surface.
- The point is above level 0.0. Stop the movement along vector L.
- Select the maximum of the partial shadow factors as the final shadow factor.
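The procedure above can be sketched on the CPU in Python; to keep the example self-contained, the heightmap samples along L are passed in as a list of (layer_depth, sampled_depth) pairs, which is an assumption, not the shader's interface:

```python
def soft_shadow_factor(samples, num_steps):
    """samples: (layer_depth, sampled_depth) pairs walked from the fragment
    toward the light; layer_depth decreases toward 0 with each step."""
    partial_max = 0.0
    under_surface = 0
    for i, (layer_depth, sampled) in enumerate(samples, start=1):
        if sampled < layer_depth:            # point is under the surface
            under_surface += 1
            # weight the partial factor by distance to the fragment
            partial = (layer_depth - sampled) * (1.0 - i / num_steps)
            partial_max = max(partial_max, partial)
    # fully lit if no point was under the surface
    return 1.0 if under_surface == 0 else 1.0 - partial_max

# walk-through numbers (assumed): Ha=0.4 vs H(TL1)=0.2, Hb=0.3 vs H(TL2)=0.25,
# then two samples above the surface
samples = [(0.4, 0.2), (0.3, 0.25), (0.2, 0.6), (0.1, 0.7)]
print(soft_shadow_factor(samples, num_steps=4))  # → 0.85
```

The deepest occluder near the fragment dominates, which is what produces the soft penumbra at the edges of parallax features.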
float parallaxSoftShadowMultiplier(in vec3 L, in vec2 initialTexCoord,
                                   in float initialHeight)
{
    float shadowMultiplier = 1;
    const float minLayers = 15;
    const float maxLayers = 30;
    // calculate shadowing only for surfaces oriented to the light source
    if(dot(vec3(0, 0, 1), L) > 0)
    {
        // calculate initial parameters
        float numSamplesUnderSurface = 0;
        shadowMultiplier = 0;
        float numLayers = mix(maxLayers, minLayers, abs(dot(vec3(0, 0, 1), L)));
        float layerHeight = initialHeight / numLayers;
        vec2 texStep = parallaxScale * L.xy / L.z / numLayers;
        // current parameters
        float currentLayerHeight = initialHeight - layerHeight;
        vec2 currentTextureCoords = initialTexCoord + texStep;
        float heightFromTexture = texture(u_heightTexture, currentTextureCoords).r;
        int stepIndex = 1;
        // while point is below depth 0.0
        while(currentLayerHeight > 0)
        {
            // if point is under the surface
            if(heightFromTexture < currentLayerHeight)
            {
                // calculate partial shadowing factor
                numSamplesUnderSurface += 1;
                float newShadowMultiplier = (currentLayerHeight - heightFromTexture) *
                                            (1.0 - stepIndex / numLayers);
                shadowMultiplier = max(shadowMultiplier, newShadowMultiplier);
            }
            // offset to the next layer
            stepIndex += 1;
            currentLayerHeight -= layerHeight;
            currentTextureCoords += texStep;
            heightFromTexture = texture(u_heightTexture, currentTextureCoords).r;
        }
        // shadowing factor should be 1 if there were no points under the surface
        if(numSamplesUnderSurface < 1)
        {
            shadowMultiplier = 1;
        }
        else
        {
            shadowMultiplier = 1.0 - shadowMultiplier;
        }
    }
    return shadowMultiplier;
}