1 vote

I tried to implement a motion-blur post-processing effect as described in GPU Gems 3, Chapter 27, but I am running into problems: the blur jitters when I move the camera and does not work as expected. This is my fragment shader:

varying vec3 pos;
varying vec3 normal;
varying vec2 tex;
varying vec3 tangent;
uniform mat4 matrix;
uniform mat4 VPmatrix;
uniform mat4 matrixPrev;
uniform mat4 VPmatrixPrev;
uniform sampler2D diffuseTexture;
uniform sampler2D zTexture;
void main() {
    vec2 texCoord = tex;
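    // Read the stored depth and rebuild this pixel's clip-space position (GPU Gems 3, ch. 27).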
    float zOverW = texture2D(zTexture, texCoord).r;
    vec4 H = vec4(texCoord.x * 2.0 - 1.0, (1 - texCoord.y) * 2.0 - 1.0, zOverW, 1.0);
    mat4 inv1 = inverse(matrix);
    mat4 inv2 = inverse(VPmatrix);
    vec4 D = H*(inv2*inv1);
    vec4 worldPos = D/D.w;
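    // Reproject the reconstructed world position using the previous frame's matrices.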
    mat4 prev = matrixPrev*VPmatrixPrev;
    vec4 previousPos = worldPos*prev;
    previousPos /= previousPos.w;
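    // Per-pixel velocity in texture space (half the NDC delta); the blur samples along it.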
    vec2 velocity = vec2((H.x-previousPos.x)/2.0, (H.y-previousPos.y)/2.0);
    vec3 color = vec3(texture2D(diffuseTexture, texCoord));
    for(int i = 0; i < 16; i++) {
        texCoord += velocity;
        vec3 color2 = vec3(texture2D(diffuseTexture, texCoord));
        color += color2;
    }
    color /= 16;
    gl_FragColor = vec4(color, 1.0);
}

The uniforms matrix and VPmatrix are the ModelView and Projection matrices, obtained as follows:

float matrix[16];
float VPmatrix[16];
glGetFloatv(GL_MODELVIEW_MATRIX, matrix);
glGetFloatv(GL_PROJECTION_MATRIX, VPmatrix);

The uniforms matrixPrev and VPmatrixPrev are the previous frame's ModelView and Projection matrices, saved after rendering as follows (in the code below matrixPrev and VPmatrixPrev are global variables):

for(int i = 0; i < 16; i++) {
    matrixPrev[i] = matrix[i];
    VPmatrixPrev[i] = VPmatrix[i];
}

All four matrices are passed to the shader as follows:

glUniformMatrix4fvARB(glGetUniformLocationARB(postShader, "matrix"), 16, GL_FALSE, matrix);
glUniformMatrix4fvARB(glGetUniformLocationARB(postShader, "VPmatrix"), 16, GL_FALSE, VPmatrix);
glUniformMatrix4fvARB(glGetUniformLocationARB(postShader, "matrixPrev"), 16, GL_FALSE, matrixPrev);
glUniformMatrix4fvARB(glGetUniformLocationARB(postShader, "VPmatrixPrev"), 16, GL_FALSE, VPmatrixPrev);

In the shader, the uniform zTexture is a texture containing the depth values of the framebuffer. (I am not sure whether they are divided by w.)

I hoped the shader would work, but instead the blur jitters really fast when I rotate the camera, even with subtle rotations. I tried rendering the zTexture and the result is a grayscale image, so it seems alright. I also tried setting the fragment color to H.xyz and to previousPos.xyz: H.xyz produces a colored screen, and previousPos.xyz produces the same colored screen, except that when the camera rotates the colors seem to invert, so I suspect there is something wrong with extracting the world position from depth.

Am I missing something here? Any help would be greatly appreciated.

In your code, you do the blur sampling 16 times. Notice that this means you need to sample 16 times per pixel. That's a lot. Did you try reducing this to, e.g., 4 times? Is the effect OK? Do you still see the jitter? It's hard to say anything based on the code alone; maybe some experiments will show what can be wrong. – Krystian

2 Answers

0 votes

Forget my previous answer; it is a matrix error:

The matrix multiplications are in the wrong order, or else you must send the transposed matrices (which explains why only the rotation was taken into account while the translation/scale components were messed up):

glUniformMatrix4fvARB(glGetUniformLocationARB(postShader, "matrix"), 16, GL_FALSE, matrix);

becomes

glUniformMatrix4fvARB(glGetUniformLocationARB(postShader, "matrix"), 1, GL_TRUE, matrix);

(Note that the second argument is not the matrix size but the matrix count, so it should be only 1 here.)
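For reference, here is a minimal sketch of the other fix (keeping GL_FALSE and the untransposed, column-major data that glGetFloatv returns): put the matrices on the left of the vectors, since GLSL treats a vec4 as a column vector. It uses the same uniform names as the question and is untested against your exact setup.

// Column-vector form of the unprojection/reprojection, matrices NOT transposed on upload.
vec4 D = inverse(matrix) * inverse(VPmatrix) * H;        // clip -> world (MV^-1 * P^-1)
vec4 worldPos = D / D.w;
vec4 previousPos = VPmatrixPrev * matrixPrev * worldPos; // world -> previous clip (P_prev * MV_prev)
previousPos /= previousPos.w;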

Another note: you should compute the inverse matrices and the projection*view product in your main application, not in the pixel shader: as written that work is redone for every pixel, but it only needs to be done once per frame!
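A sketch of what that could look like on the host side, assuming GLM is available (glm::inverse, glm::value_ptr) and using the core, non-ARB entry points; the uniform names invVP and prevVP are made up for the example:

#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>

glm::mat4 view, proj, prevView, prevProj;        // filled by the application each frame

// Build the combined matrices once per frame so the fragment shader
// never has to call inverse() per pixel.
glm::mat4 invVP  = glm::inverse(proj * view);    // clip -> world, current frame
glm::mat4 prevVP = prevProj * prevView;          // world -> clip, previous frame

glUniformMatrix4fv(glGetUniformLocation(postShader, "invVP"),  1, GL_FALSE, glm::value_ptr(invVP));
glUniformMatrix4fv(glGetUniformLocation(postShader, "prevVP"), 1, GL_FALSE, glm::value_ptr(prevVP));

// Remember this frame's matrices for the next frame's reprojection.
prevView = view;
prevProj = proj;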

One last note: Google "RuinIsland_GLSL_Demo.zip": it contains many good GLSL samples and helped me solve this issue.

0 votes

I'm sorry I don't have an answer, just a clue: I have EXACTLY the same problem as you. I guess I used the same material found through Google, and, like you, the blurring exists only when the camera rotates.

However, I have a clue (the same as yours, in fact): I think it is because the GLSL shaders around the net assume our depth texture contains z/w, but, like me, you used a genuine depth texture filled by the fixed pipeline. So you only have z and you are missing w in the very first step. Since texture2D(zTexture, texCoord).r contains only z, we miss the computation needed to get zOverW. In the end we are stuck halfway between window space and clip space.
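For what it's worth, here is a minimal sketch of unprojecting a plain hardware depth value (assuming the default glDepthRange(0,1) and an inverse view-projection matrix uploaded from the application; invVP is a made-up uniform name):

float depth = texture2D(zTexture, texCoord).r;                   // window-space depth in [0,1]
vec4 ndc   = vec4(texCoord * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0); // back to NDC, w set to 1
vec4 world = invVP * ndc;                                        // unproject with inverse(P*V)
world     /= world.w;                                            // undo the perspective divide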

I found this: https://www.opengl.org/wiki/Compute_eye_space_from_window_space#From_NDC_to_clip but my perspective projection matrix does not meet the requirements; perhaps it will help you.