I am attempting to reconstruct my fragment's position from a depth value stored in a GL_DEPTH_ATTACHMENT. To do this, I linearize the depth, then multiply the depth by a ray from the camera position to the corresponding point on the far plane.
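
If I understand the article correctly, the per-fragment computation I am aiming for is roughly this (illustrative names only, not my actual shader code):

// frustumRay is the interpolated camera-to-far-plane ray for this fragment, and
// linearDepth is the stored depth linearized to a 0-to-1 fraction of the far-plane distance.
vec3 worldPosition = cameraPosition + frustumRay * linearDepth;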

This method is the second one described here. To get the ray from the camera to the far plane, I retrieve rays to the four corners of the far plane, pass them to my vertex shader, and let them be interpolated for the fragment shader. I am using the following code to get the rays from the camera to the far plane's corners in world space.

std::vector<float> Camera::GetFlatFarFrustumCorners() {
    // rotation is the orientation of my camera in a quaternion.
    glm::quat inverseRotation = glm::inverse(rotation);
    glm::vec3 localUp = glm::normalize(inverseRotation * glm::vec3(0.0f, 1.0f, 0.0f));
    glm::vec3 localRight = glm::normalize(inverseRotation * glm::vec3(1.0f, 0.0f, 0.0f));
    float farHeight = 2.0f * tan(90.0f / 2) * 100.0f;
    float farWidth = farHeight * aspect;

    // 100.0f is the distance to the far plane. position is the location of the camera in world space.
    glm::vec3 farCenter = position + glm::vec3(0.0f, 0.0f, -1.0f) * 100.0f;
    glm::vec3 farTopLeft = farCenter + (localUp * (farHeight / 2)) - (localRight * (farWidth / 2));
    glm::vec3 farTopRight = farCenter + (localUp * (farHeight / 2)) + (localRight * (farWidth / 2));
    glm::vec3 farBottomLeft = farCenter - (localUp * (farHeight / 2)) - (localRight * (farWidth / 2));
    glm::vec3 farBottomRight = farCenter - (localUp * (farHeight / 2)) + (localRight * (farWidth / 2));

    return { 
        farTopLeft.x, farTopLeft.y, farTopLeft.z,
        farTopRight.x, farTopRight.y, farTopRight.z,
        farBottomLeft.x, farBottomLeft.y, farBottomLeft.z,
        farBottomRight.x, farBottomRight.y, farBottomRight.z
    };
}

Is this a correct way to retrieve the corners of the far plane in world space?

When I use these corners with my shaders, the results are incorrect, and what I get seems to be in view space. These are the shaders I am using:

Vertex Shader:

layout(location = 0) in vec2 vp;
layout(location = 1) in vec3 textureCoordinates;

uniform vec3 farFrustumCorners[4];
uniform vec3 cameraPosition;

out vec2 st;
out vec3 frustumRay;

void main () {
    st = textureCoordinates.xy;
    gl_Position = vec4 (vp, 0.0, 1.0);
    frustumRay = farFrustumCorners[int(textureCoordinates.z)-1] - cameraPosition;
}

Fragment Shader:

in vec2 st;
in vec3 frustumRay;

uniform sampler2D colorTexture;
uniform sampler2D normalTexture;
uniform sampler2D depthTexture;

uniform vec3 cameraPosition;
uniform vec3 lightPosition;

out vec3 color;

void main () {
    // Far and near distances; Used to linearize the depth value.
    float f = 100.0;
    float n = 0.1;
    float depth = (2 * n) / (f + n - (texture(depthTexture, st).x) * (f - n));
    vec3 position = cameraPosition + (normalize(frustumRay) * depth);
    vec3 normal = texture(normalTexture, st).xyz;


    float k = 0.00001;
    vec3 distanceToLight = lightPosition - position;
    float distanceLength = length(distanceToLight);
    float attenuation = (1.0 / (1.0 + (0.1 * distanceLength) + k * (distanceLength * distanceLength)));
    float diffuseTemp = max(dot(normalize(normal), normalize(distanceToLight)), 0.0);
    vec3 diffuse = vec3(1.0, 1.0, 1.0) * attenuation * diffuseTemp;

    vec3 gamma = vec3(1.0/2.2);
    color = pow(texture(colorTexture, st).xyz+diffuse, gamma);

    //color = texture(colorTexture, st);
    //colour.r = (2 * n) / (f + n - texture( tex, st ).x * (f - n));
    //colour.g = (2 * n) / (f + n - texture( tex, st ).y* (f - n));
    //colour.b = (2 * n) / (f + n - texture( tex, st ).z * (f - n));
}

This is what my scene's lighting looks like under these shaders: [screenshot: horrible lighting]

I am pretty sure that this is the result of either my reconstructed position being completely wrong, or it being in the wrong space. What is wrong with my reconstruction, and what can I do to fix it?

1 Answer

What you will first want to do is develop a temporary addition to your G-Buffer setup that stores the initial position of each fragment in world/view space (really, whatever space you are trying to reconstruct here). Then write a shader that does nothing but reconstruct these positions from the depth buffer. Set everything up so that half of your screen displays the original G-Buffer position and the other half displays your reconstructed position. You should be able to immediately spot discrepancies this way.
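
A minimal sketch of that comparison pass might look like this (assuming a position_tex attachment that holds the original positions, a depth_tex attachment, and whatever reconstruct (...) function you are testing; all of these names are placeholders):

in vec2 st;

uniform sampler2D position_tex; // original world/view-space positions written while building the G-Buffer
uniform sampler2D depth_tex;    // the depth attachment

out vec3 color;

void main ()
{
  vec3 stored        = texture (position_tex, st).xyz;
  vec3 reconstructed = reconstruct (texture (depth_tex, st).r); // placeholder: your reconstruction here

  // Left half of the screen: stored positions. Right half: reconstructed positions.
  // Any visible seam down the middle means the reconstruction is off.
  color = (st.x < 0.5) ? stored : reconstructed;
}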

That said, you might want to take a look at an implementation I have used in the past to reconstruct (object space) position from the depth buffer. It basically gets you into view space first, then uses the inverse modelview matrix to go to object space. You can adjust it for world space trivially. It is probably not the most flexible implementation, what with FOV being hard-coded and all, but you can easily modify it to use uniforms instead...

Trimmed down fragment shader:

flat in mat4 inv_mv_mat;
     in vec2 uv;

...

float linearZ (float z)
{
#ifdef INVERT_NEAR_FAR
  const float f = 2.5;
  const float n = 25000.0;
#else
  const float f = 25000.0;
  const float n = 2.5;
#endif

  return n / (f - z * (f - n)) * f;
}

vec4
reconstruct_pos (float depth)
{
  depth = linearZ (depth);

  vec4 pos = vec4 (uv * depth, -depth, 1.0); 
  vec4 ret = (inv_mv_mat * pos);

  return ret / ret.w;
}
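
Since the question is after world space, the trivial adjustment is just a matter of which matrix you multiply by: pass the inverse of the camera/view matrix on its own (here called inv_view_mat, an assumed uniform) instead of the inverse modelview matrix, and the same math lands in world space. A rough sketch, reusing uv and linearZ from above:

uniform mat4 inv_view_mat; // assumed: inverse of the view matrix used when the G-Buffer was built

vec4
reconstruct_world_pos (float depth)
{
  depth = linearZ (depth);

  vec4 pos = vec4 (uv * depth, -depth, 1.0); // view-space position, exactly as above
  vec4 ret = (inv_view_mat * pos);           // view space -> world space

  return ret / ret.w;
}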

It takes a little additional setup in the vertex shader stage of the deferred shading lighting pass, which looks like this:

#version 150 core

in       vec4 vtx_pos;
in       vec2 vtx_st;

uniform  mat4 modelview_mat; // Matrix used when the G-Buffer was built
uniform  mat4 camera_matrix; // Matrix used to stretch the G-Buffer over the viewport

uniform float buffer_res_x;
uniform float buffer_res_y;

     out vec2 tex_st;
flat out mat4 inv_mv_mat;
     out vec2 uv;


// Hard-Coded 45 degree FOV
//const float fovy = 0.78539818525314331; // NV pukes on the line below!
//const float fovy = radians (45.0);
//const float tan_half_fovy = tan (fovy * 0.5);

const float   tan_half_fovy = 0.41421356797218323;

      float   aspect        = buffer_res_x / buffer_res_y;
      vec2    inv_focal_len = vec2 (tan_half_fovy * aspect,
                                    tan_half_fovy);

const vec2    uv_scale     = vec2 (2.0, 2.0);
const vec2    uv_translate = vec2 (1.0, 1.0);


void main (void)
{
  inv_mv_mat  = inverse (modelview_mat);
  tex_st      = vtx_st;
  gl_Position = camera_matrix * vtx_pos;
  uv          = (vtx_st * uv_scale - uv_translate) * inv_focal_len;
}

Depth range inversion is something you might find useful for deferred shading: normally a perspective depth buffer gives you more precision than you need at close range and not enough far away for quality reconstruction. If you flip things on their head by inverting the depth range, you can even things out a little while still using the hardware depth buffer. This is discussed in detail here.
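
For reference, this is roughly how the stored (possibly inverted) depth would be sampled and handed to reconstruct_pos in the lighting pass; depth_tex is an assumed uniform name, and tex_st matches the vertex shader output above:

in vec2 tex_st; // interpolated texture coordinate from the vertex shader above

uniform sampler2D depth_tex; // assumed: the hardware depth attachment from the G-Buffer pass

void main (void)
{
  float stored = texture (depth_tex, tex_st).r; // with INVERT_NEAR_FAR defined, values near 1.0 are close to the camera (see linearZ above)
  vec4  pos    = reconstruct_pos (stored);      // object-space position, as reconstructed above

  // ... lighting math using pos goes here ...
}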