
I have a deferred renderer which appears to work correctly: depth, colour, and shading all come out as expected. However, while the position buffer is fine for an orthographic projection, the geometry in it appears 'inverted' (or as if depth testing were disabled) when using a perspective projection.

I am getting the following buffer outputs for orthographic:

[Images: depth, normal, and position buffers for the orthographic projection]

The final 'shaded' image currently looks correct:

[Image: final shaded output for the orthographic projection]

However, when I use a perspective projection I get the following buffers:

[Images: depth, normal, and position buffers for the perspective projection]

The final image is fine, although I don't incorporate any position buffer information at the moment (N.B. only doing 'headlight' shading at the moment):

[Image: final shaded output for the perspective projection]

While the final image appears correct, the depth buffer appears to be ignored for my position buffer (there is no glDisable(GL_DEPTH_TEST) anywhere in the code).

The depth and normal buffers look OK to me; it's only the position buffer which appears to be ignoring the depth. The render pipeline is exactly the same for ortho and perspective, with the only difference being the projection matrix.

I use glm::ortho and glm::perspective, and I calculate my near/far clipping distances on the fly based on the scene AABB. For orthographic my near/far values are 1 and 11.4734 respectively, and for perspective they are 11.0875 and 22.5609. The width and height values are the same in both cases, and the FOV is 45 for the perspective projection.

I do have these calls before drawing any geometry...

glEnable(GL_DEPTH_TEST);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

I use these for compositing different layers as part of the render pipeline.

Am I doing anything wrong here, or am I misunderstanding something?

Here are my shaders. Vertex shader of the gBuffer:

#version 430 core

layout (std140) uniform MatrixPV
{
    mat4 P;
    mat4 V;
};

layout(location = 0) in vec3 InPoint;
layout(location = 1) in vec3 InNormal;
layout(location = 2) in vec2 InUV;

uniform mat4 M;

out vec4 Position;
out vec3 Normal;
out vec2 UV;

void main()
{
    mat4 VM = V * M;
    gl_Position = P * VM * vec4(InPoint, 1.0);
    Position = P * VM * vec4(InPoint, 1.0);
    Normal = mat3(M) * InNormal;
    UV = InUV;
}

Fragment shader of the gBuffer:

#version 430 core

layout(location = 0) out vec4 gBufferPicker;
layout(location = 1) out vec4 gBufferPosition;
layout(location = 2) out vec4 gBufferNormal;
layout(location = 3) out vec4 gBufferDiffuse;

in vec3 Normal;
in vec4 Position;

vec4 Diffuse();
uniform vec4 PickerColour;

void main()
{
    gBufferPosition = Position;
    gBufferNormal = vec4(Normal.xyz, 1.0);
    gBufferPicker = PickerColour;
    gBufferDiffuse = Diffuse();
}

And here is the 'second pass' shader used to visualise the position buffer:

#version 430 core

uniform sampler2D debugBufferPosition;

in vec2 UV;
out vec4 frag;

void main()
{
    vec3 val = texture(debugBufferPosition, UV).xyz;
    frag = vec4(val.xyz, 1.0);
}

I haven't used the position buffer data yet, and I know I can reconstruct positions from depth without storing them in another buffer; however, the positions are useful to me for other reasons, and I would like to know why they come out the way they do under perspective.
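
For reference, the reconstruction I have in mind looks roughly like the sketch below; it's a minimal, untested outline assuming the depth attachment is bound as a sampler and that a uniform holding inverse(P * V) is available (the gBufferDepth and InversePV names are hypothetical, not from my code).

#version 430 core

uniform sampler2D gBufferDepth; // hypothetical: depth attachment sampler
uniform mat4 InversePV;         // hypothetical: inverse(P * V)

in vec2 UV;
out vec4 frag;

void main()
{
    // Rebuild the NDC position from the screen UV and the stored depth,
    // remapping both from [0, 1] to the [-1, 1] NDC range.
    float depth = texture(gBufferDepth, UV).r;
    vec4 ndc = vec4(vec3(UV, depth) * 2.0 - 1.0, 1.0);

    // Undoing the projection and view transforms gives a homogeneous
    // world-space coordinate; the divide by w makes it Cartesian.
    vec4 world = InversePV * ndc;
    frag = vec4(world.xyz / world.w, 1.0);
}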

1 Answer


What you actually write into the position buffer is the clip-space coordinate:

Position = P * VM * vec4(InPoint, 1.0);

The clip-space coordinate is a homogeneous coordinate; it is transformed to the normalized device coordinate (which is a Cartesian coordinate) by the perspective divide:

ndc = gl_Position.xyz / gl_Position.w;

With an orthographic projection the w component is 1, but with a perspective projection the w component holds a value that depends on the z component (the depth) of the (Cartesian) view-space coordinate.
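
To make this concrete, here is a minimal sketch (reusing the question's P, V, M and InPoint names) of where w comes from; the matrix rows in the comments follow the usual glm::ortho / glm::perspective conventions:

// The bottom row of the projection matrix determines clip-space w:
//   orthographic: last row (0, 0,  0, 1)  =>  clip.w == 1.0
//   perspective:  last row (0, 0, -1, 0)  =>  clip.w == -view.z
vec4 view = V * M * vec4(InPoint, 1.0);
vec4 clip = P * view;           // what the question stores in 'Position'
vec3 ndc  = clip.xyz / clip.w;  // the perspective divide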

I recommend storing the normalized device coordinate in the position buffer rather than the clip-space coordinate, e.g.:

gBufferPosition = vec4(Position.xyz / Position.w, 1.0);
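
One caveat when visualising the result: NDC components lie in [-1, 1], so writing them straight to the screen clamps every negative value to black. A small remap makes the full range visible; a minimal sketch based on the question's second-pass shader:

#version 430 core

uniform sampler2D debugBufferPosition;

in vec2 UV;
out vec4 frag;

void main()
{
    // Remap each NDC component from [-1, 1] to [0, 1] for display,
    // so negative coordinates are not clamped to black.
    vec3 ndc = texture(debugBufferPosition, UV).xyz;
    frag = vec4(ndc * 0.5 + 0.5, 1.0);
}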