
I'm trying to render a sphere as an impostor, but I have a problem in calculating the depth value of the points of the sphere's surface.

Here you can see what happens when I move the camera around the impostor-sphere and a "real" cube that intersects the sphere.

[Image: the camera orbiting the impostor sphere, with a cube intersecting it]

You can see that, as the camera moves, the depth values are not consistent: parts of the cube that should be inside the sphere pop in and out of it, depending on the camera's position.

These are the vertex and fragment shaders for the impostor-sphere.

Vertex shader:

#version 450 core

uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;

out vec2 coords;

void main()
{
    switch (gl_VertexID)
    {
    case 0:
        coords = vec2(-1.0f, -1.0f);
        break;
    case 1:
        coords = vec2(-1.0f, +1.0f);
        break;
    case 2:
        coords = vec2(+1.0f, -1.0f);
        break;
    case 3:
        coords = vec2(+1.0f, +1.0f);
        break;
    }

    // right & up axes camera space
    vec3 right = vec3(view[0][0], view[1][0], view[2][0]);
    vec3 up = vec3(view[0][1], view[1][1], view[2][1]);

    vec3 center = vec3(0.0f);
    float radius = 1.0f;

    // vertex position
    vec3 position = radius * vec3(coords, 0.0f);
    position = right * position.x + up * position.y;

    mat4 MVP = projection * view * model;
    gl_Position = MVP * vec4(center + position, 1.0f);
}

Fragment shader:

#version 450 core

uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;

in vec2 coords;
out vec4 pixel;

void main()
{
    float d = dot(coords, coords);

    if (d > 1.0f)
    {
        discard;
    }

    float z = sqrt(1.0f - d);

    vec3 normal = vec3(coords, z);
    normal = mat3(transpose((view * model))) * normal;

    vec3 center = vec3(0.0f);
    float radius = 1.0f;

    vec3 position = center + radius * normal;

    mat4 MVP = projection * view * model;

    // gl_DepthRange.diff value is the far value minus the near value

    vec4 clipPos = MVP * vec4(position, 1.0f);
    float ndcDepth = clipPos.z / clipPos.w;
    gl_FragDepth = ((gl_DepthRange.diff * ndcDepth) + gl_DepthRange.near + gl_DepthRange.far) * 0.5f;

    pixel = vec4((normal + 1.0f) * 0.5f, 1.0f);
}

I followed this example to calculate the depth value.

The model matrix that I pass to the shaders is an identity matrix, so it doesn't affect the operations.
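To make the depth mapping concrete, here is a small numeric check of my own (not part of the original shaders), assuming the default depth range of near = 0 and far = 1: the formula on the last line of the fragment shader reduces to the usual (ndcDepth + 1) / 2 mapping from NDC depth in [-1, +1] to window depth in [0, 1].

```python
# Mirror of the GLSL window-depth computation, assuming the default
# glDepthRange of near = 0.0 and far = 1.0 (never changed in the post).
def window_depth(ndc_z, near=0.0, far=1.0):
    diff = far - near  # gl_DepthRange.diff
    return (diff * ndc_z + near + far) * 0.5

# With the defaults this is exactly (ndc_z + 1) / 2.
for ndc_z in (-1.0, -0.5, 0.0, 0.5, 1.0):
    assert window_depth(ndc_z) == (ndc_z + 1.0) * 0.5

print(window_depth(-1.0), window_depth(1.0))  # → 0.0 1.0
```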

Thanks a lot for your help!!

Hm, that's weird. Upon first glance, that all looks rather sound to me. Just to make sure: you don't call glClipControl() anywhere in your program? — Michael Kenzel
Thanks for your answer! No, I never call that function. Nor do I touch the near and far values (which default to 0 and 1 respectively), so the last line of the depth calculation should be equivalent to gl_FragDepth = (ndcDepth + 1.0f) * 0.5f;, which simply maps depth values from the range [-1, +1] in NDC to the range [0, 1]. I think the code is almost correct, but something is missing, especially when the depth values of the objects are close. If you have any other ideas, or think it would be useful to check other parts of the code, please just tell me! Thanks a lot! — Arctic Pi
@ArcticPi: "I followed this example to calculate the depth value." But... you didn't use the rest of the tutorial. Like the page right before this, where it explains why the method you are trying to use doesn't work and that you should ray-trace the sphere's position instead. — Nicol Bolas
@NicolBolas Thanks for your answer! Yeah, I read all the pages of the tutorial. I solved the problem it mentions by multiplying the normal vectors by the transpose of the view-model matrix. In this way, I update the normal vectors as the camera moves and then use them for lighting. The ray-casting method explained in the tutorial has the drawback of having to enlarge the quad, and the author only suggests an approximate scale factor of 2, without showing that this value is suitable for all possible situations. — Arctic Pi
@ArcticPi: "I solved the problem it mentions by multiplying the normal vectors by the transpose of the view-model matrix." That doesn't actually fix your depth problem; it's still there, it simply is not apparent in your current environment. The reason the tutorial has to extend the size of the quad is because that's how reality works. Under a perspective projection, a sphere does not take up a square-sized area. Therefore, if you are going to present an impostor sphere, you have to extend the area. — Nicol Bolas

2 Answers


Your depth computations are fine. The problem you're having is essentially the same one outlined in the very tutorial you took your depth computations from. It's a different manifestation of that problem, but they all come from the same place: your computation of position is not correct relative to reality. So bad values are being fed to the depth computation; it's no surprise that you're not getting the right result.

This is an exaggerated, 2D representation of your basic problem:

[Image: a 2D diagram of a circle impostor with a close-up viewer, marking the points View, Point, Impostor, and Real]

We are at the View, looking at Point. According to your impostor code, we will compute the position for Point by essentially projecting it forward perpendicularly to the plane until it hits the sphere. That computation yields the point Impostor.

However, as you can see, that's not what reality shows. If we draw a line from Point to View (which represents what we should see from View in that direction), the first position on the sphere that we see is Real. And that's very far from Impostor.

Since the fundamental problem is that your impostor position computation is wrong, the only way to fix that is to use a correct impostor computation. And the correct way to do that is (one way or another) through ray-tracing of a sphere from View. Which is what the tutorial does.
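The geometric argument above can be checked numerically. This is my own 2D sketch (a unit circle at the origin and a hypothetical close-up viewer position, not values from the answer): the perpendicular projection used by the original impostor code and the true ray-circle intersection give clearly different surface points.

```python
import math

# A unit circle at the origin, viewed from a close-up, hypothetical
# camera position; "Point" lies on the impostor plane through the center.
view = (0.0, 3.0)
px = 0.8  # x coordinate of Point on the plane y = 0

# Impostor-style computation: push Point perpendicularly onto the circle.
impostor = (px, math.sqrt(1.0 - px * px))

# Reality: intersect the ray from View through Point with the circle
# x^2 + y^2 = 1, written as the quadratic a*t^2 + b*t + c = 0.
ox, oy = view
dx, dy = px - ox, 0.0 - oy
a = dx * dx + dy * dy
b = 2.0 * (ox * dx + oy * dy)
c = ox * ox + oy * oy - 1.0
t = (-b - math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)  # nearer root
real = (ox + t * dx, oy + t * dy)

# The two surface points differ noticeably, so their depths differ too.
print(impostor, real)
```

The gap between the two points grows as the viewer gets closer to the circle, which is exactly why the artifacts in the question appear only from some camera positions.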


Ok, I think I finally resolved the problem.

I used ray-tracing as suggested by @NicolBolas, following the tutorial, and this is the result.

[Image: the result after switching to the ray-traced impostor]

These are the vertex and fragment shaders for the impostor-sphere.

Vertex shader:

#version 330

out vec2 mapping;

uniform mat4 view;
uniform mat4 cameraToClipMatrix;

const float sphereRadius = 1.0f;
const vec3 worldSpherePos = vec3(0.0f);

const float g_boxCorrection = 1.5;

void main()
{
    vec2 offset;
    switch(gl_VertexID)
    {
    case 0:
        //Bottom-left
        mapping = vec2(-1.0, -1.0) * g_boxCorrection;
        offset = vec2(-sphereRadius, -sphereRadius);
        break;
    case 1:
        //Top-left
        mapping = vec2(-1.0, 1.0) * g_boxCorrection;
        offset = vec2(-sphereRadius, sphereRadius);
        break;
    case 2:
        //Bottom-right
        mapping = vec2(1.0, -1.0) * g_boxCorrection;
        offset = vec2(sphereRadius, -sphereRadius);
        break;
    case 3:
        //Top-right
        mapping = vec2(1.0, 1.0) * g_boxCorrection;
        offset = vec2(sphereRadius, sphereRadius);
        break;
    }

    vec3 cameraSpherePos = vec3(view * vec4(worldSpherePos, 1.0));

    vec4 cameraCornerPos = vec4(cameraSpherePos, 1.0);
    cameraCornerPos.xy += offset * g_boxCorrection;

    gl_Position = cameraToClipMatrix * cameraCornerPos;
}

Fragment shader:

#version 330

in vec2 mapping;

out vec4 outputColor;

uniform mat4 view;
uniform mat4 cameraToClipMatrix;

const float sphereRadius = 1.0f;
const vec3 worldSpherePos = vec3(0.0f);

uniform vec3 eye;

void Impostor(out vec3 cameraPos, out vec3 cameraNormal)
{
    vec3 cameraSpherePos = vec3(view * vec4(worldSpherePos, 1.0));

    vec3 cameraPlanePos = vec3(mapping * sphereRadius, 0.0) + cameraSpherePos;
    vec3 rayDirection = normalize(cameraPlanePos);

    float B = 2.0 * dot(rayDirection, -cameraSpherePos);
    float C = dot(cameraSpherePos, cameraSpherePos) - (sphereRadius * sphereRadius);

    float det = (B * B) - (4 * C);
    if(det < 0.0)
        discard;

    float sqrtDet = sqrt(det);
    float posT = (-B + sqrtDet)/2;
    float negT = (-B - sqrtDet)/2;

    float intersectT = min(posT, negT);
    cameraPos = rayDirection * intersectT;
    cameraNormal = normalize(cameraPos - cameraSpherePos);
}

void main()
{
    vec3 cameraPos;
    vec3 cameraNormal;

    Impostor(cameraPos, cameraNormal);

    //Set the depth based on the new cameraPos.
    vec4 clipPos = cameraToClipMatrix * vec4(cameraPos, 1.0);
    float ndcDepth = clipPos.z / clipPos.w;
    gl_FragDepth = ((gl_DepthRange.diff * ndcDepth) + gl_DepthRange.near + gl_DepthRange.far) / 2.0;

    // Rotate the normal back to world space (the transpose equals the
    // inverse for the pure-rotation part of the view matrix).
    cameraNormal = mat3(transpose(view)) * cameraNormal;

    outputColor = vec4((cameraNormal + 1.0f) * 0.5f, 1.0f);
}

I only slightly modified the shaders in the tutorial to manage camera movements.
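For reference, here is a CPU-side replica (my own sketch, with a hypothetical camera-space sphere position) of the quadratic solved in Impostor(). It confirms that taking the smaller root selects the front of the sphere, and that a negative discriminant corresponds to the discard case.

```python
import math

# Camera-space sphere, standing in for cameraSpherePos in the shader
# (hypothetical values: unit sphere 5 units in front of the camera).
sphere = (0.0, 0.0, -5.0)
radius = 1.0

def intersect(ray_dir):
    # |t*d - c|^2 = r^2 with unit d expands to t^2 + B*t + C = 0,
    # with exactly the B and C computed in the fragment shader.
    B = 2.0 * sum(d * -c for d, c in zip(ray_dir, sphere))
    C = sum(c * c for c in sphere) - radius * radius
    det = B * B - 4.0 * C
    if det < 0.0:
        return None  # ray misses the sphere: the shader discards
    sqrt_det = math.sqrt(det)
    return min((-B + sqrt_det) / 2.0, (-B - sqrt_det) / 2.0)  # nearer hit

print(intersect((0.0, 0.0, -1.0)))  # straight through the center → 4.0
print(intersect((1.0, 0.0, 0.0)))   # sideways → None (discard)
```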

I'd like to ask you some questions.

  • As you can see, I still multiply each fragment's normal by the transpose of the view matrix. Is that right? Does it guarantee that the sphere stays correctly "oriented" as I move the camera around it?

  • About the box-size correction: how can I be sure that enlarging the quad by 50% works in every situation? The tutorial mentions that, to find the exact size of the quad, it's necessary to take the viewport and the perspective matrix into consideration. Do you know how this calculation can be performed?

  • In this famous paper, they seem to obtain the same result without ray-tracing, but unfortunately I don't understand how they do it.

[Image: figure from the paper]

  • Now I need to render tens or hundreds of thousands of impostor spheres in the same scene, and I'm wondering whether the ray-tracing calculation is too expensive for that many spheres, on top of the cost of lighting. I know there is another technique called ray-marching that should be cheaper, but at the moment I don't know whether it is suitable in my case.
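On the box-correction question above, here is one hedged way to estimate the required scale. This is my own reasoning, not from the tutorial, and it only considers the on-axis tangent cone, ignoring off-axis perspective distortion: if the quad sits at the depth of the sphere's center, at distance d from the eye, the silhouette tangent directions subtend a half-angle of asin(r / d), so the quad's half-width must be at least d * tan(asin(r / d)) = r / sqrt(1 - (r / d)^2), i.e. a scale factor of 1 / sqrt(1 - (r / d)^2) over the radius r.

```python
import math

def required_scale(r, d):
    # Half-width of the quad divided by the radius, for a quad placed at
    # the sphere-center distance d (on-axis estimate only).
    return 1.0 / math.sqrt(1.0 - (r / d) ** 2)

# Far away the factor approaches 1, so 1.5 is plenty...
print(round(required_scale(1.0, 5.0), 3))   # ~1.021
# ...but it blows up as the camera approaches the sphere, so no fixed
# constant (1.5, 2, ...) is safe in every situation.
print(round(required_scale(1.0, 1.05), 3))  # ~3.28
```

Under this estimate, a fixed correction like 1.5 is safe only while the camera stays farther than about 1.34 radii from the sphere's center, which matches Nicol Bolas's point that the extension is a consequence of perspective, not an arbitrary fudge factor.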

Thanks a lot for your help! :)