
I am writing a ray tracer from scratch. The example scene renders two spheres using ray-sphere intersection detection. When the spheres are close to the center of the screen, they look fine. However, when I move the camera, or if I adjust the spheres' positions so they are closer to the edge, they become distorted.

[screenshots: the spheres look correct near the center of the screen but appear stretched near the edges]

This is the ray casting code:

void Renderer::RenderThread(int start, int span)
{

    // pCamera holds the position, rotation, and fov of the camera
    // pRenderTarget is the screen to render to

    // calculate the camera space to world space matrix
    Mat4 camSpaceMatrix = Mat4::Get3DTranslation(pCamera->position.x, pCamera->position.y, pCamera->position.z) *
        Mat4::GetRotation(pCamera->rotation.x, pCamera->rotation.y, pCamera->rotation.z);

    // use the camera's origin as the ray's origin
    Vec3 origin(0, 0, 0);
    origin = (camSpaceMatrix * origin.Vec4()).Vec3();

    // this for loop loops over all the pixels on the screen
    for ( int p = start; p < start + span; ++p ) {

        // get the pixel coordinates on the screen
        int px = p % pRenderTarget->GetWidth();
        int py = p / pRenderTarget->GetWidth();

        // in ray tracing, ndc space is from [0, 1]
        Vec2 ndc((px + 0.75f) / pRenderTarget->GetWidth(), (py + 0.75f) / pRenderTarget->GetHeight());

        // in ray tracing, screen space is [-1, 1]
        Vec2 screen(2 * ndc.x - 1, 1 - 2 * ndc.y);

        // scale x by aspect ratio
        screen.x *= (float)pRenderTarget->GetWidth() / pRenderTarget->GetHeight();

        // scale screen by the field of view
        // fov is currently set to 90
        screen *= tan((pCamera->fov / 2) * (PI / 180));

        // the screen point is the pixel's position in camera space;
        // give it a z value of -1
        Vec3 camSpace(screen.x, screen.y, -1);
        camSpace = (camSpaceMatrix * camSpace.Vec4()).Vec3();

        // the ray's direction is its point on the camera's viewing plane
        // minus the camera's origin
        Vec3 dir = (camSpace - origin).Normalized();

        Ray ray = { origin, dir };

        // find where the ray intersects with the spheres
        // using ray-sphere intersection algorithm
        Vec4 color = TraceRay(ray);
        pRenderTarget->PutPixel(px, py, color);
    }

}
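
TraceRay itself is not shown here; for reference, this is roughly the shape of the intersection test (a simplified sketch with a hypothetical Sphere type and Dot helper, not my actual code):

// Returns the nearest positive hit distance along the ray, or -1 on a miss.
// Assumes ray.dir is normalized, so the quadratic's 'a' term is 1.
// (Sphere { Vec3 center; float radius; } and Dot are placeholders here;
//  sqrtf comes from <cmath>.)
float IntersectSphere(const Ray& ray, const Sphere& sphere)
{
    Vec3 oc = ray.origin - sphere.center;
    float b = 2.0f * Dot(oc, ray.dir);
    float c = Dot(oc, oc) - sphere.radius * sphere.radius;
    float disc = b * b - 4.0f * c;
    if (disc < 0.0f) return -1.0f;               // no real roots: the ray misses
    float t = (-b - sqrtf(disc)) / 2.0f;         // nearer of the two roots
    if (t < 0.0f) t = (-b + sqrtf(disc)) / 2.0f; // origin is inside the sphere
    return t;
}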

The FOV is set to 90. I have seen other people run into this problem, but in their cases it was because they were using a very high FOV value, and I don't think there should be issues at 90. The issue persists even if the camera is not moved at all; any object close to the edge of the screen appears distorted.

Are you sure this isn't just normal perspective distortions? Without a full scene for reference, normal perspective transformations can look weird. - François Andrieux
Projecting 3D into 2D will always have some distortion. As to how much depends on the field of view. Higher values mean more distortion at the edges. - tadman
If it is just normal perspective distortion, why is it showing up to such a high degree at only a 90 FOV? In a raster renderer I recently made, 90 looked great. The problem does start to go away at around 40 FOV, but I don't want to have to use such a low FOV. - Joey
90 degrees of FOV is actually pretty high. Like I said, if you add scenery or motion for reference it wouldn't look as obvious. I'm not sure why your raster renderer looked better; I suspect it is just a cognitive bias. In other words, you may be remembering it as better than it was, you may not have noticed it before, or the context made it less apparent. In my experience, when ray tracing goes bad it either goes very badly or goes imperceptibly badly. Edit: Can you put up the rastered version for comparison? - François Andrieux
ok. I thought 90 was sort of the standard - Joey
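
To quantify the distortion the comments describe: under an ideal pinhole projection, a small sphere at angle theta off the optical axis projects to an ellipse stretched by roughly 1/cos(theta) in the radial direction. A standalone sketch of that factor at the horizontal edge of the image for a few FOV values:

#include <cmath>
#include <cstdio>

// Prints ~1.06x for 40, ~1.15x for 60, ~1.41x for 90 degrees.
int main()
{
    const float PI = 3.14159265f;
    const float fovs[] = { 40.0f, 60.0f, 90.0f };
    for (float fov : fovs) {
        float edge = (fov / 2.0f) * (PI / 180.0f); // edge ray angle = fov / 2
        printf("fov %2.0f deg: edge stretch ~ %.2fx\n", fov, 1.0f / cosf(edge));
    }
}

At 40 degrees the stretch at the edge is barely 6%, which matches the observation above that the problem mostly goes away around that FOV.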

1 Answer


When in doubt, you can always check out what other renderers are doing. I always compare my results and settings to Blender. Blender 2.82, for example, has a default field of view of 39.6 degrees.
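
(That 39.6 degrees is just Blender's default 50 mm focal length converted to an angle over its default 36 mm sensor width; a quick sketch of the conversion:)

 #include <cmath>

 // fov = 2 * atan(sensorWidth / (2 * focalLength)), converted to degrees
 float FocalLengthToFovDeg(float focalMm, float sensorWidthMm = 36.0f)
 {
     return 2.0f * atanf(sensorWidthMm / (2.0f * focalMm)) * (180.0f / 3.14159265f);
 }
 // FocalLengthToFovDeg(50.0f) comes out to ~39.6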

I also feel inclined to point out that this is wrong:

 Vec2 ndc((px + 0.75f) / pRenderTarget->GetWidth(), (py + 0.75f) / pRenderTarget->GetHeight());

If you want to get the center of the pixel, then it should be 0.5f:

 Vec2 ndc((px + 0.5f) / pRenderTarget->GetWidth(), (py + 0.5f) / pRenderTarget->GetHeight());

Also, and this is really a nit-picky kind of thing, your intervals are open, not closed as stated in the source comments. The image plane coordinates never reach 0 or 1, and your camera space coordinates are never exactly -1 or 1. Eventually, when the image plane coordinates are converted to pixel coordinates, they cover the half-open intervals [0, width) and [0, height).
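
For example, with the 0.5f offset and a 640-pixel-wide target, the first and last pixel centers land strictly inside (0, 1):

 // px = 0   -> (0 + 0.5) / 640   = 0.00078125   (never exactly 0)
 // px = 639 -> (639 + 0.5) / 640 = 0.99921875   (never exactly 1)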

Good luck on your ray tracer!