I'm trying to add rasterization support to my ray tracing graphics engine. However, I was expecting to obtain the same image regardless of the approach (not counting shading, of course).

Instead, I'm getting the test model rendered at two different sizes, so I'm guessing it must have something to do with the way I cast rays for ray tracing, or the way I build the projection matrix for rasterization.

Here is my "camera" constructor (in charge of building the projection matrix):

Camera::Camera(float fLength, float fov, float targetRatio, float zNear, float zFar)
{
    focalLength = fLength;
    fieldOfView = fov;
    aspectRatio = targetRatio;
    scale = tan(fieldOfView * 0.5 * DEG_TO_RAD);

    viewMatrix = Matrix4<float>();

    projectionMatrix = Matrix4<float>();

    float distance = zFar - zNear;

    // Perspective projection: X/Y scaled by the FOV term (and aspect ratio), Z remapped to the near/far range
    projectionMatrix.xx = scale / aspectRatio;
    projectionMatrix.yy = scale;
    projectionMatrix.zz = -(zFar + zNear) / distance;
    projectionMatrix.zw = -1.0f;
    projectionMatrix.wz = -2.0f * zNear * zFar / distance;
    projectionMatrix.ww = 0.0f;

    //aperture = tan(fieldOfView / 2 * DEG_TO_RAD) * focalLength * 2;
    //fieldOfView = atan((aperture / 2) / focalLength) * 2 * RAD_TO_DEG;
}

This is how I cast rays from the framebuffer dimensions and the current pixel coordinates (the ray direction is computed per pixel, and the camera position is taken from the last row of the view matrix):

Ray Camera::castRay(unsigned int width, unsigned int height, unsigned int x, unsigned int y)
{
    // Map the pixel centre to [-1, 1] and scale by the half-FOV tangent (and aspect ratio for X)
    float dirX = (2 * (x + 0.5) / (float)width - 1) * aspectRatio * scale;
    float dirY = (1 - 2 * (y + 0.5) / (float)height) * scale;

    Vector3<float> dir = (Vector3<float>(dirX, dirY, -1.0) * viewMatrix).normalize();

    // Camera position is read from the translation (last) row of the view matrix
    return Ray(Vector3<float>(viewMatrix.wx, viewMatrix.wy, viewMatrix.wz), dir);
}

On the other hand, this is how I transform vertices to raster space in the rasterization approach:

    Vector4<float> vertexInRasterSpace;
    Vector4<float> vertexInCameraSpace;

    vertexInCameraSpace = currentVertex * camera.viewMatrix;
    vertexInRasterSpace = vertexInCameraSpace * camera.projectionMatrix;

    vertexInRasterSpace.x = std::min(width - 1, (int)((vertexInRasterSpace.x + 1) * 0.5 * width));
    vertexInRasterSpace.y = std::min(height - 1, (int)((1 - (vertexInRasterSpace.y + 1) * 0.5) * height));
    vertexInRasterSpace.z = -vertexInCameraSpace.z;

Finally, these are the results obtained:

- Ray tracing -> Ray tracing picture

- Rasterization -> Rasterization picture

Needless to say, both images use the same model-view matrix. Was I wrong in my initial assumption that I'd get the same (size) image?

~Sky

1 Answer

It depends on what the rasterization side is doing (are you feeding the vertices to a 3D graphics API, or doing your own rasterization?), but my first suspects are:

After multiplying by the perspective projection matrix, but before mapping to raster space, you need to perform the perspective divide: that is, divide the X, Y, and Z components of the vector by its W component. That step appears to be missing (see the sketch below).

If you are feeding the output vertices to a 3D graphics API, then scaling by the width and height of the window yourself may not be what you want; that mapping is part of the viewport transform in the hardware rendering pipeline.
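
To make the first point concrete, here is a minimal sketch of where the divide would slot into the rasterization snippet from the question. It reuses the Vector4<float>/Matrix4<float> types, the row-vector (v * M) multiplication order, and the variable names shown above; vertexInClipSpace and vertexInNdcSpace are hypothetical names introduced here. Treat it as an illustration under those assumptions, not a drop-in patch.

    // Hypothetical sketch (names and conventions assumed from the question above):
    // camera space -> clip space -> NDC via the perspective divide -> raster space.
    Vector4<float> vertexInClipSpace = vertexInCameraSpace * camera.projectionMatrix;

    // With the projection matrix shown above, the W component ends up holding -z of the
    // camera-space vertex, so this divide is what makes distant geometry appear smaller.
    Vector4<float> vertexInNdcSpace = vertexInClipSpace;
    if (vertexInClipSpace.w != 0.0f)
    {
        vertexInNdcSpace.x = vertexInClipSpace.x / vertexInClipSpace.w;
        vertexInNdcSpace.y = vertexInClipSpace.y / vertexInClipSpace.w;
        vertexInNdcSpace.z = vertexInClipSpace.z / vertexInClipSpace.w;
    }

    // NDC ([-1, 1] on each axis) -> raster space, as in the original code (std::min from <algorithm>)
    vertexInRasterSpace.x = std::min(width - 1, (int)((vertexInNdcSpace.x + 1) * 0.5f * width));
    vertexInRasterSpace.y = std::min(height - 1, (int)((1 - (vertexInNdcSpace.y + 1) * 0.5f) * height));
    vertexInRasterSpace.z = -vertexInCameraSpace.z;

If the vertices go to a hardware API instead, you would typically stop at clip space and let the API perform the perspective divide and the viewport transform (e.g. glViewport in OpenGL) for you.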