0 votes

I have written two conventional ray tracers in the past but now I want to program a reverse ray tracer, which traces photons from the light source(s) to the eye point. After spending the last few days reading articles on this topic, I know I shouldn't try it but I want to do it anyway. The problem I am facing right now is how to compute the values for each pixel in the image I am trying to produce.

In normal ray tracing I shoot rays from the eye point through a pixel in the image plane into the scene, and after that branch of the ray tree has been traced I end up with a color value which is assigned to that pixel. I can't think of a way to reverse this process. If I only test whether each ray intersects the image plane, I would get a 180 degree field of view and a focal point right on the image plane (a blurry mess). So I have to test whether each ray passes through both the image plane and the eye point. But since the eye point is an infinitesimally small point, the chance of a ray hitting it is (almost) zero, which would result in no image at all.

So my question is: how do I compute the pixel values of an image to be rendered by tracing the photons from the light source?

Thanks in advance,
Jan

You should be aware that "photon tracing" is not a synonym for "ray tracing". In the field of computer graphics, "photon tracing" refers to a specific technique that stores ray hits in a 3-D database and uses them to improve statistical convergence compared to regular ray tracing. – comingstorm
@comingstorm: I know that photon tracing is not ray tracing, but there are so many names for different or equal things that I just might have used the wrong name. I meant: tracing single photons from the light source to the camera / eye point with no fancy optimizations. – sl0815
The terminology for this is somewhat confused: it's either called "reverse ray tracing" (because it is in the opposite direction from conventional ray tracing) or "forward ray tracing" (because it traces in the same direction as the light goes). – comingstorm

1 Answer

1 vote

You will need to do a "final gather" in order to produce an image. If your ray tree is branching out from a light source, this will effectively "decorate" the leaves of the ray tree with an additional ray to the eye.

Of course, not every such ray will be valid: if the surface is facing away from the eye, or if it is occluded, then it should be rejected. Note that this way of generating a ray is analogous to the "shadow" rays used to determine illumination in regular ray tracing.
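A rough sketch of that connection step might look like the following (assuming a hypothetical photon-hit record and an is_occluded() helper, neither of which comes from any particular API):

    // For each surface hit produced while tracing from the light,
    // try to connect it to the eye point ("final gather").
    vector3 to_eye = normalize(eye_position - hit.position);

    // Reject hits on surfaces facing away from the eye...
    if (dot(hit.normal, to_eye) <= 0.0)
        return;   // no contribution from this hit

    // ...and hits whose path to the eye is blocked (just like a shadow ray).
    if (is_occluded(hit.position, eye_position))
        return;

    // Otherwise evaluate the surface response toward the eye and splat the
    // result onto the image plane (see the projection code further down).
    color contribution = hit.power * brdf(hit, to_eye) * dot(hit.normal, to_eye);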

An additional problem is that your received rays will be in a random pattern, instead of the regular or well-distributed pattern conventional ray tracing provides. This means you will need to average and/or interpolate among the rays received by the camera, in order to get your pixel values.

I believe your pixel colors will be determined by a combination of the sample density and the color values of your samples; if so, you will want to make sure that your averaging/interpolation method provides that behavior. An initial approximation might simply add each incoming sample to the nearest pixel; a better one might "splat" a simple additive decal for each incoming sample. A more sophisticated method could scale the size of the decal proportionally to the local density of samples, while keeping the total integrated brightness proportional to the sample brightness.
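For instance, the nearest-pixel version might be as simple as the following sketch (image and sample_count are assumed accumulation buffers, not anything from a specific library):

    // Accumulate each incoming sample into the pixel it falls on.
    int px = (int)floor(screen_coords.x);
    int py = (int)floor(screen_coords.y);
    if (px >= 0 && px < image_width && py >= 0 && py < image_height)
    {
        image[py][px]        += contribution;   // running sum of sample energy
        sample_count[py][px] += 1;              // optional: track local density
    }
    // After tracing, normalize by the number of photons emitted (or by the
    // per-pixel counts, depending on which estimator you want).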


Edit: Given an incoming "eye" ray, you still need to determine which screen location it corresponds to. To do this, compute the "ViewProjection" matrix for the camera, the same matrix you would use for rasterization. This is actually the inverse of the process used for conventional ray tracing:

conventional ray tracing:
    // find direction vector for given screen coordinates (x,y)
    homog4vector homog_clip_coords( (x - x_offset) / x_resolution,
                                    (y - y_offset) / y_resolution,
                                    1.0,  // z-coordinate
                                    1.0); // w-coordinate
    homog4vector homog_world_coords = InverseViewProjectionMatrix * homog_clip_coords;
    ray_vector_x = homog_world_coords.x / homog_world_coords.w - eye_x;
    ray_vector_y = homog_world_coords.y / homog_world_coords.w - eye_y;
    ray_vector_z = homog_world_coords.z / homog_world_coords.w - eye_z;

rasterization or "reverse" ray tracing:
    // find screen coordinates for given source point "p"
    homog4vector eye_ray_source(p.x, p.y, p.z, 1.0);
    homog4vector homog_clip_coords = ViewProjectionMatrix * eye_ray_source;
    screen_coords.x = x_offset + x_resolution * homog_clip_coords.x / homog_clip_coords.w;
    screen_coords.y = y_offset + y_resolution * homog_clip_coords.y / homog_clip_coords.w;

Of course, not every incoming ray will be on-screen. Make sure to discard rays coming into the camera from behind:

    if (homog_clip_coords.z < 0 || homog_clip_coords.w < 0)
      { /* reject ray */ }
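
Rays that project outside the visible image also need to be discarded; with the clip-space convention above, one way to check this (a sketch, not the only possible convention) is:

    // Reject rays whose clip-space position lies outside the view volume.
    if (fabs(homog_clip_coords.x) > homog_clip_coords.w ||
        fabs(homog_clip_coords.y) > homog_clip_coords.w)
      { /* reject ray */ }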