You will need to do a "final gather" in order to produce an image. If your ray tree is branching out from a light source, this will effectively "decorate" the leaves of the ray tree with an additional ray to the eye.
Of course, not every such ray will be valid: if the surface is facing away from the eye, or if it is occluded, then it should be rejected. Note that this method of generating a ray is like the "shadow" rays needed to determine illumination in regular ray tracing.
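The facing and occlusion tests above can be sketched as follows. This is a minimal illustration, not your tracer's actual API: `Vec3`, `connect_to_eye`, and the stubbed `occluded()` helper are all hypothetical names, and a real implementation would run the visibility test against the scene's acceleration structure.

```cpp
#include <cassert>
#include <cmath>

// Illustrative vector type; a real tracer would use its own.
struct Vec3 {
    double x, y, z;
};

static double dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

static Vec3 sub(const Vec3& a, const Vec3& b) {
    return {a.x - b.x, a.y - b.y, a.z - b.z};
}

// Placeholder visibility test: a real implementation would trace a
// shadow-style ray from the surface point toward the eye.
static bool occluded(const Vec3& /*from*/, const Vec3& /*to*/) {
    return false;
}

// Returns true (and fills to_eye) if the leaf vertex of the ray tree
// can be connected to the eye: the surface must face the eye and the
// path between them must be clear.
bool connect_to_eye(const Vec3& surface_point, const Vec3& normal,
                    const Vec3& eye, Vec3* to_eye) {
    *to_eye = sub(eye, surface_point);
    if (dot(*to_eye, normal) <= 0.0)   // surface faces away from the eye
        return false;
    if (occluded(surface_point, eye))  // blocked, like a shadow ray
        return false;
    return true;
}
```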
An additional problem is that your received rays will be in a random pattern, instead of the regular or well-distributed pattern conventional ray tracing provides. This means you will need to average and/or interpolate among the rays received by the camera, in order to get your pixel values.
I believe your pixel colors will be determined by a combination of the sample density and the color values of your samples; if so, you will want to make sure that your averaging/interpolation method provides that behavior. An initial approximation might simply add each incoming sample to the nearest pixel; a better one might "splat" a small additive decal for each incoming sample. A more sophisticated method could scale the size of the decal inversely with the local density of samples, while keeping the total integrated brightness proportional to the sample brightness.
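Here is one way the additive-decal splatting could look. Everything here is an illustrative assumption: the single-channel `Image` layout, the fixed decal radius, and the tent-shaped kernel are arbitrary choices, and the density-adaptive radius is left out for brevity. The key property is that the kernel weights are normalized so the total deposited energy equals the sample's brightness.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Illustrative single-channel accumulation buffer.
struct Image {
    int width, height;
    std::vector<double> pixels;
    Image(int w, int h) : width(w), height(h), pixels(w * h, 0.0) {}
};

// Deposit `brightness` at continuous screen position (sx, sy), spread
// over a (2r+1) x (2r+1) neighborhood with a tent weight, normalized so
// the total deposited energy equals `brightness` (clipped pixels excepted).
void splat(Image& img, double sx, double sy, double brightness, int r) {
    int cx = (int)std::floor(sx), cy = (int)std::floor(sy);
    // First pass: accumulate the kernel weights that land on-screen,
    // so we can normalize.
    double total_w = 0.0;
    for (int dy = -r; dy <= r; ++dy)
        for (int dx = -r; dx <= r; ++dx) {
            int px = cx + dx, py = cy + dy;
            if (px < 0 || px >= img.width || py < 0 || py >= img.height)
                continue;
            total_w += (r + 1 - std::abs(dx)) * (r + 1 - std::abs(dy));
        }
    if (total_w == 0.0) return;  // sample landed entirely off-screen
    // Second pass: deposit the normalized weights additively.
    for (int dy = -r; dy <= r; ++dy)
        for (int dx = -r; dx <= r; ++dx) {
            int px = cx + dx, py = cy + dy;
            if (px < 0 || px >= img.width || py < 0 || py >= img.height)
                continue;
            double w = (r + 1 - std::abs(dx)) * (r + 1 - std::abs(dy));
            img.pixels[py * img.width + px] += brightness * w / total_w;
        }
}
```

With r = 0 this degenerates to the "nearest pixel" approximation; making `r` grow where samples are sparse gives the density-adaptive variant described above.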
Edit: Given an incoming "eye" ray, you still need to determine which screen location that ray corresponds to. To do this, compute the "ViewProjection" matrix for the camera, the same matrix you would use for rasterization. This is actually the inverse of the process used for conventional ray tracing:
conventional ray tracing:
// find direction vector for given screen coordinates (x,y)
homog4vector homog_clip_coords( (x - x_offset) / x_resolution,
                                (y - y_offset) / y_resolution,
                                1.0,  // z-coordinate
                                1.0); // w-coordinate
homog4vector homog_world_coords = InverseViewProjectionMatrix * homog_clip_coords;
ray_vector_x = homog_world_coords.x / homog_world_coords.w - eye_x;
ray_vector_y = homog_world_coords.y / homog_world_coords.w - eye_y;
ray_vector_z = homog_world_coords.z / homog_world_coords.w - eye_z;
rasterization or "reverse" ray tracing:
// find screen coordinates for given source point "p"
homog4vector eye_ray_source(p.x, p.y, p.z, 1.0);
homog4vector homog_clip_coords = ViewProjectionMatrix * eye_ray_source;
screen_coords.x = x_offset + x_resolution * homog_clip_coords.x / homog_clip_coords.w;
screen_coords.y = y_offset + y_resolution * homog_clip_coords.y / homog_clip_coords.w;
Of course, not every incoming ray will be on-screen. Make sure to discard rays coming into the camera from behind:
if (homog_clip_coords.z < 0 || homog_clip_coords.w < 0)
{ /* reject ray */ }
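Putting the projection and the rejection test together, here is a self-contained worked example. The row-major `mul` helper, the `project_to_screen` name, and the deliberately simple pinhole matrix (camera at the origin looking down +z, so clip.w carries the depth) are all assumptions for illustration; your own ViewProjection matrix will differ.

```cpp
#include <cassert>
#include <cmath>

// Homogeneous 4-vector, matching the homog4vector used above.
struct Homog4 { double x, y, z, w; };

// Multiply a row-major 4x4 matrix by a homogeneous vector.
Homog4 mul(const double m[16], const Homog4& v) {
    return {
        m[0]  * v.x + m[1]  * v.y + m[2]  * v.z + m[3]  * v.w,
        m[4]  * v.x + m[5]  * v.y + m[6]  * v.z + m[7]  * v.w,
        m[8]  * v.x + m[9]  * v.y + m[10] * v.z + m[11] * v.w,
        m[12] * v.x + m[13] * v.y + m[14] * v.z + m[15] * v.w,
    };
}

// Returns false for rays arriving from behind the camera; otherwise
// fills (sx, sy) using the screen-coordinate formulas above.
bool project_to_screen(const double vp[16], const Homog4& p,
                       double x_offset, double y_offset,
                       double x_res, double y_res,
                       double* sx, double* sy) {
    Homog4 clip = mul(vp, p);
    if (clip.z < 0.0 || clip.w < 0.0)  // incoming from behind: reject
        return false;
    *sx = x_offset + x_res * clip.x / clip.w;
    *sy = y_offset + y_res * clip.y / clip.w;
    return true;
}
```

For example, with the pinhole matrix `{1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,1,0}` (clip = (x, y, z, z)), the world point (1, 0, 2) projects on-screen, while (0, 0, -2) is rejected because it lies behind the camera.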