1 vote

So the Wikipedia page for path tracing (http://en.wikipedia.org/wiki/Path_tracing) contains a naive implementation of the algorithm with the following explanation underneath:

"All these samples must then be averaged to obtain the output color. Note this method of always sampling a random ray in the normal's hemisphere only works well for perfectly diffuse surfaces. For other materials, one generally has to use importance-sampling, i.e. probabilistically select a new ray according to the BRDF's distribution. For instance, a perfectly specular (mirror) material would not work with the method above, as the probability of the new ray being the correct reflected ray - which is the only ray through which any radiance will be reflected - is zero. In these situations, one must divide the reflectance by the probability density function of the sampling scheme, as per Monte-Carlo integration (in the naive case above, there is no particular sampling scheme, so the PDF turns out to be 1)."

The part I'm having trouble understanding is the last sentence, about dividing the reflectance by the probability density function of the sampling scheme. I am familiar with PDFs, but I am not quite sure how they fit in here. If we stick to the mirror example, what would be the PDF value we would divide by, and why? How would I go about finding the PDF value to divide by if I was using an arbitrary BRDF such as the Phong reflection model or the Cook-Torrance reflection model? Lastly, why do we divide by the PDF instead of multiplying by it? If we divide, don't we give more weight to a direction with a lower probability?

2
I can only assume that it has to do with the fact that a higher reflectance implies that light is more likely to go in a particular direction. So you divide the reflectance by the PDF to narrow the possible results. – VoronoiPotato

2 Answers

3 votes
  1. Let's assume that we have only materials without color (greyscale). Then their BRDF at each point can be expressed as a single-valued function:

    float BRDF(float phi_in, float theta_in, float phi_out, float theta_out, Point pointWhereObjWasHit);
    

    Here, phi and theta are the azimuth and zenith angles of the two rays under consideration. For pure Lambertian reflection, this function would look like this:

    float lambertBRDF(float phi_in, float theta_in, float phi_out, float theta_out, Point pointWhereObjWasHit)
    {
        return albedo/pi * cos(theta_out);
    }
    

    albedo ranges from 0 to 1 - it measures how much of the incoming light is re-emitted. The factor 1/pi ensures that the integral of the BRDF over all outgoing vectors does not exceed 1 (a quick check of this follows below the code). With the naive approach of the Wikipedia article (http://en.wikipedia.org/wiki/Path_tracing), one can use this BRDF as follows:

    Color TracePath(Ray r, int depth) {
        /* .... */
        Ray newRay;
        newRay.origin = r.pointWhereObjWasHit;
        // uniform sampling over the hemisphere, as in the Wikipedia article
        newRay.direction = RandomUnitVectorInHemisphereOf(normal(r.pointWhereObjWasHit));
        Color reflected = TracePath(newRay, depth + 1);
        return emittance + reflected * lambertBRDF(r.phi, r.theta, newRay.phi, newRay.theta, r.pointWhereObjWasHit);
    }
    
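    As a quick sanity check on that 1/pi factor, integrate lambertBRDF over the hemisphere of outgoing directions with the solid-angle measure sin(theta_out) dtheta_out dphi_out:

        \int_0^{2\pi}\int_0^{\pi/2} \frac{\text{albedo}}{\pi}\,\cos\theta_{out}\,\sin\theta_{out}\,d\theta_{out}\,d\phi_{out} = \frac{\text{albedo}}{\pi}\cdot 2\pi\cdot\frac{1}{2} = \text{albedo} \le 1,

    so a surface never re-emits more light than it receives.
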
  2. As mentioned in the article and by Ross, this random sampling is unfortunate because it traces incoming directions (newRays) from which little light is reflected with the same probability as directions from which lots of light arrives. Instead, directions from which much light is reflected toward the observer should be selected preferentially, so that the sample rate per contribution to the final color is equal over all directions. For that, one needs a way to generate random rays from a probability distribution. Let's say there exists a function that can do that; it takes as input the desired PDF (which, ideally, should be equal to the BRDF) and the incoming ray:

    vector RandomVectorWithPDF(function PDF(p_i,t_i,p_o,t_o,point x), Ray incoming)
    {
        // This function is responsible for creating random rays emanating from x
        // with the probability distribution PDF. Depending on the complexity of
        // PDF, this might be somewhat involved. It is possible, however, to do it
        // for Lambertian reflection (how exactly is math, not programming; a
        // concrete sketch follows below):
        vector randomVector;
        if(PDF == lambertBRDF)
        {
            float phi = uniformRandomNumber(0, 2*pi);          // azimuth: uniform
            float rho = acos(sqrt(uniformRandomNumber(0, 1))); // zenith: cosine-weighted
            randomVector = getVectorFromAzimuthZenithAndNormal(phi, rho, normal(incoming.whereObjectWasHit));
        }
        else
        {
            // deal with other PDFs
        }
        return randomVector;
    }
    
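    To make the "math, not programming" part concrete, here is a minimal, self-contained C++ sketch of that Lambert branch. Vec3, the PI constant, and the frame construction are hypothetical helpers of mine, not part of the answer's pseudocode; only the acos(sqrt(u)) mapping above is taken from it:

    #include <cmath>
    #include <random>

    const double PI = 3.14159265358979323846;

    struct Vec3 { double x, y, z; };

    Vec3 normalize(Vec3 v) {
        double len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
        return {v.x/len, v.y/len, v.z/len};
    }

    Vec3 cross(Vec3 a, Vec3 b) {
        return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
    }

    // Returns a direction in the hemisphere around `normal`, distributed with a
    // PDF proportional to cos(zenith angle) -- i.e. matching the Lambert BRDF.
    Vec3 cosineSampleHemisphere(Vec3 normal, std::mt19937& rng) {
        std::uniform_real_distribution<double> uniform(0.0, 1.0);
        double phi = 2.0 * PI * uniform(rng);            // azimuth: uniform in [0, 2*pi)
        double rho = std::acos(std::sqrt(uniform(rng))); // zenith: cosine-weighted

        // Build an orthonormal frame (t, b, normal) around the surface normal.
        Vec3 helper = std::fabs(normal.x) > 0.9 ? Vec3{0, 1, 0} : Vec3{1, 0, 0};
        Vec3 t = normalize(cross(helper, normal));
        Vec3 b = cross(normal, t);

        // Convert the spherical angles (phi, rho) to a world-space direction.
        double s = std::sin(rho), c = std::cos(rho);
        return { t.x*std::cos(phi)*s + b.x*std::sin(phi)*s + normal.x*c,
                 t.y*std::cos(phi)*s + b.y*std::sin(phi)*s + normal.y*c,
                 t.z*std::cos(phi)*s + b.z*std::sin(phi)*s + normal.z*c };
    }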

    The code in the TracePath routine would then simply look like this:

    newRay.direction = RandomVectorWithPDF(lambertBRDF, r);
    Color reflected = TracePath(newRay, depth + 1);
    return emittance + reflected;
    

    Because the bright directions are preferred in the choice of samples, you do not have to weight them again by applying the BRDF as a scaling factor to reflected. However, if PDF and BRDF differ for some reason, you have to scale the output down wherever PDF > BRDF (you picked too many samples from that direction) and up wherever PDF < BRDF (you picked too few). In code:

    newRay.direction = RandomVectorWithPDF(PDF, r);
    Color reflected = TracePath(newRay, depth + 1);
    return emittance + reflected*BRDF(...)/PDF(...);
    

    The output is best, however, if BRDF/PDF is equal to 1.
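
    This correction factor is just the standard Monte Carlo weight, and it also answers the original "why divide?" question. Written out (standard importance sampling, not spelled out above): if the sample directions X are drawn with density p, then

        E\left[\frac{f(X)}{p(X)}\right] = \int \frac{f(x)}{p(x)}\,p(x)\,dx = \int f(x)\,dx,

    so dividing by the PDF is exactly what keeps the average unbiased: a low-probability direction is rarely chosen, so on the rare occasions it is chosen, it must count for more.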

  3. The question remains: why can't one always choose the perfect PDF, exactly equal to the BRDF? First, some random distributions are harder to compute than others. For example, if there were a slight variation in the albedo parameter, the algorithm would still do much better with the non-naive sampling than with uniform sampling, but the correction term BRDF/PDF would be needed for the slight variations. Sometimes, a perfect match is outright impossible. Imagine a colored object whose reflective behavior differs between red, green, and blue - you could either render in three passes, one for each color, or use an average PDF that fits all color components approximately, but none perfectly.

  4. How would one go about implementing something like Phong shading? For simplicity, I still assume that there is only one color component, and that the ratio of diffuse to specular reflection is 60% / 40% (the notion of ambient light makes no sense in path tracing). Then my code would look like this:

    if(uniformRandomNumber(0,1) < 0.6)    // diffuse reflection
    {
        newRay.direction = RandomVectorWithPDF(lambertBRDF, r);
        reflected = TracePath(newRay, depth + 1) / 0.6;
    }
    else                                  // specular reflection
    {
        newRay.direction = RandomVectorWithPDF(specularPDF, r);
        reflected = TracePath(newRay, depth + 1) * specularBRDF / specularPDF / 0.4;
    }
    return emittance + reflected;
    

Here specularPDF is a distribution with a narrow peak around the mirror-reflected ray (theta_in = theta_out, phi_in = phi_out + pi) for which a way to create random vectors is available, and specularBRDF returns the specular intensity from Phong's model (http://en.wikipedia.org/wiki/Phong_reflection_model). Note how the results are additionally divided by 0.6 and 0.4 respectively: choosing between the diffuse and specular branch is itself part of the sampling scheme, so its probability also ends up in the denominator.
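
One common way to realize such a specularPDF - a sketch under my own assumptions, not code from the answer - is to sample the angle alpha between the new ray and the mirror direction from the normalized lobe p = (n+1)/(2*pi) * cos(alpha)^n, where n is the Phong exponent. Reusing Vec3, normalize, cross, and PI from the C++ sketch above:

    // Hypothetical sketch: draw a direction from a normalized Phong lobe
    // around mirrorDir. Inverting the CDF 1 - cos(alpha)^(n+1) = u gives
    // cos(alpha) = (1-u)^(1/(n+1)); since u is uniform, u^(1/(n+1)) works too.
    Vec3 samplePhongLobe(Vec3 mirrorDir, double n, std::mt19937& rng) {
        std::uniform_real_distribution<double> uniform(0.0, 1.0);
        double alpha = std::acos(std::pow(uniform(rng), 1.0 / (n + 1.0)));
        double phi   = 2.0 * PI * uniform(rng);

        // Build an orthonormal frame around the mirror direction, then convert
        // (alpha, phi) to a world-space vector, as in cosineSampleHemisphere.
        Vec3 helper = std::fabs(mirrorDir.x) > 0.9 ? Vec3{0, 1, 0} : Vec3{1, 0, 0};
        Vec3 t = normalize(cross(helper, mirrorDir));
        Vec3 b = cross(mirrorDir, t);
        double s = std::sin(alpha), c = std::cos(alpha);
        return { t.x*std::cos(phi)*s + b.x*std::sin(phi)*s + mirrorDir.x*c,
                 t.y*std::cos(phi)*s + b.y*std::sin(phi)*s + mirrorDir.y*c,
                 t.z*std::cos(phi)*s + b.z*std::sin(phi)*s + mirrorDir.z*c };
    }

Samples that land below the surface are usually rejected or given zero contribution, since the Phong lobe can dip under the horizon.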

1 vote

I'm by no means an expert in ray tracing, but this seems to be classic Monte Carlo:

You have lots of possible rays, and you choose one uniformly at random, then average over lots of trials. The distribution you used to choose a ray was uniform (they were all equally likely), so you don't have to do any clever re-normalising.

However, perhaps there are lots of possible rays to choose from, but only a few would lead to useful results. We therefore bias towards picking those 'useful' possibilities with higher probability, and then re-normalise (we are not choosing the rays uniformly any more, so we cannot simply take the average). This is importance sampling.
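
A tiny, self-contained numerical illustration of that re-normalisation (my own toy example, not from this answer): estimate the integral of f(x) = x^2 over [0, 1] (exact value 1/3) by drawing samples from the density p(x) = 2x instead of uniformly, and dividing each sample by that density:

    #include <cmath>
    #include <cstdio>
    #include <random>

    int main() {
        std::mt19937 rng(42);
        std::uniform_real_distribution<double> uniform(0.0, 1.0);
        const int N = 100000;
        double sum = 0.0;
        for (int i = 0; i < N; ++i) {
            // Inverse-CDF sampling of p(x) = 2x: the CDF is x^2, so x = sqrt(u).
            double x = std::sqrt(uniform(rng));
            sum += (x * x) / (2.0 * x);  // weight each sample by f(x)/p(x)
        }
        std::printf("estimate = %f (exact = %f)\n", sum / N, 1.0 / 3.0);
    }

Because f/p = x/2 is nearly constant here, this converges with far less variance than uniform sampling; that is the whole point of choosing p close to f.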

The mirror example seems to be the following: only one possible ray will give a useful result, and if we choose a ray at random, the probability that we hit exactly that ray is zero. This is a property of probability on continuous spaces (strictly, the space is not continuous - it is implicitly discretised by your computer - so this is not quite true in practice): the probability of hitting one specific element among infinitely many must be zero.
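
The standard way to make this precise (not spelled out in this answer, but consistent with the quoted Wikipedia passage) is to say that a perfect mirror's BRDF is a Dirac delta centred on the reflected direction,

    f_r(\omega_{in}, \omega_{out}) \propto \delta(\omega_{out} - \text{reflect}(\omega_{in})),

so the only sampling scheme that works picks the reflected ray deterministically; the delta in the BRDF then cancels the delta in the PDF, and the ratio BRDF/PDF stays finite.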

Thus we would be re-normalising by something with probability zero - standard conditional probability definitions break down for events of probability zero - and that is where the problem comes from.