Path Tracer
As part of my CS224 coursework, I implemented Jim Kajiya’s path tracing algorithm. I used G3D to help with scene intersections, specular reflections, and refraction impulses. I also implemented depth of field and intelligent Russian Roulette sampling.
The Light Transport Equation
The light transport equation describes the radiance leaving a specific point in a scene: it is the sum of the radiance emitted by the surface at that point and the radiance arriving from every incoming direction, weighted by the surface's reflectance and redirected toward the viewer. Path tracing is one algorithm that tries to solve this equation.
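For reference, the equation in its standard hemispherical form (the notation here is the textbook convention, not taken from the original writeup):

    L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot n)\, d\omega_i

where L_o is the radiance leaving point x in direction \omega_o, L_e is the emitted radiance, f_r is the BRDF, L_i is the radiance arriving from direction \omega_i, and n is the surface normal at x.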
Path Tracing
A path tracer is a simple but effective unbiased Monte Carlo algorithm. It generates random paths that start at the camera, bounce through the scene, and terminate at the light sources in the scene. Because a path could in principle bounce forever, I used Russian Roulette to probabilistically terminate paths once they exceed a fixed number of bounces, which bounds the expected work without biasing the result. The final estimate averages many of these randomly traced paths per pixel, which is far more efficient than the naive approach of branching into many rays at every bounce. For the purposes of this project, I assumed there was no participating media (like fog) in the scene.
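To make that structure concrete, here is a minimal sketch of the recursive estimator with Russian Roulette. The types (Vec3, Ray, Hit, Scene), the roulette cutoff depth, and the survival probability are placeholders chosen for illustration rather than the project's actual G3D-based code, and Hit::throughput is assumed to already bundle the BRDF, cosine term, and sampling pdf for the chosen bounce direction.

    #include <cmath>
    #include <cstdlib>

    struct Vec3 {
        float x = 0, y = 0, z = 0;
        Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
        Vec3 operator*(const Vec3& o) const { return {x * o.x, y * o.y, z * o.z}; }
        Vec3 operator*(float s)       const { return {x * s, y * s, z * s}; }
    };

    static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    static float uniformRandom() { return std::rand() / (RAND_MAX + 1.0f); }

    // Uniform sample on the hemisphere around `normal` (rejection sampling).
    static Vec3 sampleHemisphere(const Vec3& normal) {
        Vec3 d;
        do {
            d = { 2 * uniformRandom() - 1, 2 * uniformRandom() - 1, 2 * uniformRandom() - 1 };
        } while (dot(d, d) > 1.0f || dot(d, d) < 1e-6f);
        d = d * (1.0f / std::sqrt(dot(d, d)));
        return dot(d, normal) < 0 ? d * -1.0f : d;   // flip into the upper hemisphere
    }

    // What an intersection reports. `throughput` is assumed to bundle the
    // BRDF, cosine term, and sampling pdf for the sampled bounce direction.
    struct Hit { Vec3 position, normal, emitted, throughput; bool valid = false; };

    struct Ray { Vec3 origin, direction; };

    // Hypothetical scene interface; in the real project G3D handled intersections.
    struct Scene { Hit (*intersect)(const Ray&); };

    Vec3 traceRay(const Scene& scene, const Ray& ray, int depth) {
        Hit hit = scene.intersect(ray);
        if (!hit.valid) return Vec3();                    // ray escaped the scene

        // After a fixed number of bounces, apply Russian Roulette:
        // terminate with probability (1 - p), and divide surviving
        // contributions by p so the estimator remains unbiased.
        float weight = 1.0f;
        const int rouletteDepth = 3;                      // assumed cutoff
        const float p = 0.7f;                             // assumed survival probability
        if (depth >= rouletteDepth) {
            if (uniformRandom() > p) return hit.emitted;  // terminate: keep emission only
            weight = 1.0f / p;
        }

        // Follow exactly one bounce direction (no branching) and recurse.
        Ray bounce { hit.position, sampleHemisphere(hit.normal) };
        return hit.emitted + hit.throughput * traceRay(scene, bounce, depth + 1) * weight;
    }

The important property is that exactly one bounce ray is traced per intersection, so the cost of a path grows linearly with its length instead of exponentially.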
Path tracing with a low sample count produces a grainy image (as can be seen at the beginning of the gif to the left). When executed with a high sample count, the path tracing integrator ultimately converges to a sharp, photorealistic image. Although path tracing is time intensive, lighting effects like soft shadows fall out of solving the rendering equation automatically, with no special handling. The gif to the left also shows how path tracing handles traditionally difficult cases, like the interactions between mirrors and glass objects.
Depth of Field
One benefit of path tracing is that it makes effects like depth of field easy to implement. With the addition of two parameters, focal length and aperture size, I was able to adjust the algorithm to handle depth of field. The focal length determines how far away from the camera objects must be in order to be in focus, and the aperture size determines how blurry out-of-focus objects appear. Given a ray's origin, direction, and the focal length, the focal point is simply the point that lies along the ray at the focal length's distance from the origin. Objects that intersect the focal plane are in focus; everything else is blurred.
To blur out-of-focus objects, I jittered the ray origin by a fraction of the aperture size: for each component of an offset vector, I drew a random number between -0.5 and 0.5, multiplied it by the aperture size, and added the resulting delta to the ray origin. The jittered ray should still pass through the focal point, so I recomputed the ray direction from the new origin toward the same fixed focal point. Once this ray manipulation was done, the algorithm proceeded as normal and depth of field became apparent.
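A minimal sketch of that ray manipulation, reusing the same kind of placeholder Vec3 and Ray types as above (again, an illustration under those assumptions rather than the project's actual code):

    #include <cmath>
    #include <cstdlib>

    struct Vec3 {
        float x = 0, y = 0, z = 0;
        Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
        Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
        Vec3 operator*(float s)       const { return {x * s, y * s, z * s}; }
    };

    struct Ray { Vec3 origin, direction; };

    static float uniformRandom() { return std::rand() / (RAND_MAX + 1.0f); }

    static Vec3 normalize(const Vec3& v) {
        float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
        return v * (1.0f / len);
    }

    // Perturb a camera ray for depth of field. `focalLength` is the distance
    // from the camera at which objects appear sharp; `apertureSize` scales the
    // jitter (a larger aperture makes out-of-focus objects blurrier).
    Ray applyDepthOfField(const Ray& ray, float focalLength, float apertureSize) {
        // The focal point lies along the original ray at the focal distance.
        Vec3 focalPoint = ray.origin + ray.direction * focalLength;

        // Jitter each component of the origin by a random value in
        // [-0.5, 0.5] scaled by the aperture size.
        Vec3 delta = { (uniformRandom() - 0.5f) * apertureSize,
                       (uniformRandom() - 0.5f) * apertureSize,
                       (uniformRandom() - 0.5f) * apertureSize };
        Vec3 newOrigin = ray.origin + delta;

        // The jittered ray must still pass through the focal point, so the
        // direction is recomputed from the new origin toward that point.
        return { newOrigin, normalize(focalPoint - newOrigin) };
    }

Points on the focal plane are hit by every jittered ray regardless of the offset, while points off the plane are hit by rays with slightly different origins, which averages out to a blur over many samples.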