CptS 548 (Advanced Computer Graphics)
Unit 3: Distribution Ray Tracing
Bob Lewis
School of Engineering and Applied Science, Washington State University
Spring, 2018
References
- Cook, R. L., Porter, T., and Carpenter, L., "Distributed Ray Tracing", SIGGRAPH 84.
- Cook, R. L., "Stochastic Sampling and Distributed Ray Tracing", in An Introduction to Ray Tracing (Glassner, ed.), Academic Press, 1989. (out of print, sigh!)
- Glassner, A., Principles of Digital Image Synthesis, Morgan Kaufmann, 1995.
- Jenkins, F. A. and White, H. E., Fundamentals of Optics, McGraw-Hill, 1957.
- Kolb, C., Mitchell, D., and Hanrahan, P., "A Realistic Camera Model for Computer Graphics", SIGGRAPH 95.
- Watt, A. and Watt, M., Advanced Animation and Rendering Techniques, Addison-Wesley, 1992.
What's Wrong with Basic Ray Tracing?
- Small objects and textures are aliased.
- Shadows are too sharp. (point and directional luminaires only)
- Everything is in focus. (not photorealistic)
- Moving objects are not blurred. (the O'Brien-Harryhausen effect)
And that's just for starters.
Solution: Distribution Ray Tracing
What is Distribution Ray Tracing?
Cast multiple rays instead of a single ray and combine the results to measure a distribution of light.
- It was originally called "distributed" ray tracing, but this was an unfortunate choice of words: it has nothing intrinsically to do with parallel ray tracing. Parallel ray tracers may or may not be distributed (on multiple nodes).
- Alternative term: "stochastic ray tracing". But distribution ray tracing isn't always stochastic.
Review: Camera.raytrace()
Recall this method's original version:

class Camera:
    ...
    method raytrace(camera, scene, width, height):
        image = Image(width, height)
        for i from 0 to width-1:
            for j from 0 to height-1:
                ray = camera.ray(width, height, i+0.5, j+0.5)
                image[i,j] = scene.trace(ray, 0.0)

Note that the ray goes through the middle (0.5, 0.5) of the pixel.
We said that we could combine the two method calls into one, but we don't, for good reason. Here it is...
A Distribution Ray Tracer: Top Level (I)

class Camera:
    ...
    method raytracepixel(camera, scene, imagerow, imagecolumn):
        result = Radiance(0,0,0)
        for (ray, weight) in camera.viewingraysandweights(imagerow, imagecolumn):
            result += weight * scene.traceray(ray, 0.0)
        return result

    method raytrace(camera, scene):
        image = Image(camera.width, camera.height)
        for i from 0 to camera.width-1:
            for j from 0 to camera.height-1:
                image[i,j] = camera.raytracepixel(scene, i, j)
A Distribution Ray Tracer: Top Level (II)
- The only change is the generation of multiple viewing rays, all with potentially different origins and directions, for each pixel. This is the good reason we didn't merge the Camera.viewingRay() and Scene.traceRay() methods in the basic ray tracer.
- camera.raytracepixel() computes a weighted sum of radiances for each pixel.
- This is for primary (view) rays. So now all we do is write camera.viewingraysandweights().
- We recover the old behavior with:

    method viewingraysandweights(camera, imagerow, imagecolumn):
        return [ (camera.viewingray(imagerow+0.5, imagecolumn+0.5), 1.0) ]
What Can Distribution Ray Tracing Do?
Depends on the kind of rays you cast:
- eye rays (shown above): antialiasing, depth-of-field, motion blur
- shadow rays: soft shadows from area luminaires
- reflection rays: glossy (rough mirror, e.g. sandblasted) surfaces, area luminaires
- refraction rays: murky (scattering) liquids
Unit 3: Distribution Ray Tracing
Part 1: Supersampling
Part 2: Depth of Field
Part 3: Motion Blur
Part 4: Soft Shadows (Penumbras)

What is a Distribution of Light?
Look at an enlargement of a single pixel:
[Figure: a single pixel in the (u, v) image plane]
A scene is a continuous radiance function L(u, v).
What the PA1 raytracer does about this...
Review: Basic Ray Tracing
It takes a single sample at the center of each pixel:

$$\mathrm{img}[i,j] = L\left(\tfrac{1}{2}, \tfrac{1}{2}\right)$$

...so it could miss quite a lot of detail. It would be better to average L over the pixel:

$$\mathrm{img}[i,j] = \int\!\!\int_{\mathrm{pixel}} L(u,v)\,du\,dv$$
Supersampling
...but that integral is very hard, if not impossible, to do analytically, so we approximate it with multiple samples:

$$\mathrm{img}[i,j] = \int\!\!\int_{\mathrm{pixel}} L(u,v)\,du\,dv \approx \frac{1}{N_{samp}} \sum_{k=0}^{N_{samp}-1} L(u_k, v_k)$$

where N_samp is the number of samples.
How do we pick the (u_k, v_k)? We can, for instance, choose a uniform N_anti × N_anti grid, so N_samp = N_anti × N_anti. (The accompanying figure showed N_anti = 4.)
Camera.viewingRaysAndWeights() for Regular Supersampling

class Camera:
    ...
    method viewingraysandweights(camera, imagerow, imagecolumn):
        result = []
        n = camera.npixelsamples
        n1d = sqrt(n)        # assume n is a perfect square
        duv = 1 / n1d        # sample spacing
        for i from 0 to n1d-1:
            for j from 0 to n1d-1:
                u = imagerow + (i + 0.5) * duv
                v = imagecolumn + (j + 0.5) * duv
                ray = camera.viewingray(u, v)
                result += [ (ray, 1/n) ]
        return result
Raytraced Images with Regular Supersampling
[Figure: images sampled 1×1 (i.e. basic), 2×2, and 4×4]
Camera.viewingRaysAndWeights() for Jittered Supersampling
What if we vary ("jitter") the ray origins by a small, random amount? This requires two teensy little changes (the rand01() calls below):

class Camera:
    ...
    method viewingraysandweights(camera, imagerow, imagecolumn):
        result = []
        n = camera.npixelsamples
        n1d = sqrt(n)        # assume n is a perfect square
        duv = 1 / n1d        # sample spacing
        for i from 0 to n1d-1:
            for j from 0 to n1d-1:
                u = imagerow + (i + rand01()) * duv
                v = imagecolumn + (j + rand01()) * duv
                ray = camera.viewingray(u, v)
                result += [ (ray, 1/n) ]
        return result
Raytraced Images with Jittered Supersampling
[Figure: jittered images sampled 1×1, 2×2, and 4×4]
To Jitter or Not to Jitter?
[Figure: images sampled 1×1, 2×2, and 4×4, each with and without jittering]
Jittered 8×8
[Figure: image sampled 8×8, jittered]
In terms of primary rays only, how much slower is our raytracer now?
Efficiency
- Let N_obj be the number of objects in the scene.
- Let N_lum be the number of luminaires.
- Let N_anti be the number of antialiasing (super)samples.
What is the (asymptotic) per-pixel (time) efficiency of supersampled ray tracing with shadows? (i.e., what goes inside the parens in O()?)

O(N_obj · N_lum · N_anti)
Adaptive Supersampling
One approach to reducing the N_anti factor is this...
[Figure: three stages of recursively subdividing a pixel in the (u, v) plane]
The idea: keep subdividing until there's only a small variation between the samples.
Camera.raytracePixel() for Adaptive Supersampling
To get adaptive supersampling, we modify Camera.raytracePixel() to call a new recursive pixel sampling method Camera.adaptivelyRaytracePixel():

class Camera:
    ...
    method raytracepixel(camera, scene, imagerow, imagecolumn):
        (du, dv) = camera.pixeldimensions()
        (u, v) = (imagerow + 0.5*du, imagecolumn + 0.5*dv)   # pixel center
        return camera.adaptivelyraytracepixel(scene, u, v, du, dv)
Camera.adaptivelyRaytracePixel()

class Camera:
    ...
    method adaptivelyraytracepixel(camera, scene, u, v, du, dv):
        (du, dv) = (du/2, dv/2)              # dimensions of each subcell
        uvsubs = []
        samples = []
        for (delu, delv) in ((-du/2,-dv/2), (du/2,-dv/2), (du/2,dv/2), (-du/2,dv/2)):
            usub = u + delu                  # center of one subcell
            vsub = v + delv
            uvsubs += [ (usub, vsub) ]
            rysub = camera.viewingray(usub, vsub)
            samples += [ scene.traceray(rysub, 0.0) ]
        if closeenough(samples):
            return sum(samples) / 4          # i.e., the mean
        else:
            total = Radiance(0,0,0)
            for (usub, vsub) in uvsubs:
                total += camera.adaptivelyraytracepixel(scene, usub, vsub, du, dv)
            return total / 4
What's Wrong with Adaptive Supersampling?
Adaptive supersampling is easy to understand, but there are some issues:
- What are the stopping criteria? How do we define closeenough()?
- It makes a questionable assumption: just because four samples are close (however we define it) is no indication that L(u, v) is uniform over the subpixel.
- It's still susceptible to sampling errors, which we'll cover in an upcoming unit.
Perhaps the best approach is a pragmatic one: for a particular scene, try adaptive supersampling and see if it works. One possible closeenough() is sketched below.
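For concreteness, here is a minimal sketch of one possible stopping criterion, assuming a Radiance behaves like an indexable RGB triple; the per-channel contrast threshold is a hypothetical choice of mine, not the course's definition:

def close_enough(samples, threshold=0.05):
    # One possible criterion: the spread of each radiance channel
    # across the four samples must stay below a fixed threshold.
    for channel in range(3):
        values = [sample[channel] for sample in samples]
        if max(values) - min(values) > threshold:
            return False
    return True

A relative (contrast-based) threshold, or one weighted by luminance, would be equally defensible; this is exactly the design question the slide raises.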
What We've Assumed So Far: The Pinhole Camera
[Figure: image plane, pinhole, and object]
A More Realistic Camera Model
[Figure: image plane, lens, and object]
But this geometry only holds when the object is in focus...
Lenses and Foci
A little more physics: a lens brings light rays to a focal point.
[Figure: a lens with focal length F focusing points Q̃_0, Q̃_1 at distance P onto Q̃_0′, Q̃_1′ at distance V_P]
Recall (I hope) the thin lens formula:

$$\frac{1}{P} + \frac{1}{V_P} = \frac{1}{F}$$

where F, the (intrinsic) focal length of the lens, relates P, the distance of an object from the lens, to V_P, the distance at which light rays from that object are focused.
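As a quick sanity check (numbers mine, not from the slides): with F = 50 mm and an object at P = 2000 mm, 1/V_P = 1/50 − 1/2000 = 0.0195 mm⁻¹, so V_P ≈ 51.3 mm. The image plane sits just beyond the focal length, and it moves as the object distance changes, which is why a lens must be focused.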
Depth-of-Field
[Figure: a rendered scene and its image plane]
Only the sphere is in relatively sharp focus. The other objects are more blurred.
Depth-of-Field Ray Geometry
[Figure: the pinhole is replaced with a lens; a pixel origin õ on the image plane, a lens point õ′, the pixel-lens direction d̃, the refracted direction d̃′, and the previous pinhole viewing ray]
Computing Depth-of-Field Rays
Here's the idea (from Cook's article in Glassner's An Introduction to Ray Tracing):
- We're given a pixel origin õ in the image plane. (This may be supersampled.)
- Choose a point õ′ on the lens, which will be the new ray origin.
- Compute the pixel-lens ray direction d̃ = õ′ − õ.
- Compute d̃′, the refraction of d̃, and use it as the new ray direction.
- Do this for a lot of õ′ points distributed over the lens and compute the mean of the results.
Lens Rays (I)
[Figure: the lens at the origin with axes r̂ and ẑ; the pixel origin õ at distance V_P behind the lens, a lens point õ′, the eye point ẽ, directions d̃ and d̃′, the focal length F, and the point C̃ at distance P in front of the lens]
Because of symmetry around the ẑ axis, this is taking place in the plane defined by õ, õ′, and ẽ (that's why we have the r̂ axis instead of x̂ or ŷ).
Lens Rays (II)
We find the lens ray direction by working backwards.
- õ is a focal point for all rays that originate at another point C̃ in front of the lens.
- Equivalently, light that reached õ from any point õ′ on the lens must have come from the direction of C̃, because that's how the lens bends the light.
- If we knew C̃, we could compute the lens ray direction d̃′ = C̃ − õ′. This is our goal, so how do we find C̃?
- An unbent ray passes through the origin at the center of the lens, leading, via similar triangles, to

$$\frac{\|\tilde{C}\|}{P} = \frac{\|\tilde{o}\|}{V_P}$$
Lens Rays (III)
õ is in the opposite direction from C̃ with respect to the origin, so:

$$\hat{C} = \frac{\tilde{C}}{\|\tilde{C}\|} = -\frac{\tilde{o}}{\|\tilde{o}\|}$$

Recall the thin lens formula:

$$\frac{1}{P} + \frac{1}{V_P} = \frac{1}{F}$$

and note:

$$V_P = \tilde{o} \cdot \hat{z}, \qquad P = \tilde{C} \cdot \hat{z}$$

and you have enough to solve for C̃, which you can then use to find d̃′. This is an exercise in algebra left to the reader. (Hint: Start with C̃ = ‖C̃‖ Ĉ.)
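A minimal sketch of how that algebra can come together, assuming ẑ points from the lens center toward the image plane (so C̃ lands at negative z) and treating points as plain (x, y, z) tuples; the function name opposing_focal_point and these sign conventions are assumptions of mine, not the course code:

def opposing_focal_point(o_tilde, F):
    # o_tilde: pixel origin on the image plane, as (x, y, z) with z = V_P > F
    # F: focal length of the lens (lens center at the origin)
    V_P = o_tilde[2]                    # V_P = o~ . z^
    # thin lens formula 1/P + 1/V_P = 1/F, solved for P:
    P = F * V_P / (V_P - F)
    # C~ = |C~| C^, with C^ = -o~/|o~| and |C~|/P = |o~|/V_P (similar
    # triangles), collapses to scaling o~ by -P/V_P:
    return tuple(-(P / V_P) * coord for coord in o_tilde)

Then d̃′ = C̃ − õ′ for each lens point õ′, as in Camera.lensRays() below.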
Adding Depth-of-Field

class Camera:
    ...
    method viewingraysandweights(camera, imagerow, imagecolumn):
        n = camera.npixelsamples
        n1d = sqrt(n)        # assume n is a perfect square
        duv = 1 / n1d
        result = []
        for i from 0 to n1d-1:
            for j from 0 to n1d-1:
                # assume jittering
                u = imagerow + (i + rand01()) * duv
                v = imagecolumn + (j + rand01()) * duv
                rays = camera.lensrays(u, v)
                weight = 1 / (len(rays) * n)
                for ray in rays:
                    result += [ (ray, weight) ]
        return result
Camera.lensRays()

class Camera:
    ...
    method lensrays(camera, imageu, imagev):
        pixelrayorigin = Point3D(imageu, imagev, camera.imagedistance)
        C = camera.opposingfocalpoint(pixelrayorigin)   # = C̃
        rays = []
        for lensrayorigin in camera.lenspoints():       # = õ′
            lensraydirection = C - lensrayorigin        # = d̃′
            cameraray = Ray(lensrayorigin, lensraydirection, 0, 1.0)
            rays += [ cameraray.transform(camera.cameratoscenetransform) ]
        return rays

Note that the rest of the raytracer doesn't even care that the rays don't all start from the same place!
Camera.lensPoints(): Sampling Lens Points
[Figure: a lens of diameter D sampled three ways: unstratified jittered, stratified, and stratified jittered]
Alternative to Shirley, ch. 12: generate test points (u_lens, v_lens) ∈ [−D/2, D/2] × [−D/2, D/2] (possibly stratified) and reject those that fall outside the lens circle until you get N_lens of them.
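A minimal sketch of that rejection approach, assuming Python's random module stands in for rand01(); the function name and the decision to return 2D lens-plane offsets are mine:

import random

def lens_points(n_lens, D):
    # Rejection sampling: draw candidates uniformly from the
    # [-D/2, D/2] x [-D/2, D/2] square and keep only those inside
    # the lens circle of diameter D, until we have n_lens of them.
    points = []
    radius2 = (D / 2) ** 2
    while len(points) < n_lens:
        u = random.uniform(-D / 2, D / 2)
        v = random.uniform(-D / 2, D / 2)
        if u * u + v * v <= radius2:
            points.append((u, v))
    return points

Since the circle fills π/4 ≈ 79% of the square, the loop rejects roughly one candidate in five, so the cost overhead is modest.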
Aside: Adjusting Lens Diameter for a Real Camera
The (effective) lens diameter D is usually given by D = F/f, where f is the f-stop of the lens, often adjusted by a diaphragm on mechanical cameras.
[Photo: a camera lens barrel with its f-stop markings]
(There's a pattern to these numbers.)
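For example (numbers mine): a 50 mm lens at f/2 has D = 50/2 = 25 mm, while stopping down to f/2.8 gives D ≈ 17.9 mm. The pattern on the barrel, 1, 1.4, 2, 2.8, 4, 5.6, ..., is successive powers of √2, so each stop halves the lens area and hence the light admitted.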
Son of Efficiency
- Let N_obj be the number of objects in the scene.
- Let N_lum be the number of luminaires.
- Let N_anti be the number of antialiasing (super)samples.
- Let N_lens be the number of lens samples.
What is the per-pixel (time) efficiency of supersampled ray tracing with shadows and depth-of-field effects?

O(N_obj · N_lum · N_anti · N_lens)
What is Motion Blur?
- Objects can move. (This should not come as a major shock.)
- If we take a picture of a rapidly-moving object with a real camera with a real (even electronic) shutter, it will appear blurred in the direction of motion.
Why Haven't We Seen Motion Blur Yet?
The image function we see is not just L(u, v), but L(u, v, t): pixel radiance changes over time. Up until now, we've been assuming an instantaneous shutter at time t_0:

$$\mathrm{img}[i,j] = \int\!\!\int_{\mathrm{pixel}\ i,j} L(u, v, t_0)\,du\,dv \approx \frac{1}{N_{anti}} \sum_{k=0}^{N_{anti}-1} L(u_k, v_k, t_0)$$

There is no motion blur possible here.
(Motion blur is not just a problem for computer graphics. What do Willis O'Brien, George Pal, Ray Harryhausen, Will Vinton, and Nick Park have in common?)
Sampling for Motion Blur
But we could take account of motion blur by enhancing our model to show the effects of motion (making it kinematic), changing our object (and camera) positions as a function of time, and computing:

$$\mathrm{img}[i,j] = \frac{1}{T} \int_0^T \int\!\!\int_{\mathrm{pixel}\ i,j} L(u, v, t)\,du\,dv\,dt$$

Or, in terms of samples:

$$\mathrm{img}[i,j] \approx \frac{1}{N_{anti} N_{blur}} \sum_{l=0}^{N_{blur}-1} \sum_{k=0}^{N_{anti}-1} L(u_k, v_k, t_l)$$
Implementing Motion Blur

class Camera:
    ...
    method raytracepixel(camera, scene, imagerow, imagecolumn):
        result = Radiance(0,0,0)
        dt = 1.0 / camera.ntimesamples   # 0 <= t <= 1
        for l from 0 to camera.ntimesamples-1:
            t = (l + 0.5) * dt           # middle of time slot
            scene.settime(t)             # position and orient objects
            camera.settime(t)            # position and orient camera
            for (viewingray, weight) in camera.viewingraysandweights(imagerow, imagecolumn):
                result += weight * scene.traceray(viewingray, 0.0)
        return result / camera.ntimesamples

Is there such a thing as temporal aliasing? Yes, and it's got a name: the wagon wheel effect.
Enhancement: Shutter Area
So far we've assumed that the shutter is either 100% open (between times 0 and 1) or closed. We could modify this with a sampled filter function w(t) defined on 0 ≤ t ≤ 1:
[Figure: graph of a filter function w(t), rising from 0 at t = 0 and falling back to 0 at t = 1]
Doing this is pretty straightforward:

$$\mathrm{img}[i,j] \approx \frac{1}{N_{anti}} \, \frac{\sum_{l=0}^{N_{blur}-1} w_l \sum_{k=0}^{N_{anti}-1} L(u_k, v_k, t_l)}{\sum_{l=0}^{N_{blur}-1} w_l}$$
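As a concrete illustration, a minimal sketch of weighting the time samples from Camera.raytracePixel() with a (hypothetical) tent-shaped shutter filter; both the filter choice and the names are mine:

def shutter_weight(t):
    # Hypothetical tent filter: fully open at t = 0.5, closed at t = 0 and 1.
    return 1.0 - abs(2.0 * t - 1.0)

def time_samples_and_weights(n_time_samples):
    # Sample the middle of each time slot, as in raytracePixel(), and
    # normalize the filter weights so they sum to 1 (the w_l / sum(w_l)
    # factor in the formula above).
    dt = 1.0 / n_time_samples
    times = [(l + 0.5) * dt for l in range(n_time_samples)]
    weights = [shutter_weight(t) for t in times]
    total = sum(weights)
    return [(t, w / total) for (t, w) in zip(times, weights)]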
The Return of the Son of Efficiency
- Let N_obj be the number of objects in the scene.
- Let N_lum be the number of luminaires.
- Let N_anti be the number of antialiasing (super)samples.
- Let N_lens be the number of lens samples.
- Let N_blur be the number of time samples (per frame).
What is the per-pixel (time) efficiency of supersampled ray tracing with shadows, depth-of-field, and motion blur effects?

O(N_obj · N_lum · N_anti · N_lens · N_blur)
What is a Penumbra?
[Figure: an area light source and an obstruction casting an umbra (fully shadowed region) surrounded by a penumbra (partially shadowed region)]
Soft shadows arise because most luminaires in the physical world aren't point or directional. This is different from a spotlight (and harder to do).
Approximating Soft Shadows
The idea: instead of one shadow ray, cast N_shdw shadow rays at a set of positions on each area luminaire. Select luminaire positions (hence, shadow ray directions) using randomization and/or stratification (as we did with lens ray origins). A sketch of one such luminaire.shadowRays() follows.
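A minimal sketch of what a luminaire's shadowRays() might look like for a rectangular area light, using stratified jittered sampling as just described. The rectangle representation (one corner plus two edge vectors), the tuple-math helpers, and the stand-in Ray are assumptions of mine, not the course's code:

import random
from collections import namedtuple

Ray = namedtuple("Ray", ["origin", "direction"])   # stand-in for the course's Ray

def add(a, b):   return tuple(x + y for x, y in zip(a, b))
def sub(a, b):   return tuple(x - y for x, y in zip(a, b))
def scale(s, a): return tuple(s * x for x in a)

def shadow_rays(corner, edge_u, edge_v, p, n1d=4):
    # Stratified, jittered sampling of a rectangular luminaire defined by
    # one corner and two edge vectors: one jittered sample per cell of an
    # n1d x n1d grid, each turned into a shadow ray from the surface point p.
    rays = []
    for i in range(n1d):
        for j in range(n1d):
            s = (i + random.random()) / n1d
            t = (j + random.random()) / n1d
            q = add(corner, add(scale(s, edge_u), scale(t, edge_v)))
            rays.append(Ray(p, sub(q, p)))
    return rays

Here N_shdw = n1d², mirroring the N_anti × N_anti grid used for pixel supersampling.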
Material.illuminate() (updated)

class Material:
    ...
    method illuminate(material, intersection, incidentray, scene):
        radiance = material.indirectradiance(intersection, incidentray, scene)
        towardsviewer = -incidentray.direction.normalized()   # = v̂
        p = intersection.p                                    # = p̃
        normal = intersection.getnormal()                     # = n̂
        for luminaire in scene.luminaires:
            radiance += meandirectradiancefromluminaire(material, luminaire, p, normal, towardsviewer, scene)
        return radiance
Material.meanDirectRadianceFromLuminaire()

class Material:
    ...
    method meandirectradiancefromluminaire(material, luminaire, p, normal, towardsviewer, scene):
        shadowrays = luminaire.shadowrays(p)
        radiance = Radiance(0, 0, 0)
        for shadowray in shadowrays:
            radiance += directradiancefromluminaire(material, luminaire, shadowray, normal, towardsviewer, scene)
            # Warning: shaky illumination calculation here
        radiance /= len(shadowrays)   # take the mean (?)
        return radiance
Material.directRadianceFromLuminaire()

class Material:
    ...
    method directradiancefromluminaire(material, luminaire, ray, normal, towardsviewer, scene):
        if ray.direction.dot(normal) > 0:
            intersection = scene.firstintersection(ray, EPSILON)
            if intersection == None or luminaire.iscloser(intersection.p, ray.origin):
                return material.directradiance(towardsviewer, normal, luminaire)
        return Radiance(0, 0, 0)

(Annotations on the slide mark part of this test as optional and note that the shadow ray direction supplies the towardslight vector needed by directradiance().)
The Return of the Son of Efficiency's Daughter
- Let N_obj be the number of objects in the scene.
- Let N_lum be the number of luminaires.
- Let N_anti be the number of antialiasing (super)samples.
- Let N_lens be the number of lens samples.
- Let N_blur be the number of time samples (per frame).
- Let N_shdw be the number of shadow rays cast per luminaire.
What is the per-pixel (time) efficiency of supersampled ray tracing with soft shadows, depth-of-field, and motion blur effects?

O(N_obj · N_lum · N_anti · N_lens · N_blur · N_shdw)

Even if each of these numbers is small, their product may not be. How can we work around this?
Multidimensional Sampling
[Figure: one precomputed sample table per parameter: ray origins (N_anti × N_anti), lens points (N_lens), time steps, and luminaire points (N_lum)]
Randomly chosen parameters for antialiasing (ray origin), depth-of-field (lens position), motion blur (time), and soft shadows (position on light source) don't correlate, so they can all be chosen at random. Create a table for each parameter. For each of N_adms samples, choose one entry from column A, one from column B, etc. (A sketch follows.)
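A minimal sketch of that table-combining scheme, with hypothetical names; shuffling each column independently is one simple way to keep the pairings uncorrelated:

import random

def combined_samples(columns, n_adms):
    # columns: one precomputed sample table per parameter (ray origins,
    # lens points, time steps, luminaire points). Shuffle each column
    # independently, then read one entry per column for each combined sample.
    shuffled = [random.sample(col, len(col)) for col in columns]
    return [tuple(col[k % len(col)] for col in shuffled)
            for k in range(n_adms)]

Each combined sample then drives one traced ray, so the cost grows with N_adms rather than with the product of the per-effect sample counts.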
Efficiency: The Final Chapter (For Now)
- Let N_obj be the number of objects in the scene.
- Let N_lum be the number of luminaires.
- Let N_adms be the number of samples for anti-aliasing, depth-of-field, motion blur, and soft shadow effects.
What is the per-pixel (time) efficiency of such a ray tracer? By the same accounting as before:

O(N_obj · N_lum · N_adms)

The next unit will cover how to make ray tracing (even) more efficient.