
CptS 548 (Advanced Computer Graphics)
Unit 3: Distribution Ray Tracing

Bob Lewis
School of Engineering and Applied Science
Washington State University
Spring, 2018

References

- Cook, R. L., Porter, T., and Carpenter, L., "Distributed Ray Tracing", SIGGRAPH '84.
- Cook, R. L., "Stochastic Sampling and Distributed Ray Tracing", in An Introduction to Ray Tracing (Glassner, ed.), Academic Press, 1989. (out of print, sigh!)
- Glassner, A., Principles of Digital Image Synthesis, Morgan Kaufmann, 1995.
- Jenkins, F. A. and White, H. E., Fundamentals of Optics, McGraw-Hill, 1957.
- Kolb, C., Hanrahan, P., and Mitchell, D., "A Realistic Camera Model for Computer Graphics", SIGGRAPH '95.
- Watt, A. and Watt, M., Advanced Animation and Rendering Techniques, Addison-Wesley, 1992.

What's Wrong with Basic Ray Tracing?

- Small objects and textures are aliased.
- Shadows are too sharp (point and directional luminaires only).
- Everything is in focus (not photorealistic).
- Moving objects are not blurred (the O'Brien-Harryhausen effect).

And that's just for starters. Solution: Distribution Ray Tracing.

What is Distribution Ray Tracing?

- Cast multiple rays instead of a single ray and combine the results to measure a distribution of light.
- It was originally called "distributed" ray tracing, but this was an unfortunate choice of words: it has nothing intrinsically to do with parallel ray tracing. Parallel ray tracers may or may not be distributed (on multiple nodes).
- Alternative term: "stochastic ray tracing". But distribution ray tracing isn't always stochastic.

review: Camera.raytrace()

Recall this method's original version:

    class Camera:
        ...
        method raytrace(camera, scene, width, height):
            image = Image(width, height)
            for i from 0 to width - 1:
                for j from 0 to height - 1:
                    ray = camera.ray(width, height, i + 0.5, j + 0.5)
                    image[i, j] = scene.trace(ray, 0.0)

Note that the ray goes through the middle (0.5, 0.5) of the pixel. We said that we could combine the two method calls into one, but we don't, for good reason. Here it is...

A Distribution Ray Tracer: Top Level (I)

    class Camera:
        ...
        method raytracePixel(camera, scene, imageRow, imageColumn):
            result = Radiance(0, 0, 0)
            for (ray, weight) in camera.viewingRaysAndWeights(imageRow, imageColumn):
                result += weight * scene.traceRay(ray, 0.0)
            return result

        method raytrace(camera, scene):
            image = Image(camera.width, camera.height)
            for i from 0 to camera.width - 1:
                for j from 0 to camera.height - 1:
                    image[i, j] = camera.raytracePixel(scene, i, j)

A Distribution Ray Tracer: Top Level (II)

The only change is the generation of multiple viewing rays, all with potentially different origins and directions, for each pixel. This is the good reason we didn't merge the Camera.viewingRay() and Scene.traceRay() methods in the basic ray tracer. Camera.raytracePixel() computes a weighted sum of radiances for each pixel. This is for primary (view) rays. So now all we do is write Camera.viewingRaysAndWeights(). We recover the old behavior with:

    def viewingRaysAndWeights(camera, imageRow, imageColumn):
        return [ (camera.viewingRay(imageRow + 0.5, imageColumn + 0.5), 1.0) ]
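To make the pseudocode concrete, here's a minimal, runnable Python sketch of the same top-level structure. The Ray, Scene, and Camera classes below are toy stand-ins (scalar "radiance", an orthographic viewing ray), not the PA1 classes:

    import math

    class Ray:
        def __init__(self, origin, direction):
            self.origin = origin          # (x, y, z)
            self.direction = direction    # (dx, dy, dz)

    class Scene:
        def trace_ray(self, ray):
            # Toy radiance: a smooth function of where the ray starts.
            u, v, _ = ray.origin
            return 0.5 + 0.5 * math.sin(u) * math.cos(v)

    class Camera:
        def __init__(self, width, height):
            self.width, self.height = width, height

        def viewing_ray(self, u, v):
            # Orthographic stand-in: one ray per (u, v), pointing down +z.
            return Ray((u, v, 0.0), (0.0, 0.0, 1.0))

        def viewing_rays_and_weights(self, row, col):
            # Basic behavior: one ray through the pixel center, weight 1.
            return [(self.viewing_ray(row + 0.5, col + 0.5), 1.0)]

        def raytrace_pixel(self, scene, row, col):
            # Weighted sum of radiances, exactly as in the pseudocode.
            return sum(w * scene.trace_ray(r)
                       for r, w in self.viewing_rays_and_weights(row, col))

        def raytrace(self, scene):
            return [[self.raytrace_pixel(scene, i, j)
                     for j in range(self.height)]
                    for i in range(self.width)]

    print(Camera(2, 2).raytrace(Scene()))

Everything that follows in this unit amounts to swapping in a richer viewing_rays_and_weights().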

What Can Distribution Ray Tracing Do?

Depends on the kind of rays you cast:

- eye rays (shown above): antialiasing, depth-of-field, motion blur
- shadow rays: soft shadows, area luminaires
- reflection rays: glossy (rough mirror, e.g. sandblasted) surfaces, area luminaires
- refraction rays: murky (scattering) liquids

Unit 3 outline: Part 1: Supersampling; Part 2: Depth of Field; Part 3: Motion Blur; Part 4: Soft Shadows (Penumbras)

What is a Distribution of Light?

Look at an enlargement of a single pixel:

[figure: an enlarged pixel with u and v axes]

A scene is a continuous radiance function $L(u, v)$. What the PA1 raytracer does about this...

Review: Basic Ray Tracing

It samples a single pixel:

$$\mathrm{img}[i, j] = L\left(\tfrac{1}{2}, \tfrac{1}{2}\right)$$

...so it could miss quite a lot of detail. It would be better to average $L$ over the pixel:

$$\mathrm{img}[i, j] = \int_{\mathrm{pixel}} L(u, v)\, du\, dv$$

Supersampling

...but that integral is very hard, if not impossible, to do analytically, so we approximate it with multiple samples:

$$\mathrm{img}[i, j] = \int_{\mathrm{pixel}} L(u, v)\, du\, dv \approx \frac{1}{N_{samp}} \sum_{k=0}^{N_{samp}-1} L(u_k, v_k)$$

where $N_{samp}$ is the number of samples. How do we pick the $(u_k, v_k)$? We can, for instance, choose a uniform $N_{anti} \times N_{anti}$ grid.

[figure: a pixel with a uniform grid of sample points in (u, v), showing $N_{anti} = 4$]

Camera.viewingRaysAndWeights() for Regular Supersampling

    class Camera:
        ...
        method viewingRaysAndWeights(camera, imageRow, imageColumn):
            result = []
            n = camera.nPixelSamples
            n1d = sqrt(n)    # assume n is a square
            duv = 1 / n1d    # sample spacing
            for i from 0 to n1d - 1:
                for j from 0 to n1d - 1:
                    u = imageRow + (i + 0.5) * duv
                    v = imageColumn + (j + 0.5) * duv
                    ray = camera.viewingRay(u, v)
                    result += [ (ray, 1/n) ]
            return result

Raytraced Images with Regular Supersampling

[figures: images sampled 1×1 (i.e. basic), 2×2, and 4×4]

Camera.viewingRaysAndWeights() for Jittered Supersampling

What if we vary ("jitter") the ray origins by a small, random amount? This requires two teensy little changes (shown in red on the original slide):

    class Camera:
        ...
        method viewingRaysAndWeights(camera, imageRow, imageColumn):
            result = []
            n = camera.nPixelSamples
            n1d = sqrt(n)    # assume n is a square
            duv = 1 / n1d    # sample spacing
            for i from 0 to n1d - 1:
                for j from 0 to n1d - 1:
                    u = imageRow + (i + rand01()) * duv
                    v = imageColumn + (j + rand01()) * duv
                    ray = camera.viewingRay(u, v)
                    result += [ (ray, 1/n) ]
            return result
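Here are the two sample-point generators side by side as runnable Python. The function names are made up for this sketch; each returns ((u, v), weight) pairs with weight 1/n, as above:

    import math, random

    def sample_points_regular(row, col, n):
        n1d = int(math.isqrt(n))      # assume n is a perfect square
        duv = 1.0 / n1d               # sample spacing
        return [((row + (i + 0.5) * duv, col + (j + 0.5) * duv), 1.0 / n)
                for i in range(n1d) for j in range(n1d)]

    def sample_points_jittered(row, col, n):
        n1d = int(math.isqrt(n))
        duv = 1.0 / n1d
        # The only change: the fixed 0.5 offset becomes a random one.
        return [((row + (i + random.random()) * duv,
                  col + (j + random.random()) * duv), 1.0 / n)
                for i in range(n1d) for j in range(n1d)]

    print(sample_points_regular(0, 0, 4))
    print(sample_points_jittered(0, 0, 4))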

Raytraced Images with Jittered Supersampling

[figures: images sampled 1×1, 2×2, and 4×4, jittered]

To Jitter or Not to Jitter?

[figures: images sampled 1×1, 2×2, and 4×4, unjittered (top row) and jittered (bottom row)]

Jittered 8×8

[figure: an image sampled 8×8, jittered]

In terms of primary rays only, how much slower is our raytracer now? (8 × 8 = 64 primary rays per pixel instead of one.)

Efficiency

Let $N_{obj}$ be the number of objects in the scene.
Let $N_{lum}$ be the number of luminaires.
Let $N_{anti}$ be the number of antialiasing (super)samples.

What is the (asymptotic) per-pixel (time) efficiency of supersampled ray tracing with shadows? (i.e., what goes inside the parens in $O()$?)

$$O(N_{obj} \cdot N_{lum} \cdot N_{anti})$$

Adaptive Supersampling

One approach to reduce the $N_{anti}$ factor is this...

[figure: a pixel in (u, v) being subdivided into successively smaller sample cells]

The idea: keep subdividing until there's only a small variation between the samples.

Camera.raytracePixel() for Adaptive Supersampling

To get adaptive supersampling, we modify Camera.raytracePixel() to call a new recursive pixel sampling method, Camera.adaptivelyRaytracePixel():

    class Camera:
        ...
        method raytracePixel(camera, scene, imageRow, imageColumn):
            (du, dv) = camera.pixelDimensions()
            (u, v) = (imageRow + 0.5 * du, imageColumn + 0.5 * dv)    # pixel center
            return camera.adaptivelyRaytracePixel(scene, u, v, du, dv)

Camera.adaptivelyRaytracePixel()

    class Camera:
        ...
        method adaptivelyRaytracePixel(camera, scene, u, v, du, dv):
            (du, dv) = (du/2, dv/2)    # dimensions of subcell
            uvSubs = []
            samples = []
            for (delU, delV) in ((-du, -dv), (du, -dv), (du, dv), (-du, dv)):
                uSub = u + delU
                vSub = v + delV
                uvSubs += [ (uSub, vSub) ]
                raySub = camera.viewingRay(uSub, vSub)
                samples += [ scene.traceRay(raySub, 0.0) ]
            if closeEnough(samples):
                return sum(samples) / 4    # i.e., the mean
            else:
                total = Radiance(0, 0, 0)
                for (uSub, vSub) in uvSubs:
                    total += camera.adaptivelyRaytracePixel(scene, uSub, vSub, du, dv)
                return total / 4

What's Wrong with Adaptive Supersampling?

Adaptive supersampling is easy to understand, but there are some issues:

- What are the stopping criteria? How do we define closeEnough()?
- It makes a questionable assumption: just because four samples are close (however we define it) is no indication that L(u, v) is uniform over the subpixel.
- It's still susceptible to sampling errors, which we'll cover in an upcoming unit.

Perhaps the best approach is a pragmatic one: for a particular scene, try adaptive supersampling and see if it works.
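One plausible (but by no means canonical) closeEnough() is a per-channel spread test. Here's a Python sketch; the tolerance is arbitrary and scene-dependent, and treating a radiance as an (r, g, b) tuple is an assumption about the Radiance type:

    def close_enough(samples, tolerance=0.05):
        # Accept if, in every channel, the spread of the four corner
        # samples stays below the tolerance.
        for channel in range(3):    # r, g, b
            values = [s[channel] for s in samples]
            if max(values) - min(values) > tolerance:
                return False
        return True

    print(close_enough([(0.1, 0.2, 0.3)] * 4))               # True: identical
    print(close_enough([(0.9, 0.2, 0.3), (0.1, 0.2, 0.3),
                        (0.1, 0.2, 0.3), (0.1, 0.2, 0.3)]))  # False: red varies

Note that this inherits the questionable assumption above: agreement at four corners says nothing about what L does in between.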

What We've Assumed So Far: The Pinhole Camera

[figure: image plane, pinhole, and object]

A More Realistic Camera Model

[figure: image plane, lens, and object]

But this geometry only holds when the object is in focus...

Lenses and Foci

A little more physics: a lens brings light rays to a focal point:

[figure: a lens focussing points Q̃0 and Q̃1 at distance P onto points Q̃0′ and Q̃1′ at distance V_P; F is the focal length]

Recall (I hope) the thin lens formula:

$$\frac{1}{P} + \frac{1}{V_P} = \frac{1}{F}$$

where $F$, the (intrinsic) focal length of the lens, relates $V_P$, the distance at which light rays from an object at distance $P$ are focussed.
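A quick sanity check, with numbers of my own choosing: a lens with $F = 50\,\mathrm{mm}$ focussing an object at $P = 2\,\mathrm{m}$ gives

$$\frac{1}{V_P} = \frac{1}{F} - \frac{1}{P} = \frac{1}{50\,\mathrm{mm}} - \frac{1}{2000\,\mathrm{mm}} = \frac{39}{2000\,\mathrm{mm}} \quad\Rightarrow\quad V_P \approx 51.3\,\mathrm{mm}$$

and as $P \to \infty$, $V_P \to F$: distant objects focus at the focal length itself.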

Depth-of-Field

[figure: a rendered scene and its image plane; several objects at different depths]

Only the sphere is in relatively sharp focus. The other objects are more blurred.

Depth-of-Field Ray Geometry

[figure: image plane with the pinhole replaced by a lens; the previous pinhole viewing ray d̃ from õ, and the new ray d̃′ from õ′]

Computing Depth-of-Field Rays

Here's the idea (from Cook's article in Glassner's An Introduction to Ray Tracing):

- We're given a pixel origin õ in the image plane. (This may be supersampled.)
- Choose a point õ′ on the lens, which will be the new ray origin.
- Compute the pixel-lens ray direction d̃ = õ′ − õ.
- Compute d̃′, the refraction of d̃, and use it as the new ray direction.
- Do this for a lot of õ′ points distributed over the lens and compute the mean of the results.

Lens Rays (I)

[figure: lens geometry in the (r̂, ẑ) plane, showing õ, d̃, õ′, d̃′, ẽ, C̃, the distances P and V_P, and the focal length F]

Because of symmetry around the ẑ axis, this is taking place in the plane defined by õ, õ′, and ẽ (that's why we have the r̂ axis instead of x̂ or ŷ).

Lens Rays (II)

We find the lens ray direction by working backwards.

- õ is a focal point for all rays that originate at another point C̃ in front of the lens.
- Equivalently, light that reached õ from any point õ′ on the lens must have come from the direction of C̃, because that's how the lens bends the light.
- If we knew C̃, we could compute the lens ray direction d̃′ = C̃ − õ′. This is our goal, so how do we find C̃?
- An unbent ray passes through the (origin at the) center of the lens, leading, via similar triangles, to

$$\frac{\|\tilde{C}\|}{P} = \frac{\|\tilde{o}\|}{V_P}$$

Lens Rays (III)

õ is in the opposite direction from C̃ with respect to the origin, so:

$$\hat{C} = \frac{\tilde{C}}{\|\tilde{C}\|} = -\frac{\tilde{o}}{\|\tilde{o}\|}$$

Recall the thin lens formula

$$\frac{1}{P} + \frac{1}{V_P} = \frac{1}{F}$$

and note

$$V_P = \tilde{o} \cdot \hat{z}, \qquad P = \tilde{C} \cdot \hat{z}$$

and you have enough to solve for C̃, which you can then use to find d̃′. This is an exercise in algebra left to the reader. (Hint: start with $\tilde{C} = \|\tilde{C}\|\,\hat{C}$.)
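A sketch of that algebra, working with magnitudes throughout to sidestep sign conventions: the similar-triangle relation gives $\|\tilde{C}\| = (P / V_P)\,\|\tilde{o}\|$, the thin lens formula gives $P = F V_P / (V_P - F)$, and combining them:

$$\tilde{C} = \|\tilde{C}\|\,\hat{C} = \frac{F\,V_P}{V_P - F} \cdot \frac{\|\tilde{o}\|}{V_P} \left(-\frac{\tilde{o}}{\|\tilde{o}\|}\right) = -\frac{F}{V_P - F}\,\tilde{o}$$

So C̃ is just õ scaled and reflected through the lens center, which is presumably what Camera.opposingFocalPoint() (below) computes.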

Adding Depth-of-Field

    class Camera:
        ...
        method viewingRaysAndWeights(camera, imageRow, imageColumn):
            n = camera.nPixelSamples
            n1d = sqrt(n)    # assume n is a square
            duv = 1 / n1d
            result = []
            for i from 0 to n1d - 1:
                for j from 0 to n1d - 1:
                    # assume jittering
                    u = imageRow + (i + rand01()) * duv
                    v = imageColumn + (j + rand01()) * duv
                    rays = camera.lensRays(u, v)
                    weight = 1 / (len(rays) * n)
                    for ray in rays:
                        result += [ (ray, weight) ]
            return result

Camera.lensRays()

    class Camera:
        ...
        method lensRays(camera, imageU, imageV):
            pixelRayOrigin = Point3D(imageU, imageV, camera.imageDistance)
            C = camera.opposingFocalPoint(pixelRayOrigin)    # = C̃
            rays = []
            for lensRayOrigin in camera.lensPoints():        # = õ′
                lensRayDirection = C - lensRayOrigin         # = d̃′
                cameraRay = Ray(lensRayOrigin, lensRayDirection, 0, 1.0)
                rays += [ cameraRay.transform(camera.cameraToSceneTransform) ]
            return rays

Note that the rest of the raytracer doesn't even care that the rays don't all start from the same place!

Camera.lensPoints(): Sampling Lens Points

[figure: a lens of diameter D sampled four ways: unstratified; jittered; stratified; stratified and jittered]

Alternative to Shirley, ch. 12: Generate test points $(u_{lens}, v_{lens}) \in [0, D] \times [0, D]$ (possibly stratified) and reject those that fall outside the lens circle until you get $N_{lens}$ of them. (Or should that range be $[-D/2, D/2]$, centered on the lens?) A sketch follows.
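Here's the rejection-sampling alternative as runnable Python (unstratified for brevity; the function name is mine, and the range is taken as $[-D/2, D/2]$):

    import random

    def lens_points(diameter, n_lens):
        # Draw (u, v) uniformly in the bounding square and keep only
        # points that land inside the lens circle.
        radius = diameter / 2.0
        points = []
        while len(points) < n_lens:
            u = random.uniform(-radius, radius)
            v = random.uniform(-radius, radius)
            if u * u + v * v <= radius * radius:
                points.append((u, v))
        return points

    print(lens_points(1.0, 8))

Since the circle fills π/4 ≈ 79% of the square, the loop rejects roughly one candidate in five.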

Aside: Adjusting Lens Diameter for a Real Camera

The (effective) lens diameter $D$ is usually given by $F/f$, where $f$ is the "f-stop" of the lens, often adjusted by a diaphragm on mechanical cameras. (There's a pattern to these numbers: the standard f-stops 1.4, 2, 2.8, 4, 5.6, 8, ... advance by factors of $\sqrt{2}$, so each stop halves the lens area and hence the light admitted.)

Son of Efficiency

Let $N_{obj}$ be the number of objects in the scene.
Let $N_{lum}$ be the number of luminaires.
Let $N_{anti}$ be the number of antialiasing (super)samples.
Let $N_{lens}$ be the number of lens samples.

What is the per-pixel (time) efficiency of supersampled ray tracing with shadows and depth-of-field effects?

$$O(N_{obj} \cdot N_{lum} \cdot N_{anti} \cdot N_{lens})$$

What is Motion Blur?

Objects can move. (This should not come as a major shock.) If we take a picture of a rapidly-moving object with a real camera with a real (even electronic) shutter, it will appear blurred in the direction of motion.

Why Haven't We Seen Motion Blur Yet?

The image function we see is not just $L(u, v)$ but $L(u, v, t)$: pixel radiance changes over time. Up until now, we've been assuming an instantaneous shutter at time $t_0$:

$$\mathrm{img}[i, j] = \int_{\mathrm{pixel}\ i,j} L(u, v, t_0)\, du\, dv \approx \frac{1}{N_{anti}} \sum_{k=0}^{N_{anti}-1} L(u_k, v_k, t_0)$$

There is no motion blur possible here. (Motion blur is not just a problem for computer graphics. What do Willis O'Brien, George Pal, Ray Harryhausen, Will Vinton, and Nick Park have in common? They're all stop-motion animators, whose frame-at-a-time technique produces no motion blur.)

Sampling for Motion Blur

But we could take account of motion blur by enhancing our model to show the effects of motion (making it kinematic), changing our object (and camera) positions as a function of time, and computing:

$$\mathrm{img}[i, j] = \frac{1}{T} \int_0^T \int_{\mathrm{pixel}\ i,j} L(u, v, t)\, du\, dv\, dt$$

Or, in terms of samples:

$$\mathrm{img}[i, j] \approx \frac{1}{N_{anti} N_{blur}} \sum_{l=0}^{N_{blur}-1} \sum_{k=0}^{N_{anti}-1} L(u_k, v_k, t_l)$$

Implementing Motion Blur

    class Camera:
        ...
        method raytracePixel(camera, scene, imageRow, imageColumn):
            result = Radiance(0, 0, 0)
            dt = 1.0 / camera.nTimeSamples    # 0 <= t <= 1
            for l from 0 to camera.nTimeSamples - 1:
                t = (l + 0.5) * dt            # middle of time slot
                scene.setTime(t)              # position and orient objects
                camera.setTime(t)             # position and orient camera
                for (viewingRay, weight) in camera.viewingRaysAndWeights(imageRow, imageColumn):
                    result += weight * scene.traceRay(viewingRay, 0.0)
            return result / camera.nTimeSamples

Is there such a thing as temporal aliasing? Yes, and it's got a name: the "wagon wheel effect".

Enhancement: Shutter Area

So far we've assumed that the shutter is either 100% open (between times 0 and 1) or closed. We could modify this with a sampled filter function $w(t)$:

[figure: a filter function w(t) on 0 ≤ t ≤ 1, with values between 0 and 1]

Doing this is pretty straightforward:

$$\mathrm{img}[i, j] \approx \frac{1}{N_{anti}} \cdot \frac{\sum_{l=0}^{N_{blur}-1} \sum_{k=0}^{N_{anti}-1} w_l\, L(u_k, v_k, t_l)}{\sum_{l=0}^{N_{blur}-1} w_l}$$
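For instance (the tent shape is my choice for illustration; the slide only requires $0 \le w(t) \le 1$), here's a Python sketch that evaluates $w_l$ at the time-slot midpoints and takes the weighted mean:

    def tent_weights(n_blur):
        # w(t) = 1 - |2t - 1|: zero at the ends of the exposure,
        # one at mid-exposure, sampled at time-slot midpoints.
        weights = []
        for l in range(n_blur):
            t = (l + 0.5) / n_blur
            weights.append(1.0 - abs(2.0 * t - 1.0))
        return weights

    def weighted_time_average(radiances, weights):
        # The weighted mean from the formula above (per pixel sample).
        return sum(w * r for w, r in zip(weights, radiances)) / sum(weights)

    w = tent_weights(4)
    print(w)    # [0.25, 0.75, 0.75, 0.25]
    print(weighted_time_average([1.0, 2.0, 2.0, 1.0], w))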

The Return of the Son of Efficiency

Let $N_{obj}$ be the number of objects in the scene.
Let $N_{lum}$ be the number of luminaires.
Let $N_{anti}$ be the number of antialiasing (super)samples.
Let $N_{lens}$ be the number of lens samples.
Let $N_{blur}$ be the number of time samples (per frame).

What is the per-pixel (time) efficiency of supersampled ray tracing with shadows, depth-of-field, and motion blur effects?

$$O(N_{obj} \cdot N_{lum} \cdot N_{anti} \cdot N_{lens} \cdot N_{blur})$$

What is a Penumbra?

[figure: an area light source and an obstruction casting an umbra surrounded by a penumbra]

Soft shadows arise because most luminaires in the physical world aren't point or directional. This is different from a spotlight (and harder to do).

Approximating Soft Shadows

The idea:

- Instead of one shadow ray, cast $N_{sh}$ shadow rays at a set of positions on each area luminaire.
- Select luminaire positions (hence, shadow ray directions) using randomization and/or stratification (as we did with lens ray origins). A sketch of one such sampler follows.
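For a rectangular luminaire, stratified-and-jittered positions might look like this in Python; the corner-plus-two-edges parameterization is my own stand-in for whatever the Luminaire class actually stores, and shadow-ray directions then follow as (sample − p):

    import random

    def luminaire_samples(corner, edge_u, edge_v, n1d):
        # n1d*n1d jittered points on the rectangle
        # corner + s*edge_u + t*edge_v, with s, t in [0, 1].
        points = []
        for i in range(n1d):
            for j in range(n1d):
                s = (i + random.random()) / n1d    # stratified + jittered
                t = (j + random.random()) / n1d
                points.append(tuple(corner[k] + s * edge_u[k] + t * edge_v[k]
                                    for k in range(3)))
        return points

    # A 2x2-sample unit-square luminaire in the z = 5 plane:
    for q in luminaire_samples((0, 0, 5), (1, 0, 0), (0, 1, 0), 2):
        print(q)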

Material.illuminate() (updated)

    class Material:
        ...
        method illuminate(material, intersection, incidentRay, scene):
            radiance = material.indirectRadiance(intersection, incidentRay, scene)
            towardsViewer = -incidentRay.direction.normalized()    # = v̂
            p = intersection.p                                     # = p̃
            normal = intersection.getNormal()                      # = n̂
            for luminaire in scene.luminaires:
                radiance += meanDirectRadianceFromLuminaire(material, luminaire,
                                p, normal, towardsViewer, scene)
            return radiance

Material.meanDirectRadianceFromLuminaire()

    class Material:
        ...
        method meanDirectRadianceFromLuminaire(material, luminaire, p, normal,
                                               towardsViewer, scene):
            shadowRays = luminaire.shadowRays(p)
            radiance = Radiance(0, 0, 0)
            for shadowRay in shadowRays:
                radiance += directRadianceFromLuminaire(material, luminaire,
                                shadowRay, normal, towardsViewer, scene)
            # Warning: shaky illumination calculation here
            radiance /= len(shadowRays)    # take the mean (?)
            return radiance

Material.directRadianceFromLuminaire()

    class Material:
        ...
        method directRadianceFromLuminaire(material, luminaire, ray, normal,
                                           towardsViewer, scene):
            if ray.direction.dot(normal) > 0:
                intersection = scene.firstIntersection(ray, EPSILON)
                if intersection == None or luminaire.isCloser(intersection.p, ray.origin):
                    return material.directRadiance(towardsViewer, normal, luminaire)
            return Radiance(0, 0, 0)

(The optional towardsLight argument is needed for directRadiance().)

The Return of the Son of Efficiency's Daughter

Let $N_{obj}$ be the number of objects in the scene.
Let $N_{lum}$ be the number of luminaires.
Let $N_{anti}$ be the number of antialiasing (super)samples.
Let $N_{lens}$ be the number of lens samples.
Let $N_{blur}$ be the number of time samples (per frame).
Let $N_{shdw}$ be the number of shadow rays cast per luminaire.

What is the per-pixel (time) efficiency of supersampled ray tracing with soft shadows, depth-of-field, and motion blur effects? Following the pattern above, $O(N_{obj} \cdot N_{lum} \cdot N_{shdw} \cdot N_{anti} \cdot N_{lens} \cdot N_{blur})$. Even if each of these numbers is small, their product may not be. How can we work around this?

Multidimensional Sampling

[figure: four sample tables, one each for ray origins, lens points, time steps, and luminaire points]

Randomly chosen parameters for antialiasing (ray origin), depth-of-field (lens position), motion blur (time), and soft shadows (position on light source) don't correlate, so they could all be chosen at random. Create a table for each parameter. For each of $N_{adms}$ samples, choose one entry from column A, one from column B, etc., as sketched below.
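A Python sketch of one way to do this: shuffle each table independently, then take the k-th entry of each to form the k-th combined sample. The table contents here are toy placeholders:

    import random

    def multidimensional_samples(tables, n_adms):
        # Each table needs at least n_adms entries.
        shuffled = []
        for table in tables:
            t = list(table)
            random.shuffle(t)            # decorrelate this dimension
            shuffled.append(t[:n_adms])
        return list(zip(*shuffled))      # one tuple per combined sample

    ray_origins  = [(0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)]
    lens_points  = [(-0.1, 0.0), (0.1, 0.0), (0.0, -0.1), (0.0, 0.1)]
    time_steps   = [0.125, 0.375, 0.625, 0.875]
    light_points = [(0, 0), (1, 0), (0, 1), (1, 1)]

    for s in multidimensional_samples(
            [ray_origins, lens_points, time_steps, light_points], 4):
        print(s)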

Efficiency: The Final Chapter (For Now)

Let $N_{obj}$ be the number of objects in the scene.
Let $N_{lum}$ be the number of luminaires.
Let $N_{adms}$ be the number of samples for anti-aliasing, depth-of-field, motion blur, and soft shadow effects.

What is the per-pixel (time) efficiency of such a ray tracer? By the same pattern, $O(N_{obj} \cdot N_{lum} \cdot N_{adms})$: the four multiplied sample counts collapse into one.

The next unit will cover how to make ray tracing (even) more efficient.