Soft Shadows: Heckbert & Herf
[Michael Herf and Paul Heckbert]
Cornell University CS 569: Interactive Computer Graphics
Steve Marschner, Cornell CS569 Spring 2008, Lecture 9

Figure 1: Hard shadow images from a 2x2 grid of sample points on the light source.

Figure 2: Left: scene with square light source (foreground), triangular occluder (center), and rectangular receiver (background), with shadows on the receiver. Center: approximate soft shadows resulting from the 2x2 grid of sample points; the average of the four hard shadow images in Figure 1. This image is used as the texture on the receiver at left. Right: correct soft shadow image (generated with 16x16 sampling).

Figure: Views of a simple scene. (a) From high above the scene. (b) From the light source. (c) From the camera.

Soft shadows can be generated on a graphics workstation by rendering the scene multiple times, using different points on the extended light source, and averaging the resulting images using an accumulation buffer [1]. A variation of the shadow volume approach is to intersect these volumes with surfaces in the scene to precompute the umbra and penumbra regions on each surface [6]; during the final rendering pass, illumination integrals are evaluated at a sparse sampling of pixels. Depth maps rendered from the light source can also be used to determine whether a given 3-D point is illuminated with respect to each light source, and the transformation of points from one coordinate system to another can be accelerated using texture mapping hardware [7]. This latter method, by Segal et al., achieves real-time rates, and is the other leading method for interactive shadows.

Most radiosity methods discretize each surface into a mesh of elements and then use discrete methods such as ray tracing or hemicubes to compute visibility. The hemicube method computes visibility from a light source point to an entire hemisphere by projecting the scene onto a half-cube [7]. Much of this computation can be done in hardware. Radiosity meshes typically do not resolve shadows well, however; typical artifacts are Mach bands along mesh element boundaries and excessively blurry shadows. Most radiosity methods are also not fast enough to support interactive changes to the geometry; Chen's incremental radiosity method is an exception [5]. Our own method can be categorized next to hemicube radiosity methods, since it also precomputes visibility discretely. Its technique for computing visibility also has parallels to the method of flattening objects to a plane.

Precomputation of shading. Precomputation can be taken further, computing not just visibility but also shading. This is most relevant to diffuse scenes, since their shading is view-independent. Some of these methods compute visibility continuously, while others compute it discretely. Several researchers have explored continuous visibility methods for soft shadow computation and mesh generation; with this approach, surfaces are subdivided into fully lit, penumbra, and umbra regions by splitting along lines or curves where visibility changes. In Chin and Feiner's soft shadow method, polygons are split using BSP trees, and these sub-polygons are then pre-shaded [6]. They achieved rendering times of under a minute for simple scenes. Drettakis and Fiume used more sophisticated computational geometry techniques to precompute their subdivision, and reported rendering times of several seconds [9].

Graphics hardware. Current graphics hardware, such as the Silicon Graphics Reality Engine [1], can projective-transform, clip, shade, scan convert, and texture tens of thousands of polygons in real time (in 1/30 sec.). We would like to exploit the speed of this hardware to simulate soft shadows. Typically, such hardware supports arbitrary 4x4 homogeneous transformations of planar polygons, clipping to any truncated pyramidal frustum (right or oblique), and scan conversion with z-buffering or overwriting. On SGI machines, Phong shading (once per pixel) is not possible, but faceted shading (once per polygon) and Gouraud shading (once per vertex) are supported.

Percentage closer filtering
[Reeves et al. 87] (SIGGRAPH '87, Anaheim, July 27-31, 1987)

Ordinarily, texture maps are accessed by filtering the texture values over some region of the texture map. However, depth maps for shadow calculations cannot be accessed in this manner. The main problem is that the filtered depth value would be compared to the depth of the surface being rendered to determine whether or not the surface is in shadow at that point. The result of this comparison would be binary, making antialiased edges impossible. Another problem is that filtered depth values along the edges of objects would bear no relation to the geometry of the scene.

Our solution reverses the order of the filtering and comparison steps. The z values of the depth map across the entire region are first compared against the depth of the surface being rendered. This transformation converts the depth values in the region into binary values, which are then filtered to give the proportion of the region in shadow. The resulting shadows have soft, antialiased edges.

The difference between ordinary texture map filtering and percentage closer filtering is shown schematically in Figure 2. In this example, the distance from the light source to the surface to be shadowed is z = 49.8. The region in the depth map that it maps onto (shown on the left in the figures) is a square measuring 3 pixels by 3 pixels. Ordinary filtering would filter the depth map values to get 22.9 and then compare that value to 49.8, ending up with a value of 1 and meaning that 100% of the surface was in shadow. Percentage closer filtering compares each depth map value to 49.8 and then filters the array of binary values to arrive at a value of .55, meaning that 55% of the surface is in shadow. A square region and box filtering are used to simplify this example; the real algorithm, as described in subsequent sections, uses more sophisticated techniques.

Figure 2: Ordinary filtering versus percentage closer filtering. (a) Ordinary texture map filtering: does not work for depth maps. (b) Percentage closer filtering.
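The reversed filter/compare order is easy to state in code. Below is a minimal Python sketch replaying the worked example's setup (surface at z = 49.8, a 3x3 depth-map region). The depth values in `REGION` are illustrative stand-ins, not the figure's exact numbers, chosen so that five of the nine samples lie closer to the light than the shaded surface.

```python
# 3x3 depth-map region (row-major); small values are nearby occluders,
# large values are surfaces behind the shaded point.
REGION = [50.2, 50.0,  1.1,
           1.2, 50.1,  1.3,
           1.4,  1.2, 50.0]

def ordinary_filter(region, surface_z):
    # Filter first, then compare: the answer is all-or-nothing (0 or 1).
    filtered = sum(region) / len(region)      # box filter -> ~22.9 here
    return 1.0 if filtered < surface_z else 0.0

def percentage_closer(region, surface_z):
    # Compare first (each sample becomes 0 or 1), then filter the binary
    # values: the answer is the fraction of the region in shadow.
    binary = [1.0 if z < surface_z else 0.0 for z in region]
    return sum(binary) / len(binary)

print(ordinary_filter(REGION, 49.8))    # 1.0: a hard, binary answer
print(percentage_closer(REGION, 49.8))  # 0.555...: a soft, fractional one
```

With these stand-in values the box-filtered depth is about 22.9, so ordinary filtering declares the point fully shadowed, while percentage closer filtering reports 5/9, analogous to the paper's .55.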
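The accumulation-buffer approach (render one hard-shadow image per sample point on the area light, then average) can be sketched without graphics hardware. Everything below is a toy stand-in: `render_hard_shadow` plays the role of one hardware rendering pass over a 1-D row of pixels, with a hypothetical occluder whose shadow shifts as the light sample point moves.

```python
def render_hard_shadow(light_x, width=16):
    # Stand-in for one rendering pass with a point light offset by light_x:
    # pixels under the shifted occluder interval are fully shadowed (1.0).
    lo, hi = 5.0 + light_x, 9.0 + light_x
    return [1.0 if lo <= x <= hi else 0.0 for x in range(width)]

def soft_shadow(light_samples, width=16):
    # Accumulate one hard-shadow image per light sample, then average
    # (the accumulation-buffer step).
    acc = [0.0] * width
    for lx in light_samples:
        for i, s in enumerate(render_hard_shadow(lx, width)):
            acc[i] += s
    return [a / len(light_samples) for a in acc]

samples = [-1.0, -1.0 / 3.0, 1.0 / 3.0, 1.0]  # 4 points on the area light
mask = soft_shadow(samples)
# mask has an umbra of 1.0 in the middle and fractional penumbra values
# (e.g. 0.25) near the edges, where only some light samples are blocked.
```

More light samples give a smoother penumbra, exactly as the 2x2 versus 16x16 comparison in the figures suggests.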
[Bunnell & Pellacini, GPU Gems]

Ambient Occlusion: Main Idea

At each point find:
- the fraction of the hemisphere that is occluded
- the visible fraction of the hemisphere: (1 - occlusion)
- the average unoccluded direction B; use B for lighting (see later)

Figure 5: The Galileo map, structured importance sampling w/ 300 samples. [Agarwal, Ramamoorthi, Belongie, & Jensen 2003]
Computing the values: RT

For each triangle {
    Compute center of triangle
    Generate rays over hemisphere
    Occlusion = 0
    For each ray
        If ray intersects objects
            ++occlusion
    Occlusion /= nrays
}

Computing the values: SM

Create shadow maps from N lights
Check visibility of the point w.r.t. each light and determine occlusion
(accumulation buffer: 4 samples, 32 samples)

Ambient occlusion: using the values

Modulate diffuse shading: Kd * (1 - occlusion) * N.L
Modulate irradiance map lookup
(512 samples)
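The RT loop above, together with the diffuse modulation Kd * (1 - occlusion) * N.L, can be made runnable. This sketch uses a deterministic grid of hemisphere directions (the slide just says "generate rays over hemisphere"); the hemisphere is assumed to sit around the +z axis, and `intersects(origin, dir)` is a stand-in for a real ray/scene intersection test.

```python
import math

def hemisphere_dirs(n_z=8, n_phi=16):
    # Deterministic grid over the +z hemisphere; stratifying uniformly in z
    # distributes directions uniformly by solid angle.
    dirs = []
    for i in range(n_z):
        z = (i + 0.5) / n_z
        r = math.sqrt(1.0 - z * z)
        for j in range(n_phi):
            phi = 2.0 * math.pi * (j + 0.5) / n_phi
            dirs.append((r * math.cos(phi), r * math.sin(phi), z))
    return dirs

def occlusion_at(center, intersects):
    occlusion = 0                      # Occlusion = 0
    dirs = hemisphere_dirs()
    for d in dirs:                     # For each ray
        if intersects(center, d):      #   If ray intersects objects
            occlusion += 1             #     ++occlusion
    return occlusion / len(dirs)       # Occlusion /= nrays

def shade(kd, occlusion, n_dot_l):
    # Using the values: modulate diffuse shading, Kd * (1 - occlusion) * N.L
    return kd * (1.0 - occlusion) * max(0.0, n_dot_l)

# Toy scene: every direction with z < 0.5 hits nearby geometry, so half of
# the (area-uniform) hemisphere samples are occluded.
occ = occlusion_at((0.0, 0.0, 0.0), lambda origin, d: d[2] < 0.5)
print(occ)                   # 0.5
print(shade(0.8, occ, 1.0))  # 0.4
```

A production implementation would trace real rays with limited length and jitter the grid; the structure of the loop is unchanged.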
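The shadow-map (SM) variant can also be sketched in miniature: build a depth map per light, then count, for each point, the lights from which it is hidden. The code below is a toy 1-D orthographic version with a single overhead light; names such as `build_shadow_map` are inventions for illustration, not an API from the lecture.

```python
import math

def build_shadow_map(points, nbins=8, xmax=8.0):
    # Depth map for one orthographic light looking straight down: store,
    # per x-bin, the smallest depth (the surface closest to the light).
    zmap = [math.inf] * nbins
    for x, z in points:
        b = min(nbins - 1, int(x / xmax * nbins))
        zmap[b] = min(zmap[b], z)
    return zmap

def in_shadow(zmap, x, z, nbins=8, xmax=8.0, eps=1e-3):
    # A point is shadowed when something in its bin sits closer to the
    # light than the point itself (with a small depth bias eps).
    b = min(nbins - 1, int(x / xmax * nbins))
    return z > zmap[b] + eps

def occlusion_from_lights(shadow_tests, x, z):
    # Occlusion = fraction of the N lights from which the point is hidden.
    hidden = sum(1 for test in shadow_tests if test(x, z))
    return hidden / len(shadow_tests)

# Two surface points in the same bin; the upper one (z = 2) shadows the
# lower one (z = 5) for this single overhead light (N = 1).
zmap = build_shadow_map([(1.0, 2.0), (1.2, 5.0)])
tests = [lambda x, z: in_shadow(zmap, x, z)]
print(occlusion_from_lights(tests, 1.2, 5.0))  # 1.0: hidden from the light
print(occlusion_from_lights(tests, 1.0, 2.0))  # 0.0: visible to the light
```

With many lights distributed over the hemisphere, the averaged visibility approaches the ray-traced occlusion value, which is the point of the accumulation-buffer comparison images.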
What about B?

The unoccluded direction gives an idea of where the main illumination is coming from. This is called the bent normal.

Computing the values: RT

For each triangle {
    Compute center of triangle
    Generate rays over hemisphere
    Occlusion = 0
    Avg dir = (0,0,0)
    For each ray
        If ray intersects objects
            ++occlusion
        Else
            avg dir += ray.dir
    Occlusion /= nrays
    Normalize(avg dir)
}

[CS 467 slides]
unshadowed diffuse shading / ambient occlusion map
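The bent-normal loop is the same occlusion loop with an extra accumulator. In this runnable sketch the hemisphere is again assumed to sit around +z, `intersects` is a stand-in scene test, and the direction grid is a hypothetical deterministic sampling; a wall blocking every ray with positive x makes the bent normal lean toward -x, away from the occluder.

```python
import math

def hemisphere_dirs(n_z=8, n_phi=16):
    # Deterministic grid of directions over the +z hemisphere.
    dirs = []
    for i in range(n_z):
        z = (i + 0.5) / n_z
        r = math.sqrt(1.0 - z * z)
        for j in range(n_phi):
            phi = 2.0 * math.pi * (j + 0.5) / n_phi
            dirs.append((r * math.cos(phi), r * math.sin(phi), z))
    return dirs

def occlusion_and_bent_normal(center, intersects):
    occlusion = 0                      # Occlusion = 0
    avg = [0.0, 0.0, 0.0]              # Avg dir = (0,0,0)
    dirs = hemisphere_dirs()
    for d in dirs:                     # For each ray
        if intersects(center, d):      #   If ray intersects objects
            occlusion += 1             #     ++occlusion
        else:                          #   Else avg dir += ray.dir
            for k in range(3):
                avg[k] += d[k]
    norm = math.sqrt(sum(c * c for c in avg))
    bent = [c / norm for c in avg]     # Normalize(avg dir)
    return occlusion / len(dirs), bent

# Hypothetical wall occluding every direction with x > 0: half the rays are
# blocked, and the bent normal tilts away from the wall.
occ, bent = occlusion_and_bent_normal((0.0, 0.0, 0.0),
                                      lambda origin, d: d[0] > 0.0)
print(occ)                            # 0.5
print(bent[0] < 0.0, bent[2] > 0.0)   # True True
```

Shading with the bent normal instead of the geometric normal (for example, when looking up an irradiance map) points the lighting lookup toward the open part of the hemisphere.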
[CS 467 slides]
combined diffuse and ambient