Rendering Algorithms: Real-time indirect illumination Spring 2010 Matthias Zwicker
Today Real-time indirect illumination Ray tracing vs. Rasterization Screen space techniques Visibility & shadows Instant radiosity Representing functions on the sphere Precomputed radiance transfer
Ray tracing vs. rasterization Most algorithms for global illumination are based on ray tracing Monte Carlo path tracing, photon mapping, etc. Interactive rendering based on the rendering pipeline and rasterization
Rendering pipeline Scene geometry (triangles) flows through the pipeline piece by piece (object order) No random access to geometry other than current primitive in pipeline Inputs: scene geometry, camera & rendering parameters, textures Can do multiple passes through pipeline and reuse results from previous passes Output: image, render target
Rendering pipeline stages Scene geometry: vertices and how they are connected (triangles, lines, points, triangle strips) Stages: transformation & geometry modification, projection, rasterization, shading, visibility Primitives processed by the rendering pipeline one-by-one (object order) No random access to other geometry except current primitive in pipeline Output: image
Rendering pipeline stages Scene geometry Transformation, geometry mod. Projection Project geometric primitives to image plane Rasterization, shading, visibility Image (Figure: 3D scene projected through the center of projection onto the 2D image plane)
Rendering pipeline stages Scene geometry Transformation, geometry mod. Projection Rasterization, shading, visibility Draw primitives (triangles, lines, etc.) Shade each pixel Determine what is visible Store image in framebuffer or render target (not displayed directly, but reused in later passes) Image
Visibility: Z-buffering Framebuffer contains per-pixel depth During rasterization, only draw pixel if closer to eye than previous value in z-buffer Image Depth image, depth map, z-buffer
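The per-pixel depth test above can be sketched as follows; the buffer names and the tiny resolution are illustrative, not part of any real API:

```python
import numpy as np

# A minimal z-buffer sketch (assumed names): color and depth buffers, and
# a per-fragment test that only writes fragments closer to the eye than
# the previously stored depth.
W, H = 4, 4
color = np.zeros((H, W, 3))
depth = np.full((H, W), np.inf)   # initialize to "infinitely far"

def write_fragment(x, y, z, rgb):
    """Write a fragment only if it is closer than the stored depth."""
    if z < depth[y, x]:
        depth[y, x] = z
        color[y, x] = rgb
        return True
    return False

write_fragment(1, 1, 5.0, (1, 0, 0))   # far red fragment: written
write_fragment(1, 1, 2.0, (0, 1, 0))   # nearer green fragment: overwrites
write_fragment(1, 1, 3.0, (0, 0, 1))   # farther blue fragment: rejected
```

After the three writes the pixel keeps the nearest (green) fragment, and the depth buffer holds its depth, 2.0.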
Rendering pipeline Hardware & software that draws 3D scenes on the screen Most operations performed by specialized graphics processing units (GPU) E.g., NVIDIA, ATI Massively parallel processing Work on several geometric primitives and pixels in parallel Access to hardware through low-level 3D API (DirectX, OpenGL)
Object vs. image order Object order Rasterization-type algorithms Desirable memory access pattern (streaming, data locality, avoid random scene access) Suitable for real-time rendering (OpenGL, DirectX) Popular for production rendering (Pixar RenderMan), where scenes often do not fit in RAM Global illumination challenging with purely object-order algorithms, no random access to scene geometry Image order Ray-tracing-type algorithms Undesirable memory access pattern (random scene access) Requires sophisticated data structures for fast scene access Full global illumination possible Most popular for photo-realistic image synthesis
Today Real-time indirect illumination Ray tracing vs. Rasterization Screen space techniques Visibility & shadows Instant radiosity Representing functions on the sphere Precomputed radiance transfer
Screen space techniques Hack to get random access to scene geometry Render at least two passes 1. Pass: render scene, keep depth map Image with depth values in pixels 2. Pass: use depth map as (incomplete) approximation of scene geometry Use as texture to look up geometry Some form of ray tracing possible
Ambient occlusion (AO) Ambient occlusion: fraction of the hemisphere that is un-occluded from a scene point Evaluate using ray tracing Use this fraction to modulate incident light Multiply incident light with AO coefficient for shading Approximation, but looks good Ray tracing to determine AO coefficient http://www.ppsloan.org/publications/vo.pdf
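A minimal Monte Carlo sketch of the AO coefficient: sample directions on the hemisphere and count the unoccluded fraction. The `occluded` function stands in for a real shadow ray; here it models a hypothetical wall blocking all directions with x < 0:

```python
import math, random

# Monte Carlo estimate of the AO coefficient: the fraction of the
# hemisphere above a point that is unoccluded. `occluded` is a stand-in
# for tracing a shadow ray into the scene.
def sample_hemisphere(rng):
    """Uniform direction on the upper hemisphere (z >= 0)."""
    z = rng.random()
    phi = 2.0 * math.pi * rng.random()
    r = math.sqrt(max(0.0, 1.0 - z * z))
    return (r * math.cos(phi), r * math.sin(phi), z)

def occluded(direction):
    return direction[0] < 0.0          # hypothetical blocker: a wall on one side

def ambient_occlusion(n_samples=10000, seed=7):
    rng = random.Random(seed)
    hits = sum(occluded(sample_hemisphere(rng)) for _ in range(n_samples))
    return 1.0 - hits / n_samples      # unoccluded fraction in [0, 1]

print(ambient_occlusion())             # close to 0.5 for this half-blocked point
```

Multiplying incident light by this coefficient gives the AO approximation described above.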
Screen space ambient occlusion (SSAO) Hack: use depth buffer of each frame to compute AO coefficient Only an approximation to correct AO Sample a few points in neighborhood of shading point Even less close to correct shading integral Still looks plausible Depth buffer pixels SSAO using depth buffer http://www.ppsloan.org/publications/vo.pdf
Screen space ambient occlusion (SSAO) Visualization of SSAO coefficients Dark: small coefficient, large occlusion Bright: large coefficient, little occlusion http://en.wikipedia.org/wiki/screen_space_ambient_occlusion
Extensions Use depth map geometry to compute one indirect bounce of light Approximating Dynamic Global Illumination in Image Space, Ritschel et al. http://www.mpi-inf.mpg.de/~ritschel/ssdo/
Discussion Pros Fast Looks plausible Popular in practice, see list of games on http://en.wikipedia.org/wiki/screen_space_ambient_occlusion Cons Really a big approximation, not close to a solution of the rendering equation SSAO requires sampling, prone to noise artifacts
Today Real-time indirect illumination Ray tracing vs. Rasterization Screen space techniques Visibility & shadows Instant radiosity Representing functions on the sphere Precomputed radiance transfer
Visibility & shadows In a rendering pipeline: Given two points in scene, how to determine mutual visibility? How to do shadows? No ray-tracing support for shadow rays! Shadow mapping Given one scene point, determine visibility to all other points in scene One-to-many visibility test Most popular use: shadows for point lights
Shadow mapping Scene point lit by light source if visible from light To determine visibility from light source Place camera at light source position Render scene using z-buffering Shadow, light not visible Lit, light visible Determine visibility from light source by placing camera at light source position
Two pass algorithm First pass Render scene by placing camera at light source position Store depth image (shadow map) Shadow map contains visibility information for all points in scene! Depth image seen from light source
Two pass algorithm Second pass Render scene from camera position At each pixel, project pixel into shadow map Shadow map test: compare distance to light source with value in shadow map If distance is larger, pixel is in shadow If distance is smaller or equal, pixel is lit (Figure: final image with shadows; pixel seen from eye projected into shadow map)
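The shadow map test of the second pass can be sketched as below; the toy 2x2 depth image, the texel coordinates, and the bias constant are all illustrative stand-ins:

```python
import numpy as np

# Shadow-map test sketch (assumed names): `shadow_map` holds depths seen
# from the light; a pixel is lit if its distance to the light does not
# exceed the stored depth. A small bias guards against self-shadowing
# from depth quantization.
shadow_map = np.array([[2.0, 2.0],
                       [2.0, 5.0]])   # toy 2x2 depth image from the light
BIAS = 1e-3

def in_shadow(u, v, dist_to_light):
    """u, v: texel coordinates of the pixel projected into the shadow map."""
    return dist_to_light > shadow_map[v, u] + BIAS

print(in_shadow(0, 0, 4.0))   # True: a blocker at depth 2.0 is in front
print(in_shadow(1, 1, 5.0))   # False: this pixel is the closest surface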
Shadow mapping Standard technique in real-time graphics Supported by GPUs, fast Disadvantages Basic algorithm only for point lights, hard shadows Sampling issues, artifacts Need to render scene once for each point light (one pass per light) Slow with many (hundreds of) light sources
Today Real-time indirect illumination Ray tracing vs. Rasterization Screen space techniques Visibility & shadows Instant radiosity Representing functions on the sphere Precomputed radiance transfer
Instant radiosity Indirect illumination for diffuse surfaces Based on particle tracing, similar to photon mapping Leverages GPUs, shadow mapping Original article Instant Radiosity, Keller et al. SIGGRAPH 1997 http://portal.acm.org/citation.cfm?id=258769
Instant radiosity 1. Distribute virtual point lights (VPLs) Trace light particles starting at light sources Store as virtual point lights on surfaces 2. Render image using shadow mapping with virtual point lights
Instant radiosity Light source with particles
Instant radiosity Distribute particles
Instant radiosity VPLs stored in scene
Distributing VPLs Essentially the same as photon tracing Trace a set of particles through scene, starting at light source Initial value of VPL is radiance of light source At each bounce Attenuate value of VPL with diffuse reflection coefficient Sample new direction with cosine distribution (cancels out remaining cosine term) Russian roulette to terminate particles Reweight particle according to termination probability
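The weighting logic of the particle-tracing step can be sketched as follows. The scene interface is stubbed out (every step "hits" a surface), so only the attenuation, Russian roulette, and reweighting are shown; the constants are illustrative. Choosing the survival probability equal to the average albedo, as the slide on termination probability suggests, makes the reweighting cancel the attenuation:

```python
import random

# Sketch of VPL distribution by particle tracing. A real implementation
# would trace a ray to the next surface hit; here the hit is stubbed so
# the weighting can be shown. Russian roulette terminates particles with
# probability (1 - survive_p); survivors are reweighted by 1/survive_p to
# keep the estimator unbiased.
def distribute_vpls(n_particles, albedo=0.6, survive_p=0.6, seed=1):
    rng = random.Random(seed)
    vpls = []
    for _ in range(n_particles):
        weight = 1.0                      # initial value: (normalized) light radiance
        while True:
            vpls.append(weight)           # store a VPL at the (stubbed) hit point
            weight *= albedo              # attenuate by diffuse reflectance
            if rng.random() >= survive_p: # Russian roulette termination
                break
            weight /= survive_p           # reweight the survivor
    return vpls

vpls = distribute_vpls(1000)
# Expected number of VPLs per particle is 1/(1 - survive_p) = 2.5
print(len(vpls) / 1000.0)
```

Because albedo and survival probability coincide here, every stored VPL carries the same weight, which keeps the per-VPL variance low.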
Rendering with VPLs Particles can be interpreted as point lights Render image for each VPL using shadow mapping Include geometry term 1/r^2 attenuation Cosine falloff Weight each image with 1/(number of particles), add up Requires as many rendering passes as VPLs to generate shadow maps Analogous to final gathering in photon mapping Corresponds to using each photon directly instead of shooting gather rays and performing radiance estimation Approximates integral over surface area, therefore includes geometry term
Example Direct 1 bounce 2 bounce [Keller et al. Instant Radiosity]
Example 32 particles, 72 VPLs 64 particles, 147 VPLs [Keller et al. Instant Radiosity]
Artifacts Color of VPL is dominated by color of surface point where it lies Can lead to color casts Geometry term can lead to unbounded contribution of VPL (division by zero) Clamp contribution for VPLs
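The clamped per-VPL contribution at a shading point can be sketched as below; the function names, the tuple layout, and the clamp bound are illustrative assumptions:

```python
# Sketch of the clamped VPL contribution at a shading point. The geometry
# term contains 1/r^2, which blows up for nearby VPLs, so the per-VPL
# contribution is clamped to a user-chosen bound.
def vpl_contribution(vpl_value, r, cos_at_point, cos_at_vpl, clamp=10.0):
    g = max(0.0, cos_at_point) * max(0.0, cos_at_vpl) / (r * r)
    return min(vpl_value * g, clamp)     # clamping avoids unbounded spikes

def shade(vpls):
    """Average over VPLs; each VPL is (value, distance, cos at point, cos at VPL)."""
    return sum(vpl_contribution(*v) for v in vpls) / len(vpls)

print(vpl_contribution(1.0, 0.01, 1.0, 1.0))  # 10000 unclamped -> clamped to 10.0
```

The clamp removes the bright spikes near VPLs at the cost of a small energy loss, which is the trade-off described on the slide.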
Technical details Termination probability Based on average diffuse reflectivity in scene
Technical details Sampling directions Halton or Hammersley sequence Quasi Monte Carlo scheme Random Jittered Halton
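A Halton sequence is built from radical inverses in successive prime bases; a short sketch for 2D samples (such as the particle directions above):

```python
# Sketch of the Halton low-discrepancy sequence: dimension d of sample i
# is the radical inverse of i in the d-th prime base. Reflecting the
# base-b digits of i about the radix point yields a well-stratified
# point in [0, 1).
def radical_inverse(i, base):
    inv, f = 0.0, 1.0 / base
    while i > 0:
        inv += (i % base) * f
        i //= base
        f /= base
    return inv

def halton_2d(n):
    """First n points of the (base 2, base 3) Halton sequence in [0,1)^2."""
    return [(radical_inverse(i, 2), radical_inverse(i, 3)) for i in range(1, n + 1)]

# The first four points are (1/2, 1/3), (1/4, 2/3), (3/4, 1/9), (1/8, 4/9)
print(halton_2d(4))
```

Unlike purely random samples, consecutive Halton points fill the unit square evenly, which is why the slides show lower error than random or jittered sampling.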
Technical details Sampling directions Halton or Hammersley sequence Quasi Monte Carlo scheme Error to path tracing reference
Interactive rendering Only one bounce for VPLs VPLs are stored only at first hit point, but not traced further Generate VPLs by sampling omnidirectional shadow maps, no ray tracing Up to ~1000 VPLs Computing shadow maps is performance bottleneck
Interactive rendering Optimizations Incremental update of shadow maps http://www.tml.hut.fi/~jaakko/publications/laine2007egsr_paper.pdf Shadow maps from simplified scene geometry http://www.mpi-inf.mpg.de/resources/imperfectshadowmaps/ No shadow maps at all! I.e., no visibility testing for VPLs
Demos, videos http://www.geomerics.com/ http://realtimeradiosity.com/
Summary Instant radiosity Based on particle tracing Particles used as virtual point lights Scene rendered on GPU with shadow mapping for each VPL Final pixel value is sum over all VPLs Real-time applications
Discussion Pros As number of VPLs goes to infinity, get accurate solution of rendering equation for diffuse surfaces Cons VPLs represent diffuse reflection, i.e., work correctly with diffuse surfaces only Except last bounce (i.e., hit point from eye) Artifacts because of geometry term Flickering in animations with small number of VPLs Performance bottleneck due to shadow map computation
Today Real-time indirect illumination Ray tracing vs. Rasterization Screen space techniques Visibility & shadows Instant radiosity Representing functions on the sphere Precomputed radiance transfer
Representing spherical functions Goal: at individual points, store directional distribution of light over sphere of directions Radiance in each direction This is a function defined on the sphere Simple solution: table Disadvantage: inefficient calculations, large storage requirement Better idea: represent function as a weighted sum of basis functions Will always be an approximation, but can trade accuracy for efficiency
Spherical harmonics Basis functions on the sphere y_j(ω) Often used with two indices l,m instead of one index j; difference only in notation Think of them as sine/cosine-like functions defined on the sphere Each function has a certain frequency Can approximate any function with desired accuracy by adding more frequencies Fourier transform on the sphere Good introduction http://www.research.scea.com/gdc2003/spherical-harmonic-lighting.html
Spherical harmonics Increasing frequency from top to bottom http://www.research.scea.com/gdc2003/spherical-harmonic-lighting.html
Spherical harmonics Many useful properties Orthogonality Inner product of two different basis functions is zero Inner product of a function with itself is one $\int y_j(\omega)\, y_k(\omega)\, d\omega = \delta_{j,k}$ Same as inner product (dot product) of basis vectors in an orthogonal basis of $\mathbb{R}^n$
Spherical harmonics projection Given function f(ω) on the sphere, what is its approximation using spherical harmonics? User chooses number of coefficients n The larger n, the more accurate the approximation $f(\omega) \approx \sum_{j=1}^{n} c_j y_j(\omega)$ Coefficients: $c_j = \int f(\omega)\, y_j(\omega)\, d\omega$ Because of orthogonality In practice, compute using Monte Carlo integration Coefficients form a column vector
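The Monte Carlo projection can be sketched for the first four real spherical harmonics (band 0 and band 1); the normalization constants are the standard real SH factors, and everything else is an illustrative setup:

```python
import math, random

# Monte Carlo projection onto the first four real spherical harmonics
# (band 0 and band 1). Uniform sphere samples have pdf 1/(4*pi), so
# c_j ≈ (4*pi/N) * sum_i f(w_i) y_j(w_i).
def sh_basis(w):
    x, y, z = w
    k0 = 0.5 * math.sqrt(1.0 / math.pi)          # Y_0^0
    k1 = math.sqrt(3.0 / (4.0 * math.pi))        # band-1 factor
    return [k0, k1 * y, k1 * z, k1 * x]

def sample_sphere(rng):
    """Uniform direction on the full sphere."""
    z = 2.0 * rng.random() - 1.0
    phi = 2.0 * math.pi * rng.random()
    r = math.sqrt(max(0.0, 1.0 - z * z))
    return (r * math.cos(phi), r * math.sin(phi), z)

def project(f, n=200000, seed=3):
    rng = random.Random(seed)
    c = [0.0] * 4
    for _ in range(n):
        w = sample_sphere(rng)
        fw = f(w)
        for j, yj in enumerate(sh_basis(w)):
            c[j] += fw * yj
    return [cj * 4.0 * math.pi / n for cj in c]

# A constant function projects onto band 0 only: c_0 = 2*sqrt(pi) ≈ 3.545,
# and the band-1 coefficients vanish up to Monte Carlo noise.
print(project(lambda w: 1.0))
```

The same loop with more basis functions and a real radiance function f is exactly the "compute coefficients using Monte Carlo integration" step from the slide.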
Spherical harmonics projection n is number of basis functions used in approximation http://www.research.scea.com/gdc2003/spherical-harmonic-lighting.html
Reflection integral Assume diffuse BRDF for now $L_o(x, \omega_o) = \frac{\rho(x)}{\pi} \int L_i(x, \omega_i) \cos\theta_i\, d\omega_i$ Given spherical harmonics coefficients l_j, f_j with $L_i(x, \omega_i) \approx \sum_j l_j y_j(\omega_i)$ and $\cos\theta_i \approx \sum_j f_j y_j(\omega_i)$ Because of orthogonality: integral turns into dot product of coefficient vectors! $L_o(x, \omega_o) \approx \frac{\rho(x)}{\pi} \sum_j l_j f_j$
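The orthogonality argument can be checked numerically: for two band-limited functions, the integral of their product over the sphere equals the dot product of their SH coefficient vectors. The coefficient values below are arbitrary example numbers:

```python
import math, random

# Numerical check that, for band-limited functions, the reflection
# integral reduces to a dot product of SH coefficient vectors:
# integral of L_i(w)*T(w) over the sphere = l . f.
def sh_basis(w):
    x, y, z = w
    k0 = 0.5 * math.sqrt(1.0 / math.pi)
    k1 = math.sqrt(3.0 / (4.0 * math.pi))
    return [k0, k1 * y, k1 * z, k1 * x]

def sample_sphere(rng):
    z = 2.0 * rng.random() - 1.0
    phi = 2.0 * math.pi * rng.random()
    r = math.sqrt(max(0.0, 1.0 - z * z))
    return (r * math.cos(phi), r * math.sin(phi), z)

l = [1.0, 0.3, -0.2, 0.5]     # example SH coefficients of incident radiance
f = [0.8, 0.1, 0.4, -0.3]     # example SH coefficients of the cosine lobe

rng = random.Random(11)
n = 200000
integral = 0.0
for _ in range(n):
    w = sample_sphere(rng)
    y = sh_basis(w)
    Li = sum(lj * yj for lj, yj in zip(l, y))
    T = sum(fj * yj for fj, yj in zip(f, y))
    integral += Li * T
integral *= 4.0 * math.pi / n   # account for the uniform pdf 1/(4*pi)

dot = sum(lj * fj for lj, fj in zip(l, f))
print(integral, dot)            # the two values agree up to Monte Carlo noise
```

This is why, at run time, shading costs one dot product per pixel instead of a hemisphere integral.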
Today Real-time indirect illumination Ray tracing vs. Rasterization Screen space techniques Visibility & shadows Instant radiosity Representing functions on the sphere Precomputed radiance transfer
Precomputed radiance transfer How to get spherical harmonics representation of incident radiance? Incident radiance should include multiple bounces of indirectly reflected light Will have different coefficients l_j at each point x! $L_i(x, \omega_i) \approx \sum_{j=1}^{n} l_j y_j(\omega_i)$
Environment illumination Special case: environment map is only light source in scene Represent using spherical harmonics Can precompute coefficients e_j using spherical harmonics projection $L_e(\omega) \approx \sum_{j=1}^{n} e_j y_j(\omega)$ Environment map is represented by column vector of coefficients e_j
Transfer matrix Given environment map, obtain incident light at each point, due to environment Environment map: vector of SH coefficients e_j Incident radiance: vector of SH coefficients l_j Transfer matrix M: linear function that maps SH coefficients of environment map to SH coefficients of incident radiance $(l_1, \ldots, l_n)^T = M\, (e_1, \ldots, e_n)^T$
Precomputed radiance transfer Precompute and store transfer matrix M at each triangle vertex in scene Precomputation is somewhat similar to form factor computation in FEM methods (radiosity) Each element in the matrix is the fraction of light transported from one basis function to another Computation of elements using Monte Carlo integration During rendering, only need to compute matrix-vector multiplications to get shading Use GPU Interactive frame rates
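The run-time computation at one vertex then reduces to a matrix-vector product followed by a dot product. All values below are illustrative stand-ins (a toy 4-coefficient setup), not a precomputed transfer matrix from a real scene:

```python
import numpy as np

# Sketch of the run-time PRT computation at one vertex: the precomputed
# transfer matrix M maps the environment's SH coefficients e to the
# incident-radiance coefficients l; shading is then a dot product with
# the cosine-lobe/BRDF coefficients f.
M = np.diag([1.0, 0.5, 0.5, 0.25])   # toy transfer matrix: attenuates higher bands
e = np.array([1.0, 2.0, 3.0, 4.0])   # environment map SH coefficients (example)
f = np.array([1.0, 1.0, 1.0, 1.0])   # cosine-lobe SH coefficients (example)

l = M @ e              # incident radiance coefficients at the vertex
shade = float(l @ f)   # outgoing radiance via the SH dot product
print(shade)           # 1 + 1 + 1.5 + 1 = 4.5
```

Rotating or swapping the environment only changes the vector e, which is why relighting is cheap while the geometry must stay static.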
Discussion Pros Get global illumination with multiple bounces at interactive framerates Everything encoded in transfer matrices Can easily & efficiently change environment illumination, e.g., rotate it Cons Works only for static scenes! Would need to recompute transfer matrices when scene geometry changes, e.g., objects move relative to each other Large memory requirements, need to store transfer matrix at each triangle vertex Basic implementations restricted to low-frequency effects; need more effort to get high frequency reflections & shadows
Literature The original research paper: Precomputed Radiance Transfer for Real-Time Rendering in Dynamic, Low-Frequency Lighting Environments, Sloan et al. http://web4.cs.ucl.ac.uk/staff/j.kautz/publications/prtsig02.pdf A more modern version: All-Frequency Interactive Relighting of Translucent Objects with Single and Multiple Scattering, Wang et al. http://www.cs.virginia.edu/~rw2p/s2005/ Many other extensions and variations Other basis functions, etc.
Summary Accurate solution of rendering equation on GPU at interactive rates is not quite there yet, but coming soon See for example An Efficient GPU-based Approach for Interactive Global Illumination, Wang et al., 2009 http://graphics.cs.umass.edu/pubs/siggraph09_paper0448.pdf Smart photon mapping approach on GPU; diffuse, glossy, etc. 1.5 fps, 2009
Links Books http://www.realtimerendering.com/ http://tog.acm.org/resources/shaderx/ SIGGRAPH course material http://www.cs.ucl.ac.uk/staff/j.kautz/rtgicourse/ Game technology http://www.crytek.com/technology/presentations/ Some research papers in this field http://www.cs.uiowa.edu/~cwyman/pubs.html http://www.mpi-inf.mpg.de/~ritschel/ http://www.cs.ucl.ac.uk/staff/j.kautz/publications/ etc.
Next time Final project presentation