Antialiasing Physically Based Shading with LEADR Mapping


Antialiasing Physically Based Shading with LEADR Mapping Part of: Physically Based Shading in Theory and Practice, SIGGRAPH 2014 Course Jonathan Dupuy¹,² (¹Université Lyon 1, LIRIS, CNRS; ²Université de Montréal, Dept. I.R.O., LIGUM) In this talk I would like to emphasize the importance of antialiasing in physically based rendering pipelines. The ideas developed here are based on the LEADR mapping paper I co-wrote with Eric Heitz and other colleagues.

Meet Tiny the T. Rex Combination of displaced subdivision surfaces and physically based shading Achieves very high-resolution models with low storage costs Tiny the T. Rex (Disney) 100,000,000 polygons for 4 gigabytes = about 42 bytes per polygon 2 To start things off, I would like to introduce you to a 3D model we used in the paper. His name is Tiny the T. Rex; he is composed of displaced subdivision surfaces and relies heavily on physically based shading. This representation is very common in offline rendering today because it is very good at representing detail in a compact way. For instance, Tiny weighs a little more than 4 gigabytes on disk and has a resolution of roughly 100,000,000 polygons.

A Challenge in Rendering 100,000,000 polygons within 200 x 200 pixels = 2500 polygons / pixel Wreck-It Ralph (Disney 2012) 4 Here is a still frame from Disney's movie Wreck-It Ralph. If you look closely enough, you may notice Tiny performing in the background. To render Tiny in this image, one hundred million polygons have to be processed in roughly 200 x 200 pixels, resulting in roughly 2500 polygons and texels per pixel. This requires generating several thousand samples per pixel just to integrate visibility effects (and then comes shading). In practice, production renderers typically use far fewer samples for this problem. So do video games, which rarely exceed 16 samples per pixel (4 or less is common). Hence, virtual scenes are always undersampled.

Undersampling Misdeeds Undersampling results in noise and/or aliasing undersampled rendering ground truth 5 Undersampling results in artifacts in the image. Note that at this point, the samples that constitute the pixels are still physically based, per se, so we have physically based noise/aliasing. These artifacts are unacceptable though, so images usually get postprocessed. This is problematic for two reasons. First, it negates most of the benefits of using physically based shading models. Second, given how displays are evolving (better resolutions, faster refresh rates, etc.), it is almost certain that this process will eventually become a serious bottleneck for both movies and video games. For the video sequence, see ocean.mp4.


What to do... What to do... The story ends, you can leave the course now and believe whatever you want to believe. You stay in Wonderland, and I show you how deep the rabbit hole goes. Warner Brothers 7 But let's face it: modern CG images look great! So what do we do now? Well, we can take the blue pill, and forget that these problems were ever mentioned. Or, we can take the red pill, and start looking for solutions to improve current rendering techniques.

At the End of the Rabbit Hole We rederive the main results from the LEADR mapping paper: We can filter normal and displacement texture maps with microfacet theory We can precompute these texture filtering operations efficiently using MIP mapping 8 In this section of the course, we are going to have a look at how recent research has managed to deal with normal- and displacement-mapped surfaces. Dealing with these problems is a first step towards finding better CG representations that are free from the aforementioned limitations.

Pixel Color 9 We first start by looking at what happens at each pixel: how exactly is its color computed? For a physically based renderer, a pixel measures the amount of radiance that travels towards the camera, weighted by an anti-aliasing filter, such as a box filter.

Pixel Color texture-mapped surface 10 The filter gives a footprint to the pixel in the 3D scene. Here, we assume that the footprint is entirely covered by a texture-mapped surface. A texture is composed of multiple texels, which are illustrated here as colored squares.

Pixel Color texture-mapped surface profile (render time increases with texel count) 11 Here is an illustration of the problem in 2D. If each texel gives the amount of radiance that leaves the surface towards the camera, then the computational complexity of the pixel scales linearly with the number of texels it covers.

Pixel Color MIP-mapped, texture-mapped surface profile (render time is constant) 12 MIP maps offer a way to make the per-pixel computational complexity constant. This works by prefiltering the texture and storing the results inside a MIP hierarchy. If each texel stores an RGB triplet, then the precomputations are straightforward.
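To make the "straightforward precomputation" concrete, here is a minimal sketch of building one MIP level of an RGB texture by box-filtering 2x2 blocks of texels. The function name and the pure-Python texel layout are mine, chosen for illustration; a real renderer would of course do this on the GPU.

```python
def mip_level(texels):
    """Build the next MIP level of an RGB texture by box-filtering
    (averaging) each 2x2 block. `texels` is a square list of rows of
    (r, g, b) tuples with power-of-two size."""
    n = len(texels) // 2
    out = []
    for i in range(n):
        row = []
        for j in range(n):
            block = [texels[2 * i][2 * j], texels[2 * i][2 * j + 1],
                     texels[2 * i + 1][2 * j], texels[2 * i + 1][2 * j + 1]]
            # Averaging RGB values is a linear operation, which is why
            # plain MIP mapping is exact for color textures.
            row.append(tuple(sum(c[k] for c in block) / 4.0 for k in range(3)))
        out.append(row)
    return out

# A 2x2 texture collapses to its average color:
base = [[(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)],
        [(0.0, 0.0, 1.0), (1.0, 1.0, 1.0)]]
print(mip_level(base))  # [[(0.5, 0.5, 0.5)]]
```

The rest of the talk is about why this simple averaging is no longer exact once the texels store normals or displacements instead of colors.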

Pixel Color undersampled rendering ground truth (offline) MIP mapped (real time) 13 Here is an illustration of the benefit of using MIP maps. The ground truth and the prefiltered renders are very close, but one is real time with constant per-pixel complexity while the other is not. We would like to use a similar strategy for normal and displacement texture maps. For the video sequence, see albedo.mp4.

Problem 1: MIP Mapping Normals 14 We'll start by looking at how to deal with normal maps. In this case, each texel stores a unit vector, which is used as input to a physically based shading model.

MIP Mapping Normals pixel color (4D+ function) box-filtered average incident radiance BRDF cosine term normal-mapped surface profile 15 Following physically based shading principles, we express the pixel color as a filtered, per-texel local illumination function (L_o). The illumination function is the product of an incident radiance function, a BRDF, and a cosine term. We use a box filter here, so the pixel color is simply the average of the illumination function. The resulting function is 4D: it depends on the orientations of the camera and of the light source illuminating the surface. Furthermore, it can be of arbitrarily high frequency. At this point, prefiltering is too expensive to be practical.
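Written out (the notation here is mine, since the slide equations did not survive transcription; P is the pixel footprint under the box filter, n the per-texel normal):

```latex
I_{\mathrm{pixel}} = \frac{1}{|P|} \int_{P} L_o(x, \omega_o) \,\mathrm{d}x,
\qquad
L_o(x, \omega_o) = \int_{\Omega} L_i(x, \omega_i)\, f_r(x, \omega_i, \omega_o)\, \langle \omega_i \cdot n(x) \rangle \,\mathrm{d}\omega_i .
```

The outer average runs over every texel in the footprint, which is exactly the linear-in-texel-count cost described above.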

Normals Histogram pixel color statistical space scattering events occurrence frequencies normals histogram-mapped surface profile 16 The local illumination function has a nice property: it produces the same result for two similar normals. Hence, we can reduce the shading complexity by using a histogram. Intuitively, the histogram restructures the surface so that only different normals remain. The outgoing radiance at each normal is then scaled in proportion to its frequency of occurrence on the original surface.
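The histogram idea can be sketched in a few lines: instead of shading every covered texel, bin the normals, shade one representative per bin, and weight by bin frequency. The `shade` function below is a hypothetical stand-in for the local illumination function, not the paper's model.

```python
from collections import Counter

def shade(normal, view, light):
    """Stand-in local illumination: a simple specular lobe on the half
    vector. Any function of the normal works for this demonstration."""
    h = tuple((v + l) / 2.0 for v, l in zip(view, light))
    return max(0.0, sum(n * c for n, c in zip(normal, h))) ** 8

def pixel_color_naive(normals, view, light):
    # One shading evaluation per covered texel: O(#texels).
    return sum(shade(n, view, light) for n in normals) / len(normals)

def pixel_color_histogram(normals, view, light):
    # One shading evaluation per distinct normal: O(#bins).
    hist = Counter(normals)
    total = len(normals)
    return sum(shade(n, view, light) * count / total
               for n, count in hist.items())

normals = [(0.0, 0.0, 1.0)] * 900 + [(0.6, 0.0, 0.8)] * 100
view = light = (0.0, 0.0, 1.0)
# Both agree, but the histogram version evaluates shade() only twice.
assert abs(pixel_color_naive(normals, view, light)
           - pixel_color_histogram(normals, view, light)) < 1e-12
```

With a continuous normal distribution in place of `Counter`, this weighted sum becomes the integral that microfacet theory evaluates analytically.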

Fresnel Mirrors pixel color (2D function) microfacet model! normals histogram-mapped Fresnel mirror profile 17 If we use a Fresnel mirror BRDF, the equation simplifies to a microfacet model. We can deal with any glossy BRDF by convolving the histogram. While the histogram is a 2D function, it is still problematic to precompute...

Histogram Plots 0.02 0.2 0.6 normal map histogram roughness convolution Beckmann approx. To make this clearer, here is the histogram of a normal map, plotted in slope space. Normal maps can be of arbitrarily high frequency, so they may require memory-intensive discretizations, which we wish to avoid. However, we can still make progress, since in practice normal distribution functions are convolved with some base roughness at shading time. The third column shows the histogram convolved with an amount of base roughness; notice how much smoother it becomes as roughness increases. The fourth column shows an approximation of the histogram as a parametric Gaussian density. While some details are lost, we are able to capture the main direction of anisotropy.
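The roughness convolution is cheap because convolving two zero-mean Gaussian slope distributions just adds their variances. A one-line sketch (the function name and the Beckmann variance relation var = alpha^2 / 2 are my framing, not slide content):

```python
def convolve_gaussian_slopes(var_hist, var_base):
    """Convolution of two zero-mean Gaussians: variances simply add.
    var_hist: slope variance of the (filtered) normal map histogram;
    var_base: slope-variance equivalent of the BRDF's base roughness."""
    return var_hist + var_base

# For a Beckmann lobe of roughness alpha, the slope variance per axis
# is alpha**2 / 2, so base roughness 0.2 on top of a histogram
# variance of 0.01 yields a total slope variance of about 0.03:
total = convolve_gaussian_slopes(0.01, 0.2**2 / 2)
assert abs(total - 0.03) < 1e-12
```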

LEAN Mapping = LEAN map (5 MIP mapped floats) compact histogram per texel MIP 19 Gaussian densities are especially interesting because they are described by five parameters that can be stored in a LEAN texture map. A LEAN map consists of five terms that can be linearly prefiltered (MIP mapped), and these terms can be derived from a normal map very straightforwardly. This is extremely cheap, and works at all scales. For more details on how this works, please take a look at the LEAN or LEADR mapping papers, or my course notes. If we plug this parametric density into the microfacet BRDF we just derived, we arrive at a constant per-pixel rendering cost.
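A minimal sketch of the LEAN construction (following the general idea of Olano and Baker [OB2010]; the function names and the slope convention are mine): per texel we store the first moments (x, y) and second moments (x², y², xy) of the surface slopes. These five terms average linearly under MIP mapping, and the Gaussian mean and covariance are recovered at shading time.

```python
def lean_texel(slope_x, slope_y):
    """The five linearly filterable LEAN terms for one texel, given the
    surface slopes (a normal (nx, ny, nz) has slopes -nx/nz, -ny/nz)."""
    return (slope_x, slope_y,
            slope_x * slope_x, slope_y * slope_y, slope_x * slope_y)

def lean_filter(texels):
    """MIP filtering = plain averaging of the five terms."""
    n = len(texels)
    return tuple(sum(t[k] for t in texels) / n for k in range(5))

def lean_to_gaussian(t):
    """Recover the Gaussian mean and covariance of the slopes from the
    filtered terms: cov = E[s s^T] - E[s] E[s]^T."""
    mx, my, mxx, myy, mxy = t
    return (mx, my), (mxx - mx * mx, myy - my * my, mxy - mx * my)

# Two texels with opposite slopes average to a flat mean normal with
# nonzero variance: the filtered-out bumps turn into roughness.
filtered = lean_filter([lean_texel(0.5, 0.0), lean_texel(-0.5, 0.0)])
mean, (var_x, var_y, cov_xy) = lean_to_gaussian(filtered)
print(mean, var_x)  # (0.0, 0.0) 0.25
```

The last example is the whole point: naive normal averaging would report a perfectly smooth surface, while the second moments remember how rough it actually was.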

Result undersampled rendering ground truth (slow) LEAN mapping + microfacet BRDF (real time) 20 Here is the result for an ocean scene. The Gaussian density tends to smooth out the image compared to the ground truth, but the image quality is far better than that of an undersampled solution. So by combining a microfacet model and a LEAN map, we can achieve artifact-free images that still retain all physically based shading principles. For the video sequence, see ocean.mp4.

Problem 2: MIP Mapping Displacements 21 We would like to apply the same idea to displacements as well. Displacement maps are more complex than normal maps because they affect surfaces both in terms of shape and appearance.

MIP Mapping Displacements pixel color (4D+ function) box-filtered average incident radiance BRDF cosine terms shadowing terms displacement-mapped surface profile 22 As a consequence, the scattering model becomes more complex. Note the introduction of the shadowing terms, which must account for the amount of occlusion that occurs on the surface. Just like in the normal-mapped case, this function cannot be precomputed at this point.

Visible Normals Histogram pixel color (4D+ function) statistical space scattering events occurrence frequencies incident radiance BRDF cosine terms shadowing cosine terms masking 23 We can still make progress with the histogram approach, but this time it is higher dimensional because we only wish to retrieve the normals that are visible from the camera.

Fresnel Mirrors pixel color (4D+ function) (4D function) microfacet model! incident radiance BRDF cosine terms shadowing cosine terms masking 24 Interestingly, by using a Fresnel mirror BRDF, the model produces the equation of a microfacet BRDF.

Shadowing Models [Bec1965] [TS1967] [Smi1967] 25 Hence, we can take advantage of the shadowing theory that was developed in the 1960s and progressively incorporated into microfacet theory.
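For reference, the Smith masking term [Smi1967] for a Beckmann (Gaussian slope) distribution has a closed form. This sketch uses the standard formulation found in the microfacet literature; the function names and sanity checks are mine, not code from the paper.

```python
import math

def smith_lambda_beckmann(a):
    """Smith's Lambda for a Beckmann distribution, with
    a = 1 / (alpha * tan(theta))."""
    return ((math.erf(a) - 1.0) / 2.0
            + math.exp(-a * a) / (2.0 * a * math.sqrt(math.pi)))

def g1_beckmann(theta, alpha):
    """Fraction of the microsurface visible from a direction at angle
    theta from the mean normal, for roughness alpha."""
    if theta == 0.0:
        return 1.0  # looking straight down: nothing is masked
    a = 1.0 / (alpha * math.tan(theta))
    return 1.0 / (1.0 + smith_lambda_beckmann(a))

# Masking is negligible near normal incidence and grows toward grazing:
print(g1_beckmann(0.1, 0.3))  # close to 1
print(g1_beckmann(1.5, 0.3))  # significantly less than 1
```

LEADR mapping's extension to non-central (mean-shifted) Gaussians generalizes exactly this kind of term to displacement-mapped surfaces.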

Fresnel Mirrors pixel color LEADR mapping microfacet model 26 This is how LEADR mapping solves the problem of modeling visibility at all scales. In the paper, we use the Smith shadowing function. Note that we had to extend microfacet theory to work with non-central normal densities in order to apply it to displacement maps. This is because displacement maps deviate the mean microsurface normal from the tangent-frame up direction.

LEADR Mapping = LEADR map (6 MIP mapped floats) compact histogram per texel MIP 27 We also show that we can use the same representation as a LEAN map to compute the correct histogram at all scales.

Results 28 Here is the main video of the LEADR mapping paper. The video is available on YouTube: www.youtube.com/watch?v=nd1uswlngb0, or on HAL: hal.inria.fr/hal-00858220/en.

Validation Fresnel mirror BRDFs 29 We derived and validated our model by generating Gaussian microscopic slopes on the surface of a sphere.

Validation Diffuse BRDFs 30 We also introduced a model for rough diffuse surfaces.

Properties
- Scalable (constant per-pixel rendering time)
- Linear (prefilter = GenerateMipmap())
- Lightweight (5 floats per texel)
- Physically based BRDFs (energy conservation)
- All-frequency BRDFs (diffuse and specular)
- Anisotropic BRDFs
- All-frequency lighting ({point, directional, IBL} lighting)
- Compatible with animation (supports mesh deformation)
31 LEADR mapping possesses the fundamental properties that make it general enough to be applied in production.

Future Work More prefiltered representations Scalability beyond texture-mapped surfaces 32 This concludes the talk. With LEADR mapping, we switched rendering complexity from the number of details to the number of surfaces in the pixel. A big avenue for future work is to be able to deal with collections of such surfaces. We already have some theoretical models that we can use to start making progress as well, such as the microflakes model.

References
[Bec1965] Petr Beckmann. Shadowing of Random Rough Surfaces. IEEE Trans. on Antennas and Propagation, 13(3):384--388, May 1965.
[DHIPNO2013] Jonathan Dupuy, Eric Heitz, Jean-Claude Iehl, Pierre Poulin, Fabrice Neyret, and Victor Ostromoukhov. Linear Efficient Antialiased Displacement and Reflectance Mapping. ACM Trans. Graph., 32(6):211:1--11, November 2013.
[OB2010] Marc Olano and Dan Baker. LEAN Mapping. In Proc. Symp. on Interactive 3D Graphics and Games, pages 181--188. ACM, 2010.
[Smi1967] B. Smith. Geometrical Shadowing of a Random Rough Surface. IEEE Trans. on Antennas and Propagation, 15(5):668--671, 1967.
[TS1967] K. E. Torrance and E. M. Sparrow. Theory for Off-Specular Reflection from Roughened Surfaces. J. Opt. Soc. Am., 57(9):1105--1112, Sep 1967.
33