VGP352 Week 8 (2 March 2010)
Agenda: Post-processing effects: filter kernels, separable filters, depth of field, HDR


Filter Kernels
We can represent a filter operation as a sum of products over a region of pixels: each pixel is multiplied by a factor, and the resulting products are accumulated. This is commonly represented as an n × m matrix, called the filter kernel, where m is either 1 or equal to n.

Filter Kernels
Uniform blur over a 3x3 area. A larger kernel size results in more blurriness:
$$\frac{1}{9}\begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{bmatrix}$$

Filter Kernels: Edge Detection
Take the difference of each pixel and its left neighbor: $p(x,y) - p(x-1,y)$
Take the difference of each pixel and its right neighbor: $p(x,y) - p(x+1,y)$
Add the two together: $2p(x,y) - p(x-1,y) - p(x+1,y)$

Filter Kernels: Edge Detection
Rewrite as a kernel:
$$\begin{bmatrix} 0 & 0 & 0 \\ -1 & 2 & -1 \\ 0 & 0 & 0 \end{bmatrix}$$
Repeat in the Y direction:
$$\begin{bmatrix} 0 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & 0 \end{bmatrix}$$
Repeat on the diagonals:
$$\begin{bmatrix} -1 & -1 & -1 \\ -1 & 8 & -1 \\ -1 & -1 & -1 \end{bmatrix}$$

Sobel Edge Detection
Uses two filter kernels, one in the Y direction and one in the X direction:
$$F_y = \begin{bmatrix} 1 & 2 & 1 \\ 0 & 0 & 0 \\ -1 & -2 & -1 \end{bmatrix} \qquad F_x = \begin{bmatrix} 1 & 0 & -1 \\ 2 & 0 & -2 \\ 1 & 0 & -1 \end{bmatrix}$$

Sobel Edge Detection
Apply each filter kernel to the image A:
$$G_x = F_x * A \qquad G_y = F_y * A$$
$G_x$ and $G_y$ are the gradients in the X and Y directions. The combined magnitude of these gradients can be used to detect edges:
$$G = \sqrt{G_x^2 + G_y^2}$$
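A minimal GLSL fragment-shader sketch of the Sobel magnitude, not from the slides; the texture and uniform names are illustrative:

uniform sampler2D image;
uniform vec2 texel;  // 1.0 / image dimensions

void main()
{
    // Sample the luminance of the 3x3 neighborhood around this pixel.
    float s[9];
    for (int j = 0; j < 3; j++) {
        for (int i = 0; i < 3; i++) {
            vec2 offset = vec2(float(i - 1), float(j - 1)) * texel;
            vec3 c = texture2D(image, gl_TexCoord[0].xy + offset).rgb;
            s[3 * j + i] = dot(c, vec3(0.2125, 0.7154, 0.0721));
        }
    }

    // Apply F_x and F_y, then combine the gradient magnitudes.
    float gx = (s[0] - s[2]) + 2.0 * (s[3] - s[5]) + (s[6] - s[8]);
    float gy = (s[0] + 2.0 * s[1] + s[2]) - (s[6] + 2.0 * s[7] + s[8]);
    float g = sqrt(gx * gx + gy * gy);

    gl_FragColor = vec4(vec3(g), 1.0);
}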

Sobel Edge Detection: images from http://en.wikipedia.org/wiki/Sobel_operator

Filter Kernels
Implement this easily on a GPU: supply the filter kernel as uniforms, perform n² texture reads, apply the kernel, and write the result.
Perform n² texture reads?!? n larger than 4 or 5 won't work on most hardware. Since the filter is a sum of products, it could be done in multiple passes... or maybe there's a different way altogether.
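A minimal sketch of the single-pass approach in GLSL, for a 3x3 kernel supplied as a uniform array (the names are illustrative, not from the slides):

uniform sampler2D image;
uniform vec2 texel;       // 1.0 / image dimensions
uniform float kernel[9];  // filter kernel, row-major

void main()
{
    vec4 sum = vec4(0.0);

    // n^2 = 9 texture reads, one multiply-accumulate per kernel entry.
    for (int j = 0; j < 3; j++) {
        for (int i = 0; i < 3; i++) {
            vec2 offset = vec2(float(i - 1), float(j - 1)) * texel;
            sum += kernel[3 * j + i]
                 * texture2D(image, gl_TexCoord[0].xy + offset);
        }
    }

    gl_FragColor = sum;
}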

Separable Filter Kernels
Some 2D kernels can be re-written as the product of two 1D kernels. These kernels are called separable. Applying each 1D kernel requires n texture reads per pixel, so doing both requires only 2n, and $2n \ll n^2$ as n grows.

Separable Filter Kernels
The 2D kernel is calculated as the outer product of the individual 1D kernels:
$$A = a\,b^T = \begin{bmatrix} a_0 b_0 & \cdots & a_0 b_n \\ \vdots & \ddots & \vdots \\ a_n b_0 & \cdots & a_n b_n \end{bmatrix}$$

Separable Filter Kernels
The 2D Gaussian filter is the classic separable filter: the product of a Gaussian along the X-axis and a Gaussian along the Y-axis.
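A worked example, not from the slides: the common 3x3 binomial approximation of the Gaussian arises as the outer product of two 1D kernels:
$$\frac{1}{4}\begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix} \cdot \frac{1}{4}\begin{bmatrix} 1 & 2 & 1 \end{bmatrix} = \frac{1}{16}\begin{bmatrix} 1 & 2 & 1 \\ 2 & 4 & 2 \\ 1 & 2 & 1 \end{bmatrix}$$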

Separable Filter Kernels
Implementing on a GPU:
Use the first 1D filter on the source image, rendering to the window.
Configure blending to multiply source and destination: glBlendFunc(GL_DST_COLOR, GL_ZERO);
Use the second 1D filter on the source image, rendering to the window.
Caveats: precision can be a problem in intermediate steps. May have to use floating-point output; 10-bit or 16-bit per component outputs can also work. The choice ultimately depends on what the hardware supports.
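The slides combine the two 1D passes with blending; a more common alternative is to render the first pass to an intermediate texture and feed that to the second pass. A minimal GLSL sketch of one 1D pass under that scheme (uniform names are illustrative):

uniform sampler2D image;
uniform vec2 direction;  // e.g. (1/width, 0) for X, (0, 1/height) for Y
uniform float weight[5]; // 1D kernel for offsets -2..+2

void main()
{
    vec4 sum = vec4(0.0);

    // Only n texture reads, along a single axis.
    for (int i = 0; i < 5; i++) {
        vec2 offset = float(i - 2) * direction;
        sum += weight[i] * texture2D(image, gl_TexCoord[0].xy + offset);
    }

    gl_FragColor = sum;
}

The same shader serves both passes: run it once with direction set along X into an intermediate target, then once along Y reading from that target.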

References http://www.archive.org/details/lectures_on_image_processing

Depth-of-field
What is depth of field? "...the depth of field (DOF) is the portion of a scene that appears acceptably sharp in the image."¹
¹ http://en.wikipedia.org/wiki/Depth_of_field (images also from there)

Depth-of-field
Why is DOF important? It draws the viewer's attention, gives added information about spatial relationships, etc. Images from http://en.wikipedia.org/wiki/Depth_of_field

Depth-of-field
Basic optics: a point of light focused through a lens becomes a point on the image plane. A point farther away than the focal distance becomes a blurry spot on the image plane, and a point closer than the focal distance also becomes a blurry spot on the image plane. These blurry spots are called circles of confusion (CoC hereafter).
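For reference (not on the slides), the standard thin-lens relation behind this behavior, where $f$ is the focal length, $d_o$ the distance to the point, and $d_i$ the distance at which its image focuses:
$$\frac{1}{f} = \frac{1}{d_o} + \frac{1}{d_i}$$
A point focuses in front of or behind the image plane exactly when $d_o$ differs from the focus distance, and the farther $d_i$ lands from the image plane, the larger the circle of confusion grows.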

Depth-of-field
In most real-time graphics, there is no depth-of-field: everything is perfectly in focus all the time. Most of the time this is okay. The player may want to focus on foreground and background objects in rapid succession; without eye tracking, the only way this works is to have everything in focus.
Under some circumstances, though, DOF can be a very powerful tool: non-interactive sequences, special effects, and so on. It was used to very good effect in the game Borderlands.

Depth-of-field
Straightforward GPU implementation: render scene color and depth information to offscreen targets, then post-process. At each pixel, determine the CoC size based on the depth value and blur the pixels within the circle of confusion. To prevent in-focus data from bleeding into out-of-focus data, do not use in-focus pixels that are closer than the center pixel.

Depth-of-field
Problem with this approach? There is a fixed number of samples within the CoC: we oversample for a small CoC and undersample for a large one. We could improve quality with multiple passes, but performance would suffer.

Depth-of-field
Simplified GPU implementation: render scene color and depth information to offscreen targets, then post-process. Down-sample the image and Gaussian blur the down-sampled image; the reduced size and filter kernel size are selected to produce the maximum desired CoC size. Linearly blend between the original image and the blurred image based on the per-pixel CoC size.
Problems with this approach? There is no way to prevent in-focus data from bleeding into out-of-focus data.
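A minimal GLSL sketch of the final blend, assuming a hypothetical CoC model in which blur grows linearly with distance from the focal plane (the uniform names and the CoC formula itself are illustrative, not from the slides):

uniform sampler2D sharp_image;    // full-resolution scene color
uniform sampler2D blurred_image;  // down-sampled, Gaussian-blurred copy
uniform sampler2D depth_image;    // scene depth, assumed already linearized
uniform float focal_distance;
uniform float coc_scale;

void main()
{
    vec2 uv = gl_TexCoord[0].xy;
    float depth = texture2D(depth_image, uv).r;

    // Per-pixel CoC: 0 = fully in focus, 1 = maximum CoC.
    float coc = clamp(abs(depth - focal_distance) * coc_scale, 0.0, 1.0);

    gl_FragColor = mix(texture2D(sharp_image, uv),
                       texture2D(blurred_image, uv),
                       coc);
}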

References
J. D. Mulder and R. van Liere. "Fast Perception-Based Depth of Field Rendering." In Proceedings of the ACM Symposium on Virtual Reality Software and Technology (Seoul, Korea, October 22-25, 2000), VRST '00, ACM, New York, NY, 129-133. http://homepages.cwi.nl/~mullie/work/pubs/publications.html
Guennadi Riguer, Natalya Tatarchuk, and John Isidoro. "Real-time Depth of Field Simulation." In ShaderX2, Wordware Publishing, Inc., October 25, 2003. http://developer.amd.com/documentation/reading/pages/shaderx.aspx
M. Kass, A. Lefohn, and J. Owens. "Interactive Depth of Field Using Simulated Diffusion on a GPU." Technical Memo #06-01, Pixar Animation Studios, 2006. http://graphics.pixar.com/library/depthoffield/

High Dynamic Range
Until now, our rendering has had a contrast ratio of only 256:1. With HDR, as noted in [Green 2004]: bright things can be really bright, dark things can be really dark, and the details can be seen in both.

High Dynamic Range
Several possible solutions, depending on hardware support and performance:
Render multiple exposures and composite the results. This is how HDR images are captured with a camera. Yuck!
Render to floating-point buffers. Best quality, but even fp16 buffers are large and expensive, and hardware support varies (especially on mobile devices).
Render to RGBe. Smaller and faster, but lower quality, with issues around blending and multipass rendering.

Floating-Point Render Targets
Create a drawing surface with a floating-point internal format; the surface is either a texture or a renderbuffer. GL_RGB32F, GL_RGBA32F, GL_RGB16F, and GL_RGBA16F are most common. Requires GL_ARB_texture_float (plus GL_ARB_half_float_pixel for the 16F formats) and GL_ARB_color_buffer_float, or OpenGL 3.0.

Floating-Point Render Targets
Disable [0, 1] clamping of fragments:
glClampColorARB(GLenum target, GLenum clamp);
target is one of GL_CLAMP_VERTEX_COLOR, GL_CLAMP_FRAGMENT_COLOR, or GL_CLAMP_READ_COLOR; clamp is one of GL_FIXED_ONLY, GL_TRUE, or GL_FALSE. The OpenGL 3.x version drops ARB from the name.

Floating-Point Render Targets
Common hardware limitations:
May not be supported at all! Almost universal on desktop, not so much on mobile; the Intel GMA950 in most netbooks lacks support.
May not support blending to floating-point targets. RGBA32F blending is not supported on GeForce 6 and similar-generation chips, and may be really slow where it does work.
May not support all texture filtering modes. Some hardware can't do mipmap filtering from FP textures, and many DX9-era cards can't do any filtering on RGBA32F textures.

RGBe
Store R, G, and B mantissa values with a single shared exponent, with the exponent stored in the alpha component. This trades precision for huge savings on storage while keeping most of the useful range of FP32.

RGBe
Convert floating-point RGB to RGBe in the shader:

vec4 rgb_to_rgbe(vec3 color)
{
    const float max_component = max(color.r, max(color.g, color.b));
    const float e = ceil(log(max_component));

    return vec4(color / exp(e), (e + 128.0) / 255.0);
}
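The matching decode is not shown on the slides; a minimal sketch that simply inverts the encoding above:

vec3 rgbe_to_rgb(vec4 rgbe)
{
    // Recover the shared exponent from alpha, then undo the scale.
    float e = rgbe.a * 255.0 - 128.0;
    return rgbe.rgb * exp(e);
}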

RGBe
Limitations / problems: the log and exp calls in the shader aren't free, which may matter for compute-bound (versus bandwidth-bound) shaders. Blending is still possible, but it is rather painful. Components with vastly different magnitudes can't be stored: {10000, 0.1, 0.1} becomes {10000, 0, 0}. This is usually fine for color data, because the final display can't reproduce that much range anyway.

Tone Mapping
Remap the HDR rendered image to an LDR displayable image; the display is still limited to [0, 1] with only 8-bit precision. Remap using Reinhard's tone reproduction operator in 5 steps:
1. Convert the RGB image to luminance.
2. Calculate the log-average luminance, used to calculate the key value.
3. Scale luminance by the key value.
4. Remap the scaled luminance to [0, 1].
5. Scale the RGB values by the remapped luminance.

Tone Mapping
Standard luminance calculation:
$$l = \begin{bmatrix} 0.2125 & 0.7154 & 0.0721 \end{bmatrix}^T \cdot C$$
If using RGBe, the color must first be mapped back from RGBe to floating-point.

Tone Mapping
Image key:
$$k = \exp\!\left(\frac{1}{n} \sum_{\text{all pixels}} \ln\, l(x, y)\right)$$
Does this pixel-averaging operation remind you of anything? It's like calculating the lowest-level mipmap!...but with some other math, and emitting HDR.

Tone Mapping
Scaled luminance:
$$l_{scaled} = \frac{l_{midzone}}{k}\, l(x, y)$$
$l_{midzone}$ is the mid-zone reference reflectance value; 0.18 is a common value (see references).
Remapped luminance:
$$l_{final} = \frac{l_{scaled}}{1 + l_{scaled}}$$
The final pass modulates the original RGB by $l_{final}$. Output is in plain old 8-bit RGB, naturally.

Tone Mapping
Can alternatively map based on the dimmest value that should be full intensity. If $l_{min\,white}$ is the minimum HDR intensity that should be mapped to fully bright:
$$l_{final} = \frac{l_{scaled}\left(1 + \dfrac{l_{scaled}}{l_{min\,white}^2}\right)}{1 + l_{scaled}}$$
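A minimal GLSL sketch of the per-pixel portion of the operator, assuming the key k has already been computed (e.g., by repeated down-sampling) and is supplied as a uniform; the names, and the final modulation by l_final / l, are illustrative choices rather than the slides' exact code:

uniform sampler2D hdr_image;  // floating-point HDR render target
uniform float k;              // log-average luminance (image key)
uniform float l_midzone;      // mid-zone reference, e.g. 0.18

void main()
{
    vec4 c = texture2D(hdr_image, gl_TexCoord[0].xy);

    // Luminance, guarded against division by zero below.
    float l = max(dot(vec3(0.2125, 0.7154, 0.0721), c.rgb), 1e-6);
    float l_scaled = (l_midzone / k) * l;
    float l_final = l_scaled / (1.0 + l_scaled);

    // Modulate the original color by the remapped luminance.
    gl_FragColor = vec4(c.rgb * (l_final / l), 1.0);
}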

Tone Mapping
The tone map operation is performed each frame. Ouch! Common practice is to only recompute k every few frames; once every half second is common. This has the realistic side-effect of not immediately responding to dramatic changes in scene brightness.

Bloom
Overly bright areas leak brightness into neighboring areas. To simulate this:
Apply a bright-pass filter to the image: pixels above a certain threshold keep their luminance, everything else becomes black.
Apply a Gaussian blur. This step can be very expensive!
Add the blurred image to the final LDR image.
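A minimal GLSL sketch of the bright-pass step (the threshold uniform and names are illustrative):

uniform sampler2D hdr_image;
uniform float threshold;  // luminance above this survives the bright pass

void main()
{
    vec4 c = texture2D(hdr_image, gl_TexCoord[0].xy);
    float l = dot(vec3(0.2125, 0.7154, 0.0721), c.rgb);

    // Keep bright pixels, black out everything else.
    gl_FragColor = (l > threshold) ? c : vec4(0.0, 0.0, 0.0, 1.0);
}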

Bloom
Blur optimization: make multiple down-scaled images (i.e., mipmaps); the largest should be 1/8th the size of the original. Blur each down-scaled image; each halving of resolution approximates a doubling of the filter kernel size. Apply a small filter kernel; [Kalogirou 2006] suggests 5x5 is sufficient.

References
Simon Green and Cem Cebenoyan (2004). "High Dynamic Range Rendering (on the GeForce 6800)." GeForce 6 Series, NVIDIA. http://download.nvidia.com/developer/presentations/2004/6800_leagues/6800_leagues_hdr.pdf
Adam Lake, Cody Northrop, and Jeff Freeman (2005). "High Dynamic Range Environment Mapping on Mainstream Graphics Hardware." http://www.gamedev.net/reference/articles/article2485.asp
Harry Kalogirou (2006). "How to do good bloom for HDR rendering." http://harkal.sylphis3d.com/2006/05/20/how-to-do-good-bloom-for-hdr-rendering/

Next week... Beyond bump maps: relief textures, parallax textures, interior mapping.

Legal Statement This work represents the view of the authors and does not necessarily represent the view of Intel or the Art Institute of Portland. OpenGL is a trademark of Silicon Graphics, Inc. in the United States, other countries, or both. Khronos and OpenGL ES are trademarks of the Khronos Group. Other company, product, and service names may be trademarks or service marks of others.