VGP352 Week 10 (March 2008)


Agenda:
- Texture rectangles
- Post-processing effects
- Filter kernels
  - Simple blur
  - Edge detection
- Separable filter kernels
  - Gaussian blur
- Depth-of-field


Texture Rectangle

Cousin to 2D textures. Interface changes:
- New target: GL_TEXTURE_RECTANGLE_ARB
- New sampler type: sampler2DRect
- New sampler functions: texture2DRect, texture2DRectProj, etc.
- Minification filter can only be GL_LINEAR or GL_NEAREST
- Coordinate wrap mode can only be GL_CLAMP, GL_CLAMP_TO_EDGE, or GL_CLAMP_TO_BORDER
- Dimensions need not be powers of two
- Texture is accessed by coordinates on [0, w-1] × [0, h-1] instead of [0, 1] × [0, 1]
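A minimal fragment-shader sketch of this interface, assuming a texture is bound to the rectangle target (the uniform name tex is illustrative):

    #extension GL_ARB_texture_rectangle : require

    uniform sampler2DRect tex;   /* bound to a GL_TEXTURE_RECTANGLE_ARB texture */

    void main()
    {
        /* Coordinates are in texels, [0, w-1] x [0, h-1], not [0, 1]. */
        gl_FragColor = texture2DRect(tex, gl_TexCoord[0].xy);
    }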

Post-processing Effects

Apply an image-space effect to the rendered scene after it has been drawn. Examples:
- Blur
- Enhance contrast
- Heat ripple
- Color-space conversion (e.g., black & white, sepia, etc.)
- Many, many more

Post-processing Effects

Overview:
- Render the scene to an off-screen target (framebuffer object)
  - The off-screen target should be the same size as the on-screen window
  - Additional information may need to be generated
- Render a single, full-screen quad to the window
  - Use the original off-screen target as the source texture
  - Configure texture coordinates to cover the entire texture
    - Texture rectangles are really useful here
  - Configure the fragment shader to perform the desired effect

Post-processing Effects

Configure the viewport, projection, and modelview matrices for a 1-to-1 mapping:

    glViewport(0, 0, w, h);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, w, 0, h, -1, 1);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

Draw a full-screen quad with appropriate texture coordinates:

    glBegin(GL_QUADS);
    glTexCoord2f(0.0, 0.0); glVertex2f(0.0, 0.0);
    glTexCoord2f(1.0, 0.0); glVertex2f(w,   0.0);
    glTexCoord2f(1.0, 1.0); glVertex2f(w,   h);
    glTexCoord2f(0.0, 1.0); glVertex2f(0.0, h);
    glEnd();

Post-processing Effects

Texture coordinate notes:
- If a texture rectangle is used, texture coordinates will be on [0, w-1] × [0, h-1] instead of [0, 1] × [0, 1]
- If neighboring texels need to be accessed, the texel size [1/w, 1/h] must be supplied as a uniform
  - Not needed for texture rectangles! The texel size is always 1!
- To access many neighbors, pre-calculate some of the coordinates in the vertex shader:

    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_TexCoord[1] = gl_MultiTexCoord0 + vec4(ts.x, 0.0, 0.0, 0.0);
    gl_TexCoord[2] = gl_MultiTexCoord0 + vec4(0.0, ts.y, 0.0, 0.0);
    gl_TexCoord[3] = ...

Filter Kernels

Can represent our filter operation as a sum of products over a region of pixels:
- Each pixel is multiplied by a factor
- The resulting products are accumulated

Commonly represented as an n × m matrix, called the filter kernel; m is either 1 or equal to n.

Filter Kernels

Uniform blur over a 3×3 area:

$\frac{1}{9} \begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{bmatrix}$

A larger kernel size results in more blurriness.


Filter Kernels

Edge detection:
- Take the difference of each pixel and its left neighbor: $p_{x,y} - p_{x-1,y}$
- Take the difference of each pixel and its right neighbor: $p_{x,y} - p_{x+1,y}$
- Add the two together: $2p_{x,y} - p_{x-1,y} - p_{x+1,y}$


Filter Kernels

Rewrite as a kernel:

$\begin{bmatrix} 0 & 0 & 0 \\ -1 & 2 & -1 \\ 0 & 0 & 0 \end{bmatrix}$

Repeat in the Y direction:

$\begin{bmatrix} 0 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & 0 \end{bmatrix}$

Repeat on the diagonals:

$\begin{bmatrix} -1 & -1 & -1 \\ -1 & 8 & -1 \\ -1 & -1 & -1 \end{bmatrix}$


Filter Kernels

Implement this easily on a GPU:
- Supply the filter kernel as uniforms
- Perform n² texture reads
- Apply the kernel and write the result

Perform n² texture reads?!?
- n larger than 4 or 5 won't work on most hardware
- Since the filter is a sum of products, it could be done in multiple passes
- Or maybe there's a different way altogether...
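Before turning to that different way, here is a minimal fragment-shader sketch of the single-pass approach above for a 3×3 kernel. The uniform names (tex, ts, kernel) are illustrative:

    uniform sampler2D tex;      /* source image from the off-screen target */
    uniform vec2 ts;            /* texel size, supplied by the application */
    uniform float kernel[9];    /* 3x3 filter kernel, row-major */

    void main()
    {
        vec4 sum = vec4(0.0);

        /* n^2 = 9 texture reads: weight each neighbor by its kernel entry. */
        for (int j = -1; j <= 1; j++) {
            for (int i = -1; i <= 1; i++) {
                sum += kernel[(j + 1) * 3 + (i + 1)]
                     * texture2D(tex, gl_TexCoord[0].xy + vec2(float(i), float(j)) * ts);
            }
        }

        gl_FragColor = sum;
    }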


Separable Filter Kernels

Some 2D kernels can be rewritten as the product of two 1D kernels. These kernels are called separable.
- Applying each 1D kernel requires n texture reads per pixel, so doing both requires 2n, and $2n \ll n^2$
- The 2D kernel is calculated as the product of the 1D kernels: $k(x, y) = k_x(x)\,k_y(y)$


Separable Filter Kernels

The 2D Gaussian filter is the classic separable filter: the product of a Gaussian along the X-axis and a Gaussian along the Y-axis.
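As a concrete worked example (not on the original slide), the binomial kernel $\frac{1}{16}[1\;4\;6\;4\;1]$ is a common 5-tap discrete approximation of a 1D Gaussian, and its outer product with itself gives the corresponding 5×5 2D Gaussian kernel:

$\frac{1}{16}\begin{bmatrix}1\\4\\6\\4\\1\end{bmatrix} \cdot \frac{1}{16}\begin{bmatrix}1 & 4 & 6 & 4 & 1\end{bmatrix} = \frac{1}{256}\begin{bmatrix}1&4&6&4&1\\4&16&24&16&4\\6&24&36&24&6\\4&16&24&16&4\\1&4&6&4&1\end{bmatrix}$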


Separable Filter Kernels

Implementing on a GPU:
- Use the first 1D filter on the source image, rendering to the window
- Configure blending for source × destination: glBlendFunc(GL_DST_COLOR, GL_ZERO);
- Use the second 1D filter on the source image, rendering to the window

Caveats:
- Precision can be a problem in intermediate steps
  - May have to use floating-point output
  - 10-bit or 16-bit per-component outputs can also work
  - The choice ultimately depends on what the hardware supports
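A minimal sketch of a single 1D filter pass, assuming a 5-tap kernel; the horizontal pass is shown, and the vertical pass is identical with the offset applied to Y. The uniform names (tex, ts, weight) are illustrative:

    uniform sampler2D tex;      /* source image */
    uniform vec2 ts;            /* texel size */
    uniform float weight[5];    /* 1D kernel, e.g. Gaussian weights */

    void main()
    {
        vec4 sum = vec4(0.0);

        /* Only n texture reads, all along one axis. */
        for (int i = -2; i <= 2; i++) {
            sum += weight[i + 2]
                 * texture2D(tex, gl_TexCoord[0].xy + vec2(float(i) * ts.x, 0.0));
        }

        gl_FragColor = sum;
    }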

References
- http://www.archive.org/details/lectures_on_image_processing

Break


Depth-of-field

- A point of light focused through a lens becomes a point on the image plane
- A point farther than the focal distance becomes a blurry spot on the image plane
- A point closer than the focal distance becomes a blurry spot on the image plane
- These blurry spots are called circles of confusion (CoC hereafter)


Depth-of-field

In most real-time graphics, there is no depth-of-field: everything is perfectly in focus all the time.
- Most of the time this is okay
  - In a game, the player may want to focus on foreground and background objects in rapid succession. Until we can track where the player is looking on the screen, the only way this works is to have everything in focus.
- For non-interactive sequences, DoF can be a very powerful tool!
  - Film makers use this all the time to draw the audience's attention to certain things
  - Note the use of DoF in Citizen Kane


Depth-of-field

Straightforward GPU implementation:
- Render scene color and depth information to off-screen targets
- Post-process:
  - At each pixel, determine the CoC size based on the depth value
  - Blur pixels within the circle of confusion
    - To prevent in-focus data from bleeding into out-of-focus data, do not use in-focus pixels that are closer than the center pixel

Problem with this approach? A fixed number of samples within the CoC:
- Oversamples for a small CoC
- Undersamples for a large CoC
- Could improve quality with multiple passes, but performance would suffer
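As a hedged illustration of the per-pixel CoC step, the sketch below uses the thin-lens model. It assumes the depth target stores eye-space depth, and the uniform names (depthTex, focalDist, focalLen, aperture) are illustrative rather than from the slides:

    uniform sampler2D depthTex;   /* eye-space depth written by the scene pass (assumption) */
    uniform float focalDist;      /* distance to the plane in focus, S1 */
    uniform float focalLen;       /* lens focal length, f */
    uniform float aperture;       /* aperture diameter, A */

    void main()
    {
        float z = texture2D(depthTex, gl_TexCoord[0].xy).r;

        /* Thin-lens circle of confusion diameter:
         *   c = A * (|z - S1| / z) * (f / (S1 - f))
         * Zero at the focal distance, growing for nearer and farther points. */
        float coc = aperture * (abs(z - focalDist) / z)
                  * (focalLen / (focalDist - focalLen));

        gl_FragColor = vec4(coc);
    }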


Depth-of-field

Simplified GPU implementation:
- Render scene color and depth information to off-screen targets
- Post-process:
  - Down-sample the image and Gaussian blur the down-sampled image
    - The reduced size and the filter kernel size are selected to produce the maximum desired CoC size
  - Linearly blend between the original image and the blurred image based on the per-pixel CoC size

Problems with this approach? There is no way to prevent in-focus data from bleeding into out-of-focus data.
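A minimal sketch of the final blend step, assuming the per-pixel CoC has been normalized to [0, 1] against the maximum CoC the blurred image represents (the texture names sharpTex, blurTex, and cocTex are illustrative):

    uniform sampler2D sharpTex;   /* full-resolution scene color */
    uniform sampler2D blurTex;    /* down-sampled, Gaussian-blurred scene color */
    uniform sampler2D cocTex;     /* per-pixel CoC, normalized to [0, 1] (assumption) */

    void main()
    {
        vec2 uv = gl_TexCoord[0].xy;
        float coc = texture2D(cocTex, uv).r;

        /* In-focus pixels (coc near 0) keep the sharp image; maximally
         * blurred pixels (coc near 1) take the blurred image. */
        gl_FragColor = mix(texture2D(sharpTex, uv), texture2D(blurTex, uv), coc);
    }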

References
- J. D. Mulder, R. van Liere. "Fast Perception-Based Depth of Field Rendering." In Proceedings of the ACM Symposium on Virtual Reality Software and Technology (Seoul, Korea, October 22-25, 2000), VRST '00, ACM, New York, NY, 129-133. http://homepages.cwi.nl/~mullie/work/pubs/publications.html
- Guennadi Riguer, Natalya Tatarchuk, John Isidoro. "Real-time Depth of Field Simulation." In ShaderX2, Wordware Publishing, Inc., October 25, 2003. http://ati.amd.com/developer/shaderx/
- M. Kass, A. Lefohn, J. Owens. "Interactive Depth of Field Using Simulated Diffusion on a GPU." Technical Memo #06-01, Pixar Animation Studios, 2006. http://graphics.pixar.com/depthoffield/

Next week...
- Projects due at the start of class
- Oh yeah... the final!

Legal Statement

This work represents the view of the authors and does not necessarily represent the view of IBM or the Art Institute of Portland. OpenGL is a trademark of Silicon Graphics, Inc. in the United States, other countries, or both. Khronos and OpenGL ES are trademarks of the Khronos Group. Other company, product, and service names may be trademarks or service marks of others.