Reading on the Accumulation Buffer: Motion Blur, Anti-Aliasing, and Depth of Field

1 The Accumulation Buffer

There are a number of effects that can be achieved if you can draw a scene more than once. You can do this by using the accumulation buffer. You can request an accumulation buffer with

    glutInitDisplayMode(... | GLUT_ACCUM);

Conceptually, we're doing the following:

    I = f_1 * frame_1 + f_2 * frame_2 + ... + f_n * frame_n

That is, the final image, I, is a weighted sum of n frames, where the fractional contribution of frame i is f_i. We initialize the accumulation buffer to zero, add in each frame with its associated constant, and then copy the result to the frame buffer. This is accomplished as follows:

- Clear the accumulation buffer with

      glClearAccum(r, g, b, alpha);
      glClear(GL_ACCUM_BUFFER_BIT);

  This is just like clearing the color buffer or the depth buffer. The accumulation buffer is a running sum, though, so you will usually initialize it to zero.

- Add a frame into the accumulation buffer with

      glAccum(GL_ACCUM, f_i);

  This function allows other operations (see the man page), but this is the one we'll use today.

- Copy the accumulation buffer to the frame buffer with

      glAccum(GL_RETURN, 1.0);

  The second argument is a multiplicative factor for the whole buffer. We'll always use 1.0.
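
Putting those three calls together, the overall pattern looks like this. This is just a sketch, not code from any of the demos: drawScene() is a placeholder for whatever draws one frame of your scene, and each of the n frames gets the same weight 1/n.

    /* Minimal accumulation-buffer skeleton (a sketch, not demo code).
       Assumes the display mode was requested with GLUT_ACCUM. */
    extern void drawScene(void);   /* placeholder: draws one frame of the scene */

    void accumulateDisplay(void) {
        int i;
        int n = 4;                              /* number of frames to average */

        glClearAccum(0.0, 0.0, 0.0, 0.0);       /* the running sum starts at zero */
        glClear(GL_ACCUM_BUFFER_BIT);

        for (i = 0; i < n; i++) {
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            drawScene();                        /* draw one frame */
            glAccum(GL_ACCUM, 1.0 / n);         /* the weights add up to 1 */
        }

        glAccum(GL_RETURN, 1.0);                /* copy the sum to the frame buffer */
        glutSwapBuffers();
    }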

1.1 Advice

- Get your scene working first, before adding the accumulation buffer code.
- Make sure you clear the accumulation buffer to zero.
- Make sure your factors add to 1.

2 Motion Blur

By drawing a trail of fading images, you can simulate the blur that occurs with moving objects. You can fade the images by using smaller coefficients on those images. Here's an example of the output from an OpenGL program done this way, along with a link to the program that produced the image (you can click on the magenta link text in the PDF version of this document): FallingTeapot.py

- Notice the three images of the teapot. Even without the accumulation buffer, just by drawing it three times, we get an interesting effect.
- Type 's' and notice that if we draw the images with a small motion, the teapot just looks bigger or distorted. Type 's' again to return to the large motion.
- Type 'm' to switch to using the accumulation buffer. Notice how the frames can be made to vary in strength; in fact, the first frame has more or less disappeared! Type 's' to use small motion and see the result. The teapot looks blurred by motion, which is roughly the effect we want. (Though if you think the effect without the accumulation buffer is nicer, I wouldn't blame you.)
- Type '+' or '-' to increase or decrease the number of frames.

Look at the code, and notice the following:

- There are two display functions: display1, which draws the scene once in the usual way, and displayN, which draws the scene N times using the accumulation buffer.
- In displayN, notice how the accumulation buffer is initialized, drawn into, and read out of.
- Notice that the code to draw the scene is parameterized by the location of the teapot (as a position on a line).
- Notice how the position of the teapot is computed by linear interpolation based on the frame number. This is another occurrence of our parametric equations!
- Notice how the fractions for each frame are computed. What is the fraction for the frame that is pretty much invisible?

The problem with using the accumulation buffer for large-motion motion blur is that we really want to turn all of the coefficients up, so that the first image doesn't fade away and the last isn't too pale, yet that's mathematically nonsense and also doesn't work (the table turns blue!). Nevertheless, for small-motion blur, it works pretty well.
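
To make that structure concrete, here is a sketch of what a displayN-style function could look like. It is not the actual FallingTeapot.py code: drawTeapotAt(), startPos, endPos, and N are placeholders, and the weights shown (proportional to the frame number, normalized to sum to 1) are just one reasonable way of making the older frames fade.

    /* Sketch of an N-frame motion-blur display function (not the demo's actual code). */
    extern void drawTeapotAt(float x);   /* placeholder: draws the teapot at position x */

    void displayN(void) {
        int i;
        int N = 8;
        float startPos = 0.0, endPos = 2.0;        /* placeholder endpoints of the motion */
        float totalWeight = N * (N + 1) / 2.0;     /* 1 + 2 + ... + N */

        glClear(GL_ACCUM_BUFFER_BIT);
        for (i = 1; i <= N; i++) {
            float t = (float)(i - 1) / (N - 1);             /* parameter along the path */
            float x = startPos + t * (endPos - startPos);   /* linear interpolation */
            float weight = (float)i / totalWeight;          /* later frames weigh more; weights sum to 1 */

            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            drawTeapotAt(x);
            glAccum(GL_ACCUM, weight);
        }
        glAccum(GL_RETURN, 1.0);
        glutSwapBuffers();
    }

With these weights and N = 8, the first frame contributes only 1/36 of the final image, which is why it is pretty much invisible.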

3 Anti-Aliasing

An important use of the accumulation buffer is in anti-aliasing. Aliasing is the technical term for "jaggies." It comes about because of the imposition of an arbitrary pixelation on a continuous world. For example, suppose we draw a roughly 2-pixel-thick blue line at about a 30-degree angle to the horizontal, and think about how that line gets rasterized onto the pixel grid.

The problem comes with assigning colors to pixels. If we only make blue the pixels that are entirely covered by the line, the line looks very jaggy. It doesn't get better if we make blue the pixels that are covered by any part of the line. What we want is to color the pixels that are partially covered by the line with a mixture of the line color and the background color, proportional to the amount that the line covers the pixel.

So, how can we do that? The idea of anti-aliasing using the accumulation buffer is:

- The scene gets drawn multiple times with slight perturbations (jittering), so that
- each pixel is a local average of the images that intersect it.
- Generally speaking, you have to jitter by less than one pixel.

A code sketch of this idea appears below. Here's the key demo: Aliasing.py, along with a pair of screenshots from the demo, one with anti-aliasing and one without anti-aliasing.
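
Here is a rough sketch of the object-jitter version of the idea. It is not the Aliasing.py source: drawScene() is a placeholder, the offsets are made up, and the world-space jitter amount is exactly the trial-and-error quantity that the next section improves on.

    /* Object-jitter anti-aliasing (a sketch, not the Aliasing.py source).
       drawScene() is a placeholder and must not reset the modelview matrix. */
    extern void drawScene(void);

    void jitteredDisplay(void) {
        /* four made-up sub-pixel offsets */
        static const float offsets[4][2] = {
            { 0.25,  0.25}, {-0.25,  0.25}, { 0.25, -0.25}, {-0.25, -0.25}
        };
        float j = 0.01;   /* world-space size of roughly one pixel, found by experiment */
        int i;

        glClear(GL_ACCUM_BUFFER_BIT);
        for (i = 0; i < 4; i++) {
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            glMatrixMode(GL_MODELVIEW);
            glPushMatrix();
            glTranslatef(j * offsets[i][0], j * offsets[i][1], 0.0);   /* nudge the objects */
            drawScene();
            glPopMatrix();
            glAccum(GL_ACCUM, 0.25);   /* four equal weights */
        }
        glAccum(GL_RETURN, 1.0);
        glutSwapBuffers();
    }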

Without anti-aliasing turned on, look at the four figures (callbacks '1' through '4') and notice the jaggies at the edges. Turn on anti-aliasing (callback 'a'), and notice the difference. Notice also that it doesn't quite work for the wire cube. Why not? Because the jitter amount is too small at the back and too big at the front.

4 Better Anti-Aliasing

A better technique than jittering the objects is to jitter the camera, or more precisely, to modify the frustum just a little so that the pixels that images fall on are slightly different. Again, we jitter by less than one pixel. Here's a figure that may help:

The red and blue cameras differ only by an adjustment to the location of the frustum. The center of projection (the big black dot) hasn't changed, so all the rays still project to that point. The projection rays intersect the two frustums at different pixel values, though, so by averaging these images, we can anti-alias these projections.

How much should the two frustums differ? By less than one pixel. How can we move them by that amount? We only have control over left, right, top and bottom, and these are measured in world coordinates, not pixels, so we need a conversion factor. We can find a conversion factor in a simple way: the width of the frustum in pixels is just the width of the window (more precisely, the viewport), while the width of the frustum in world coordinates is just right - left. Therefore, the adjustment is:

    Δx_world = Δx_pixels * (right - left) / windowWidth

Here's the code, adapted from the OpenGL Programming Guide:

    void accCamera(GLfloat pixdx, GLfloat pixdy) {
        GLfloat dx, dy;
        GLint viewport[4];
        glGetIntegerv(GL_VIEWPORT, viewport);
        GLfloat windowWidth  = viewport[2];
        GLfloat windowHeight = viewport[3];
        GLfloat frustumWidth  = right - left;
        GLfloat frustumHeight = top - bottom;

        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        dx = pixdx * frustumWidth / windowWidth;
        dy = pixdy * frustumHeight / windowHeight;
        printf("world delta = %f %f\n", dx, dy);
        glFrustum(left+dx, right+dx, bottom+dy, top+dy, near, far);
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
    }

The pixdx and pixdy values are the jitter amounts (distances) in pixels (sub-pixels, actually). The frustum is altered in world coordinates, so dx and dy are computed as the world-coordinate distances corresponding to the desired distances in pixels.

    void smoothDisplay() {
        int jitter;
        int numJitters = 8;
        glClear(GL_ACCUM_BUFFER_BIT);
        for (jitter = 0; jitter < numJitters; jitter++) {
            accCamera(jitterTable[jitter][0], jitterTable[jitter][1]);
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            drawObject();
            glAccum(GL_ACCUM, 1.0/numJitters);
        }
        glAccum(GL_RETURN, 1.0);
        glFlush();
        glutSwapBuffers();
    }

This does just what you think: we draw the image 8 times, each time adjusting the pixel jitter, and each drawing has an equal weight of 1/8. There are lots of ways to imagine how the pixel jitter distances are computed; the domino pattern is a good idea for 8. However, a paper on the subject argues for the following values, which I just used without further investigation. One idea I have is that we want to avoid regular patterns.

    // From the OpenGL Programming Guide, first edition, table 10-5
    float jitterTable[][2] = {
        {0.5625, 0.4375},
        {0.0625, 0.9375},
        {0.3125, 0.6875},
        {0.6875, 0.8125},
        {0.8125, 0.1875},
        {0.9375, 0.5625},
        {0.4375, 0.0625},
        {0.1875, 0.3125}};

Demos: AntiAliasing.py. I tried to include a screenshot from that demo, but the quality isn't very good; notice the difference in quality between the two images. We'll look at how the code is written, but it follows what we've shown above.

This better approach to anti-aliasing works regardless of how far the object is from the center of projection, unlike the object-jitter we did before. Furthermore, we have a well-founded procedure for choosing the jitter amount, not just trial and error.

5 Depth of Field

Another thing we can do with the accumulation buffer is to blur things that a camera would blur. We'll investigate:

- why cameras blur things: the physics of lenses determines that there is one distance at which all the rays from a point on the object converge to a single point on the film, even though they take different paths through the lens.
- what is meant by "focal depth": it's that ideal depth where everything is in focus.
- what is meant by "depth of field": it's how far off from the ideal depth objects can be and still appear acceptably sharp (in focus). Objects that are nearer or farther from the focal depth than the depth of field will appear blurry (out of focus).
- how to accomplish this optical effect by using the accumulation buffer.

Here's a very good web site in terms of the figures and the pictures of the rulers; I've also added this link to our course home page: Depth of Field. Another is Understanding Depth of Field in photography. Ignore what it says about the circles of confusion being caused by the amount of light going through the lens. The circles of confusion are caused by the aperture size (which also controls the amount of light, but the causality goes the other way). A smaller aperture means greater depth of field, and less light.

Of course, in CG, we don't have apertures. Instead, we have to fake the blurriness. But how do we blur everything except the things at the focal distance?

First, let's remind ourselves of the basic camera terminology, since our figures are about to get very complicated. The big blue dot is the eye location; everything about the camera is measured from that point. In particular, left, right, top and bottom are measured with respect to it. The frustum is the blue trapezoid. Make sure you understand what left, right, top and bottom are. The dashed line is where some vertices project; in this case they happen to project to the center of the frustum, but that's just to simplify some of the geometry we'll do later.

Next, notice that if we move the camera, points project onto different pixels. In the following figure, the difference is that we've moved the eye to the left (red) location; the camera moves as a result of moving the eye. The dashed lines show where the three vertices project to. In the blue setup, they all project to the center of the frustum. In the red setup, they all project to different locations.

That doesn't accomplish our depth-of-field goal, of course, because all the figures (the pentagons) are blurred. Suppose we want to make the middle one non-blurred. What we do is adjust the frustum's location, as measured from the new eye location, to cancel out the effect of the eye movement, so that that one ray still projects to the center of the frustum. The geometry is just similar triangles:

If F is the focus distance, measured from the eye to the object (the height of the larger triangle), and x_e is the amount that we move the eye (the width of the larger triangle), we can see that:

    x_e / F = x_f / (F - near)

So, we adjust the frustum by x_f. Note that to accomplish the desired cancellation, the two deltas have to have opposite signs: if we move the eye to the left (negative), we have to move the frustum to the right (positive). We find:

    x_f = (x_e / F) * (F - near)

The code for setting up our camera (frustum), then, is:

    /* Set up camera. Note that near, far, and focus remain fixed; all that
       is modified with the accumulation buffer is the eye location. */
    void accCamera(GLfloat eyedx, GLfloat eyedy, GLfloat focus) {
        GLfloat dx, dy;
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        dx = (eyedx/focus)*(focus-near);
        dy = (eyedy/focus)*(focus-near);
        printf("dx = %.5f, dy = %.5f\n", dx, dy);
        glFrustum(left+dx, right+dx, bottom+dy, top+dy, near, far);
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        // Translating the coordinate system by (eyedx,eyedy) is the same as
        // moving the eye by (-eyedx,-eyedy). Nobody better re-load the
        // identity matrix!
        glTranslatef(eyedx, eyedy, 0.0);
    }
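
Once we have accCamera, the depth-of-field display loop is just another accumulation loop: draw the scene from several slightly different eye positions, all aimed at the same focal plane, and average. The following is a sketch, not the DepthOfField.py source; E, focus, the offsets, and drawScene() are placeholders.

    /* Depth-of-field rendering sketch (not the DepthOfField.py source).
       drawScene() is a placeholder and must not reset the modelview matrix
       (see the comment in accCamera). */
    extern void drawScene(void);

    void depthOfFieldDisplay(void) {
        /* made-up eye offsets, all at distance E from the original eye */
        static const GLfloat dir[4][2] = { {1,0}, {-1,0}, {0,1}, {0,-1} };
        GLfloat E = 0.05;       /* placeholder maximum eye offset; see the formula for E below */
        GLfloat focus = 10.0;   /* placeholder focal distance */
        int i, numViews = 4;

        glClear(GL_ACCUM_BUFFER_BIT);
        for (i = 0; i < numViews; i++) {
            accCamera(E * dir[i][0], E * dir[i][1], focus);   /* move the eye, pin the focal plane */
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            drawScene();
            glAccum(GL_ACCUM, 1.0 / numViews);
        }
        glAccum(GL_RETURN, 1.0);
        glutSwapBuffers();
    }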

The next question is how far to move the eye. The farther the eye moves, the narrower the depth of field. First, look at this figure: the blurriness of the other two figures depends on the difference between where they project to in the two camera setups.

To understand this more, I want to focus on two pairs of similar triangles. (In the figure, the labeled quantities are c, d, F, e, near, and E.)

One pair gives us the following proportion, where the left-hand side comes from the big green triangle and the right-hand side comes from the similar smaller triangle with two green sides:

    F / E = (F - near) / e

The other pair gives us the following proportion, where the left-hand side comes from the large right triangle with the dashed line as its hypotenuse, and the right-hand side comes from the similar smaller triangle with the dashed hypotenuse:

    (F + d) / E = (F + d - near) / (e + c)

What are these values?

- near: the distance from the eye to the frustum
- F: the distance where objects are in focus
- E: the (maximum) distance the eye moves

- d: the desired depth of field
- c: the blurriness distance, about 1 pixel

The unknown is E. We'd like to compute E as a function of d. By eliminating e between the two equations, we can do this. Skipping some horrendous algebra (check me on this, please), we get:

    E = c * F * (F + d) / (d * near)

Note that all these values are in world units except for c, which is about 1 pixel. So, we need to convert c from pixels to world units before we do this computation.

Demos: DepthOfField.py. Here's a screenshot from that demo.
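
Since I asked you to check me on that algebra, here is the elimination of e worked out, starting from the two proportions above, along with the pixel-to-world conversion for c (the same conversion we used for the frustum jitter):

    From the first proportion:    e = E (F - near) / F

    Substitute into the second:   (F + d)(e + c) = E (F + d - near)
                                  (F + d)(E (F - near)/F + c) = E (F + d - near)
                                  c (F + d) = E [ (F + d - near) - (F + d)(F - near)/F ]
                                  c (F + d) = E (d * near) / F
                                  E = c F (F + d) / (d * near)

    Converting c from pixels to world units:
                                  c = c_pixels * (right - left) / windowWidth

As a sanity check, the formula behaves the way the text describes: a larger desired depth of field d gives a smaller E (the eye moves less), and a larger tolerated blur c gives a larger E.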