CS488. Visible-Surface Determination. Luc RENAMBOT

CS488 Visible-Surface Determination Luc RENAMBOT 1

Visible-Surface Determination So far in the class we have dealt mostly with simple wireframe drawings of the models. The main reason for this is that we did not have to deal with hidden surface removal. Now that we want to produce more sophisticated images, we need to determine which parts of the model obscure other parts of the model 2

Examples The following sets of images show a wireframe version, a wireframe version with hidden line removal, and a solid polygonal representation of the same object 3

Examples

Drawing Order If we do not have a way of determining which surfaces are visible, then visibility depends on the order in which the surfaces are drawn, with surfaces drawn later appearing in front of surfaces drawn earlier 5

Principles We do not want to draw surfaces that are hidden. If we can quickly compute which surfaces are hidden, we can bypass them and draw only the surfaces that are visible. For example, if we have a solid 6-sided cube, at most 3 of the 6 sides are visible at any one time, so at least 3 of the sides do not even need to be drawn because they are the back sides 6

Principles We also want to avoid having to draw the polygons in a particular order. We would like to tell the graphics routines to draw all the polygons in whatever order we choose and let the graphics routines determine which polygons are in front of which other polygons. With the same cube as above, we do not want to compute for ourselves the order in which to draw the visible faces and then tell the graphics routines to draw them in that order. 7

Principles The idea is to speed up the drawing, and give the programmer an easier time, by doing some computation before drawing. Unfortunately, these computations can take a lot of time, so special-purpose hardware is often used to speed up the process 8

Techniques Two types of approaches: Object space and Image space 9

Object Space Object space algorithms do their work on the objects themselves before they are converted to pixels in the frame buffer. The resolution of the display device is irrelevant here, as this calculation is done at the mathematical level of the objects: for each object a in the scene, determine which parts of object a are visible (this involves comparing the polygons in object a to the other polygons in a and to the polygons in every other object in the scene) 10

Image Space Image space algorithms do their work as the objects are being converted to pixels in the frame buffer. The resolution of the display device is important here, as this is done on a pixel-by-pixel basis: for each pixel in the frame buffer, determine which polygon is closest to the viewer at that pixel location, and set the color of the pixel to the color of that polygon at that location, as sketched below 11
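To make the per-pixel loop concrete, here is a minimal C sketch of the image-space approach. The Scene and Color types and the helpers closestPolygonAt() and polygonColorAt() are illustrative assumptions, not code from the lecture.

    /* Image-space visibility: for every pixel, find the nearest polygon and take its color.
       Scene, closestPolygonAt() and polygonColorAt() are hypothetical placeholders. */
    typedef struct { unsigned char r, g, b; } Color;
    typedef struct Scene Scene;                                    /* opaque scene description (assumed) */

    int   closestPolygonAt(const Scene *s, int x, int y);          /* -1 if nothing covers (x, y) */
    Color polygonColorAt(const Scene *s, int poly, int x, int y);  /* shade of that polygon at (x, y) */

    void render_image_space(const Scene *scene, Color *framebuffer,
                            int width, int height, Color background)
    {
        for (int y = 0; y < height; y++)
            for (int x = 0; x < width; x++) {
                int poly = closestPolygonAt(scene, x, y);
                framebuffer[y * width + x] =
                    (poly >= 0) ? polygonColorAt(scene, poly, x, y) : background;
            }
    }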

Approaches As in our discussion of vector vs raster graphics earlier in the term, the mathematical (object space) algorithms tended to be used with vector hardware, whereas the pixel-based (image space) algorithms tended to be used with raster hardware 12

Homogeneous Coordinates When we talked about 3D transformations, we reached a point near the end where we converted the 3D (or 4D with homogeneous coordinates) values to 2D by ignoring the Z values. Now we will use those Z values to determine which parts of which polygons (or lines) are in front of which parts of other polygons 13

Technique There are different levels of checking that can be done: Object, Polygon, or Part of a Polygon 14

Transparency There are also times when we may not want to cull out polygons that are behind other polygons. If the frontmost polygon is transparent, then we want to be able to 'see through' it to the polygons that are behind it, as shown below 15

Transparent Objects Which objects are transparent in the scene? 16

Coherence We used the idea of coherence before in our line drawing algorithm. We want to exploit 'local similarity' to reduce the amount of computation needed. This is how compression algorithms work 17

Coherence
Face - properties (such as color, lighting) vary smoothly across a face (or polygon)
Depth - adjacent areas on a surface have similar depths
Frame - images at successive time intervals tend to be similar
Scan Line - adjacent scan lines tend to have similar spans of objects
Area - adjacent pixels tend to be covered by the same face
Object - if objects are separate from each other (i.e., they do not overlap), then we only need to compare polygons within the same object, and not one object against another
Edge - edges only disappear when they go behind another edge or face
Implied Edge - the line of intersection of 2 faces can be determined from the endpoints of the intersection 18

Extent Rather than dealing with a complex object, it is often easier to deal with a simpler version of the object. In 2D: a bounding box; in 3D: a bounding volume 19

Bounding Box We convert a complex object into a simpler outline, generally in the shape of a box. Every part of the object is guaranteed to fall within the bounding box 20

Bounding Box Checks can then be made on the bounding box to make quick decisions (e.g., does a ray pass through the box?). For more detail, checks would then be made on the object inside the box. There are many ways to define the bounding box 21

Bounding Box The simplest way is to take the minimum and maximum X, Y, and Z values to create a box, as sketched below. You can also have bounding boxes that rotate with the object, bounding spheres, bounding cylinders, etc. 22
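A minimal sketch of building that min/max box from a vertex list; the Vec3 and AABB types are illustrative assumptions, not code from the lecture.

    /* Axis-aligned bounding box: the minimum and maximum vertex coordinates on each axis.
       Vec3 and AABB are illustrative types. */
    #include <float.h>

    typedef struct { float x, y, z; } Vec3;
    typedef struct { Vec3 min, max; } AABB;

    AABB compute_aabb(const Vec3 *verts, int count)
    {
        AABB box = { {  FLT_MAX,  FLT_MAX,  FLT_MAX },
                     { -FLT_MAX, -FLT_MAX, -FLT_MAX } };
        for (int i = 0; i < count; i++) {
            if (verts[i].x < box.min.x) box.min.x = verts[i].x;
            if (verts[i].y < box.min.y) box.min.y = verts[i].y;
            if (verts[i].z < box.min.z) box.min.z = verts[i].z;
            if (verts[i].x > box.max.x) box.max.x = verts[i].x;
            if (verts[i].y > box.max.y) box.max.y = verts[i].y;
            if (verts[i].z > box.max.z) box.max.z = verts[i].z;
        }
        return box;   /* every vertex of the object lies inside [min, max] on all three axes */
    }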

Back-Face Culling Back-face culling is an object space algorithm. It works on 'solid' objects which you are looking at from the outside. That is, the polygons of the surface of the object completely enclose the object 23

Normals Every planar polygon has a surface normal, that is, a vector that is normal to the surface of the polygon. Actually, every planar polygon has two normals. Given that the polygon is part of a 'solid' object, we are interested in the normal that points OUT, rather than the normal that points in 24

Back Face OpenGL specifies that all polygons be drawn such that the vertices are given in counterclockwise order as you look at the visible side of the polygon, in order to generate the 'correct' normal. Any polygon whose normal points away from the viewer is a 'back-facing' polygon and does not need to be further investigated 25

Computing To find back-facing polygons, the dot product of the surface normal of each polygon is taken with a vector from the center of projection to any point on the polygon. The sign of the dot product then tells which direction the polygon is facing: greater than 0 : back facing; equal to 0 : polygon viewed on edge; less than 0 : front facing 26

Dot Product a.b = |a| |b| cos(theta); a.b = 0 for orthogonal vectors; a.b = ax*bx + ay*by + az*bz 27
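Combining the two previous slides, a minimal back-face test under that sign convention; Vec3 and the function names are illustrative assumptions, not code from the lecture.

    /* Back-face test: dot the polygon's outward normal with a vector from the center of
       projection (eye) to a point on the polygon.
       dot > 0 : back facing, dot == 0 : viewed on edge, dot < 0 : front facing. */
    typedef struct { float x, y, z; } Vec3;

    static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    int is_back_facing(Vec3 eye, Vec3 point_on_polygon, Vec3 outward_normal)
    {
        Vec3 view = { point_on_polygon.x - eye.x,   /* vector from the eye toward the polygon */
                      point_on_polygon.y - eye.y,
                      point_on_polygon.z - eye.z };
        return dot(outward_normal, view) > 0.0f;    /* positive: facing away, safe to cull */
    }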

Example 28

OpenGL OpenGL back-face culling is turned on using: glCullFace(GL_BACK); glEnable(GL_CULL_FACE); 29

Remarks Back-face culling can very quickly remove unnecessary polygons. Unfortunately, there are often times when back-face culling cannot be used: if you wish to make an open-topped box, the inside and the outside of the box both need to be visible, so either two sets of polygons must be generated, one set facing out and another facing in, or back-face culling must be turned off to draw that object 30

Depth Buffer Early on we talked about the frame buffer, which holds the color for each pixel to be displayed. This buffer could contain a variable number of bytes for each pixel depending on whether it was a grayscale, RGB, or color-indexed frame buffer. All of the elements of the frame buffer are initially set to the background color. As lines and polygons are drawn, the color is set to the color of the line or polygon at that point 31

Depth Buffer We now introduce another buffer which is the same size as the frame buffer but contains depth information instead of color information 32

Z-Buffering An image-space algorithm. All of the elements of the z-buffer are initially set to be 'very far away'. Whenever a pixel color is to be changed, the depth of this new color is compared to the current depth in the z-buffer. If this color is 'closer' than the previous color, the pixel is given the new color and the z-buffer entry for that pixel is updated as well. Otherwise, the pixel retains the old color and the z-buffer retains its old value 33

Algorithm
for each polygon
    for each pixel p in the polygon's projection {
        // z ranges from -1 to 0
        pz = polygon's normalized z-value at (x, y);
        if (pz > zbuffer[x, y]) {   // closer to the camera
            zbuffer[x, y] = pz;
            framebuffer[x, y] = colour of pixel p;
        }
    }
34
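The same test written out as a small C routine, keeping the slide's convention that z runs from -1 (far) to 0 (near), so a larger value is closer; the Color type and the flat row-major buffer layout are assumptions for illustration.

    /* Per-fragment z-buffer test under the slide's convention: larger pz means closer. */
    typedef struct { unsigned char r, g, b; } Color;

    void write_fragment(float *zbuffer, Color *framebuffer, int width,
                        int x, int y, float pz, Color c)
    {
        int idx = y * width + x;
        if (pz > zbuffer[idx]) {     /* the new fragment is closer than what is stored */
            zbuffer[idx]     = pz;   /* remember the new depth ...                     */
            framebuffer[idx] = c;    /* ... and take the new color                     */
        }                            /* otherwise keep the old depth and color         */
    }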

Remarks This is very nice since the order of drawing polygons does not matter: the algorithm will always display the color of the closest point. The biggest problem with the z-buffer is its finite precision. It is important to set the near and far clipping planes as close together as possible to increase the resolution of the z-buffer within that range. Otherwise, even though one polygon may mathematically be 'in front' of another, that difference may disappear due to roundoff error 35

OpenGL The OpenGL z-buffer and frame buffer are cleared using: glClear(GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT); OpenGL z-buffering is turned on using: glEnable(GL_DEPTH_TEST); Also glDepthFunc(GL_LESS) and glDepthRange(0, 1) 36
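In context, these calls typically sit in a legacy OpenGL/GLUT program as sketched below: request a depth buffer when the window is created, enable the test once, and clear both buffers every frame. The surrounding program structure is an assumption for illustration, not from the lecture.

    /* Typical placement of the depth-buffer calls in a legacy OpenGL/GLUT program (sketch). */
    #include <GL/glut.h>

    static void display(void)
    {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);        /* clear color and depth   */
        /* ... draw the scene in any order; the z-buffer resolves visibility ... */
        glutSwapBuffers();
    }

    int main(int argc, char **argv)
    {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);  /* request a depth buffer  */
        glutCreateWindow("z-buffer demo");
        glEnable(GL_DEPTH_TEST);                                   /* turn z-buffering on     */
        glDepthFunc(GL_LESS);                                      /* smaller depth is closer */
        glutDisplayFunc(display);
        glutMainLoop();
        return 0;
    }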

Example The depth-buffer is especially useful when it is difficult to order the polygons in the scene based on their depth 37

Warnock's Algorithm Warnock's algorithm is a recursive area-subdivision algorithm. It looks at an area of the image. If it is easy to determine which polygons are visible in the area, they are drawn; else the area is subdivided into smaller parts and the algorithm recurses. Eventually an area will be represented by a single non-intersecting polygon 38

Iteration At each iteration the area of interest is subdivided into four equal areas. Each polygon is compared to each area and is put into one of four bins:
Surrounding polygons - completely contain the area
Intersecting polygons - intersect the area
Contained polygons - completely contained in the area
Disjoint polygons - completely outside the area 39

Iteration For a given area:
case 1. If all polygons are disjoint, then the background color fills the area
case 2. If there is a single contained polygon or intersecting polygon, then the background color is used to fill the area, and then the part of the polygon contained in the area is filled with the color of that polygon
case 3. If there is a single surrounding polygon and no intersecting or contained polygons, then the area is filled with the color of the surrounding polygon
case 4. If there is a surrounding polygon in front of all other surrounding, intersecting, or contained polygons, then the area is filled with the color of that front surrounding polygon
Otherwise, break the area into 4 equal parts and recurse (see the sketch below) 40
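A hedged skeleton of that recursion in C; the Scene type and the predicate/fill helpers stand in for the four cases above and are hypothetical, so only the subdivide-or-fill control flow is shown.

    /* Warnock area-subdivision skeleton. Scene and the helper functions are hypothetical
       placeholders for the classification and filling described on the slide. */
    typedef struct Scene Scene;
    typedef struct { int x, y, w, h; } Area;

    int  all_polygons_disjoint(const Scene *s, Area a);        /* case 1 test */
    int  single_simple_polygon(const Scene *s, Area a);        /* cases 2 and 3 test */
    int  front_surrounding_polygon(const Scene *s, Area a);    /* case 4 test */
    void fill_area_simple_case(const Scene *s, Area a);        /* fill for whichever case applied */
    void fill_with_closest_at_center(const Scene *s, Area a);  /* pixel-level depth comparison */

    void warnock(const Scene *scene, Area a)
    {
        if (a.w <= 0 || a.h <= 0) return;                      /* ignore degenerate areas */
        if (all_polygons_disjoint(scene, a) || single_simple_polygon(scene, a) ||
            front_surrounding_polygon(scene, a)) {
            fill_area_simple_case(scene, a);                   /* one of the four easy cases */
        } else if (a.w == 1 && a.h == 1) {
            fill_with_closest_at_center(scene, a);             /* single pixel: cannot subdivide */
        } else {
            int hw = a.w / 2, hh = a.h / 2;                    /* subdivide into four quadrants */
            warnock(scene, (Area){ a.x,      a.y,      hw,       hh       });
            warnock(scene, (Area){ a.x + hw, a.y,      a.w - hw, hh       });
            warnock(scene, (Area){ a.x,      a.y + hh, hw,       a.h - hh });
            warnock(scene, (Area){ a.x + hw, a.y + hh, a.w - hw, a.h - hh });
        }
    }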

Example Book, pages 686-688
case 1. If all polygons are disjoint
case 2. If there is a single contained polygon or intersecting polygon
case 3. If there is a single surrounding polygon and no intersecting or contained polygons
case 4. If there is a surrounding polygon in front of any other surrounding, intersecting, or contained polygons 41

Remarks Bounding boxes can help. At worst, log base 2 of max(screen width, screen height) recursive steps will be needed (e.g., at most 10 levels for a 1024x1024 image). At that point the area being looked at is only a single pixel, which can't be divided further. The distance to each polygon intersecting, contained in, or surrounding the area is then computed at the center of that pixel to determine the closest polygon and its color. This could be faster than the z-buffer, but only for a small number of polygons 42

Next Time More Visible-Surface Determination Assignment 3: Monday 20th November 43