Hidden-Surface Removal.


Here we need to discover whether an object is visible or whether another one obscures it. There are two fundamental approaches to removing the hidden surfaces:

1) The object-space approach
2) The image-space approach

The object-space approach works with the objects themselves. It takes one object (polygon) and compares it pairwise with all the other polygons. When we are done, we know the part of this polygon that is visible, we draw it, and then we proceed to the next polygon, doing the same. When we are left with two remaining polygons, we compare them and draw the visible parts. In the image-space approach, for every pixel we consider the intersections of the ray emanating from that pixel with the planes of all the objects. We take the intersection closest to the projection plane, and we draw that pixel according to the characteristics of the polygon whose intersection we are drawing.

We are going to study the following algorithms:

1) Back face removal
2) Z-buffer
3) Painter's
4) Scan line
5) Warnock's area subdivision

Back face removal algorithm.

As usual, we start working in 2D, and the generalization to 3D is quite simple. Note that this algorithm only works for convex polygons in the 2D case and for convex polyhedra in the 3D case. Besides, we work with the orthographic projection, because we can apply the algorithm after the prewarping has been done (if we change the direction of projection, the algorithm will still work). In this algorithm we determine which faces (edges, in the 2D case) are in front and which are in back. We can make this determination by looking at each edge independently.

Back face removal algorithm:

I For each edge of the polygon, compute a normal vector pointing out (see figure).

II For each edge, compute the angle θ between the normal vector computed in step I and the direction of projection, using the dot product of these two vectors. Since we are considering a parallel projection, we have a single projection vector. We have

(x1, y1) · (x2, y2) = x1 x2 + y1 y2 = |(x1, y1)| |(x2, y2)| cos θ   (see figure).

Then the rule is:

If cos θ > 0, the edge is visible.
If cos θ < 0, the edge is invisible.
If cos θ = 0, we don't care.

If the direction of projection is not orthogonal to the projection line, the same rule holds.

Example. Case A: Consider a triangle whose vertices are listed in counterclockwise order, and let the direction of projection be P = (0, -1, 0) in homogeneous coordinates (see figure). We need to find the outward normal vectors N1, N2 and N3, one for each edge. For N1 we take the line through the two endpoints of the first edge; its slope gives the direction of the edge, and from it we get the direction of the outward normal, which determines N1. Similar computations give N2 and N3. Therefore:

N1 · P > 0. Edge visible.

N2 · P < 0. Edge invisible.
N3 · P < 0. Edge invisible.

Case B: Suppose we change the direction of projection to an oblique direction Q. We use prewarping: we distort the triangle through the prewarping transformation and obtain the new coordinates of its vertices. We then do the orthographic projection, find the normal vectors of the distorted triangle as before, and compute their dot products with the projection direction; each edge is classified as visible or invisible exactly as in case A.

Remark: We can improve the process of finding the normal to each edge. The vertices are listed in counterclockwise order. Consider two consecutive vertices P1 and P2, with coordinates (x1, y1) and (x2, y2). We can form the edge vector either as (x2 - x1, y2 - y1, 0) or as (x1 - x2, y1 - y2, 0), and take the cross product of each with the vector (0, 0, 1). By the right-hand rule, the vector that yields the outward normal N is the one going from (x1, y1) to (x2, y2) (see figure).
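As a concrete illustration of steps I and II, here is a minimal sketch in Python (not part of the original notes; the function names and the example triangle are my own, and vertices are assumed to be listed counterclockwise):

def outward_normal(p1, p2):
    # Edge from p1 to p2, with the polygon vertices listed
    # counterclockwise: (dx, dy, 0) x (0, 0, 1) = (dy, -dx, 0).
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    return (dy, -dx)

def visible_edges(vertices, proj_dir):
    # Back-face test for a convex 2D polygon.
    # vertices: list of (x, y) in counterclockwise order.
    # proj_dir: direction of (parallel) projection, e.g. (0, -1).
    # Returns the edges whose outward normal satisfies N . P > 0.
    visible = []
    n = len(vertices)
    for i in range(n):
        p1, p2 = vertices[i], vertices[(i + 1) % n]
        nx, ny = outward_normal(p1, p2)
        if nx * proj_dir[0] + ny * proj_dir[1] > 0:   # cos(theta) > 0
            visible.append((p1, p2))
    return visible

# Example: for this triangle and projection direction (0, -1),
# only the bottom edge is reported as visible.
print(visible_edges([(1, 1), (5, 1), (3, 4)], (0, -1)))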

Z-buffer algorithm.

Initialize both buffers:
Distance (depth) buffer to BIGVALUE (a large negative constant, i.e., as far away as possible).
Color buffer to the background color.

For each polygon we specify:
a) Vertices
b) Edge color
c) Fill-in color

We keep the pseudo-distance information of every projected point, since it is given by the prewarping transformation. We go over each polygon at the pixel level (image space). We find the pseudo-depth of the point of the polygon that is going to be projected into a particular pixel, and we compare it with the value stored in the z-buffer. If the new point is closer to the projection plane than the one recorded (its pseudo-depth is greater), we update the z-buffer and replace the color in the color buffer with the new color given by the polygon. If it is farther away, we have already processed a closer polygon, and we skip the point. If the depths are equal, a decision should be made following a predetermined criterion. When all the polygons have been scanned, draw the screen using the information in the color buffer.

Let's work with one polygon, because the process is the same for all of them. We use an incremental approach, going from pixel to pixel.

1) Find the equation of the plane in which the polygon lies (if we have not already found it). We work with the prewarped polygon and take the first three non-collinear vertices, with coordinates (x1, y1, z1), (x2, y2, z2) and (x3, y3, z3), to compute the prewarped normal

N = (x2 - x1, y2 - y1, z2 - z1) × (x3 - x1, y3 - y1, z3 - z1)
  = ((y2 - y1)(z3 - z1) - (z2 - z1)(y3 - y1),
     (z2 - z1)(x3 - x1) - (x2 - x1)(z3 - z1),
     (x2 - x1)(y3 - y1) - (y2 - y1)(x3 - x1)).

Taking any point in the plane, with coordinates (x, y, z), we determine the equation of the plane by writing (x - x1, y - y1, z - z1) · N = 0. From this we obtain the equation of the plane A x + B y + C z + D = 0, where

A = y1 (z2 - z3) + y2 (z3 - z1) + y3 (z1 - z2)
B = z1 (x2 - x3) + z2 (x3 - x1) + z3 (x1 - x2)
C = x1 (y2 - y3) + x2 (y3 - y1) + x3 (y1 - y2)
D = -x1 (y2 z3 - y3 z2) - x2 (y3 z1 - y1 z3) - x3 (y1 z2 - y2 z1)

These coefficients are the same for the whole polygon.

2) Using a polygon fill algorithm, we can cover all the pixels inside the polygon. Now, for every pixel on the screen with coordinates (x, y), we must find the corresponding world coordinates (xw, yw, zw) of the prewarped point that goes into this pixel.

This is done using the inverse of the window-to-NDC-to-screen transformation first, and then the equation of the plane. The window-to-NDC transformation maps the prewarped world coordinates (xw, yw) linearly into the viewport, whose boundaries in NDC are given by the values V1, V2, V3, V4, and the NDC-to-screen transformation is

x = round(MaxX · xNDC)
y = round(-MaxY · yNDC + MaxY).

Solving the window-to-NDC relations for xw and yw expresses them as linear functions of xNDC and yNDC (and of the viewport boundaries). We then find the values of xNDC and yNDC by removing the round function from the formulas relating xNDC and yNDC to x and y:

xNDC = x / MaxX
yNDC = (MaxY - y) / MaxY.

Substituting these into the inverted window-to-NDC relations gives xw and yw directly in terms of the pixel coordinates x and y. Let us assume, for the sake of simplicity, that the viewport covers the whole screen. Then we can write:

xw = 2x / MaxX - 1
yw = 2(MaxY - y) / MaxY - 1

Let's call q the value of 2/MaxX, i.e. q = 2/MaxX.

Now, since we know the point (xw, yw), the z coordinate of this point is the pseudo-depth, which can be found from the equation of the plane A x + B y + C z + D = 0 that we found before. Therefore

zw(xw, yw) = -(A xw + B yw + D) / C

is the depth of that polygon for the pixel under consideration.

As we move to the next pixel on the screen, x is incremented by 1 and y remains the same, since we are dealing with a single scan line. But an increment of 1 in x on the screen corresponds to an increment of q in xw. Now consider two points P and Q on the polygon, and consequently on the plane, and let (xp, yp, zp) and (xq, yq, zq) be their coordinates. We define

Δx = xp - xq,  Δy = yp - yq,  Δz = zp - zq.

Then the equation of the plane can be written in differential form as A Δx + B Δy + C Δz = 0. Since the scan line is a line with a constant value of y (the scan value), when we increment x by one pixel, yw does not change and xw changes by q. Therefore, when moving along the scan line from one pixel to the next, the pseudo-depth changes according to

A Δxw + B Δyw + C Δzw = 0.

But Δyw = 0 and Δxw = q, so Δzw = -qA/C, which is a constant: the depth changes by a constant amount when stepping from one pixel to the next in the same scan line. Besides, this constant is the same for the entire polygon. Therefore we only need to compute the depth corresponding to the starting pixel of each scan line, and then add this fixed amount to obtain the depths of the rest of the pixels on that scan line.
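A minimal sketch of this incremental depth evaluation in Python (not from the original notes; the helper names, the whole-screen viewport and the given span limits xstart, xend are my own assumptions):

def plane_coefficients(p1, p2, p3):
    # A, B, C, D of the plane through three non-collinear points.
    (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = p1, p2, p3
    A = y1*(z2 - z3) + y2*(z3 - z1) + y3*(z1 - z2)
    B = z1*(x2 - x3) + z2*(x3 - x1) + z3*(x1 - x2)
    C = x1*(y2 - y3) + x2*(y3 - y1) + x3*(y1 - y2)
    D = -x1*(y2*z3 - y3*z2) - x2*(y3*z1 - y1*z3) - x3*(y1*z2 - y2*z1)
    return A, B, C, D

def shade_scanline(y, xstart, xend, plane, color, zbuf, cbuf, MaxX, MaxY):
    # Walk one scan line of a polygon, updating the depth and color
    # buffers. Assumes the viewport covers the whole screen, so the
    # prewarped window [-1,1]x[-1,1] maps onto MaxX x MaxY pixels,
    # and that C != 0 (the polygon is not seen edge-on).
    A, B, C, D = plane
    q = 2.0 / MaxX                     # change of xw per pixel step
    xw = 2.0 * xstart / MaxX - 1.0     # world coords of the first pixel
    yw = 2.0 * (MaxY - y) / MaxY - 1.0
    zw = -(A * xw + B * yw + D) / C    # depth at the starting pixel
    dz = -q * A / C                    # constant depth increment
    for x in range(xstart, xend + 1):
        if zw > zbuf[y][x]:            # greater pseudo-depth = closer
            zbuf[y][x] = zw
            cbuf[y][x] = color
        zw += dz

# Example: one scan line of a square polygon lying in the plane z = -0.5.
MaxX = MaxY = 8
zbuf = [[-1e9] * MaxX for _ in range(MaxY)]   # far away (BIGVALUE)
cbuf = [["bg"] * MaxX for _ in range(MaxY)]   # background color
plane = plane_coefficients((-1, -1, -0.5), (1, -1, -0.5), (1, 1, -0.5))
shade_scanline(4, 2, 5, plane, "red", zbuf, cbuf, MaxX, MaxY)
print(cbuf[4])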

Example: Suppose we are given the resolution of the monitor, a viewport that covers the whole screen, a real-world window with near and far planes N and F, a center of projection, and a real-world triangle, and we want to find the pseudo-depth corresponding to a given pixel when the triangle is projected onto the xy-plane. We first build the prewarping matrix for the given center of projection and transform the three vertices of the triangle, obtaining the prewarped triangle with vertices A, B and C. In order to find the plane where this triangle lives, we compute its normal, (A - B) × (C - B), and write the equation of the plane in standard form. We must also prewarp the real-world window, to check whether the point is outside the far or near plane. For this we consider a point on the far plane and another on the near plane (because points on the same z-plane go to the same new z-plane of pseudo-depth).

Prewarping these points gives the distorted near and far values between which the pseudo-depth must lie. Then, to compute the world coordinates corresponding to the given screen coordinates (x, y), we apply the formulas above to obtain xw and yw, evaluate the plane equation to obtain zw, and finally check that this point lies inside the distorted world window.

Painter's algorithm.

Here we don't work pixel by pixel as before; instead we work with the set of polygons. We first paint the farthest polygon and then remove it from the list; we then work with the next farthest one. When we are done with all of them, the drawing is finished. In order to find the farthest polygon, we first need to sort the polygons.

Sorting:

1) For each polygon determine the deepest point (the most negative value of the pseudo-depth z). Sort the polygons by the order of their deepest points (see figure).

Let's call S1 the surface with the deepest point. S1 is a candidate to be drawn first.

2) Compare S1 to the other surfaces in the list to determine whether there are any overlaps in depth. Use the lowest and greatest values of z to make this comparison. If no depth overlap occurs, the surface S1 is drawn and the process is repeated for the next surface in the list. As long as no overlap occurs, we keep going until we finish with all the surfaces on the list. Note that every time a surface is drawn, it is eliminated from the list. If an overlap occurs, we need to make additional comparisons to determine whether or not a reordering is necessary. If any of the following tests is true, no reordering is needed. We list the tests in order of increasing difficulty:

A) No x-overlap.
B) No y-overlap.

See the figure (drawn in 2D) illustrating these two cases.

C) Surface S1 is on the outside of the overlapping surface S2, relative to the view plane; that is, S1 is in back of the plane generated by S2 (see figure). In order to determine this condition, we find the equation of the plane containing S2: A x + B y + C z + D = 0. Using one vertex P = (p1, p2, p3) on S2 and the normal N' to the surface, pointing towards the view plane, we can compute the equation of the plane (see figure).

The normal was computed using the back face algorithm; if it were pointing away from the view plane, that face would already have been removed by the back face algorithm. Equation of the plane containing S2: (X - P) · N' = 0, i.e.

(x - p1) N'x + (y - p2) N'y + (z - p3) N'z = 0, that is, N'x x + N'y y + N'z z + D = 0.

Consider the function f(x, y, z) = N'x x + N'y y + N'z z + D. For any point (x0, y0, z0) in the plane generated by S2 we have f(x0, y0, z0) = 0; in particular, N'x p1 + N'y p2 + N'z p3 + D = 0. Now take a vertex Q = (q1, q2, q3) of S1. If (Q - P) · N' < 0, in other words (q1 - p1) N'x + (q2 - p2) N'y + (q3 - p3) N'z < 0, then N'x q1 + N'y q2 + N'z q3 + D < 0. If this last condition is true for every vertex of the polygon S1, this polygon is on the outside of the surface S2.

D) The overlapping surface S2 is on the inside of surface S1, relative to the view plane; that is, S2 is in front of the plane generated by S1 (see figure).

Using a technique similar to the one in case C), S2 is on the inside of surface S1 if for every vertex Q = (q1, q2, q3) of S2 the following condition holds:

Nx q1 + Ny q2 + Nz q3 + D > 0,

where Nx x + Ny y + Nz z + D = 0 is the equation of the plane generated by S1.

E) The projections of the two surfaces onto the view plane do not overlap. Although the surfaces have overlapping x and y extents, the actual projections of the two surfaces may still fail to overlap (see figure).

If all of these tests fail, we interchange the order of S1 and S2. See the next figure for an example of this case: although S1 has the deepest point, it obscures S2.
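Tests C) and D) both reduce to evaluating a plane equation at the vertices of a polygon and checking the sign. A small sketch in Python (mine, not from the notes; it assumes each polygon is a list of (x, y, z) vertices ordered so that the computed normal points towards the view plane, as the back-face step guarantees):

def plane_from_polygon(vertices):
    # Plane A x + B y + C z + D = 0 through the polygon, from its
    # first three vertices (assumed non-collinear); same formulas as
    # in the z-buffer algorithm above.
    (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = vertices[:3]
    A = y1*(z2 - z3) + y2*(z3 - z1) + y3*(z1 - z2)
    B = z1*(x2 - x3) + z2*(x3 - x1) + z3*(x1 - x2)
    C = x1*(y2 - y3) + x2*(y3 - y1) + x3*(y1 - y2)
    D = -x1*(y2*z3 - y3*z2) - x2*(y3*z1 - y1*z3) - x3*(y1*z2 - y2*z1)
    return A, B, C, D

def behind_plane(polygon, plane):
    # Test C: every vertex of `polygon` gives a negative value when
    # substituted into `plane` (the plane of the other surface).
    A, B, C, D = plane
    return all(A*x + B*y + C*z + D < 0 for (x, y, z) in polygon)

def in_front_of_plane(polygon, plane):
    # Test D: every vertex gives a positive value.
    A, B, C, D = plane
    return all(A*x + B*y + C*z + D > 0 for (x, y, z) in polygon)

# No reordering of S1 (deepest) and S2 (overlapping) is needed if
# behind_plane(S1, plane_from_polygon(S2)) or
# in_front_of_plane(S2, plane_from_polygon(S1)) is true.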

After the interchange, S2 is at the top of the list, and we should start comparing it with all the elements in the list. Note that it is possible for this algorithm to go into an infinite loop if two or more surfaces alternately obscure each other; see the next figure for an example of this situation (you may check that all the previous tests fail). To avoid such loops, we flag any surface that has been reordered. If an attempt is made to switch that surface a second time, we divide it into two parts, using the intersection line of the two planes under consideration. The original surface is replaced by these two new surfaces, and the process is started again with this new, augmented list.

Warnock's area subdivision.

This is an image-space method. It tries to determine those view areas that represent part of a single visible surface.

The method is applied by successively dividing the total viewing area into smaller and smaller rectangles, until each small area is the projection of part of a single visible surface or of no surface at all. The idea is to check whether an area is covered by a single surface or is too complex to be analyzed. To do this, when studying a particular area we classify the surfaces into four classes:

Surrounding surface: one that completely contains the area.
Intersecting surface: one that intersects the area.
Inside surface: one that is completely inside the area.
Outside surface: one that is completely outside the area.

(See figure.) Consider the surfaces that project into the area under study. Using these categories, there is no need to subdivide the area if one of the following conditions is true:

i) All surfaces are outside surfaces. Use the background color for this area.
ii) Only one intersecting or one inside surface is in the area. Use the background color for the area, and then fill in the part covered by the intersecting or inside surface.
iii) There is only one surrounding surface. Use the color of the surrounding surface.
iv) A surrounding surface obscures all other surfaces within the area boundaries (compare with the previous algorithm). Fill the area with the color of the surrounding surface. Check the depths of the planes of all the surfaces at the four corners of the area to determine whether one obscures all the others.

If none of these cases resolves the problem, the area is subdivided into four equal areas, and the surfaces of each subarea are examined as before. The process stops when no further subdivision is possible; then the closest surface visible from that pixel determines the color of the pixel.
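A recursive sketch of this subdivision in Python (my own illustration, not from the notes). To keep it self-contained, each projected surface is simplified to an axis-aligned rectangle with a constant pseudo-depth (greater = closer); real polygons would require proper clipping and per-corner depth tests:

from collections import namedtuple

Surface = namedtuple("Surface", "x y w h depth color")
Area = namedtuple("Area", "x y w h")

def overlaps(s, a):
    # Surface is not an outside surface for this area.
    return not (s.x >= a.x + a.w or s.x + s.w <= a.x or
                s.y >= a.y + a.h or s.y + s.h <= a.y)

def surrounds(s, a):
    # Surrounding surface: completely contains the area.
    return (s.x <= a.x and s.y <= a.y and
            s.x + s.w >= a.x + a.w and s.y + s.h >= a.y + a.h)

def paint(a, color, image):
    for yy in range(a.y, a.y + a.h):
        for xx in range(a.x, a.x + a.w):
            image[yy][xx] = color

def warnock(area, surfaces, image, background):
    relevant = [s for s in surfaces if overlaps(s, area)]
    if not relevant:                                   # case i)
        paint(area, background, image)
        return
    if len(relevant) == 1:                             # cases ii) and iii)
        s = relevant[0]
        paint(area, background, image)
        paint(Area(max(s.x, area.x), max(s.y, area.y),
                   min(s.x + s.w, area.x + area.w) - max(s.x, area.x),
                   min(s.y + s.h, area.y + area.h) - max(s.y, area.y)),
              s.color, image)
        return
    surrounding = [s for s in relevant if surrounds(s, area)]
    if surrounding:                                    # case iv)
        front = max(surrounding, key=lambda s: s.depth)
        if all(front.depth >= s.depth for s in relevant):
            paint(area, front.color, image)
            return
    if area.w == 1 and area.h == 1:                    # pixel-sized area
        paint(area, max(relevant, key=lambda s: s.depth).color, image)
        return
    hw, hh = (area.w + 1) // 2, (area.h + 1) // 2      # subdivide in four
    for sub in (Area(area.x, area.y, hw, hh),
                Area(area.x + hw, area.y, area.w - hw, hh),
                Area(area.x, area.y + hh, hw, area.h - hh),
                Area(area.x + hw, area.y + hh, area.w - hw, area.h - hh)):
        if sub.w > 0 and sub.h > 0:
            warnock(sub, relevant, image, background)

# Example: two overlapping rectangles on an 8x8 screen; "B" is closer.
W = H = 8
img = [["." for _ in range(W)] for _ in range(H)]
scene = [Surface(1, 1, 5, 5, -2.0, "A"), Surface(3, 3, 4, 4, -1.0, "B")]
warnock(Area(0, 0, W, H), scene, img, ".")
print("\n".join("".join(row) for row in img))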