3D Graphics for Game Programming (J. Han) Chapter II Vertex Processing


Chapter II Vertex Processing

Rendering Pipeline

Main stages in the pipeline: The vertex processing stage operates on every input vertex stored in the vertex buffer and performs operations such as transforms. The rasterization stage assembles polygons from the vertices and converts each polygon into a set of fragments. (A fragment refers to the set of data needed to update a pixel in the color buffer, the memory space storing the pixels to be displayed on the screen.) The fragment processing stage operates on each fragment and determines its color. In the output merging stage, the fragment competes or combines with the pixel in the color buffer to update the pixel's color.

Rendering Pipeline (cont'd)

Two programmable and two hard-wired (fixed) stages: The vertex and fragment processing stages are programmable; the programs for these stages are called the vertex program and the fragment program. Using the programs you can, for example, apply any transform you want to a vertex and determine a fragment's color in any way you want. The rasterization and output merging stages are fixed or hard-wired; they are not programmable but are configurable through user-defined parameters.

Spaces and Transforms for Vertex Processing

Typical operations on a vertex:
- Transform: this chapter
- Lighting: this chapter and Chapter 5
- Animation: Chapter 11

The transforms at the vertex processing stage are the world transform, the view transform, and the projection transform.

Affine Transform: Scaling

The world and view transforms are built upon affine transforms. An affine transform combines a linear transform (scaling, rotation) with a translation. The figure shows 2D scaling examples with (s_x, s_y) = (2,1), (s_x, s_y) = (1,1/2), and (s_x, s_y) = (2,2). If all of the scaling factors, s_x, s_y, and s_z, are identical, the scaling is called uniform. Otherwise, it is a non-uniform scaling.
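In matrix form (a standard result, not transcribed from the slide), scaling with factors (s_x, s_y, s_z) is

S(s_x, s_y, s_z) = \begin{pmatrix} s_x & 0 & 0 \\ 0 & s_y & 0 \\ 0 & 0 & s_z \end{pmatrix}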

Affine Transform: Rotation

2D rotation.
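The standard 2D rotation matrix the slide presents is

R(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}

which rotates a point counter-clockwise by \theta about the origin.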

Affine Transform: Rotation (cont'd)

3D rotation about the z-axis (R_z) generalizes the 2D rotation. 3D rotation about the x-axis can be obtained through cyclic permutation, i.e., the x-, y-, and z-coordinates are replaced by the y-, z-, and x-coordinates, respectively. One more cyclic permutation leads to the 3D rotation about the y-axis.
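Writing the three axis rotations out (standard results, obtained exactly by the cyclic permutation described above):

R_z(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix}, \quad
R_x(\theta) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta \\ 0 & \sin\theta & \cos\theta \end{pmatrix}, \quad
R_y(\theta) = \begin{pmatrix} \cos\theta & 0 & \sin\theta \\ 0 & 1 & 0 \\ -\sin\theta & 0 & \cos\theta \end{pmatrix}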

Affine Transform: Translation

Translation does not fall into the class of linear transforms and is represented as vector addition. We can describe translation as matrix multiplication if we use homogeneous coordinates. Given the 3D Cartesian coordinates (x, y, z) of a point, we can simply take (x, y, z, 1) as its homogeneous coordinates. The matrices for scaling and rotation are then extended into 4x4 matrices.
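Concretely (a standard construction), translation by (d_x, d_y, d_z) in homogeneous coordinates becomes the matrix multiplication

T = \begin{pmatrix} 1 & 0 & 0 & d_x \\ 0 & 1 & 0 & d_y \\ 0 & 0 & 1 & d_z \\ 0 & 0 & 0 & 1 \end{pmatrix}, \qquad T \begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix} = \begin{pmatrix} x + d_x \\ y + d_y \\ z + d_z \\ 1 \end{pmatrix}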

More on Homogeneous Coordinates

The fourth component of the homogeneous coordinates is not necessarily 1 and is denoted by w. The homogeneous coordinates (x, y, z, w) correspond to the Cartesian coordinates (x/w, y/w, z/w). For example, (1,2,3,1), (2,4,6,2), and (3,6,9,3) are different homogeneous coordinates for the same Cartesian point (1,2,3). The w-component is also used to distinguish between vectors and points: if w is 0, (x, y, z, w) represents a vector; otherwise, a point.

World Transform

A model is created in its own object space. The object space for one model typically has no relationship to that of another model. The world transform assembles all models into a single coordinate system called the world space.

World Transform (cont'd)

More on Rotation

Look at the origin of the coordinate system such that the axis of rotation points toward you. If the rotation is counter-clockwise, the rotation angle is positive; if the rotation is clockwise, it is negative.

Euler Transform

When we successively rotate an object about the x-, y-, and z-axes, the object acquires a specific orientation. The rotation angles (θ_x, θ_y, θ_z) are called the Euler angles. When the three rotations are combined into one, it is called the Euler transform.
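With the vector-on-the-right convention this lecture uses, rotating successively about the x-, y-, and then z-axis composes as

R = R_z(\theta_z)\, R_y(\theta_y)\, R_x(\theta_x)

so that R_x is applied first and R_z last.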

Normal Transform
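In brief, the standard result these slides build up to: when a vertex is transformed by the world matrix, the triangle normal cannot, in general, be transformed by the same matrix, because a non-uniform scaling destroys the perpendicularity between the normal and the surface. The normal must instead be transformed by the inverse transpose,

n' = (M^{-1})^T n

where M denotes the upper-left 3x3 part of the world matrix. For a pure rotation R, (R^{-1})^T = R, so rotations transform normals directly.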

View Transform

The world transform has been done, and now all objects are in the world space. The next step is camera pose specification:
- EYE: the camera position.
- AT: a reference point toward which the camera is aimed.
- UP: the view-up vector, which roughly describes where the top of the camera is pointing. (In most cases, UP is set to the y-axis of the world space.)
Then the camera space, {EYE, u, v, n}, can be created.

View Transform (cont'd)

A point is given different coordinates in distinct spaces. If all the world-space objects can be newly defined in terms of the camera space, in the manner of the teapot's mouth in the figure, it becomes much easier to develop the rendering algorithms. In general, this is called a space change. The view transform converts each vertex from the world space to the camera space.

Dot Product

Given vectors a and b whose coordinates are (a_1, a_2, ..., a_n) and (b_1, b_2, ..., b_n), respectively, the dot product a·b is defined to be a_1b_1 + a_2b_2 + ... + a_nb_n. In Euclidean geometry, a·b = ǁaǁǁbǁcosθ, where ǁaǁ and ǁbǁ denote the lengths of a and b, respectively, and θ is the angle between a and b. If a and b are perpendicular to each other, a·b = 0; if θ is an acute angle, a·b > 0; if θ is an obtuse angle, a·b < 0. For example, (1,0)·(0,1) = 0, (1,0)·(1,1) = 1, and (1,0)·(-1,1) = -1. If a is a unit vector, a·a = 1.

Orthonormal Basis

An orthonormal basis is an orthogonal set of unit vectors. The figure shows three example bases: orthonormal and standard, non-orthonormal and non-standard, and orthonormal and non-standard. The camera space has an orthonormal basis {u, v, n}. Note that u·u = v·v = n·n = 1 and u·v = v·n = n·u = 0.

2D Analogy for View Transform

The coordinates of p are (1,1) in the world space but (-√2, 0) in the camera space. Let's superimpose the camera space onto the world space while imagining invisible rods connecting p and the camera space, such that the transform is applied to p: first a translation by (-2,-2), then a rotation by -45°. As the camera space becomes identical to the world space, the world-space coordinates of p'' can be taken as the camera-space coordinates.

2D Analogy for View Transform (cont'd)

Let's see if the combination of T and R correctly transforms p. How do we compute R? It is obtained using the camera-space basis vectors, as shown below.
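In the example, the camera frame is rotated 45° from the world frame, so its basis vectors are u = (√2/2, √2/2) and v = (-√2/2, √2/2), and R has them as its rows:

R = \begin{pmatrix} u_x & u_y \\ v_x & v_y \end{pmatrix} = \begin{pmatrix} \sqrt{2}/2 & \sqrt{2}/2 \\ -\sqrt{2}/2 & \sqrt{2}/2 \end{pmatrix}

Check: T first moves p = (1,1) to (-1,-1), and then R(-1,-1)^T = (-\sqrt{2}, 0), which is exactly the camera-space coordinates of p.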

2D Analogy for View Transform (cont'd)

It is straightforward to show that R transforms u into e_1 and v into e_2. R converts coordinates defined in terms of the basis {e_1, e_2}, e.g., (-1,-1), into those defined in terms of the basis {u, v}, e.g., (-√2, 0). In other words, R performs the basis change from {e_1, e_2} to {u, v}. The problem of space change is thus decomposed into a translation and a basis change.

View Transform (cont'd)

Let's do the same thing in 3D. First of all, EYE is translated to the origin of the world space. Imagine invisible rods connecting the scene objects and the camera space; the translation is applied to the scene objects as well.

View Transform (cont'd)

The world space and the camera space now share the origin, thanks to the translation. We then need a rotation R that transforms u, v, and n into e_1, e_2, and e_3, respectively, i.e., Ru = e_1, Rv = e_2, and Rn = e_3. R performs the basis change from {e_1, e_2, e_3} to {u, v, n}, as shown below.
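Combining the two steps (the rows of R are the camera basis vectors), the view matrix is

M_{view} = R\,T = \begin{pmatrix} u_x & u_y & u_z & -EYE \cdot u \\ v_x & v_y & v_z & -EYE \cdot v \\ n_x & n_y & n_z & -EYE \cdot n \\ 0 & 0 & 0 & 1 \end{pmatrix}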

View Transform (cont'd)

The view matrix combines this translation and rotation. OpenGL builds it with:

void gluLookAt(
    GLdouble Eye_x, GLdouble Eye_y, GLdouble Eye_z,
    GLdouble At_x, GLdouble At_y, GLdouble At_z,
    GLdouble Up_x, GLdouble Up_y, GLdouble Up_z
);
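For concreteness, here is a minimal C sketch of how such a view matrix can be assembled from EYE, AT, and UP. The vec3 helpers and the row-major m[4][4] layout are conveniences of this sketch, not part of OpenGL itself:

#include <math.h>

typedef struct { double x, y, z; } vec3;

static vec3 sub(vec3 a, vec3 b) { vec3 r = { a.x-b.x, a.y-b.y, a.z-b.z }; return r; }
static double dot(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static vec3 cross(vec3 a, vec3 b) {
    vec3 r = { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
    return r;
}
static vec3 normalize(vec3 a) {
    double len = sqrt(dot(a, a));
    vec3 r = { a.x/len, a.y/len, a.z/len };
    return r;
}

/* m is a 4x4 matrix stored row by row: m[row][col]. */
void build_view_matrix(vec3 eye, vec3 at, vec3 up, double m[4][4])
{
    vec3 n = normalize(sub(eye, at));   /* n points from AT toward EYE    */
    vec3 u = normalize(cross(up, n));   /* u = UP x n, the camera's right */
    vec3 v = cross(n, u);               /* v completes {u, v, n}          */

    /* Rotation rows are the basis vectors (basis change); the last
       column is the translation -EYE expressed in the camera basis. */
    m[0][0]=u.x; m[0][1]=u.y; m[0][2]=u.z; m[0][3]=-dot(eye, u);
    m[1][0]=v.x; m[1][1]=v.y; m[1][2]=v.z; m[1][3]=-dot(eye, v);
    m[2][0]=n.x; m[2][1]=n.y; m[2][2]=n.z; m[2][3]=-dot(eye, n);
    m[3][0]=0;   m[3][1]=0;   m[3][2]=0;   m[3][3]=1;
}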

View Transform (cont'd)

D3DX (Direct3D Extension) is a library of tools designed to provide additional graphics functionality; it contains many features that would be a chore to implement directly with D3D. D3DXMatrixLookAtRH builds a right-handed viewing transform:

D3DXMATRIX *WINAPI D3DXMatrixLookAtRH(
    D3DXMATRIX *pOut,
    CONST D3DXVECTOR3 *pEye,
    CONST D3DXVECTOR3 *pAt,
    CONST D3DXVECTOR3 *pUp
);

Note, however, that the returned matrix is the transpose of the 4x4 matrix we derived, because Direct3D follows the vector-on-the-left convention whereas this lecture follows the vector-on-the-right convention.

Per-vertex Lighting

Light emitted from a light source is reflected by the object surface and then reaches the camera. The figure describes what kinds of parameters are needed for computing lighting at a vertex p. Computing lighting at each vertex this way is called per-vertex lighting and is done by the vertex program. Per-vertex lighting is old-fashioned; more popular is per-fragment lighting, which is performed by the fragment program and produces a better result. For now, understand that a vertex color can be computed at the vertex processing stage, and wait until Chapter 5 for more discussion of lighting.

View Frustum

Let us denote the basis of the camera space by {x, y, z} instead of {u, v, n}. Recall that, for constructing the view transform, we defined the external parameters of the camera: EYE, AT, and UP. Now let us control the camera's internals; this is analogous to choosing a lens for the camera and controlling zoom-in/zoom-out. The view frustum parameters, fovy, aspect, n, and f, define a truncated pyramid. The near and far planes run counter to the real-world camera or human vision system, but have been introduced for the sake of computational efficiency.

View Frustum (cont'd)

View-frustum culling: a large enough box or sphere bounding a polygon mesh is computed at the preprocessing step, and then at run time a CPU program tests whether the bounding volume is outside the view frustum. If it is, the polygon mesh is discarded and does not enter the rendering pipeline. This can save a fair amount of GPU computing cost with a little CPU overhead; a sketch of the sphere test follows below. In the figure's example, the cylinder and sphere would be discarded by view-frustum culling whereas the teapot would survive. If a polygon intersects the boundary of the view frustum, it is clipped with respect to the boundary, and only the portion inside the view frustum is processed for display.
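A minimal C sketch of bounding-sphere culling, assuming the six frustum planes are given as (a, b, c, d) with ax + by + cz + d = 0 and normals pointing toward the inside of the frustum (the plane_t type and this setup are assumptions of the sketch):

typedef struct { double a, b, c, d; } plane_t;

/* Returns 1 if the sphere is entirely outside the frustum (cullable). */
int sphere_outside_frustum(const plane_t planes[6],
                           double cx, double cy, double cz, double radius)
{
    for (int i = 0; i < 6; ++i) {
        /* Signed distance from the sphere center to the plane. */
        double dist = planes[i].a*cx + planes[i].b*cy + planes[i].c*cz + planes[i].d;
        if (dist < -radius)  /* completely behind one plane: outside */
            return 1;
    }
    return 0;  /* inside or intersecting: keep the mesh */
}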

Projection Transform

It is not easy to clip polygons directly against the view frustum. If there is a transform that converts the view frustum into an axis-aligned box, and the transform is applied to the polygons of the scene, clipping the transformed polygons against the box is much easier. This transform is called the projection transform.

Projection Transform (cont'd)

Consider a pinhole camera, the simplest imaging device, with an infinitesimally small aperture. The convergent pencil of projection lines focuses on the aperture, and the film corresponds to the projection plane.

Projection Transform (cont'd)

The view frustum can be taken as a convergent pencil of projection lines. The lines converge on the origin, where the camera (EYE) is located; the origin is often called the center of projection (COP). All 3D points on a projection line are mapped onto a single 2D point in the projected image. This brings about the effect of perspective projection, where objects farther away look smaller. The projection transform makes the projection lines parallel, i.e., we obtain a universal projection line, and viewing is then done along it; this is orthographic projection. In short, the projection transform brings the effect of perspective projection into a 3D space.

Projection Transform (cont'd)

Projection transform matrix: The projection-transformed objects will enter the rasterization stage. Unlike the vertex processing stage, the rasterization stage is implemented in hardware and assumes that the clip space is left-handed. In order for the vertex processing stage to be compatible with the hard-wired rasterization stage, the objects should be z-negated.

Projection Transform (cont'd)

The projection transform maps the camera space (RHS) to the clip space (LHS). D3DXMatrixPerspectiveFovRH builds the projection transform matrix:

D3DXMATRIX *WINAPI D3DXMatrixPerspectiveFovRH(
    D3DXMATRIX *pOut,
    FLOAT fovy,   // in radians
    FLOAT Aspect, // width divided by height
    FLOAT zn,
    FLOAT zf
);

Projection Transform (cont'd)

The OpenGL function for the projection matrix is:

void gluPerspective(
    GLdouble fovy,
    GLdouble aspect,
    GLdouble n,
    GLdouble f
);

In OpenGL, the clip-space cuboid has different dimensions (its z-range is [-1,1]), and consequently the projection transform is different. See the book. A sketch of the matrix gluPerspective produces follows below.
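A minimal C sketch of a gluPerspective-equivalent matrix in the OpenGL convention (clip-space z in [-1,1], vector-on-the-right). Unlike gluPerspective, fovy here is in radians, and the row-major m[4][4] layout is a convenience of the sketch:

#include <math.h>

void build_perspective_matrix(double fovy, double aspect,
                              double n, double f, double m[4][4])
{
    double cot = 1.0 / tan(fovy / 2.0);

    /* Start from the zero matrix. */
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
            m[r][c] = 0.0;

    m[0][0] = cot / aspect;       /* x is scaled by cot(fovy/2)/aspect */
    m[1][1] = cot;                /* y is scaled by cot(fovy/2)        */
    m[2][2] = (f + n) / (n - f);  /* maps z = -n to -1 and z = -f to 1 */
    m[2][3] = 2.0 * f * n / (n - f);
    m[3][2] = -1.0;               /* w' = -z for the perspective division */
}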

Deriving Projection Transform

Based on the fact that the projection-transformed y-coordinate (y') is in the range [-1,1], we can compute the general representation of y', as reconstructed below. If fovx were given, we could compute x' in the same way.
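Reconstructing the step: a point at depth z (with z < 0 in the camera space) lies on a frustum cross-section whose half-height is -z·tan(fovy/2), so normalizing y by this half-height maps the frustum's y-extent to [-1,1]:

y' = \frac{y}{-z \tan(fovy/2)} = \cot\!\left(\frac{fovy}{2}\right) \cdot \frac{y}{-z}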

Deriving Projection Transform (cont'd)

Unfortunately, fovx is not given, so let's define x' in terms of fovy and aspect.
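Since aspect is the frustum's width divided by its height, the half-width of the cross-section at depth z is -z·tan(fovy/2)·aspect, and therefore

x' = \frac{\cot(fovy/2)}{aspect} \cdot \frac{x}{-z}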

Deriving Projection Transform (cont'd)

We have found x' and y'. Switching to the homogeneous-coordinates representation, the common division by -z is deferred to the w-component, which yields a projection matrix of the form

M = \begin{pmatrix} \frac{\cot(fovy/2)}{aspect} & 0 & 0 & 0 \\ 0 & \cot(fovy/2) & 0 & 0 \\ 0 & 0 & m_{33} & m_{34} \\ 0 & 0 & -1 & 0 \end{pmatrix}

Note that z' is independent of x and y, and therefore m_{31} and m_{32} are just 0s.

Deriving Projection Transform (cont d) Let s apply the projection matrix. In projection transform, observe that f and -n are transformed to -1 and 0, respectively. Using the fact, the projection matrix can be completed. 2-39