ECS 178 Course Notes

THE VIEWING TRANSFORMATION

Kenneth I. Joy
Institute for Data Analysis and Visualization
Department of Computer Science
University of California, Davis

Overview

One of the most important operations in rendering is the projection of a three-dimensional scene onto a two-dimensional screen from an arbitrary camera position. A fundamental part of this operation is the specification of a viewing transformation, a 4 × 4 matrix that transforms a region of space into image space.

Specification of the Parameters

We will assume that the user has defined a camera transform which can be applied to the objects in the Cartesian frame. This transform converts the coordinates of the objects into the local coordinate system of the camera. Given this, we may assume that we are viewing the scene from the origin of the camera frame and that the scene has been transformed to lie along the negative w-axis of the frame.

We also assume that the user has defined the following parameters:

- A field-of-view angle, α, representing the angle subtended at the apex of the viewing pyramid, and
- The specification of near and far bounding planes. These planes are perpendicular to the w-axis, at distances n and f from the camera, respectively.

A three-dimensional view of the camera and its viewing space is given in the following figure.

[Figure: three-dimensional view of the camera and its viewing space, showing the (u, v) image plane together with the near and far planes.]

A side view of this space, with the u-axis coming out of the paper, is given in the following figure. Note the field-of-view angle α.

[Figure: side view of the viewing space, with the field-of-view angle α at the camera and the near and far planes at distances n and f along the negative w-axis.]
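To make the parameters concrete, here is a minimal NumPy sketch (not part of the original notes; the function name and the values of α, n and f are chosen for illustration) that computes the corners of the truncated viewing pyramid directly from the field-of-view angle and the near and far distances.

```python
import numpy as np

alpha = np.radians(60.0)   # hypothetical field-of-view angle
n, f = 1.0, 10.0           # hypothetical near and far distances

def frustum_corners(alpha, n, f):
    """Corners of the truncated viewing pyramid in camera coordinates (u, v, w)."""
    corners = []
    for w in (-n, -f):                 # on the near plane, then the far plane
        half = -w * np.tan(alpha / 2)  # half-width/half-height of the pyramid at w
        for u in (-half, half):
            for v in (-half, half):
                corners.append((u, v, w))
    return np.array(corners)

print(frustum_corners(alpha, n, f))
```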

The specification of α forms a viewing volume in the shape of a pyramid with the camera (placed at the origin) at the apex of the pyramid and the negative w-axis forming the axis of the pyramid. This pyramid is commonly referred to as the viewing pyramid. The specification of the near and far planes forms a truncated viewing pyramid, giving a region of space that contains the portion of the scene which is the center of attention of the camera. The viewing transform, defined below, will transform this truncated viewing pyramid onto the image-space volume −1 ≤ u, v, w ≤ 1.

The Viewing Transformation Matrix

Given the specification of the parameters (α, n, f), we define a transformation that can be applied to all elements of a scene and takes the truncated viewing volume (bounded by the viewing pyramid and the planes w = −n and w = −f) to the cube −1 ≤ u, v, w ≤ 1. This transformation is given by

$$
A_{\alpha,n,f} =
\begin{bmatrix}
\cot\frac{\alpha}{2} & 0 & 0 & 0 \\
0 & \cot\frac{\alpha}{2} & 0 & 0 \\
0 & 0 & \frac{f+n}{f-n} & -1 \\
0 & 0 & \frac{2fn}{f-n} & 0
\end{bmatrix}
$$

The transformation A_{α,n,f} is commonly referred to as the viewing transformation and is developed below.

Development of the Matrix

The viewing transformation A_{α,n,f} is not a combination of simple translations, rotations, scales or shears: its development is more complex. First, we motivate the development of the projection portion of the matrix and then apply this knowledge to the construction of the actual matrix.

Motivation

As motivation, consider the case when the camera is at the origin, the viewing direction is along the negative w-axis, and points are to be projected, along the line that passes through both the origin and the point, onto a plane defined by w = −d. The following figure illustrates the projection of a point (u, v, w) onto the plane.

[Figure: projection of the point (u, v, w) through the origin onto the plane w = −d; the projected point and the similar triangles formed along the v-axis are labeled.]

By a similar-triangle argument,

$$
v' = \left(\frac{d}{-w}\right) v
$$

(we note that the distance is −w, since the w coordinate of (u, v, w) is negative), and similarly

$$
u' = \left(\frac{d}{-w}\right) u
$$

A transformation that projects

$$
(u, v, w) \;\longrightarrow\; \left(-\frac{du}{w},\, -\frac{dv}{w},\, -d\right) \;=\; \left(-\frac{du}{w},\, -\frac{dv}{w},\, -\frac{dw}{w}\right)
$$

can be expressed in 4-dimensional homogeneous coordinates by

$$
(u, v, w, 1) \;\longrightarrow\; (du,\, dv,\, dw,\, -w)
$$

which can be expressed in matrix form by

$$
P_d =
\begin{bmatrix}
d & 0 & 0 & 0 \\
0 & d & 0 & 0 \\
0 & 0 & d & -1 \\
0 & 0 & 0 & 0
\end{bmatrix}
$$

since

$$
\begin{bmatrix} u & v & w & 1 \end{bmatrix}
\begin{bmatrix}
d & 0 & 0 & 0 \\
0 & d & 0 & 0 \\
0 & 0 & d & -1 \\
0 & 0 & 0 & 0
\end{bmatrix}
=
\begin{bmatrix} du & dv & dw & -w \end{bmatrix}
$$

So the projection induces a unique fourth column in the matrix. When the matrix is applied to a point (u, v, w), the fourth coordinate of the result is the distance of the point from the uv plane (−w, since w is negative). Since we divide by this fourth coordinate, the result of the operation is inversely proportional to the distance of the point from the uv plane.
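As a quick numerical check of this construction, the short NumPy sketch below (our own, using the row-vector convention p′ = p P_d of these notes, with hypothetical values for d and the point) shows that a homogeneous point maps to (du, dv, dw, −w) and that the perspective divide places it on the plane w = −d.

```python
import numpy as np

def projection_matrix(d):
    """P_d from above: (u, v, w, 1) P_d = (du, dv, dw, -w)."""
    return np.array([
        [d, 0, 0,  0],
        [0, d, 0,  0],
        [0, 0, d, -1],
        [0, 0, 0,  0],
    ], dtype=float)

d = 2.0                               # hypothetical projection plane w = -d
p = np.array([3.0, 1.5, -6.0, 1.0])   # a point in front of the camera (w < 0)

h = p @ projection_matrix(d)          # -> [du, dv, dw, -w] = [6, 3, -12, 6]
print(h[:3] / h[3])                   # -> [ 1.   0.5 -2. ], on the plane w = -d
```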

Now, consider the case where n, f and the field-of-view angle α are present as parameters, and it is necessary to transform the viewing pyramid defined by the angle α and the planes w = −n and w = −f into the cube −1 ≤ u, v, w ≤ 1.

[Figure: side view of the truncated viewing pyramid. The half-angle α/2 opens from the origin; the near and far planes cross the w-axis at (0, 0, −n) and (0, 0, −f), where the pyramid reaches heights n tan(α/2) and f tan(α/2).]

To transform the truncated viewing pyramid to the cube, we will start with a transformation P of the form

$$
P =
\begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & a & -1 \\
0 & 0 & b & 0
\end{bmatrix}
$$

Here we have incorporated the projection in the fourth column, and have recognized that we must scale the resulting w values (the w values in the pyramid range between −n and −f, and the result in image space must lie between −1 and 1) and must translate the center of the truncated pyramid to the origin. The values a and b are chosen so that the face of the truncated pyramid defined by the near plane (w = −n) goes to the face w = 1 of the image-space cube, and the face defined by the far plane (w = −f) goes to the face w = −1 of the cube. At a minimum, this implies that

$$
(0, 0, -n)\,P \longrightarrow (0, 0, 1), \quad\text{and}\quad (0, 0, -f)\,P \longrightarrow (0, 0, -1)
$$

and these two equations will enable us to calculate a and b. To calculate these, we apply the matrix to obtain

$$
\begin{bmatrix} 0 & 0 & -n & 1 \end{bmatrix}
\begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & a & -1 \\
0 & 0 & b & 0
\end{bmatrix}
=
\begin{bmatrix} 0 & 0 & -an + b & n \end{bmatrix}
$$

and

$$
\begin{bmatrix} 0 & 0 & -f & 1 \end{bmatrix}
\begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & a & -1 \\
0 & 0 & b & 0
\end{bmatrix}
=
\begin{bmatrix} 0 & 0 & -af + b & f \end{bmatrix}
$$

Projecting these back to three dimensions, we obtain

$$
(0, 0, -n)\,P \longrightarrow \left(0,\, 0,\, \frac{-an + b}{n}\right), \quad\text{and}\quad
(0, 0, -f)\,P \longrightarrow \left(0,\, 0,\, \frac{-af + b}{f}\right)
$$

That is, in order that the values on the left map to (0, 0, 1) and (0, 0, −1), respectively, we must have

$$
-an + b = n, \quad\text{and}\quad -af + b = -f
$$

Subtracting the equations, we obtain

$$
af - an = f + n
$$

or

$$
a = \frac{f+n}{f-n}
$$

and by substitution,

$$
b = n + an = n(1 + a) = n\left(1 + \frac{f+n}{f-n}\right) = n\left(\frac{f - n + f + n}{f-n}\right) = \frac{2fn}{f-n}
$$

Substituting for a and b, the transformation P becomes

$$
P =
\begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & \frac{f+n}{f-n} & -1 \\
0 & 0 & \frac{2fn}{f-n} & 0
\end{bmatrix}
$$

Unfortunately, this is not quite correct, as we have mapped only the near and far faces that correspond to the near and far planes. We also need to adjust the left, right, top and bottom faces so that the viewing pyramid is transformed to the image-space cube.
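Before adjusting the remaining faces, a short NumPy sketch (our own, with hypothetical values of n and f) confirms numerically that these values of a and b send points on the near and far planes to w = 1 and w = −1 after the perspective divide.

```python
import numpy as np

n, f = 1.0, 10.0                      # hypothetical near and far distances
a = (f + n) / (f - n)
b = 2 * f * n / (f - n)

P = np.array([
    [1, 0, 0,  0],
    [0, 1, 0,  0],
    [0, 0, a, -1],
    [0, 0, b,  0],
])

for w in (-n, -f):                    # a point on the near plane, then the far plane
    h = np.array([0.0, 0.0, w, 1.0]) @ P
    print(w, h[:3] / h[3])            # near plane -> w = +1, far plane -> w = -1
```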

So consider points on the top plane that bounds the viewing pyramid.

[Figure: side view of the viewing pyramid. The top edge passes through the points (0, n tan(α/2), −n) and (0, f tan(α/2), −f); the near and far planes meet the w-axis at (0, 0, −n) and (0, 0, −f).]

If we apply our transformation to these points, we obtain

$$
\begin{bmatrix} 0 & n\tan\frac{\alpha}{2} & -n & 1 \end{bmatrix} P
=
\begin{bmatrix} 0 & n\tan\frac{\alpha}{2} & -n\,\frac{f+n}{f-n} + \frac{2fn}{f-n} & n \end{bmatrix}
=
\begin{bmatrix} 0 & n\tan\frac{\alpha}{2} & n & n \end{bmatrix}
$$

Similarly,

$$
\begin{bmatrix} 0 & f\tan\frac{\alpha}{2} & -f & 1 \end{bmatrix} P
=
\begin{bmatrix} 0 & f\tan\frac{\alpha}{2} & -f\,\frac{f+n}{f-n} + \frac{2fn}{f-n} & f \end{bmatrix}
=
\begin{bmatrix} 0 & f\tan\frac{\alpha}{2} & -f & f \end{bmatrix}
$$

So, after dividing by the fourth coordinate, we see that the v coordinates of the points that lie on the line { (0, −w tan(α/2), w) : w < 0 } are all transformed to have the constant value tan(α/2). Since we wish these v coordinates to be mapped to 1 (which is the v value of the top of the image-space cube), we must multiply our projection matrix by a matrix that scales the u and v coordinates by

$$
c = \frac{1}{\tan\frac{\alpha}{2}} = \cot\frac{\alpha}{2}
$$

That is, we define

$$
A_{\alpha,n,f} = S_{c,c,1}\,P
$$

giving

$$
A_{\alpha,n,f} =
\begin{bmatrix}
\cot\frac{\alpha}{2} & 0 & 0 & 0 \\
0 & \cot\frac{\alpha}{2} & 0 & 0 \\
0 & 0 & \frac{f+n}{f-n} & -1 \\
0 & 0 & \frac{2fn}{f-n} & 0
\end{bmatrix}
$$

This is the matrix that transforms the truncated viewing pyramid into image space.
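The following NumPy sketch (our own; the helper name viewing_matrix and the parameter values are illustrative) builds A_{α,n,f} in the row-vector convention of these notes and checks that a corner of the truncated viewing pyramid lands on the corresponding corner of the image-space cube after the perspective divide.

```python
import numpy as np

def viewing_matrix(alpha, n, f):
    """A_{alpha,n,f} above, in the row-vector convention p' = p A."""
    c = 1.0 / np.tan(alpha / 2)       # cot(alpha / 2)
    return np.array([
        [c, 0, 0,                    0],
        [0, c, 0,                    0],
        [0, 0, (f + n) / (f - n),   -1],
        [0, 0, 2 * f * n / (f - n),  0],
    ])

alpha, n, f = np.radians(60.0), 1.0, 10.0   # hypothetical camera parameters
A = viewing_matrix(alpha, n, f)

# The top of the far plane, (0, f tan(alpha/2), -f), should map to the top of the
# back face of the image-space cube, (0, 1, -1), after the perspective divide.
corner = np.array([0.0, f * np.tan(alpha / 2), -f, 1.0])
h = corner @ A
print(h[:3] / h[3])                   # -> [ 0.  1. -1.]
```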

The Inverse of the Viewing Transformation

The above transformation transforms the truncated viewing pyramid into image space. The inverse of this transformation does the opposite: it transforms points from image space back into the truncated viewing pyramid. This inverse is given by

$$
A_{\alpha,n,f}^{-1} =
\begin{bmatrix}
\tan\frac{\alpha}{2} & 0 & 0 & 0 \\
0 & \tan\frac{\alpha}{2} & 0 & 0 \\
0 & 0 & 0 & \frac{f-n}{2fn} \\
0 & 0 & -1 & \frac{f+n}{2fn}
\end{bmatrix}
$$

That this is actually the inverse can be easily checked.

Summary

We have developed a matrix that works in the local coordinates of the camera space and transforms the points of an object into image space. The matrix is applied in homogeneous space, so the perspective divide must be done after the viewing matrix is applied.

All contents copyright (c) 1997-2012 Computer Science Department, University of California, Davis. All rights reserved.