SUMMARY. CS380: Introduction to Computer Graphics, Projection (Chapter 10). Min H. Kim, KAIST School of Computing, 18/04/12. Begins with a recap of Smooth Interpolation.


Cubic Bezier Spline

To evaluate the function c(t) at any value of t, we perform the following sequence of linear interpolations:

f = (1-t) c_0 + t d_0
g = (1-t) d_0 + t e_0
h = (1-t) e_0 + t c_1
m = (1-t) f + t g
n = (1-t) g + t h
c(t) = (1-t) m + t n

(The slide illustrates this sequence for t = 0.3.)

Cubic Bezier Spline

Let us take the first-order derivative of

c(t) = c_0 (1-t)^3 + 3 d_0 t (1-t)^2 + 3 e_0 t^2 (1-t) + c_1 t^3:

c'(t) = 3 c_1 t^2 - 3 e_0 t^2 - 3 c_0 (t-1)^2 + 3 d_0 (t-1)^2 - 6 e_0 t (t-1) + 3 d_0 t (2t-2)

We see that

c'(0) = 3 (d_0 - c_0),   c'(1) = 3 (c_1 - e_0)

so the slope of c(t) indeed matches the slope of the control polygon at t = 0 and t = 1. The same result in MATLAB's symbolic toolbox:

syms a b c d t
f(t) = a*(1-t)^3 + 3*b*t*(1-t)^2 + 3*c*t^2*(1-t) + d*t^3
df = diff(f)
df(0) = 3*b - 3*a
df(1) = 3*d - 3*c

Min H. Kim (KAIST); Foundations of 3D Computer Graphics, S. Gortler, MIT Press, 2012
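The de Casteljau evaluation and the polynomial form above compute the same value; a minimal Python sketch (function names are my own) checks this and the endpoint-derivative identity numerically:

```python
def bezier_de_casteljau(c0, d0, e0, c1, t):
    # repeated linear interpolation, exactly the f, g, h, m, n sequence
    f = (1 - t) * c0 + t * d0
    g = (1 - t) * d0 + t * e0
    h = (1 - t) * e0 + t * c1
    m = (1 - t) * f + t * g
    n = (1 - t) * g + t * h
    return (1 - t) * m + t * n

def bezier_polynomial(c0, d0, e0, c1, t):
    # Bernstein form: c0(1-t)^3 + 3 d0 t(1-t)^2 + 3 e0 t^2(1-t) + c1 t^3
    return (c0 * (1 - t)**3 + 3 * d0 * t * (1 - t)**2
            + 3 * e0 * t**2 * (1 - t) + c1 * t**3)
```

Both forms agree for every t, and a forward finite difference at t = 0 approaches 3 (d_0 - c_0).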

Catmull-Rom Splines (CRS)

This can also be stated as

c'(i)   = (1/2) (c_{i+1} - c_{i-1})
c'(i+1) = (1/2) (c_{i+2} - c_i)

Since c'(i) = 3 (d_i - c_i) and c'(i+1) = 3 (c_{i+1} - e_i) in the Bezier representation, this tells us that we need to set

d_i =  (1/6) (c_{i+1} - c_{i-1}) + c_i
e_i = -(1/6) (c_{i+2} - c_i) + c_{i+1}

with the segment

c(t) = c_i (1-t)^3 + 3 d_i t (1-t)^2 + 3 e_i t^2 (1-t) + c_{i+1} t^3

Curves

We can cleanly apply the scalar theory to describe curves in the plane or in space. The spline curve is controlled by a set of control points c_i in 2D or 3D, e.g. a Bezier curve (2D) or a Catmull-Rom curve (2D).
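The Catmull-Rom-to-Bezier conversion above can be sketched in a few lines of Python (helper name is mine); the assertions confirm the endpoint derivatives come out as (1/2)(c_{i+1} - c_{i-1}) and (1/2)(c_{i+2} - c_i):

```python
def catmull_rom_to_bezier(c_im1, c_i, c_ip1, c_ip2):
    # Bezier control values d_i, e_i for the segment between c_i and c_{i+1}
    d_i = (c_ip1 - c_im1) / 6.0 + c_i
    e_i = -(c_ip2 - c_i) / 6.0 + c_ip1
    return d_i, e_i
```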

Quaternion Splining

Bezier evaluation steps of the form

r = (1-t) p + t q

become

r = slerp(p, q, t)

and the d_i and e_i values are defined as

d_i = (c_{i+1} c_{i-1}^{-1})^{1/6} c_i
e_i = (c_{i+2} c_i^{-1})^{-1/6} c_{i+1}

In order to interpolate the short way, we do conditional negation on the quaternion difference (e.g. c_{i+1} c_{i-1}^{-1}) before applying the power operator.

PROJECTION (Chapter 10)

We now turn from the object and animation chapters to the camera chapters.
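The slerp step with conditional negation might look like the following Python sketch (quaternions as plain [w, x, y, z] lists; the helper name is mine):

```python
import math

def slerp(p, q, t):
    # spherical linear interpolation between unit quaternions p and q
    dot = sum(a * b for a, b in zip(p, q))
    if dot < 0.0:
        # conditional negation: q and -q represent the same rotation,
        # so flip q to interpolate the short way around the sphere
        q = [-x for x in q]
        dot = -dot
    dot = min(dot, 1.0)
    theta = math.acos(dot)
    if theta < 1e-8:                  # nearly identical: plain lerp is fine
        return [(1 - t) * a + t * b for a, b in zip(p, q)]
    s = math.sin(theta)
    w0 = math.sin((1 - t) * theta) / s
    w1 = math.sin(t * theta) / s
    return [w0 * a + w1 * b for a, b in zip(p, q)]
```

Halfway between the identity and a 90-degree rotation about z, slerp yields the 45-degree rotation, whichever sign of q is passed in.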

RECAP: Modelview matrix

The modelview matrix (MVM) describes the orientation and position of the eye frame and of the object frame O, so that a point with object coordinates c satisfies

p = o^t c = w^t O c = e^t E^{-1} O c

The vertex shader takes these vertex data and performs the multiplication (E^{-1} O) c, producing the eye coordinates used in rendering:

uniform mat4 uModelViewMatrix;

normalMatrix() produces the inverse transpose of the linear factor of E^{-1} O (the normal matrix, NMVM), used to transform normals:

uniform mat4 uNormalMatrix;

RECAP: Stages of vertex transformation

vertex (x_o, y_o, z_o)
  -> [modelview matrix] -> eye coordinates (x_e, y_e, z_e)
  -> [projection matrix] -> clip coordinates
  -> [perspective division] -> normalized device coordinates
  -> [viewport transformation] -> window coordinates

[x_e, y_e, z_e, 1]^t = E^{-1} O [x_o, y_o, z_o, 1]^t

http://www.glprogramming.com/red/chapter3.html
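The stages above can be followed end to end for a single vertex in a short Python sketch (matrix values below are illustrative placeholders, not from the slides):

```python
def mat_vec(m, v):
    # 4x4 matrix times 4-vector
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

def transform_vertex(modelview, proj, p_obj, width, height):
    # object coords -> eye coords -> clip coords -> NDC -> window coords
    p_eye = mat_vec(modelview, p_obj)                 # modelview matrix
    p_clip = mat_vec(proj, p_eye)                     # projection matrix
    ndc = [p_clip[i] / p_clip[3] for i in range(3)]   # perspective division
    # viewport transform: map the canonical square onto a width x height window
    win_x = (ndc[0] + 1.0) * 0.5 * width
    win_y = (ndc[1] + 1.0) * 0.5 * height
    return win_x, win_y
```

With an identity modelview and the basic projection matrix developed later in this chapter, a point on the negative z-axis lands at the window center.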

RECAP: Camera projection (overview)

Intrinsic properties of the camera (the eye):
- a vertical field of view
- window aspect ratio
- near/far fields in Z (clipping)
These are represented by the camera frustum. sendProjectionMatrix sends the camera's projection matrix to the vertex shader:

uniform mat4 uProjMatrix;

RECAP: Vertex shader

The vertex shader takes the object coordinates of every vertex position and turns them into eye coordinates, and likewise transforms the vertex's normal coordinates:

#version 130

uniform mat4 uModelViewMatrix;
uniform mat4 uNormalMatrix;
uniform mat4 uProjMatrix;

in vec3 aColor;
in vec4 aNormal;
in vec4 aVertex;

out vec3 vColor;
out vec3 vNormal;
out vec3 vPosition;

void main() {
  vColor = aColor;
  vec4 position = uModelViewMatrix * aVertex;
  vPosition = vec3(position);
  vec4 normal = vec4(aNormal.x, aNormal.y, aNormal.z, 0.0);
  vNormal = vec3(uNormalMatrix * normal);
  gl_Position = uProjMatrix * position;
}

For the camera (3D -> 2D): eye coordinates E^{-1} O c become clip coordinates P E^{-1} O c.

Camera transforms

Until now we have considered all of our geometry in a 3D space. Ultimately everything ended up in eye coordinates [x_e, y_e, z_e, 1]^t. We said that the camera is placed at the origin of the eye frame e^t, and that it is looking down the eye's negative z-axis. This somehow produces a 2D image: we had a magic matrix which created gl_Position. Now we will study this step.

Pinhole camera model

As light travels towards the film plane, most of it is blocked by an opaque surface placed at the z_e = 0 plane. But we place a very small hole in the center of the surface, at the point with eye coordinates [0, 0, 0, 1]^t.

Pinhole camera model

Only rays of light that pass through this point reach the film plane and have their intensity recorded on film. The image is recorded at a film plane placed at, say, z_e = 1.

Pinhole camera model

A physical camera needs a finite aperture and a lens, but we will ignore this. To avoid the image flip, we can mathematically model the camera with the film plane in front of the pinhole, say at the z_e = -1 plane.

Pinhole camera model

If we hold up the photograph at the z_e = -1 plane and observe it with our own eye, placed at the origin, it will look to us just like the original scene would have.

Basic mathematical model

Let us use normalized coordinates [x_n, y_n]^t to specify points on our film plane. For now, let them match eye coordinates on this film plane, so a film point has eye coordinates [x_n, y_n, -1]^t. Where does the ray from a point p to the origin hit the film plane?

Basic mathematical model

All points on the ray through p and the origin hit the same pixel. Points on the ray are all scales of one another:

[x_e, y_e, z_e]^t = α [x_n, y_n, -1]^t

Basic mathematical model

Setting α = -z_e:

[x_e, y_e, z_e]^t = -z_e [x_n, y_n, -1]^t

so

x_n = -x_e / z_e,   y_n = -y_e / z_e

Projection matrix

We can model this expression as a matrix operation as follows:

[ x_n w_n ]   [ 1   0   0   0 ] [ x_e ]   [ x_c ]
[ y_n w_n ] = [ 0   1   0   0 ] [ y_e ] = [ y_c ]
[    -    ]   [ -   -   -   - ] [ z_e ]   [  -  ]
[   w_n   ]   [ 0   0  -1   0 ] [  1  ]   [ w_c ]

(the third row, which will produce a z output, is left unspecified for now)

In matrix form

The raw output of the matrix multiply, [x_c, y_c, -, w_c]^t, is called the clip coordinates of p. w_n = w_c is a new variable called the w-coordinate. In clip coordinates, the fourth entry of the coordinate 4-vector is not necessarily a zero or a one.

Divide by w

We say that x_n w_n = x_c and y_n w_n = y_c. If we want to extract x_n alone, we must perform the division

x_n = (x_n w_n) / w_n = x_c / w_n

This recovers our camera model.

Divide by w

Our output coordinates, with subscripts n, are called normalized device coordinates (NDC) because they address points on the image in abstract units, without specific reference to numbers of pixels.
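A tiny Python sketch of the divide-by-w step (names mine): the clip coordinates carry -z_e in w, the division recovers x_n = -x_e/z_e, and every point on the same ray through the origin lands on the same pixel:

```python
def project_to_ndc(x_e, y_e, z_e):
    # basic projection matrix: x_c = x_e, y_c = y_e, w_c = -z_e
    x_c, y_c, w_c = x_e, y_e, -z_e
    w_n = w_c
    return x_c / w_n, y_c / w_n   # divide by w -> (x_n, y_n)
```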

Divide by w

We keep all of the image data in the canonical square -1 <= x_n, y_n <= +1 and ultimately map this onto a window on the screen. Data outside of this square is not recorded or displayed. This is exactly the model we used to describe 2D OpenGL.

Scales

By changing the entries in the projection matrix, we can slightly alter the geometry of the camera transformation. We could push the film plane out to z_e = n, where n is some negative number (a zoom lens).

Scales

Points on the ray are now all scales of [x_n, y_n, n]^t:

[x_e, y_e, z_e]^t = α [x_n, y_n, n]^t

Setting α = z_e / n:

[x_e, y_e, z_e]^t = (z_e / n) [x_n, y_n, n]^t

so

x_n = n x_e / z_e,   y_n = n y_e / z_e

In matrix form

In matrix form, this becomes (supposing n is some negative number):

[ x_n w_n ]   [ -n   0   0   0 ] [ x_e ]
[ y_n w_n ] = [  0  -n   0   0 ] [ y_e ]
[    -    ]   [  -   -   -   - ] [ z_e ]
[   w_n   ]   [  0   0  -1   0 ] [  1  ]

In matrix form

Note this matrix is the same as the product

[ -n   0   0   0 ]   [ -n   0   0   0 ] [ 1   0   0   0 ]
[  0  -n   0   0 ] = [  0  -n   0   0 ] [ 0   1   0   0 ]
[  -   -   -   - ]   [  0   0   1   0 ] [ -   -   -   - ]
[  0   0  -1   0 ]   [  0   0   0   1 ] [ 0   0  -1   0 ]

In matrix form

This has the same effect as starting with our original camera, scaling the resulting image by -n (a positive number), and cropping to the canonical square.

fovy

The scale can be determined by the desired vertical angular field of view of the camera. If we want our camera to have a field of view of θ degrees, then we can set

n = -1 / tan(θ/2)

fovy

Verify that any point whose ray from the origin forms a vertical angle of θ/2 with the negative z-axis maps to the boundary of the canonical square.

fovy

The point with eye coordinates [0, tan(θ/2), -1, 1]^t maps to normalized device coordinates [0, 1]^t, i.e. to the top edge of the canonical square.

Dealing with aspect ratio

Suppose the window is wider than it is high. In our camera transform, we need to squish things horizontally so that a wider horizontal field of view fits into our retained canonical square. When the data is later mapped to the window, it will be stretched out correspondingly and will not appear distorted. Define a, the aspect ratio of a window, to be its width divided by its height (measured, say, in pixels):

a = width (px) / height (px)
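The choice n = -1/tan(θ/2) can be checked numerically; for any field of view, the boundary point [0, tan(θ/2), -1] should map to y_n = 1 (a small Python sketch, helper names mine):

```python
import math

def fovy_to_n(theta_deg):
    # film-plane distance for a vertical field of view of theta degrees
    return -1.0 / math.tan(math.radians(theta_deg) / 2.0)

def y_ndc(n, y_e, z_e):
    # scaled camera: y_n = n * y_e / z_e
    return n * y_e / z_e
```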

Dealing with aspect ratio

We can then set our projection matrix to be

[ 1/(a tan(θ/2))        0         0   0 ]
[       0          1/tan(θ/2)    0   0 ]
[       -               -        -   - ]
[       0               0       -1   0 ]

So when the window is wide, we will keep more horizontal FOV, and when the window is tall, we will keep less horizontal FOV.

Dealing with aspect ratio

As an alternative, we could fix an fovmin; when the window is tall, we would calculate an appropriately larger fovy and then build the matrix from that.
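Putting fovy and aspect ratio together, the x/y part of the matrix above can be built as follows (Python sketch; the unspecified z row is left as a placeholder, since the chapter has not treated depth yet):

```python
import math

def projection_xy(fovy_deg, aspect):
    # projection matrix with vertical FOV theta and aspect ratio a = width/height
    f = 1.0 / math.tan(math.radians(fovy_deg) / 2.0)
    return [
        [f / aspect, 0.0,  0.0, 0.0],
        [0.0,        f,    0.0, 0.0],
        [0.0,        0.0,  1.0, 0.0],   # placeholder z row (depth comes later)
        [0.0,        0.0, -1.0, 0.0],
    ]
```

With a wide window (a > 1) the x entry shrinks, squishing horizontally so that more horizontal FOV fits in the canonical square.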

FOV issues

To act as a window onto the world, the FOV should match the angular extent that the window occupies in the viewer's visual field. This might give a too limited view onto the world, so we can increase the FOV to see more; but this might give a somewhat unnatural look.

Shifts

Sometimes we wish to crop the image non-centrally. This can be modeled as translating the normalized device coordinates (NDC) and then cropping centrally.

Shifts

[ x_n w_n ]   [ 1   0  -c_x   0 ] [ x_e ]
[ y_n w_n ] = [ 0   1  -c_y   0 ] [ y_e ]
[    -    ]   [ -   -    -    - ] [ z_e ]
[   w_n   ]   [ 0   0   -1    0 ] [  1  ]

This translates the normalized device coordinates by [c_x, c_y]^t:

x_n = -x_e / z_e + c_x,   y_n = -y_e / z_e + c_y

Shifts

Useful for tiled displays, stereo viewing, and certain kinds of images.

Frustum

Shifts are often specified by first specifying a near plane z_e = n. On this plane, an axis-aligned rectangle is specified by its eye coordinates l, r, b, t (left, right, bottom, top). (For non-distorted output, the aspect ratio of this rectangle should match that of the final window.) The projection matrix carrying this rectangle to the canonical square is

[ -2n/(r-l)      0       (r+l)/(r-l)   0 ]
[     0      -2n/(t-b)   (t+b)/(t-b)   0 ]
[     -          -            -        - ]
[     0          0           -1        0 ]

Frustum

Example 1: l = -3, r = 3, b = -2, t = 2, n = -2 gives the entries

-2n/(r-l) = (2*2)/(3+3) = 2/3,   (r+l)/(r-l) = 0
-2n/(t-b) = (2*2)/(2+2) = 1,     (t+b)/(t-b) = 0

Example 2: l = 2, r = 4, b = 2, t = 4, n = -2 (an off-center rectangle with center (3,3)) gives

-2n/(r-l) = 4/(4-2) = 2,   (r+l)/(r-l) = (4+2)/(4-2) = 3
-2n/(t-b) = 4/(4-2) = 2,   (t+b)/(t-b) = (4+2)/(4-2) = 3
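The frustum construction can be sketched and checked in Python (x and y rows only, depth row left as a placeholder; helper names mine). The assertions verify that the corners of an off-center near-plane rectangle land on the corners of the canonical square:

```python
def frustum_xy(l, r, b, t, n):
    # x/y rows of the projection matrix for the near plane z_e = n (n < 0),
    # mapping the axis-aligned rectangle [l, r] x [b, t] to the canonical square
    return [
        [-2.0 * n / (r - l), 0.0, (r + l) / (r - l), 0.0],
        [0.0, -2.0 * n / (t - b), (t + b) / (t - b), 0.0],
        [0.0, 0.0, 1.0, 0.0],   # placeholder z row
        [0.0, 0.0, -1.0, 0.0],
    ]

def to_ndc(m, p):
    # apply the matrix, then divide by w
    clip = [sum(m[i][j] * p[j] for j in range(4)) for i in range(4)]
    return clip[0] / clip[3], clip[1] / clip[3]
```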

Context

Projection could be applied to every point in the scene. In CG, we apply it only to the vertices, in order to position each triangle on the screen; the rest of the triangle then gets filled in on the screen, as we shall see.

Summary: Projection from 3D to 2D

[ x_n w_n ]   [ -n   0  -c_x   0 ] [ x_e ]
[ y_n w_n ] = [  0  -n  -c_y   0 ] [ y_e ]
[    -    ]   [  -   -    -    - ] [ z_e ]
[   w_n   ]   [  0   0   -1    0 ] [  1  ]

with w_c = w_n = -z_e and n < 0. After the normalization (divide by w):

x_n = n x_e / z_e + c_x,   y_n = n y_e / z_e + c_y

NB: In computer vision, the camera frame conventionally looks in the opposite direction; the principal axis runs from the camera center C through the image plane along +Z.