EECS 351-1 : Introduction to Computer Graphics -- Building the Virtual Camera, ver. 1.4


3D Transforms (cont'd):

3D Transformation Types: did we really describe ALL of them? No!
--All fit in a 4x4 matrix, suggesting up to 16 degrees of freedom. We already have 9 of them: 3 kinds of Translate (in x,y,z directions) + 3 kinds of Rotate (around x,y,z axes) + 3 kinds of Scale (along x,y,z directions). Where are the other 7 degrees of freedom?

3 kinds of shear (aka "skew"): the transforms that turn a square into a parallelogram:
--Sxy sets the x-y shear amount; similarly, Sxz and Syz set the x-z and y-z shear amounts:

    Shear(Sxy,Sxz,Syz) = [ 1   Sxy Sxz 0 ]
                         [ Sxy 1   Syz 0 ]
                         [ Sxz Syz 1   0 ]
                         [ 0   0   0   1 ]

3 kinds of perspective transform:
--Px causes perspective distortion along the x-axis direction; Py, Pz along the y and z axes:

    Persp(px,py,pz) = [ 1  0  0  0 ]
                      [ 0  1  0  0 ]
                      [ 0  0  1  0 ]
                      [ px py pz 1 ]

--Finally, we have that lower-right 1 in the matrix. Remember how we convert homogeneous coordinates [x,y,z,w] to real or Cartesian 3D coordinates [x/w, y/w, z/w]? If we change that 1, it scales w; it acts as a simple scale factor on all real coordinates, and thus it's redundant: we have only 15 degrees of freedom in our 4x4 matrix.

We can describe any 4x4 transform matrix by a combination of these three simpler classes:
  rigid-body transforms == preserve angles and lengths; do not distort or reshape a model; include any combination of rotation and translation.
  affine transforms == preserve parallel lines, but not angles or line lengths; include rotation, translation, scale, and shear or skew.
  perspective transforms == link x, y, or z to the homogeneous coordinate w: they provide the math behind a pinhole-camera image, but in orderly 4x4 matrix form!

Cameras and 3D Viewing: Algebra and Intuition

Camera: a device that flattens 3D space onto a 2D plane. Get the intuition first, then the math:

[Figure: a pinhole camera. The center of projection sits at the origin looking along the z axis at a 2D focal plane at focal length f; a vertex (xv, yv, zv, 1) of the 3D model (connected vertices) maps to (xv, yv, zv, zv/f).]
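To make the degree-of-freedom bookkeeping concrete, here is a small plain-JavaScript sketch (the helper name is my own, not from any library) that applies a shear matrix and a perspective-row matrix to homogeneous points:

```javascript
// Multiply a 4x4 matrix (row-major array of rows) by a homogeneous 4-vector.
function mat4MulVec4(m, v) {
  return m.map(row => row.reduce((sum, c, i) => sum + c * v[i], 0));
}

// An x-y shear: x' = x + 0.5*y. It turns a square into a parallelogram.
const shearXY = [
  [1, 0.5, 0, 0],
  [0, 1,   0, 0],
  [0, 0,   1, 0],
  [0, 0,   0, 1],
];
console.log(mat4MulVec4(shearXY, [0, 2, 0, 1]));  // [1, 2, 0, 1]: top edge slides over

// A perspective row: w' now depends on z, so the Cartesian divide
// (x/w, y/w, z/w) shrinks coordinates by a z-dependent amount.
const persp = [
  [1, 0, 0, 0],
  [0, 1, 0, 0],
  [0, 0, 1, 0],
  [0, 0, 1, 1],   // pz = 1
];
const h = mat4MulVec4(persp, [2, 2, 3, 1]);  // homogeneous result: [2, 2, 3, 4]
const cart = h.map(c => c / h[3]);           // divide by w
console.log(cart);                           // [0.5, 0.5, 0.75, 1]
```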

A planar perspective camera, in its simplest and most general form, is just a plane and a point. The plane splits the universe into two halves: the half that contains the point, which is the location of our eye or camera, and the other half of the universe that we'll see with that eye or camera. The point is known by many names (the camera location, the center of projection (COP), the viewpoint, the view reference point, etc.), and the plane is the image plane, the near clip plane, or even the focal plane.

We paint our planar perspective image onto that plane in a very simple way. Draw a line from our point (viewpoint, camera point, etc.) through the plane, and trace it into the other half of the universe until it hits something. Copy the color we find on that something and paint it onto the plane where our tracing line pierced it. Use this method to set the color of all the plane's points, and we form the complete planar perspective image of one half of the universe. However, real cameras are finite: we can't capture the entire infinitely large picture plane, so instead we usually select the picture within a rectangular region of that plane.

First, let's define the camera's focal length f as the shortest distance from its center of projection (COP) to the universe-splitting image plane. We make a vector perpendicular to the image plane that measures the distance from our eye point to the plane, and call that our direction of gaze, or "lookat" direction. We define a rectangular region of the plane, usually (but not always) centered at the point nearest the COP, and record its colors as our perspective image. The size and placement of that rectangle, along with the focal length f, describe all the basic requirements for simple lenses in any kind of planar perspective camera.

Now let's formalize these ideas. Given a plane and a point, suppose your eye is located at the point and that it becomes the origin of a coordinate system.
Call the origin point the eye point, the center of projection, or the view reference point (VRP), and use your eye to look down the -z axis. Why not +z? Because we want to use right-handed coordinate systems, and we want to keep the x & y axes of our 3D camera for use as the x & y axes of our 2D image, aimed rightward & upward. At z = -f = -znear, we construct the focal plane that splits the universe in half.

Now we can draw a picture on the 2D focal plane. How? Trace rays from the eye origin (VRP) to points (vertices, usually) in the other half of the universe, and find their locations on the focal plane. Given a 3D vertex point (xv, yv, zv), where do we draw a point on our image plane?
Answer: the image point is (f*xv/-zv, f*yv/-zv, f*zv/-zv) = f*(xv/-zv, yv/-zv, -1).

As we change the focal distance f between the camera origin (the center of projection) and the 2D image plane, the image gets scaled up or down, larger or smaller, but does not otherwise change: adjusting the f parameter is equivalent to adjusting a zoom lens. However, if we instead keep f fixed and move the vertices of a 3D model forwards or backwards along the zv axis, the effect is more complex. Vertices nearer the camera, those with small -zv values, change their on-screen image positions far more than vertices farther away. Therefore, moving a 3D object closer or further away is NOT equivalent to adjusting a zoom lens, because it causes depth-dependent image changes: some call it "foreshortening", others call it "changing perspective" or "flattening" the image.

As a 3D object moves away from the camera, the rays that carry color from its surface points to our camera's center of projection (COP) get more and more parallel, with larger angles for shorter distances. Changing those angles yields a different appearance due to color changes (imagine the surface of a CD), and may change occlusion: a surface that moves near the camera may block your view of more distant points. No matter how large this 3D object may be, as -zv reaches infinity the image of the object shrinks to a single point. This is the image's z-axis "vanishing point": one of the points that painters use to help them make an accurate perspective drawing.
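The projection formula and the zoom-vs-dolly distinction above can be checked numerically (plain JavaScript; `projectVertex` is a hypothetical helper of mine, not a library call):

```javascript
// Image point by similar triangles: f * (xv/-zv, yv/-zv, -1),
// for vertices in the visible half-universe (zv < 0).
function projectVertex(f, [xv, yv, zv]) {
  return [f * xv / -zv, f * yv / -zv, -f];
}

// Doubling f scales the whole image uniformly -- a zoom lens:
console.log(projectVertex(1, [2, 1, -4]));  // [0.5, 0.25, -1]
console.log(projectVertex(2, [2, 1, -4]));  // [1, 0.5, -2]

// Doubling the vertex distance instead is NOT a uniform image scale:
// the vertex slides toward the z-axis vanishing point at the image center.
console.log(projectVertex(1, [2, 1, -8]));  // [0.25, 0.125, -1]
```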
For scenes with world-space axes that don't match the camera axes, drawing lines parallel to each of these world-space axes will cause them to converge on other points; if those points fall within the picture itself, we can see 1, 2, or 3 vanishing points. Any parallel lines in 3D that aren't parallel to our image plane will form vanishing points somewhere on our image plane, so you can have as many vanishing points as you wish in a drawing; the number of vanishing points doesn't really tell you much about the camera that made an image, only about the content of the scene the camera captured. (Vanishing points in paintings mark the convergence of major axes of viewed objects, such as horizontal lines where walls meet floors, or vertical lines where walls meet each other.)

--Easy Question: suppose your camera DIDN'T divide by zv: where does the 3D vertex point (xv, yv, zv) appear in the image plane? Then we form no vanishing points, and the model stays the same size no matter how far away it is. This is known as orthographic projection, and if we define d as the size of the image, we can use it to replace f and specify the distance from the origin like this:
Answer: the image point is (d*xv, d*yv, d) = d*(xv, yv, 1).

Homogeneous Coordinates: Clever Matrix Math for Cameras and 3D Viewing:

The intuitive camera-construction method above isn't linear, and isn't suitable for reliable graphics programs. It requires us to divide by numbers that can change; if we use it to draw pictures by the millions, eventually our 3D drawing program will fail with a divide-by-zero problem. It is messy to use, too: how can you move the camera freely? What if you want an image plane that isn't perpendicular to the z axis, one that doesn't have its vanishing point in the very center of the picture it makes?
THIS is the real reason why we use homogeneous coordinates & 4x4 matrices for graphics:
--they let us make a 4x4 matrix that maps 3D points to a 2D image plane, like a camera;
--they let you position the camera separately from the model, and place both in world space;
--they never cause divide-by-zero errors while you compute your image;
--they let you adjust your camera easily, and even seem to let you tilt the image plane

(e.g. emulate an architectural "view camera"), though you actually just adjust the position of the rectangle on the plane that you will use as your image.
--You can transform just the vertices of a model to the image plane, and then draw the lines and polygons in 2D.

The most basic perspective camera matrix (the same as the algebraic one above) just adjusts the bottom row of the 4x4 matrix; it avoids the divide by coupling zv to the w value. Now w is doing something very useful:

    Tpers = [ 1  0  0   0 ]
            [ 0  1  0   0 ]
            [ 0  0  1   0 ]
            [ 0  0  1/f 0 ]    (note the last two elements of the bottom row)

That's all you need for any perspective camera! Compare it with the coordinates in the drawing above. Transform a 3D point (xv, yv, zv, 1) by Tpers, and you get another 3D point; convert to real (Cartesian) 3D coordinates with (x/w, y/w, z/w). Try it:

    Tpers [xv, yv, zv, 1]^T = [xv, yv, zv, zv/f]^T in 3D homogeneous coords;

convert to 3D Cartesian coords by dividing by (zv/f):

    [xv*f/zv, yv*f/zv, zv*f/zv]^T = f * [xv/zv, yv/zv, 1]^T

This gives us 3D coordinates in an image plane at location z = f, with the same coordinates we found by algebra above. Note that our picture-drawing program does not have to convert points to real (Cartesian) coordinates (x/w, y/w, z/w) until we are ready to make pixels; divide-by-zero never happens while we're doing all the complicated and time-consuming transformations needed to construct our image.

(Challenging puzzle: To draw a perspective image of a 3D line, you only need to transform its endpoints to image space, and then draw a line between those image points. Can you prove this is always true? Make a parameterized point P(t) that moves between P0 and P1, as we did before. Transform each of them to image space: transform P0 to make P0', transform P1 to make P1', and transform P(t) to make P'(t). Make a parameterized image point Ps(s) that moves between P0' and P1'; is P'(t) = Ps(s)? Can you solve for the function s(t) or t(s)?)
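The Tpers trick above, with the divide deferred until the very end, can be sketched in a few lines of plain JavaScript (helper names are mine):

```javascript
// Row-major Tpers: identity except the bottom row, which couples w' to zv.
function tpers(f) {
  return [
    [1, 0, 0,     0],
    [0, 1, 0,     0],
    [0, 0, 1,     0],
    [0, 0, 1 / f, 0],
  ];
}

// Multiply a 4x4 matrix (row-major) by a homogeneous 4-vector.
function mat4MulVec4(m, v) {
  return m.map(row => row.reduce((sum, c, i) => sum + c * v[i], 0));
}

const h = mat4MulVec4(tpers(2), [3, 6, 3, 1]);  // [3, 6, 3, 1.5]: w' = zv/f
const cart = h.map(c => c / h[3]);              // divide by w only when making pixels
console.log(cart);  // [2, 4, 2, 1] -> f*(xv/zv, yv/zv, 1) in the first three slots
```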
(Online books and tutorials show you how.)

***Be careful with signs: in a right-handed coordinate system, the image plane is always positioned on the -z axis, but the f parameter used by graphics software often isn't negative! In OpenGL and WebGL, the znear parameter defines the image-plane position, but the functions that use it expect POSITIVE znear values.***

Earlier we learned how to make jointed objects by concatenating Translate, Rotate, and Scale matrices arranged in a scene graph. If we apply those same skills to position and aim a virtual camera, we will discover they're not well suited for camera positioning. For our first attempt, suppose we a) translate() along the z axis to push the world drawing axes out and away from the camera, b) then rotate() to spin the world around its own origin, and c) then translate() to move the world under our camera's view. While this will let us explore the world, it will not be an easy exploration; steps a) and b) force us to position the camera using spherical coordinates: step a) sets the radius, the distance from camera to world origin, and step b) chooses the camera's azimuth, elevation, and spin. While step c) can move us to any position in the world, our camera's aiming direction was fixed by step b), and we cannot easily change that aiming direction.

"Why is this so hard?!?" you may ask;

"Why can't I just specify the camera's position in the 3D world, and aim it at something interesting in that 3D world?!?"

The answer is that we're using the wrong tools. Our existing transformations (translate, rotate, and scale) always begin with the CVV drawing axes (our camera axes) and then transform them successively to each nested set of drawing axes we need to draw jointed objects. Before we used cameras, our CVV coordinate system would suffice as our 3D world coordinate system. But when we created a virtual camera we changed the purpose of the CVV; now it holds a view of the 3D world as seen by a camera. Our existing tools now require us to construct our 3D world coordinate axes using values specified from the camera's drawing axes (the CVV axes), as in steps a, b, c above. That's not easy! Instead we want the opposite, an inverse of that process. Let's call it the LookAt() function: this new tool must create a matrix that constructs the camera's drawing axes from values we specify in the world's drawing axes. We will call it the "view" matrix, and place it at the root of our scene graph, where it transforms the CVV (camera) coordinate axes into the world coordinate axes.

The VIEW matrix, or How to build a camera-positioning matrix from world-space values

To specify camera position in world drawing axes, begin with just three values, each with three coordinates: EYE (a point), AT (a point), and "view up" or VUP (a vector). We want to place our virtual camera in the world coordinate system so that:
-the camera position is at a 3D point EYE (the "view reference point" or camera location);
-the camera lens is aimed in the direction of the 3D world-space point AT, the "look-at" point we will look at in the world;
-the camera is level: it has no unwanted tilt to one side or the other. Do this by specifying a camera "up-vector" direction VUP in world space.
The VUP vector, the AT point, and the EYE point define a plane that always slices through the 2D output image perpendicularly, along its y axis. Don't be fooled: VUP is NOT restricted to the +y direction of the 3D world the camera will view. Instead, the VUP vector defines the world-space direction that will appear as "up" in the camera image. For example, changing the VUP vector from the +x direction to the +y direction will turn the camera on its side. Note that you can use VUP to help you define where to put the "ground" and the "sky" in the world coordinate system. If you choose the world's ground plane as (x,y,0) with the +z axis pointing upwards towards the sky, then a level camera will have a VUP vector aimed towards +z as well.

We then use EYE, AT, and VUP to construct our viewing transform, following the step-by-step process below, described in many textbooks (but not ours). You will need a viewing matrix for Project B, but the cuon-matrix.js library can build this matrix for you with its LookAt() function.

Definitions:
--World coordinate system, denoted WC. Positions specified using (x,y,z).
--Camera coordinate system, or CAM. To avoid confusion with WC, we rename the camera's x,y,z coordinates as u,v,n instead.

Step 1) Define the Eye Coordinate System
Given EYE, AT & VUP, let's first construct each part of a camera coordinate system as measured in world coordinates WC. The CAM origin is just the eye point EYE, already defined in world coordinates. To construct the drawing axes (careful here! we only care about directions, not positions, so we use only vectors, not points):

--Find the N vector in WC. (Careful! Right-handed coordinates! The N vector direction is "backwards"; it points toward the eye point from the chosen point our camera is looking at.) We already know this one: it's the normalized vector from AT to EYE:

    Nraw = (EYE - AT);  make it unit length: N = Nraw / |Nraw|

--Find the U,V vectors. We know VUP, EYE and N define a plane P, and this plane always contains the camera image's V vector (recall V is the direction of the +y axis in the camera image). But if the P plane includes both the N and V vectors (axes), then plane P is perpendicular to the U vector (axis). It's easiest to find a vector in the U direction first, by using the cross product:

    Uraw = VUP x N;  make it unit length: U = Uraw / |Uraw|

--From these two coordinate-system axis vectors we can easily find the third. Given U and N (in a right-handed coordinate system) we can find V by another cross product (as N and U are already unit vectors, we don't need to normalize V):

    V = N x U

Step 2) Backwards but Easy: Transform World Coords into CAM Coords
With our new unit-length U,V,N vectors and the EYE point expressed in world-coordinate-system values, we can convert a world-coordinate-system point P0 = (P0x, P0y, P0z, 1) into CAM coordinates (u,v,n,1) quite easily if we choose the special case where the EYE point sits at the WC origin. Here, the vector from the EYE point (the CAM coordinate origin) to P0 is obvious: it's just (P0x, P0y, P0z, 0). That vector's dot products with the U, V, and N unit vectors give us the (u,v,n) coordinates we seek. We can write those dot products in matrix form quite easily, and they represent a simple rigid-body rotation from WC to CAM axes, CAM = [Rcam] P0:

    [u]   [ Ux Uy Uz 0 ] [P0x]
    [v] = [ Vx Vy Vz 0 ] [P0y]     when EYE = (0,0,0,1).
    [n]   [ Nx Ny Nz 0 ] [P0z]
    [1]   [ 0  0  0  1 ] [ 1 ]

That's easy enough, but it's the OPPOSITE of what we want!
This matrix converts vertex values (and drawing axes) from the world coordinate system WC to the CAM coordinate system, but we need a matrix for our scene graph that converts the CAM coordinate system to the WC coordinate system. We need the inverse of this matrix:

Step 3) The Orthonormal Inverse
Fortunately, [Rcam] is a special kind of matrix: each row is a unit-length vector (U, V, or N), each orthogonal to all the other rows (their dot products are all zero because they're perpendicular vectors), and thus [Rcam] is an orthonormal matrix: its inverse is its transpose, Rcam^-1 = Rcam^T, and we just exchange its rows and columns. If we apply this [Rcam^T] matrix to our CAM drawing axes, we get a new set of world drawing axes where our camera is aimed in the correct direction, but has its eye point (EYE) positioned at the origin. To complete our camera positioning, we apply a second drawing-axis transformation, one that moves the world out, away from our eye, using world-coordinate measurements; we call the function Translate(-EYEx, -EYEy, -EYEz) to form our final "view" matrix:

    VIEW = [Rcam^T][Trans(-EYE)] = [ Ux Vx Nx -EYEx ]
                                   [ Uy Vy Ny -EYEy ]
                                   [ Uz Vz Nz -EYEz ]
                                   [ 0  0  0    1   ]

Note the translation moves the new world drawing axes away from the old world axes at the eye point.
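Steps 1-3 can be sketched in a few lines of plain JavaScript (my own helper names; cuon-matrix.js's LookAt() does the equivalent work for you):

```javascript
// Small vector helpers for 3-component arrays.
const sub   = (a, b) => a.map((c, i) => c - b[i]);
const dot   = (a, b) => a.reduce((s, c, i) => s + c * b[i], 0);
const cross = (a, b) => [a[1]*b[2] - a[2]*b[1],
                         a[2]*b[0] - a[0]*b[2],
                         a[0]*b[1] - a[1]*b[0]];
const norm  = a => { const len = Math.hypot(...a); return a.map(c => c / len); };

// Build the U,V,N camera axes (in world coordinates) from EYE, AT, VUP.
function cameraAxes(eye, at, vup) {
  const N = norm(sub(eye, at));   // N points from AT back toward EYE
  const U = norm(cross(vup, N));  // U = VUP x N
  const V = cross(N, U);          // N, U already unit length, so V is too
  return { U, V, N };
}

// A camera at (0,0,5) looking at the origin with +y up recovers the world axes:
const { U, V, N } = cameraAxes([0, 0, 5], [0, 0, 0], [0, 1, 0]);
console.log(U, V, N);  // [1,0,0] [0,1,0] [0,0,1]

// The rows U,V,N are mutually orthogonal, which is why Rcam's inverse
// is just its transpose:
console.log(dot(U, V), dot(U, N), dot(V, N));  // 0 0 0
```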

The PROJECTION matrix
This matrix emulates the lens system of a camera; it performs the 3D-to-2D transformation that may include perspective and foreshortening.

Step 4A) Apply a matrix that does the 3D-to-2D perspective transformation.
Now each (translated) vertex is expressed in (u,v,n,w) coordinates defined by the camera. We could apply the perspective transform Tpers we defined above, convert from homogeneous to Cartesian coordinates, and at last we'd have the 2D image locations for each vertex. But this is naive in at least two ways. First, we need a computed depth value for each pixel we render so that we can perform hidden-surface removal; if we stay in homogeneous coordinates we can compare the z values at every pixel for every drawing primitive, and draw that pixel's fragment only if no previously drawn fragment had a z value nearer to the eye. Second, we may have precision problems: while the 3D half-universe is unbounded, our hardware is not; we need to measure our (u,v,n,w) values by their distance from the camera's viewing frustum, the six-sided pyramid-like box made by the camera's field of view (a 4-sided pyramid with its apex at the eye point) and the camera's near and far clipping planes.

What's all this about near and far clipping planes?!? Left, right, top, bottom planes? Alas, all computers have finite precision. The x,y,z axes we've discussed easily stretch to infinity in all directions; we could never adequately describe all positions in half the universe with nothing more than a set of four GLSL float values! Instead, we have to choose a subset, some finite volume of 3D space, to describe with the finite, limited set of floating-point numbers our computers give us. Of course, we choose the volume of 3D space that is in front of our camera: everywhere else will be off-screen in the picture we make. THUS, we limit our 3D camera's view-frustum size in 6 ways:

For the z axis: (znear, zfar)

-The near and far planes are perpendicular to the z axis, at z = f = znear and z = zfar respectively.
-ONLY points between znear and zfar will be drawn on-screen, and both must be > 0; znear is also known as the focal distance f, and the zfar value limits the distance to the visible horizon in the scene.
-For better results, keep the ratio (zfar/znear) modest: less than about 10,000:1. As this ratio increases, you reduce the ability of WebGL to distinguish whether one surface is behind another; foreground objects might not occlude background objects! (We'll explain more when we discuss "z-buffering".) If you separate znear and zfar too widely, your program will lose precision in distinguishing objects with nearly identical z values (see "z-fighting"), but there is no other penalty. With modern graphics hardware, floating-point z values have greatly reduced the likelihood of z-fighting except for the very largest world-space models (e.g. the earth, described with 1 mm resolution).

For the x and y axes: (left, right, top, bottom)
-The combination of znear, zfar, and your camera's angular field of view sets the maximum values for x,y. If you use the gl.perspective() function in the cuon-matrix-quat.js library supplied in the starter code, this field of view is symmetric, and set by your selection of the camera's aspect ratio and its vertical field of view in degrees. If instead you use the gl.frustum() function, you can individually specify the left, right, top, and bottom limits of the frustum as measured at the znear clipping plane; this permits you to construct unusual camera images that emulate a leather-bellowed "view camera".

Pseudo-Depth:
Remember, when we convert to Cartesian coordinates, our naive Tpers matrix gave us (fx/z, fy/z, fz/z) = f*(x/z, y/z, 1). Note that the 3rd Cartesian result doesn't really tell us anything at all: it's just the position of the image plane.
We'd like to have a z value that tells us where we are between znear and zfar; in fact, we NEED this value to make depth comparisons when rendering. We shouldn't draw any drawing primitive that's BEHIND those we've already drawn into the frame buffer; accordingly, we need a depth-like value we can store with pixels as we draw them. OpenGL keeps a depth value along with the color of every pixel it draws, and checks that depth before drawing each primitive: we only draw a new pixel if its depth is smaller (nearer the eye point) than any value previously drawn there. Instead of the naive Tpers matrix that yields (fx/z, fy/z, fz/z) = f*(x/z, y/z, 1), we can construct a 4x4 homogeneous matrix that gives us the same image coordinates AND a pseudo-depth determined by two "magic constants" a,b:

    f * (x/z, y/z, (az + b)/z)

While not quite the same as actual depth, this pseudo-depth is almost as good:
--it is monotonic (it doesn't change the front-to-back ordering of depths), and
--it gives us the greatest depth precision for vertex positions near the camera, and the least precision for depths nearest the far clip plane.

We can find the values a,b from the user-specified near/far planes, and then construct a 4x4 matrix to give us this result. While quite clever and interesting, deriving the mapping from the frustum to the CVV is rather tedious and arcane, so instead we will let the widely used gl.frustum() and gl.perspective() functions do it for us; just choose one and use it (both are implemented for you in cuon-matrix-quat.js).
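One way to solve for a and b (a sketch, assuming the 0-to-1 pseudo-depth range these notes use for the CVV, with z measured as a positive distance from the eye; the actual constants inside gl.frustum() differ with sign conventions):

```javascript
// Choose a,b so that pseudo-depth (a*z + b)/z maps z = znear -> 0 and z = zfar -> 1.
// From (a*n + b)/n = 0 and (a*f + b)/f = 1:  a = f/(f-n),  b = -f*n/(f-n).
function depthConstants(n, f) {
  return { a: f / (f - n), b: -f * n / (f - n) };
}
const pseudoDepth = (z, { a, b }) => (a * z + b) / z;

const k = depthConstants(1, 100);
console.log(pseudoDepth(1, k));    // 0
console.log(pseudoDepth(100, k));  // 1 (up to float rounding)

// Monotonic, but precision is spent near znear: half the depth range
// is already used up by about z = 1.98, only ~2% of the way to zfar.
console.log(pseudoDepth(1.98, k).toFixed(2));  // "0.50"
```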

Both functions use the same underlying matrix, with n = znear, f = zfar, both > 0 (f also equals the focal distance from origin to image plane, as before), and t = ytop, b = ybottom, r = xright, l = xleft:

    Tpers = [ 2n/(r-l)   0         (r+l)/(r-l)    0         ]
            [ 0          2n/(t-b)  (t+b)/(t-b)    0         ]
            [ 0          0        -(f+n)/(f-n)   -2fn/(f-n) ]
            [ 0          0        -1              0         ]

Step 4B) Suppose you want an orthographic camera instead of a perspective (pinhole) camera? Like the naive Tpers, you could use a naive Tortho matrix, which is just the 4x4 identity matrix with its 3rd row set entirely to zero (so that z is ignored), but that's a poor strategy. Orthographic cameras also need near, far, left, right, top, and bottom clipping planes for the most precise results, and the ortho() function implemented for you in cuon-matrix-quat.js applies this orthographic matrix:

    Tortho = [ 2/(r-l)  0        0        -(r+l)/(r-l) ]
             [ 0        2/(t-b)  0        -(t+b)/(t-b) ]
             [ 0        0       -2/(f-n)  -(f+n)/(f-n) ]
             [ 0        0        0         1           ]

Step 5) VIEWPORT-to-screen:
After we apply Tpers or Tortho and the perspective divide, all transformed vertices (u,v,n,w) fit into the Canonical View Volume (CVV), so that -1 <= u/w <= +1, -1 <= v/w <= +1, 0 <= n/w <= +1. The u/w, v/w values are the 2D coordinates within the camera image plane, and n/w is a normalized measure of true depth from the eye point to the vertex (note: not the same as z distance); it is the position of a vertex between the znear and zfar planes. To make the final image, we only need to map the camera's coordinates (u/w, v/w) to our HTML5 canvas, or "viewport", or screen. These coordinate values all stay within [-1,1] (because they were clipped to that range), but now we want to change them to match our on-screen viewport: the rectangle from [0,0] to [width,height], measured in pixels.
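The Tpers frustum matrix above can be checked numerically: a near-plane corner of the frustum should land exactly on a CVV corner. (A sketch in plain JavaScript; note that this standard matrix maps z = -n to z' = -1, the OpenGL clip-space convention, and the pipeline's later depth-range step remaps that to the 0-to-1 depth-buffer range.)

```javascript
// gl.frustum-style matrix, row-major. The bottom row [0,0,-1,0] sets w' = -z,
// so the perspective divide happens against the (positive) eye-space distance.
function frustum(l, r, b, t, n, f) {
  return [
    [2*n/(r-l), 0,          (r+l)/(r-l),   0],
    [0,         2*n/(t-b),  (t+b)/(t-b),   0],
    [0,         0,         -(f+n)/(f-n),  -2*f*n/(f-n)],
    [0,         0,         -1,             0],
  ];
}
function mat4MulVec4(m, v) {
  return m.map(row => row.reduce((sum, c, i) => sum + c * v[i], 0));
}

// Top-right corner of the near plane, (r, t, -n), should map to CVV corner (+1, +1, -1):
const M = frustum(-1, 1, -1, 1, 2, 10);
const h = mat4MulVec4(M, [1, 1, -2, 1]);
console.log(h.map(c => c / h[3]));  // [1, 1, -1, 1]
```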
Once again we apply a scale matrix, followed by a translate matrix, to adjust the u and v values (w is unchanged; n can be ignored):

    TSviewport = [ sx 0  0 tx ]
                 [ 0  sy 0 ty ]
                 [ 0  0  1 0  ]
                 [ 0  0  0 1  ]

After this change to viewport coordinates, divide by w: pixel coordinates (x,y) = (u/w, v/w).

3D Virtual Camera Summary:
The viewing transformation, the 4x4 matrix that transforms a world-space vertex (x,y,z,w) to its on-screen pixel coordinates (u/w, v/w), is the concatenation of 4 matrices:

    [u]                                                  [x]
    [v] = [TSviewport] [Tpers or Tortho] [VIEW] [MODEL]  [y]
    [n]                                                  [z]
    [w]                                                  [w]
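The viewport step amounts to sx = width/2, tx = width/2 and similarly for y (a sketch, assuming v runs upward; flip sy for pixel coordinates that count downward from the top of the canvas):

```javascript
// Map CVV coordinates (u/w, v/w), each in [-1, +1], to a width x height
// viewport in pixels: scale by half the size, then translate to the center.
function viewportTransform(width, height, u, v) {
  return [(u + 1) * width / 2, (v + 1) * height / 2];
}

console.log(viewportTransform(800, 600,  0,  0));  // [400, 300]  (image center)
console.log(viewportTransform(800, 600, -1, -1));  // [0, 0]
console.log(viewportTransform(800, 600,  1,  1));  // [800, 600]
```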

Viewing in WebGL:

See the gl.lookAt() and gl.setLookAt() functions:
  Eye point = the center of projection, the VRP, specified in 3D world-space coordinates.
  Center point = the 3D world-space aiming point for the camera; VPN = Center - Eye.
  Up = same as the VUP vector.

See the gl.perspective() and gl.setPerspective() functions:
  fovy: field-of-view angle in degrees in the y direction; determines the angle between the top and bottom clip planes.
  aspect: ratio of camera image width to camera image height.
  znear, zfar: always positive values; be sure znear < zfar.

See the gl.frustum() and gl.setFrustum() functions:
  Construct the Tpers matrix as described above; the user supplies left, right, top, bottom, near, far values.

See the gl.ortho() and gl.setOrtho() functions:
  Construct the orthographic matrix described above; the user supplies left, right, bottom, top, near, far values.


More information

INTRODUCTION TO COMPUTER GRAPHICS. cs123. It looks like a matrix Sort of. Viewing III. Projection in Practice 1 / 52

INTRODUCTION TO COMPUTER GRAPHICS. cs123. It looks like a matrix Sort of. Viewing III. Projection in Practice 1 / 52 It looks like a matrix Sort of Viewing III Projection in Practice 1 / 52 Arbitrary 3D views } view volumes/frusta spec d by placement and shape } Placement: } Position (a point) } look and up vectors }

More information

3D Viewing. Introduction to Computer Graphics Torsten Möller. Machiraju/Zhang/Möller

3D Viewing. Introduction to Computer Graphics Torsten Möller. Machiraju/Zhang/Möller 3D Viewing Introduction to Computer Graphics Torsten Möller Machiraju/Zhang/Möller Reading Chapter 4 of Angel Chapter 13 of Hughes, van Dam, Chapter 7 of Shirley+Marschner Machiraju/Zhang/Möller 2 Objectives

More information

CSE 167: Introduction to Computer Graphics Lecture #5: Projection. Jürgen P. Schulze, Ph.D. University of California, San Diego Fall Quarter 2017

CSE 167: Introduction to Computer Graphics Lecture #5: Projection. Jürgen P. Schulze, Ph.D. University of California, San Diego Fall Quarter 2017 CSE 167: Introduction to Computer Graphics Lecture #5: Projection Jürgen P. Schulze, Ph.D. University of California, San Diego Fall Quarter 2017 Announcements Friday: homework 1 due at 2pm Upload to TritonEd

More information

Projection and viewing. Computer Graphics CSE 167 Lecture 4

Projection and viewing. Computer Graphics CSE 167 Lecture 4 Projection and viewing Computer Graphics CSE 167 Lecture 4 CSE 167: Computer Graphics Review: transformation from the object (or model) coordinate frame to the camera (or eye) coordinate frame Projection

More information

Computer Graphics. Chapter 10 Three-Dimensional Viewing

Computer Graphics. Chapter 10 Three-Dimensional Viewing Computer Graphics Chapter 10 Three-Dimensional Viewing Chapter 10 Three-Dimensional Viewing Part I. Overview of 3D Viewing Concept 3D Viewing Pipeline vs. OpenGL Pipeline 3D Viewing-Coordinate Parameters

More information

Computer Vision Projective Geometry and Calibration. Pinhole cameras

Computer Vision Projective Geometry and Calibration. Pinhole cameras Computer Vision Projective Geometry and Calibration Professor Hager http://www.cs.jhu.edu/~hager Jason Corso http://www.cs.jhu.edu/~jcorso. Pinhole cameras Abstract camera model - box with a small hole

More information

Graphics pipeline and transformations. Composition of transformations

Graphics pipeline and transformations. Composition of transformations Graphics pipeline and transformations Composition of transformations Order matters! ( rotation * translation translation * rotation) Composition of transformations = matrix multiplication: if T is a rotation

More information

Viewing. Part II (The Synthetic Camera) CS123 INTRODUCTION TO COMPUTER GRAPHICS. Andries van Dam 10/10/2017 1/31

Viewing. Part II (The Synthetic Camera) CS123 INTRODUCTION TO COMPUTER GRAPHICS. Andries van Dam 10/10/2017 1/31 Viewing Part II (The Synthetic Camera) Brownie camera courtesy of http://www.geh.org/fm/brownie2/htmlsrc/me13000034_ful.html 1/31 The Camera and the Scene } What does a camera do? } Takes in a 3D scene

More information

3D Viewing Episode 2

3D Viewing Episode 2 3D Viewing Episode 2 1 Positioning and Orienting the Camera Recall that our projection calculations, whether orthographic or frustum/perspective, were made with the camera at (0, 0, 0) looking down the

More information

Announcements. Submitting Programs Upload source and executable(s) (Windows or Mac) to digital dropbox on Blackboard

Announcements. Submitting Programs Upload source and executable(s) (Windows or Mac) to digital dropbox on Blackboard Now Playing: Vertex Processing: Viewing Coulibaly Amadou & Mariam from Dimanche a Bamako Released August 2, 2005 Rick Skarbez, Instructor COMP 575 September 27, 2007 Announcements Programming Assignment

More information

Game Architecture. 2/19/16: Rasterization

Game Architecture. 2/19/16: Rasterization Game Architecture 2/19/16: Rasterization Viewing To render a scene, need to know Where am I and What am I looking at The view transform is the matrix that does this Maps a standard view space into world

More information

1 OpenGL - column vectors (column-major ordering)

1 OpenGL - column vectors (column-major ordering) OpenGL - column vectors (column-major ordering) OpenGL uses column vectors and matrices are written in a column-major order. As a result, matrices are concatenated in right-to-left order, with the first

More information

CMSC427 Transformations II: Viewing. Credit: some slides from Dr. Zwicker

CMSC427 Transformations II: Viewing. Credit: some slides from Dr. Zwicker CMSC427 Transformations II: Viewing Credit: some slides from Dr. Zwicker What next? GIVEN THE TOOLS OF The standard rigid and affine transformations Their representation with matrices and homogeneous coordinates

More information

Single View Geometry. Camera model & Orientation + Position estimation. What am I?

Single View Geometry. Camera model & Orientation + Position estimation. What am I? Single View Geometry Camera model & Orientation + Position estimation What am I? Vanishing point Mapping from 3D to 2D Point & Line Goal: Point Homogeneous coordinates represent coordinates in 2 dimensions

More information

3D Viewing Episode 2

3D Viewing Episode 2 3D Viewing Episode 2 1 Positioning and Orienting the Camera Recall that our projection calculations, whether orthographic or frustum/perspective, were made with the camera at (0, 0, 0) looking down the

More information

Lecture 3 Sections 2.2, 4.4. Mon, Aug 31, 2009

Lecture 3 Sections 2.2, 4.4. Mon, Aug 31, 2009 Model s Lecture 3 Sections 2.2, 4.4 World s Eye s Clip s s s Window s Hampden-Sydney College Mon, Aug 31, 2009 Outline Model s World s Eye s Clip s s s Window s 1 2 3 Model s World s Eye s Clip s s s Window

More information

Computer Graphics. P05 Viewing in 3D. Part 1. Aleksandra Pizurica Ghent University

Computer Graphics. P05 Viewing in 3D. Part 1. Aleksandra Pizurica Ghent University Computer Graphics P05 Viewing in 3D Part 1 Aleksandra Pizurica Ghent University Telecommunications and Information Processing Image Processing and Interpretation Group Viewing in 3D: context Create views

More information

Viewing. Reading: Angel Ch.5

Viewing. Reading: Angel Ch.5 Viewing Reading: Angel Ch.5 What is Viewing? Viewing transform projects the 3D model to a 2D image plane 3D Objects (world frame) Model-view (camera frame) View transform (projection frame) 2D image View

More information

Viewing COMPSCI 464. Image Credits: Encarta and

Viewing COMPSCI 464. Image Credits: Encarta and Viewing COMPSCI 464 Image Credits: Encarta and http://www.sackville.ednet.ns.ca/art/grade/drawing/perspective4.html Graphics Pipeline Graphics hardware employs a sequence of coordinate systems The location

More information

OpenGL Transformations

OpenGL Transformations OpenGL Transformations R. J. Renka Department of Computer Science & Engineering University of North Texas 02/18/2014 Introduction The most essential aspect of OpenGL is the vertex pipeline described in

More information

Ray Tracer I: Ray Casting Due date: 12:00pm December 3, 2001

Ray Tracer I: Ray Casting Due date: 12:00pm December 3, 2001 Computer graphics Assignment 5 1 Overview Ray Tracer I: Ray Casting Due date: 12:00pm December 3, 2001 In this assignment you will implement the camera and several primitive objects for a ray tracer. We

More information

Drawing in 3D (viewing, projection, and the rest of the pipeline)

Drawing in 3D (viewing, projection, and the rest of the pipeline) Drawing in 3D (viewing, projection, and the rest of the pipeline) CS559 Spring 2017 Lecture 6 February 2, 2017 The first 4 Key Ideas 1. Work in convenient coordinate systems. Use transformations to get

More information

CSC 305 The Graphics Pipeline-1

CSC 305 The Graphics Pipeline-1 C. O. P. d y! "#"" (-1, -1) (1, 1) x z CSC 305 The Graphics Pipeline-1 by Brian Wyvill The University of Victoria Graphics Group Perspective Viewing Transformation l l l Tools for creating and manipulating

More information

3D Viewing. CS 4620 Lecture 8

3D Viewing. CS 4620 Lecture 8 3D Viewing CS 46 Lecture 8 13 Steve Marschner 1 Viewing, backward and forward So far have used the backward approach to viewing start from pixel ask what part of scene projects to pixel explicitly construct

More information

I N T R O D U C T I O N T O C O M P U T E R G R A P H I C S

I N T R O D U C T I O N T O C O M P U T E R G R A P H I C S 3D Viewing: the Synthetic Camera Programmer s reference model for specifying 3D view projection parameters to the computer General synthetic camera (e.g., PHIGS Camera, Computer Graphics: Principles and

More information

Viewing. Announcements. A Note About Transformations. Orthographic and Perspective Projection Implementation Vanishing Points

Viewing. Announcements. A Note About Transformations. Orthographic and Perspective Projection Implementation Vanishing Points Viewing Announcements. A Note About Transformations. Orthographic and Perspective Projection Implementation Vanishing Points Viewing Announcements. A Note About Transformations. Orthographic and Perspective

More information

Shadows in the graphics pipeline

Shadows in the graphics pipeline Shadows in the graphics pipeline Steve Marschner Cornell University CS 569 Spring 2008, 19 February There are a number of visual cues that help let the viewer know about the 3D relationships between objects

More information

The Graphics Pipeline and OpenGL I: Transformations!

The Graphics Pipeline and OpenGL I: Transformations! ! The Graphics Pipeline and OpenGL I: Transformations! Gordon Wetzstein! Stanford University! EE 267 Virtual Reality! Lecture 2! stanford.edu/class/ee267/!! Albrecht Dürer, Underweysung der Messung mit

More information

UNIT 2 2D TRANSFORMATIONS

UNIT 2 2D TRANSFORMATIONS UNIT 2 2D TRANSFORMATIONS Introduction With the procedures for displaying output primitives and their attributes, we can create variety of pictures and graphs. In many applications, there is also a need

More information

Computer Graphics. Lecture 04 3D Projection and Visualization. Edirlei Soares de Lima.

Computer Graphics. Lecture 04 3D Projection and Visualization. Edirlei Soares de Lima. Computer Graphics Lecture 4 3D Projection and Visualization Edirlei Soares de Lima Projection and Visualization An important use of geometric transformations in computer

More information

Single View Geometry. Camera model & Orientation + Position estimation. What am I?

Single View Geometry. Camera model & Orientation + Position estimation. What am I? Single View Geometry Camera model & Orientation + Position estimation What am I? Vanishing points & line http://www.wetcanvas.com/ http://pennpaint.blogspot.com/ http://www.joshuanava.biz/perspective/in-other-words-the-observer-simply-points-in-thesame-direction-as-the-lines-in-order-to-find-their-vanishing-point.html

More information

Chapter 8 Three-Dimensional Viewing Operations

Chapter 8 Three-Dimensional Viewing Operations Projections Chapter 8 Three-Dimensional Viewing Operations Figure 8.1 Classification of planar geometric projections Figure 8.2 Planar projection Figure 8.3 Parallel-oblique projection Figure 8.4 Orthographic

More information

2D rendering takes a photo of the 2D scene with a virtual camera that selects an axis aligned rectangle from the scene. The photograph is placed into

2D rendering takes a photo of the 2D scene with a virtual camera that selects an axis aligned rectangle from the scene. The photograph is placed into 2D rendering takes a photo of the 2D scene with a virtual camera that selects an axis aligned rectangle from the scene. The photograph is placed into the viewport of the current application window. A pixel

More information

Computer Graphics Viewing

Computer Graphics Viewing Computer Graphics Viewing What Are Projections? Our 3-D scenes are all specified in 3-D world coordinates To display these we need to generate a 2-D image - project objects onto a picture plane Picture

More information

Camera Placement for Ray Tracing

Camera Placement for Ray Tracing Camera Placement for Ray Tracing Lecture #3 Tuesday 0/4/4 st Review Camera Placement! The following slides review last Thursday s Lecture on world to camera transforms.! To see shift to raytracing context,

More information

Fachhochschule Regensburg, Germany, February 15, 2017

Fachhochschule Regensburg, Germany, February 15, 2017 s Operations Fachhochschule Regensburg, Germany, February 15, 2017 s Motivating Example s Operations To take a photograph of a scene: Set up your tripod and point camera at the scene (Viewing ) Position

More information

CS 325 Computer Graphics

CS 325 Computer Graphics CS 325 Computer Graphics 02 / 29 / 2012 Instructor: Michael Eckmann Today s Topics Questions? Comments? Specifying arbitrary views Transforming into Canonical view volume View Volumes Assuming a rectangular

More information

Introduction to Computer Graphics 4. Viewing in 3D

Introduction to Computer Graphics 4. Viewing in 3D Introduction to Computer Graphics 4. Viewing in 3D National Chiao Tung Univ, Taiwan By: I-Chen Lin, Assistant Professor Textbook: E.Angel, Interactive Computer Graphics, 5 th Ed., Addison Wesley Ref: Hearn

More information

Chap 7, 2008 Spring Yeong Gil Shin

Chap 7, 2008 Spring Yeong Gil Shin Three-Dimensional i Viewingi Chap 7, 28 Spring Yeong Gil Shin Viewing i Pipeline H d fi i d? How to define a window? How to project onto the window? Rendering "Create a picture (in a synthetic camera)

More information

(Refer Slide Time: 00:01:26)

(Refer Slide Time: 00:01:26) Computer Graphics Prof. Sukhendu Das Dept. of Computer Science and Engineering Indian Institute of Technology, Madras Lecture - 9 Three Dimensional Graphics Welcome back everybody to the lecture on computer

More information

Getting Started. Overview (1): Getting Started (1): Getting Started (2): Getting Started (3): COSC 4431/5331 Computer Graphics.

Getting Started. Overview (1): Getting Started (1): Getting Started (2): Getting Started (3): COSC 4431/5331 Computer Graphics. Overview (1): Getting Started Setting up OpenGL/GLUT on Windows/Visual Studio COSC 4431/5331 Computer Graphics Thursday January 22, 2004 Overview Introduction Camera analogy Matrix operations and OpenGL

More information

Overview. Viewing and perspectives. Planar Geometric Projections. Classical Viewing. Classical views Computer viewing Perspective normalization

Overview. Viewing and perspectives. Planar Geometric Projections. Classical Viewing. Classical views Computer viewing Perspective normalization Overview Viewing and perspectives Classical views Computer viewing Perspective normalization Classical Viewing Viewing requires three basic elements One or more objects A viewer with a projection surface

More information

Virtual Cameras and The Transformation Pipeline

Virtual Cameras and The Transformation Pipeline Virtual Cameras and The Transformation Pipeline Anton Gerdelan gerdela@scss.tcd.ie with content from Rachel McDonnell 13 Oct 2014 Virtual Camera We want to navigate through our scene in 3d Solution = create

More information

Chapter 5. Projections and Rendering

Chapter 5. Projections and Rendering Chapter 5 Projections and Rendering Topics: Perspective Projections The rendering pipeline In order to view manipulate and view a graphics object we must find ways of storing it a computer-compatible way.

More information

CITSTUDENTS.IN VIEWING. Computer Graphics and Visualization. Classical and computer viewing. Viewing with a computer. Positioning of the camera

CITSTUDENTS.IN VIEWING. Computer Graphics and Visualization. Classical and computer viewing. Viewing with a computer. Positioning of the camera UNIT - 6 7 hrs VIEWING Classical and computer viewing Viewing with a computer Positioning of the camera Simple projections Projections in OpenGL Hiddensurface removal Interactive mesh displays Parallelprojection

More information

CSE 167: Introduction to Computer Graphics Lecture #4: Vertex Transformation

CSE 167: Introduction to Computer Graphics Lecture #4: Vertex Transformation CSE 167: Introduction to Computer Graphics Lecture #4: Vertex Transformation Jürgen P. Schulze, Ph.D. University of California, San Diego Fall Quarter 2013 Announcements Project 2 due Friday, October 11

More information

CS230 : Computer Graphics Lecture 6: Viewing Transformations. Tamar Shinar Computer Science & Engineering UC Riverside

CS230 : Computer Graphics Lecture 6: Viewing Transformations. Tamar Shinar Computer Science & Engineering UC Riverside CS230 : Computer Graphics Lecture 6: Viewing Transformations Tamar Shinar Computer Science & Engineering UC Riverside Rendering approaches 1. image-oriented foreach pixel... 2. object-oriented foreach

More information

MORE OPENGL. Pramook Khungurn CS 4621, Fall 2011

MORE OPENGL. Pramook Khungurn CS 4621, Fall 2011 MORE OPENGL Pramook Khungurn CS 4621, Fall 2011 SETTING UP THE CAMERA Recall: OpenGL Vertex Transformations Coordinates specified by glvertex are transformed. End result: window coordinates (in pixels)

More information

2D/3D Geometric Transformations and Scene Graphs

2D/3D Geometric Transformations and Scene Graphs 2D/3D Geometric Transformations and Scene Graphs Week 4 Acknowledgement: The course slides are adapted from the slides prepared by Steve Marschner of Cornell University 1 A little quick math background

More information

Perspective transformations

Perspective transformations Perspective transformations Transformation pipeline Modelview: model (position objects) + view (position the camera) Projection: map viewing volume to a standard cube Perspective division: project D to

More information

COMS 4160: Problems on Transformations and OpenGL

COMS 4160: Problems on Transformations and OpenGL COMS 410: Problems on Transformations and OpenGL Ravi Ramamoorthi 1. Write the homogeneous 4x4 matrices for the following transforms: Translate by +5 units in the X direction Rotate by 30 degrees about

More information

CSE 167: Introduction to Computer Graphics Lecture #9: Visibility. Jürgen P. Schulze, Ph.D. University of California, San Diego Fall Quarter 2018

CSE 167: Introduction to Computer Graphics Lecture #9: Visibility. Jürgen P. Schulze, Ph.D. University of California, San Diego Fall Quarter 2018 CSE 167: Introduction to Computer Graphics Lecture #9: Visibility Jürgen P. Schulze, Ph.D. University of California, San Diego Fall Quarter 2018 Announcements Midterm Scores are on TritonEd Exams to be

More information

Computer Viewing. CS 537 Interactive Computer Graphics Prof. David E. Breen Department of Computer Science

Computer Viewing. CS 537 Interactive Computer Graphics Prof. David E. Breen Department of Computer Science Computer Viewing CS 537 Interactive Computer Graphics Prof. David E. Breen Department of Computer Science 1 Objectives Introduce the mathematics of projection Introduce OpenGL viewing functions Look at

More information

3D Viewing. CS 4620 Lecture Steve Marschner. Cornell CS4620 Spring 2018 Lecture 9

3D Viewing. CS 4620 Lecture Steve Marschner. Cornell CS4620 Spring 2018 Lecture 9 3D Viewing CS 46 Lecture 9 Cornell CS46 Spring 18 Lecture 9 18 Steve Marschner 1 Viewing, backward and forward So far have used the backward approach to viewing start from pixel ask what part of scene

More information

CSE452 Computer Graphics

CSE452 Computer Graphics CSE45 Computer Graphics Lecture 8: Computer Projection CSE45 Lecture 8: Computer Projection 1 Review In the last lecture We set up a Virtual Camera Position Orientation Clipping planes Viewing angles Orthographic/Perspective

More information

Projection Lecture Series

Projection Lecture Series Projection 25.353 Lecture Series Prof. Gary Wang Department of Mechanical and Manufacturing Engineering The University of Manitoba Overview Coordinate Systems Local Coordinate System (LCS) World Coordinate

More information

CT5510: Computer Graphics. Transformation BOCHANG MOON

CT5510: Computer Graphics. Transformation BOCHANG MOON CT5510: Computer Graphics Transformation BOCHANG MOON 2D Translation Transformations such as rotation and scale can be represented using a matrix M.., How about translation? No way to express this using

More information

Viewing with Computers (OpenGL)

Viewing with Computers (OpenGL) We can now return to three-dimension?', graphics from a computer perspective. Because viewing in computer graphics is based on the synthetic-camera model, we should be able to construct any of the classical

More information

Perspective Projection and Texture Mapping

Perspective Projection and Texture Mapping Lecture 7: Perspective Projection and Texture Mapping Computer Graphics CMU 15-462/15-662, Spring 2018 Perspective & Texture PREVIOUSLY: - transformation (how to manipulate primitives in space) - rasterization

More information

DD2423 Image Analysis and Computer Vision IMAGE FORMATION. Computational Vision and Active Perception School of Computer Science and Communication

DD2423 Image Analysis and Computer Vision IMAGE FORMATION. Computational Vision and Active Perception School of Computer Science and Communication DD2423 Image Analysis and Computer Vision IMAGE FORMATION Mårten Björkman Computational Vision and Active Perception School of Computer Science and Communication November 8, 2013 1 Image formation Goal:

More information

Drawing in 3D (viewing, projection, and the rest of the pipeline)

Drawing in 3D (viewing, projection, and the rest of the pipeline) Drawing in 3D (viewing, projection, and the rest of the pipeline) CS559 Fall 2016 Lecture 6/7 September 26-28 2016 The first 4 Key Ideas 1. Work in convenient coordinate systems. Use transformations to

More information

Computing the 3D Viewing Transformation

Computing the 3D Viewing Transformation Computing the 3D Viewing Transformation John E. Howland Department of Computer Science Trinity University 715 Stadium Drive San Antonio, Texas 78212-7200 Voice: (210) 999-7380 Fax: (210) 999-7477 E-mail:

More information

COMP 175 COMPUTER GRAPHICS. Ray Casting. COMP 175: Computer Graphics April 26, Erik Anderson 09 Ray Casting

COMP 175 COMPUTER GRAPHICS. Ray Casting. COMP 175: Computer Graphics April 26, Erik Anderson 09 Ray Casting Ray Casting COMP 175: Computer Graphics April 26, 2018 1/41 Admin } Assignment 4 posted } Picking new partners today for rest of the assignments } Demo in the works } Mac demo may require a new dylib I

More information

CSC 470 Computer Graphics. Three Dimensional Viewing

CSC 470 Computer Graphics. Three Dimensional Viewing CSC 470 Computer Graphics Three Dimensional Viewing 1 Today s Lecture Three Dimensional Viewing Developing a Camera Fly through a scene Mathematics of Projections Producing Stereo Views 2 Introduction

More information

Coordinate Transformations & Homogeneous Coordinates

Coordinate Transformations & Homogeneous Coordinates CS-C31 (Introduction to) Computer Graphics Jaakko Lehtinen Coordinate Transformations & Homogeneous Coordinates Lots of slides from Frédo Durand CS-C31 Fall 216 Lehtinen Outline Intro to Transformations

More information

Agenda. Perspective projection. Rotations. Camera models

Agenda. Perspective projection. Rotations. Camera models Image formation Agenda Perspective projection Rotations Camera models Light as a wave + particle Light as a wave (ignore for now) Refraction Diffraction Image formation Digital Image Film Human eye Pixel

More information

Rigid Body Motion and Image Formation. Jana Kosecka, CS 482

Rigid Body Motion and Image Formation. Jana Kosecka, CS 482 Rigid Body Motion and Image Formation Jana Kosecka, CS 482 A free vector is defined by a pair of points : Coordinates of the vector : 1 3D Rotation of Points Euler angles Rotation Matrices in 3D 3 by 3

More information

Computer Science 426 Midterm 3/11/04, 1:30PM-2:50PM

Computer Science 426 Midterm 3/11/04, 1:30PM-2:50PM NAME: Login name: Computer Science 46 Midterm 3//4, :3PM-:5PM This test is 5 questions, of equal weight. Do all of your work on these pages (use the back for scratch space), giving the answer in the space

More information

Chap 3 Viewing Pipeline Reading: Angel s Interactive Computer Graphics, Sixth ed. Sections 4.1~4.7

Chap 3 Viewing Pipeline Reading: Angel s Interactive Computer Graphics, Sixth ed. Sections 4.1~4.7 Chap 3 Viewing Pipeline Reading: Angel s Interactive Computer Graphics, Sixth ed. Sections 4.~4.7 Chap 3 View Pipeline, Comp. Graphics (U) CGGM Lab., CS Dept., NCTU Jung Hong Chuang Outline View parameters

More information

Vector Algebra Transformations. Lecture 4

Vector Algebra Transformations. Lecture 4 Vector Algebra Transformations Lecture 4 Cornell CS4620 Fall 2008 Lecture 4 2008 Steve Marschner 1 Geometry A part of mathematics concerned with questions of size, shape, and relative positions of figures

More information

Math background. 2D Geometric Transformations. Implicit representations. Explicit representations. Read: CS 4620 Lecture 6

Math background. 2D Geometric Transformations. Implicit representations. Explicit representations. Read: CS 4620 Lecture 6 Math background 2D Geometric Transformations CS 4620 Lecture 6 Read: Chapter 2: Miscellaneous Math Chapter 5: Linear Algebra Notation for sets, functions, mappings Linear transformations Matrices Matrix-vector

More information

Orthogonal Projection Matrices. Angel and Shreiner: Interactive Computer Graphics 7E Addison-Wesley 2015

Orthogonal Projection Matrices. Angel and Shreiner: Interactive Computer Graphics 7E Addison-Wesley 2015 Orthogonal Projection Matrices 1 Objectives Derive the projection matrices used for standard orthogonal projections Introduce oblique projections Introduce projection normalization 2 Normalization Rather

More information

Three-Dimensional Graphics III. Guoying Zhao 1 / 67

Three-Dimensional Graphics III. Guoying Zhao 1 / 67 Computer Graphics Three-Dimensional Graphics III Guoying Zhao 1 / 67 Classical Viewing Guoying Zhao 2 / 67 Objectives Introduce the classical views Compare and contrast image formation by computer with

More information

2D and 3D Transformations AUI Course Denbigh Starkey

2D and 3D Transformations AUI Course Denbigh Starkey 2D and 3D Transformations AUI Course Denbigh Starkey. Introduction 2 2. 2D transformations using Cartesian coordinates 3 2. Translation 3 2.2 Rotation 4 2.3 Scaling 6 3. Introduction to homogeneous coordinates

More information

3D Mathematics. Co-ordinate systems, 3D primitives and affine transformations

3D Mathematics. Co-ordinate systems, 3D primitives and affine transformations 3D Mathematics Co-ordinate systems, 3D primitives and affine transformations Coordinate Systems 2 3 Primitive Types and Topologies Primitives Primitive Types and Topologies 4 A primitive is the most basic

More information

Drawing in 3D (viewing, projection, and the rest of the pipeline)

Drawing in 3D (viewing, projection, and the rest of the pipeline) Drawing in 3D (viewing, projection, and the rest of the pipeline) CS559 Spring 2016 Lecture 6 February 11, 2016 The first 4 Key Ideas 1. Work in convenient coordinate systems. Use transformations to get

More information

CS464 Oct 3 rd Assignment 3 Due 10/6/2017 Due 10/8/2017 Implementation Outline

CS464 Oct 3 rd Assignment 3 Due 10/6/2017 Due 10/8/2017 Implementation Outline CS464 Oct 3 rd 2017 Assignment 3 Due 10/6/2017 Due 10/8/2017 Implementation Outline Assignment 3 Skeleton A good sequence to implement the program 1. Start with a flat terrain sitting at Y=0 and Cam at

More information

Today. Today. Introduction. Matrices. Matrices. Computergrafik. Transformations & matrices Introduction Matrices

Today. Today. Introduction. Matrices. Matrices. Computergrafik. Transformations & matrices Introduction Matrices Computergrafik Matthias Zwicker Universität Bern Herbst 2008 Today Transformations & matrices Introduction Matrices Homogeneous Affine transformations Concatenating transformations Change of Common coordinate

More information

Figure 1. Lecture 1: Three Dimensional graphics: Projections and Transformations

Figure 1. Lecture 1: Three Dimensional graphics: Projections and Transformations Lecture 1: Three Dimensional graphics: Projections and Transformations Device Independence We will start with a brief discussion of two dimensional drawing primitives. At the lowest level of an operating

More information