EECS 351-1 : Introduction to Computer Graphics Building the Virtual Camera ver. 1.5
3D Transforms (cont'd):

We haven't yet explored ALL of the geometric transformations available within a 4x4 matrix.
--All fit in a 4x4 matrix, suggesting up to 16 independent degrees of freedom. We already know a bit about 9 of them: 3 kinds of Translate (in x,y,z directions) + 3 kinds of Rotate (around x,y,z axes) + 3 kinds of Scale (along x,y,z directions). What geometric explanations can we find for the remaining 7 degrees of freedom?

3 kinds of shear (aka "skew"): the transforms that turn a square into a parallelogram:
--Sxy sets the x-y shear amount; similarly, Sxz and Syz set the x-z and y-z shear amounts:

Shear(Sxy,Sxz,Syz) = [ 1   Sxy  Sxz  0 ]
                     [ Sxy  1   Syz  0 ]
                     [ Sxz  Syz  1   0 ]
                     [ 0    0    0   1 ]

3 kinds of perspective transform:
--Px causes perspective distortions along the x-axis direction; Py, Pz along the y and z axes:

Persp(px,py,pz) = [ 1  0  0  0 ]
                  [ 0  1  0  0 ]
                  [ 0  0  1  0 ]
                  [ px py pz 1 ]

--Finally, we have that lower-right 1 in the matrix. Remember how we convert homogeneous coordinates [x,y,z,w] to "real" or Cartesian 3D coordinates [x/w, y/w, z/w]? If we change that 1, it scales w; it acts as a simple scale factor on all real coordinates, and thus it's redundant: we have only 15 independent degrees of freedom in our 4x4 matrix.

Computer vision researchers classify a 4x4 transform matrix by a hierarchy of categories:
--rigid-body transforms == preserve angles and lengths; do not distort or reshape a model; include any combination of rotation and translation.
--affine transforms == preserve parallel lines, but not angles or line lengths; include rotation, translation, scale, and shear or skew.
--perspective transforms == link x, y, or z to the homogeneous coordinate w, but lose any ability to guarantee the transformation will preserve angles, lengths, or parallelism. It lacks easy intuition, but it provides the math behind a pinhole camera image, neatly fitted into one 4x4 matrix form!
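A minimal sketch of the shear transform above, in plain JavaScript (the matrix helper and test values here are illustrative, not from any library): transforming the corners of a unit square by the (symmetric) shear form given above still yields a parallelogram, because opposite edges stay parallel under any linear map.

```javascript
// Apply a 4x4 matrix (stored as an array of rows) to a homogeneous point [x, y, z, w].
function transform(M, v) {
  return M.map(row => row[0]*v[0] + row[1]*v[1] + row[2]*v[2] + row[3]*v[3]);
}

// Shear matrix in the symmetric form shown above, here with only Sxy nonzero.
function shear(Sxy, Sxz, Syz) {
  return [
    [1,   Sxy, Sxz, 0],
    [Sxy, 1,   Syz, 0],
    [Sxz, Syz, 1,   0],
    [0,   0,   0,   1],
  ];
}

// Transform the corners of a unit square lying in the z = 0 plane.
const S = shear(0.5, 0, 0);
const corners = [[0,0,0,1], [1,0,0,1], [1,1,0,1], [0,1,0,1]].map(p => transform(S, p));
// Opposite edges of the result remain parallel: the square becomes a parallelogram.
```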
Cameras and 3D Viewing: Algebra and Intuition

Camera: a device that flattens 3D space onto a 2D plane. Let's get the intuition first, then the math: a planar perspective camera, in its simplest, most general form, is just a plane and a point. The plane splits the universe into two halves; the half that contains the point, which is the location of our eye or camera, and then the other half of the universe that we'll see with our eye or camera.

[Figure: 2D diagram -- the center of projection (a point), focal length f, the 2D focal plane, and a 3D model (connected vertices); a model vertex (xv, yv, zv, 1) projects to the image point (xv, yv, zv, zv/f).]

The point is known by many names: the camera location, the center of projection (COP), the viewpoint, the view reference point, etc., and the plane is the image plane, the near clip plane, or even the focal plane. We paint our planar perspective image onto that plane in a very simple way. Draw a line from our point (viewpoint, camera point, etc.) through the plane, and trace it into the other half of the universe until it hits something. Copy the color we find on that something and paint it onto the plane where our tracing-line pierced it. Use this method to set the color of all the plane points to form the complete planar perspective image of one half of the universe. However, real cameras are finite: we can't capture the entire infinitely large picture plane, and instead we usually select the picture within a rectangular region of that plane.

First, let's define the camera's focal length f as the shortest distance from its center-of-projection (COP) to the universe-splitting image plane. We make a vector perpendicular to the image plane that measures the distance from our eye-point to the plane, and call that our direction-of-gaze, or "lookat" direction. We define a rectangular region of the plane, usually (but not always) centered at the point nearest the COP, and record its colors as our perspective image. The size and placement of that rectangle, along with focal length f, describe all the basic requirements for simple lenses in any kind of planar perspective camera.

Now let's formalize these ideas, as illustrated in the simple 2D diagram above: given a plane and a point, suppose your eye is located at the point and that it becomes the origin of a coordinate system. Call the origin point the eye point, or the center of projection, or the view reference point (VRP), and use your eye to look down the -z axis. Why not +z?
Because we want to use right-handed coordinate systems, and we want to keep the x & y axes of our 3D camera for use as the x & y axes of our 2D image, aimed rightward & upward. At z = -f = -znear, we construct the focal plane that splits the universe in half.

Draw a picture on a 2D focal plane. How? Trace rays from the eye origin (VRP) to points (vertices, usually) in that other half of the universe, and find their locations on the focal plane. Given a 3D vertex point (xv, yv, zv), where do we draw a point on our image plane?
ANS: the image point is (f*xv/-zv, f*yv/-zv, f*zv/-zv) = f*(xv/-zv, yv/-zv, -1)

As we change the focal distance f between the camera origin (e.g. center of projection) and the 2D image plane, the image gets scaled up or down, larger or smaller, but does not otherwise change: adjusting the f parameter is equivalent to moving the plane closer to or farther from the eye, and also equivalent to adjusting a zoom lens. However, if we instead choose to keep f fixed and move the vertices of a 3D model forwards or backwards along the zv axis, the effect is more complex. Vertices nearer the camera, those with small zv values, change their on-screen image positions far more than vertices farther away.
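The projection formula above can be sketched directly; this is an illustrative helper (not a library function), with the camera at the origin looking down -z. Doubling f doubles every image coordinate, the "pure zoom" behavior described above.

```javascript
// Project a 3D vertex (xv, yv, zv), with zv < 0 in front of the camera,
// onto the image plane at z = -f by similar triangles (divide by -zv).
function projectToImagePlane(f, xv, yv, zv) {
  return [f * xv / -zv, f * yv / -zv, -f];  // = f * (xv/-zv, yv/-zv, -1)
}

const p = projectToImagePlane(1, 2, 1, -4);   // a vertex 4 units in front of the eye
const q = projectToImagePlane(2, 2, 1, -4);   // same vertex, doubled focal length
// q is exactly p scaled by 2: changing f only zooms; it never distorts.
```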
Therefore, moving a 3D object closer or farther away is NOT equivalent to adjusting a zoom lens, because it causes depth-dependent image changes: some call it "foreshortening", others call it "changing perspective" or "flattening" the image. As a 3D object moves away from the camera, the rays that carry color from its surface points to our camera's center-of-projection (COP) get more and more parallel, with larger angles for shorter distances. Changing those angles yields a different appearance due to color changes (imagine the surface of a CD), and may change its occlusion: a surface that moves near the camera may block your view of more distant points.

No matter how large this 3D object may be, as -zv reaches infinity the image of the object shrinks to a single point. This is the image's z-axis "vanishing point"; one of the points that painters use to help them make an accurate perspective drawing. For scenes with world-space axes that don't match the camera axes, drawing lines parallel to each of these world-space axes will cause them to converge on other points; if they fall within the picture itself, we can see 1, 2, or 3 vanishing points. Any parallel lines in 3D that aren't parallel to our image plane will form a vanishing point somewhere on our image plane, so you can have as many vanishing points as you wish in a drawing; the number of vanishing points doesn't really tell you much about the camera that made an image, only about the content of the scene that the camera captured. (Vanishing points in paintings define convergence of major axes of viewed objects, such as horizontal lines where walls meet floors, or vertical lines where walls meet each other.)

--Easy Question: suppose your camera DIDN'T divide by zv: then where does the 3D vertex point appear in the image plane? Then we form no vanishing points, and the model stays the same size no matter how far away.
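The vanishing-point claim is easy to check numerically. In this hypothetical sketch (the line definitions are made up for illustration), two parallel 3D lines with shared direction d project, far from the eye, to the same image point f*(dx/-dz, dy/-dz), even though the lines never meet in 3D:

```javascript
// Project with the formula above (f = focal length, camera looks down -z).
function project(f, [x, y, z]) {
  return [f * x / -z, f * y / -z];
}

const f = 1;
const d = [1, 2, -1];                          // shared direction, receding from the eye
const lineA = t => [0 + t * d[0],  0 + t * d[1], -1 + t * d[2]];
const lineB = t => [5 + t * d[0], -3 + t * d[1], -2 + t * d[2]];
const vp = [f * d[0] / -d[2], f * d[1] / -d[2]];  // predicted vanishing point [1, 2]

const a = project(f, lineA(1e6));              // far along line A
const b = project(f, lineB(1e6));              // far along line B
// a and b both converge to vp as t grows, despite different starting points.
```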
This is known as orthographic projection, and if we define d as the size of the image, we can use it to replace f and specify the distance from the origin like this:
Answer: the image point is (d*xv, d*yv, d) = d*(xv, yv, 1)

Homogeneous Coordinates: Clever Matrix Math for Cameras and for 3D Viewing:

The intuitive camera-construction method above isn't linear, and isn't suitable for reliable graphics programs. It requires us to divide by numbers that can change; if we use it to draw pictures by the millions, eventually our 3D drawing program will fail with a divide-by-zero problem. It is messy to use, too: how can you move the camera freely? What if you want an image plane that isn't perpendicular to the z axis, that doesn't have its vanishing point in the very center of the picture it makes?

THIS is the real reason why we use homogeneous coords & matrices (4x4) for graphics:
--it lets us make one 4x4 matrix that maps all 3D points to a 2D image plane, like a camera;
--it lets us position the camera separately from the model, and position them all in world space;
--it never causes divide-by-zero errors as you compute transforms and images;
--it lets you adjust your camera easily, and can make pictures that tilt and/or shift the image plane (e.g. emulate an architectural "view camera"), but by a method simpler than moving the camera's lens mounts: you actually just adjust the position of the image rectangle you chose on the plane that splits the universe;
--you can transform just the vertices of a model to the image plane, and then draw the lines and polygons in 2D.

The most-basic perspective camera matrix (the same as the algebraic one above) just messes around with the bottom-most row of the 4x4 matrix; it avoids the divide by coupling zv to the w value. Now w is doing something very useful:

Tpers = [ 1  0   0   0 ]
        [ 0  1   0   0 ]
        [ 0  0   1   0 ]
        [ 0  0  1/f  0 ]   (note the last two elements of the bottom row)
That solution can work for any perspective camera! Compare it with the coordinates in our simple 2D camera drawing on the first page. Transform a 3D point (xv,yv,zv,1) by Tpers, and you get another 3D point; convert to real (Cartesian) 3D coordinates as (x/w, y/w, z/w). Try it:

Tpers [xv, yv, zv, 1]^T = [xv, yv, zv, zv/f]^T in 3D homogeneous coords; convert to 3D Cartesian coords by dividing by (zv/f):
[xv*f/zv, yv*f/zv, zv*f/zv]^T = f * [xv/zv, yv/zv, 1]^T

This gives us 3D coordinates in an image plane, at location z = f, with the same coordinates we found by algebra above. Note that our picture-drawing program does not have to convert points to real (Cartesian) coordinates (x/w, y/w, z/w) until we are ready to draw pixels; divide-by-zero never happens while we're doing all the complicated and time-consuming transformations for moving shapes and cameras.

Challenging puzzle: To draw a perspective image of a 3D line, you only need to transform its endpoints to image space, and then draw a line between those image points. Can you prove this is always true? Make a parameterized point P(t) that moves between P0 and P1, as we did before. Transform each of them to image space: transform P0 to make P0', transform P1 to make P1', transform P(t) to make P(t)'. Make a parameterized image point Ps(s) that moves between P0' and P1'; is P(t)' = Ps(s)? Can you solve for the function s(t) or t(s)? Online books and tutorials show you how.

***Be careful with signs: in a right-handed coordinate system, the image plane is always positioned on the -z axis, but the f parameter used by graphics software often isn't negative! In OpenGL and WebGL, the znear parameter defines the image-plane position, but functions that use it expect POSITIVE znear values.***

Viewing Transformations:

Earlier we learned how to make jointed objects by concatenating Translate, Rotate, and Scale matrices arranged in a scene graph.
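The "Try it" step above can be run directly. This illustrative sketch (helper names are made up, not library calls) builds Tpers, transforms a point, and performs the deferred divide; per the sign note above, with zv < 0 in front of the eye, f*(xv/zv, yv/zv, 1) differs in sign from the f/-zv algebra.

```javascript
// Multiply a 4x4 matrix (array of rows) by a homogeneous point [x, y, z, w].
function transform(M, v) {
  return M.map(row => row[0]*v[0] + row[1]*v[1] + row[2]*v[2] + row[3]*v[3]);
}

const f = 2;
const Tpers = [
  [1, 0, 0,     0],
  [0, 1, 0,     0],
  [0, 0, 1,     0],
  [0, 0, 1 / f, 0],                          // bottom row couples zv into w
];

const h = transform(Tpers, [3, 1, -6, 1]);   // homogeneous result: [3, 1, -6, -3]
const cart = [h[0] / h[3], h[1] / h[3], h[2] / h[3]];
// cart equals f * [xv/zv, yv/zv, 1] = [-1, -1/3, 2]; no divide until this last step.
```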
If we apply those same skills to position and aim a virtual camera, we will discover they're not well suited for camera positioning. For our first attempt, suppose we a) translate() along the z axis to push the world drawing axes out and away from the camera, b) then rotate() to spin the world around its own origin, and c) then translate() to move the world under our camera's view. While this will let us explore the world, it will not be an easy exploration; steps a) and b) force us to position the camera using spherical coordinates: step a) sets radius, the distance from camera to world origin, and step b) chooses camera azimuth, elevation, and spin. While step c) can move us to any position in the world, our camera aiming direction was fixed by step b), and we cannot easily change that aiming direction.

"Why is this so hard?!?" you may ask. "Why can't I just specify the camera's position in the 3D world, and aim it at something interesting in that 3D world?!?"

The answer is that we're using the wrong tools. Our existing transformations (translate, rotate, and scale) always begin with the CVV drawing axes (our camera axes) and then transform them successively to each nested set of drawing axes we need to draw jointed objects. Before we used cameras, our CVV coordinate system would suffice as our 3D world coordinate system. But when we created a virtual camera, we changed the purpose of the CVV; now it holds a view of the 3D world as seen by a camera. Our existing tools now require us to construct our 3D world coordinate axes using values specified from the camera's drawing axes (the CVV axes) in steps a, b, c above. That's not easy!
Instead we want an opposite, an inverse of that process. Let's call it the LookAt() function: this new tool must create a matrix that constructs the camera's drawing axes by using values we specify in the world's drawing axes. We will call it the "view" matrix, and place it at the root of our scene graph, where it transforms the CVV (camera) coordinate axes into the world coordinate axes.

The VIEW matrix, or How to build a camera-positioning matrix from world-space values

To specify camera position in world drawing axes, begin with just 3 values, each with 3 coordinates: EYE (a point), AT (a point), and "view up" or VUP (a vector). We want to place our virtual camera in a world coordinate system so that:
-the camera position is at a 3D point EYE (or "view reference point" or "camera location");
-the camera lens is aimed in the direction of the 3D world-space point AT, or "look-at": aimed towards a look-at point in the world;
-the camera is level; it has no unwanted tilt to one side or the other. Do this by specifying a camera "up" vector direction VUP in world space.

The VUP vector, the AT point, and the EYE point define a plane that always slices through the 2D output image perpendicularly to include its y axis. Don't be fooled: VUP is NOT restricted to the +y direction in the 3D world the camera will view. Instead, the VUP vector defines the world-space direction that will appear as "up" in the camera image. For example, changing the VUP vector from the +x direction to the +y direction will turn the camera on its side. Note that you can use VUP to help you define where to put the "ground" and the "sky" in the world coordinate system. If you choose the world's ground plane as (x,y,0) with the +z axis pointing upwards towards the sky, then a level camera will have a VUP vector aimed towards +z as well.

We then use EYE, AT, and VUP to construct our viewing transform, following the step-by-step process below, described in many textbooks (but not ours).
You will need a viewing matrix for Project B, but the cuon-matrix.js library can build this matrix for you with its LookAt() function.

Definitions:
--World coordinate system, denoted WC. Position specified using (x,y,z).
--Camera coordinate system, or CAM. To avoid confusion with WC, we will rename the camera's x,y,z coordinates as u,v,n instead.

Step 1) Define the Eye Coordinate System
Given EYE, AT & VUP, let's first construct each part of a camera coordinate system as measured in world coordinates WC. The CAM origin is just the eye point EYE, already defined in world coordinates. To construct the drawing axes (work carefully here! We only care about directions, not positions, so we use only vectors, not points):

--Find the N vector in WC. (Careful! Right-handed coordinates! The N vector direction is "backwards"; it points to the eyepoint from the chosen point our camera is looking at.) We already know this one: it's the normalized vector from AT to EYE:
Nraw = (EYE - AT). Make it unit length: N = Nraw / ||Nraw||

--Find the U, V vectors. We know VUP, EYE, and N define a plane P, and this plane always contains the camera image's V vector (recall V is the direction of the +y axis in the camera image). But if the P plane includes both the N and V vectors (axes), then plane P is perpendicular to the U vector (axis). It's easiest to find a vector in the U direction first, by using the cross-product:
Uraw = VUP x N. Make it unit length: U = Uraw / ||Uraw||

--From these two coordinate-system axis vectors we can easily find the third. Given U and N (in a right-handed coordinate system) we can find V by another cross product (as N and U are already unit vectors, we don't need to normalize V):
V = N x U
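Step 1 above can be sketched in a few lines of plain JavaScript (these vector helpers and the sample EYE/AT/VUP values are illustrative, not part of cuon-matrix.js); the result is a mutually perpendicular unit-vector triple U, V, N:

```javascript
// Small vector helpers for 3-element arrays.
const sub = (a, b) => a.map((ai, i) => ai - b[i]);
const dot = (a, b) => a.reduce((s, ai, i) => s + ai * b[i], 0);
const cross = (a, b) => [a[1]*b[2] - a[2]*b[1],
                         a[2]*b[0] - a[0]*b[2],
                         a[0]*b[1] - a[1]*b[0]];
const normalize = a => { const len = Math.hypot(a[0], a[1], a[2]); return a.map(ai => ai / len); };

// Build the camera axes U, V, N (expressed in world coords) from EYE, AT, VUP.
function cameraAxes(EYE, AT, VUP) {
  const N = normalize(sub(EYE, AT));   // "backwards": from AT toward the eye
  const U = normalize(cross(VUP, N));  // Uraw = VUP x N
  const V = cross(N, U);               // already unit length: N, U are unit + perpendicular
  return { U, V, N };
}

const { U, V, N } = cameraAxes([4, 3, 10], [0, 0, 0], [0, 1, 0]);
// U, V, N form a right-handed, mutually perpendicular unit-vector triple.
```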
Step 2) Backwards but Easy: Transform the World coord. system into the CAM coord. system
With our new unit-length U,V,N vectors and the EYE point expressed in world-coordinate-system values, we can convert a world-coordinate-system point P0 = (P0x, P0y, P0z, 1) into its CAM coordinate values (u,v,n,1). The task is especially easy if we choose the special case where we placed the EYE point at the WC origin. Here, the vector from the EYE point (the CAM coord. origin) to P0 is obvious: it's just (P0x, P0y, P0z, 0). That vector's dot-products with the U, V, and N unit vectors give us the (u, v, n) coordinates we seek. We can write those 3 dot-products in matrix form as well, and they represent a simple rigid-body rotation from WC to CAM axes: CAM = [Rcam] P0:

[u]   [ Ux Uy Uz 0 ] [P0x]
[v] = [ Vx Vy Vz 0 ] [P0y]     when EYE = (0,0,0,1).
[n]   [ Nx Ny Nz 0 ] [P0z]
[1]   [ 0  0  0  1 ] [ 1 ]

If we used the Rcam matrix as our view matrix, it would create the World drawing axes as a rotated copy of the CAM drawing axes; the camera would aim in the desired direction in the World coordinate system, but the origin point of the CAM drawing axes would remain fixed to the origin of the World drawing axes. To complete our camera positioning, we need a second transformation: we create a copy of the World drawing axes, and then "push the world out", away from our eye; we move the new World drawing axes (measuring against the previous World axes) by just enough to position the camera at its correct world-space position. We could call Translate(-EYEx, -EYEy, -EYEz) to form our final view matrix, or use that function's methods to construct it:

VIEW = [Rcam][Trans(-EYE)] = [ Ux Uy Uz -EYE.U ]
                             [ Vx Vy Vz -EYE.V ]
                             [ Nx Ny Nz -EYE.N ]
                             [ 0  0  0    1    ]

(Note the translation moves the new world drawing axes away from the old world axes at the eyepoint; -EYE.U denotes the dot product of -EYE with U, and likewise for V and N.)

The PROJECTION matrix
This matrix emulates the lens system of a camera; it performs the 3D-to-2D transformation that may include perspective and foreshortening.
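The full VIEW = [Rcam][Trans(-EYE)] construction above can be checked with two sanity tests: the EYE point must land at the CAM origin, and the AT point must land on the -n axis. This is an illustrative sketch (helpers and sample values are made up, not cuon-matrix.js calls):

```javascript
// Vector helpers (3-element arrays) and a 4x4-times-vec4 multiply.
const sub = (a, b) => a.map((ai, i) => ai - b[i]);
const dot = (a, b) => a.reduce((s, ai, i) => s + ai * b[i], 0);
const cross = (a, b) => [a[1]*b[2] - a[2]*b[1],
                         a[2]*b[0] - a[0]*b[2],
                         a[0]*b[1] - a[1]*b[0]];
const normalize = a => { const len = Math.hypot(a[0], a[1], a[2]); return a.map(ai => ai / len); };
const transform = (M, v) => M.map(r => r[0]*v[0] + r[1]*v[1] + r[2]*v[2] + r[3]*v[3]);

// VIEW = [Rcam][Trans(-EYE)]: rows are U, V, N with -EYE.U etc. in the last column.
function viewMatrix(EYE, AT, VUP) {
  const N = normalize(sub(EYE, AT));
  const U = normalize(cross(VUP, N));
  const V = cross(N, U);
  return [
    [U[0], U[1], U[2], -dot(EYE, U)],
    [V[0], V[1], V[2], -dot(EYE, V)],
    [N[0], N[1], N[2], -dot(EYE, N)],
    [0,    0,    0,     1          ],
  ];
}

const M = viewMatrix([4, 3, 10], [0, 0, 0], [0, 1, 0]);
const eyeInCam = transform(M, [4, 3, 10, 1]);  // EYE lands at the CAM origin
const atInCam  = transform(M, [0, 0, 0, 1]);   // AT lands on the -n axis (u = v = 0)
```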
Step 4A) Apply a matrix that does the 3D-to-2D perspective transformation.
Now each (translated) vertex is expressed in (u,v,n,w) coordinates defined by the camera. We could apply the perspective transform Tpers we defined above, convert from homogeneous to Cartesian coordinates, and at last we'd have the 2D image locations for each vertex, but this is naïve in at least two different ways:

First, we need a computed depth value for each pixel we render so that we can perform hidden-surface removal; if we stay in homogeneous coordinates we can compare the z values at every pixel for every drawing primitive, and draw that pixel's "fragment" only if no previously-drawn fragment had a z-value nearer to the eye.

Second, we may have floating-point precision problems: while the 3D half-universe is unbounded, our hardware is not; we need to measure our (u,v,n,w) values by their distance from the camera's viewing "frustum", the six-sided pyramid-like box made by the camera's field of view (a 4-sided pyramid; apex at the eyepoint) and the camera's "near" and "far" clipping planes.
What's all this about "near" and "far" clipping planes?!? Left, right, top, bottom planes?
Alas, all computers have finite precision. The x, y, z axes we've discussed easily stretch to infinity in all directions; we could never adequately describe all positions in half the universe on all scales (from nanometers to lightyears) with nothing more than a set of four GLSL float values! Instead, we have to choose a subset, some finite volume of 3D space to describe with the finite, limited set of floating-point numbers our computers give us. Of course, we choose the volume of 3D space that is in front of our camera; everywhere else will be off-screen in the picture we make. THUS, we can limit our 3D camera view-frustum size in 6 ways:

For the z axis: (znear, zfar)
-the near and far planes are perpendicular to the z axis, with z = znear (= f) and z = zfar respectively.
-ONLY points between znear and zfar will be drawn on-screen, and both must be > 0; znear is also known as the focal distance f, and the zfar value limits the distance to the visible horizon in the scene.
-For better results, keep the ratio (zfar/znear) modest: < ~10,000:1. As this ratio increases, you reduce the ability of WebGL to distinguish whether one surface is behind another; foreground objects might not occlude background objects! If you separate znear and zfar too widely, your program will lose precision in distinguishing objects with nearly-identical z values (see "z-fighting"), but there is no other penalty. (We'll explain more when we discuss "z-buffering".) With modern graphics hardware, floating-point z values have greatly reduced the likelihood of z-fighting except for the very largest world-space models (e.g. the earth, described with 1mm resolution).

For the x and y axes: (left, right, top, bottom)
-The combination of znear, zfar, and your camera's angular field-of-view sets the maximum values for x,y.
If you choose to use the gl.perspective() function in the cuon-matrix-quat.js library supplied in starter code, this field-of-view is symmetric, and is set by your selection of the camera's aspect ratio and its vertical field-of-view in degrees. If instead you use the gl.frustum() function, you can individually specify the left, right, top, and bottom limits of the frustum as measured at the znear clipping plane; this permits you to construct unusual camera images that emulate a leather-bellowed "view camera".

Pseudo-Depth:
Remember, when we convert to Cartesian coordinates, our naïve Tpers matrix gave us (fx/z, fy/z, fz/z) = f*(x/z, y/z, 1). Note that the 3rd Cartesian result doesn't really tell us anything at all; it's just the position of the image plane. We'd like to have a z value that tells us where we are between znear and zfar; in fact, we NEED this value to make depth comparisons when rendering. We shouldn't draw any drawing primitive that's BEHIND those we've already drawn in the frame buffer; accordingly, we need to find a depth-like value we can store with pixels as we draw them. WebGL will keep a depth value along with the color of any pixel it draws, and then check that depth before drawing each primitive. We only draw a new pixel if its depth is shorter (nearer the eyepoint) than any previous value drawn there.

Instead of the naïve Tpers matrix that yields (fx/z, fy/z, fz/z) = f*(x/z, y/z, 1), we can construct a 4x4 homogeneous matrix that gives us the same image coordinates AND a "pseudo-depth" determined by two "magic constants" a,b: f*(x/z, y/z, (az+b)/z). While not quite the same as actual depth, this pseudo-depth is almost as good:
--it is monotonic (we don't change the order of depth for any sets of objects or surfaces), and
--it yields its greatest precision for depths of vertex positions near the camera, and least precision for depths nearest the far-clip plane, where we may never notice errors.

We can find the values a,b from user-specified near/far planes, and then construct a 4x4 matrix to give us this result. While quite clever and interesting, deriving the mapping from the frustum to the CVV is rather tedious and arcane, and instead we will let the widely-used gl.frustum() and gl.perspective() functions do it for us; just choose one and use it (both are implemented for you in cuon-matrix-quat.js). To derive these matrices yourself, see online references. Both functions use the same underlying matrix:

n = znear, f = zfar, both > 0. (f also equals focal distance, from origin to image plane, as before)
t = ytop, b = ybottom, r = xright, l = xleft

Tpers = [ 2n/(r-l)     0      (r+l)/(r-l)      0      ]
        [    0      2n/(t-b)  (t+b)/(t-b)      0      ]
        [    0         0     -(f+n)/(f-n)  -2fn/(f-n) ]
        [    0         0         -1             0     ]

Step 4B: Suppose you want an orthographic camera instead of a perspective (pinhole) camera?
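The frustum matrix above can be verified numerically. In this illustrative sketch (the helper and sample clip-plane values are made up, not cuon-matrix-quat.js calls), the near plane z = -n maps to pseudo-depth -1, the far plane z = -f maps to +1, and intermediate depths are monotonic but skewed so most precision sits near the eye:

```javascript
// Multiply a 4x4 matrix (array of rows) by a homogeneous point [x, y, z, w].
const transform = (M, v) => M.map(r => r[0]*v[0] + r[1]*v[1] + r[2]*v[2] + r[3]*v[3]);

// The glFrustum-style Tpers above; n, f > 0, camera looking down -z.
function frustum(l, r, b, t, n, f) {
  return [
    [2*n/(r-l), 0,          (r+l)/(r-l),   0          ],
    [0,         2*n/(t-b),  (t+b)/(t-b),   0          ],
    [0,         0,         -(f+n)/(f-n),  -2*f*n/(f-n)],
    [0,         0,         -1,             0          ],
  ];
}

const P = frustum(-1, 1, -1, 1, 1, 100);
const ndcZ = z => { const h = transform(P, [0, 0, z, 1]); return h[2] / h[3]; };
const near = ndcZ(-1);     // near plane -> -1
const far  = ndcZ(-100);   // far plane  -> +1
const mid  = ndcZ(-10);    // only 10% of the way in z, yet already ~0.82 in pseudo-depth
```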
Like the naïve Tpers, you could use a naïve Tortho matrix, which is just the 4x4 identity matrix with its 3rd row set entirely to zero (so that z is ignored), but that's a poor strategy. Orthographic cameras also need near, far, left, right, top, and bottom clipping planes for the most precise results, and the ortho() function implemented for you in cuon-matrix-quat.js applies this orthographic matrix:

Tortho = [ 2/(r-l)     0        0      -(r+l)/(r-l) ]
         [    0     2/(t-b)     0      -(t+b)/(t-b) ]
         [    0        0    -2/(f-n)   -(f+n)/(f-n) ]
         [    0        0        0            1      ]
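A quick sanity check of the Tortho matrix above (helper and sample box values are illustrative, not cuon-matrix-quat.js calls): the viewing box [l,r] x [b,t] x [-f,-n] maps linearly onto the CVV corners, and w stays 1, so no perspective divide is needed:

```javascript
// Multiply a 4x4 matrix (array of rows) by a homogeneous point [x, y, z, w].
const transform = (M, v) => M.map(r => r[0]*v[0] + r[1]*v[1] + r[2]*v[2] + r[3]*v[3]);

// The orthographic matrix Tortho shown above.
function ortho(l, r, b, t, n, f) {
  return [
    [2/(r-l), 0,        0,        -(r+l)/(r-l)],
    [0,       2/(t-b),  0,        -(t+b)/(t-b)],
    [0,       0,       -2/(f-n),  -(f+n)/(f-n)],
    [0,       0,        0,         1          ],
  ];
}

const O = ortho(-2, 2, -1, 1, 1, 10);
const farCorner  = transform(O, [ 2,  1, -10, 1]);  // -> [ 1,  1,  1, 1]
const nearCorner = transform(O, [-2, -1,  -1, 1]);  // -> [-1, -1, -1, 1]
```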
Step 5: VIEWPORT-to-screen
After we apply Tpers or Tortho and the perspective divide, all transformed vertices (u,v,n,w) fit into the Canonical View Volume (CVV), so that -1 <= u/w <= +1, -1 <= v/w <= +1, -1 <= n/w <= +1. As you may recall, u/w and v/w are the 2D coordinates within the camera image plane, and n/w is a normalized measure of depth from the eyepoint to the vertex (not the same as true z distance): it is the position of a vertex between the znear and zfar planes, slightly distorted to its pseudo-depth value.

To make the final image, we only need to map the camera's coordinates (u/w, v/w) to our HTML5 canvas or "viewport" or screen. These coordinate values all stay within [-1,1] (because they were clipped to that range), but now we want to change these coordinates to match our on-screen viewport: the rectangle from [0,0] to [width, height] measured in pixels. Once again we apply a scale matrix, followed by a translate matrix, to adjust the u and v values (w is unchanged, n can be ignored):

TSviewport = [ sx  0  0  tx ]
             [  0 sy  0  ty ]
             [  0  0  1   0 ]
             [  0  0  0   1 ]

After this change to viewport coordinates, divide by w: pixel coordinates (x,y) = (u/w, v/w).

3D Virtual Camera Summary:
The viewing transformation, the 4x4 matrix that transforms a world-space vertex (x,y,z,w) to its on-screen pixel coordinates (u/w, v/w), is the concatenation of 4 matrices:

[u]                                              [x]
[v] = [TSviewport][Tpers or Tortho][VIEW][MODEL] [y]
[n]                                              [z]
[w]                                              [w]

Viewing in WebGL:
See the gl.lookAt() and gl.setLookAt() functions:
  Eye point = the center of projection, the VRP, specified in 3D world-space coordinates.
  Center point = the 3D world-space aiming point for the camera. VPN = Center - Eye.
  Up = same as the VUP vector.
See the gl.perspective() and gl.setPerspective() functions:
  fovy: field-of-view angle in degrees in the y direction; determines the angle between the top and bottom clip planes.
  aspect: ratio of camera image width to camera image height.
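The viewport step above can be sketched for the common symmetric case where the viewport origin is at pixel (0,0), so sx = tx = width/2 and sy = ty = height/2 (an illustrative helper, not a library call):

```javascript
// Multiply a 4x4 matrix (array of rows) by a homogeneous point [x, y, z, w].
const transform = (M, v) => M.map(r => r[0]*v[0] + r[1]*v[1] + r[2]*v[2] + r[3]*v[3]);

// Scale-then-translate: maps CVV coords (u/w, v/w in [-1,1]) to a
// width x height pixel rectangle with its origin at pixel (0,0).
function viewportMatrix(width, height) {
  const sx = width / 2, sy = height / 2;   // with tx = sx, ty = sy: -1 -> 0, +1 -> width
  return [
    [sx, 0,  0, sx],
    [0,  sy, 0, sy],
    [0,  0,  1, 0 ],
    [0,  0,  0, 1 ],
  ];
}

const T = viewportMatrix(800, 600);
const h = transform(T, [0.5, -0.5, 0, 1]);
const pixel = [h[0] / h[3], h[1] / h[3]];  // divide by w last -> pixel (600, 150)
```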
  znear, zfar = always positive values; be sure znear < zfar.
See the gl.frustum() and gl.setFrustum() functions:
  Constructs the Tpers matrix as described above; the user supplies left, right, top, bottom, near, far values.
See the gl.ortho() and gl.setOrtho() functions:
  Constructs the orthographic matrix described above; the user supplies left, right, bottom, top, near, far values.
More information3D Graphics for Game Programming (J. Han) Chapter II Vertex Processing
Chapter II Vertex Processing Rendering Pipeline Main stages in the pipeline The vertex processing stage operates on every input vertex stored in the vertex buffer and performs various operations such as
More informationAnnouncements. Submitting Programs Upload source and executable(s) (Windows or Mac) to digital dropbox on Blackboard
Now Playing: Vertex Processing: Viewing Coulibaly Amadou & Mariam from Dimanche a Bamako Released August 2, 2005 Rick Skarbez, Instructor COMP 575 September 27, 2007 Announcements Programming Assignment
More informationComputer Graphics. P05 Viewing in 3D. Part 1. Aleksandra Pizurica Ghent University
Computer Graphics P05 Viewing in 3D Part 1 Aleksandra Pizurica Ghent University Telecommunications and Information Processing Image Processing and Interpretation Group Viewing in 3D: context Create views
More informationProjection and viewing. Computer Graphics CSE 167 Lecture 4
Projection and viewing Computer Graphics CSE 167 Lecture 4 CSE 167: Computer Graphics Review: transformation from the object (or model) coordinate frame to the camera (or eye) coordinate frame Projection
More informationComputer Graphics. Chapter 10 Three-Dimensional Viewing
Computer Graphics Chapter 10 Three-Dimensional Viewing Chapter 10 Three-Dimensional Viewing Part I. Overview of 3D Viewing Concept 3D Viewing Pipeline vs. OpenGL Pipeline 3D Viewing-Coordinate Parameters
More informationCSE 167: Introduction to Computer Graphics Lecture #5: Projection. Jürgen P. Schulze, Ph.D. University of California, San Diego Fall Quarter 2017
CSE 167: Introduction to Computer Graphics Lecture #5: Projection Jürgen P. Schulze, Ph.D. University of California, San Diego Fall Quarter 2017 Announcements Friday: homework 1 due at 2pm Upload to TritonEd
More informationViewing COMPSCI 464. Image Credits: Encarta and
Viewing COMPSCI 464 Image Credits: Encarta and http://www.sackville.ednet.ns.ca/art/grade/drawing/perspective4.html Graphics Pipeline Graphics hardware employs a sequence of coordinate systems The location
More informationRay Tracer I: Ray Casting Due date: 12:00pm December 3, 2001
Computer graphics Assignment 5 1 Overview Ray Tracer I: Ray Casting Due date: 12:00pm December 3, 2001 In this assignment you will implement the camera and several primitive objects for a ray tracer. We
More informationGame Architecture. 2/19/16: Rasterization
Game Architecture 2/19/16: Rasterization Viewing To render a scene, need to know Where am I and What am I looking at The view transform is the matrix that does this Maps a standard view space into world
More informationUNIT 2 2D TRANSFORMATIONS
UNIT 2 2D TRANSFORMATIONS Introduction With the procedures for displaying output primitives and their attributes, we can create variety of pictures and graphs. In many applications, there is also a need
More informationGraphics pipeline and transformations. Composition of transformations
Graphics pipeline and transformations Composition of transformations Order matters! ( rotation * translation translation * rotation) Composition of transformations = matrix multiplication: if T is a rotation
More informationINTRODUCTION TO COMPUTER GRAPHICS. cs123. It looks like a matrix Sort of. Viewing III. Projection in Practice 1 / 52
It looks like a matrix Sort of Viewing III Projection in Practice 1 / 52 Arbitrary 3D views } view volumes/frusta spec d by placement and shape } Placement: } Position (a point) } look and up vectors }
More informationI N T R O D U C T I O N T O C O M P U T E R G R A P H I C S
3D Viewing: the Synthetic Camera Programmer s reference model for specifying 3D view projection parameters to the computer General synthetic camera (e.g., PHIGS Camera, Computer Graphics: Principles and
More informationLecture 3 Sections 2.2, 4.4. Mon, Aug 31, 2009
Model s Lecture 3 Sections 2.2, 4.4 World s Eye s Clip s s s Window s Hampden-Sydney College Mon, Aug 31, 2009 Outline Model s World s Eye s Clip s s s Window s 1 2 3 Model s World s Eye s Clip s s s Window
More informationSingle View Geometry. Camera model & Orientation + Position estimation. What am I?
Single View Geometry Camera model & Orientation + Position estimation What am I? Vanishing points & line http://www.wetcanvas.com/ http://pennpaint.blogspot.com/ http://www.joshuanava.biz/perspective/in-other-words-the-observer-simply-points-in-thesame-direction-as-the-lines-in-order-to-find-their-vanishing-point.html
More informationCMSC427 Transformations II: Viewing. Credit: some slides from Dr. Zwicker
CMSC427 Transformations II: Viewing Credit: some slides from Dr. Zwicker What next? GIVEN THE TOOLS OF The standard rigid and affine transformations Their representation with matrices and homogeneous coordinates
More informationShadows in the graphics pipeline
Shadows in the graphics pipeline Steve Marschner Cornell University CS 569 Spring 2008, 19 February There are a number of visual cues that help let the viewer know about the 3D relationships between objects
More informationViewing. Reading: Angel Ch.5
Viewing Reading: Angel Ch.5 What is Viewing? Viewing transform projects the 3D model to a 2D image plane 3D Objects (world frame) Model-view (camera frame) View transform (projection frame) 2D image View
More information1 OpenGL - column vectors (column-major ordering)
OpenGL - column vectors (column-major ordering) OpenGL uses column vectors and matrices are written in a column-major order. As a result, matrices are concatenated in right-to-left order, with the first
More informationChapter 5. Projections and Rendering
Chapter 5 Projections and Rendering Topics: Perspective Projections The rendering pipeline In order to view manipulate and view a graphics object we must find ways of storing it a computer-compatible way.
More informationOpenGL Transformations
OpenGL Transformations R. J. Renka Department of Computer Science & Engineering University of North Texas 02/18/2014 Introduction The most essential aspect of OpenGL is the vertex pipeline described in
More information3D Viewing Episode 2
3D Viewing Episode 2 1 Positioning and Orienting the Camera Recall that our projection calculations, whether orthographic or frustum/perspective, were made with the camera at (0, 0, 0) looking down the
More informationDrawing in 3D (viewing, projection, and the rest of the pipeline)
Drawing in 3D (viewing, projection, and the rest of the pipeline) CS559 Spring 2017 Lecture 6 February 2, 2017 The first 4 Key Ideas 1. Work in convenient coordinate systems. Use transformations to get
More informationCSC 305 The Graphics Pipeline-1
C. O. P. d y! "#"" (-1, -1) (1, 1) x z CSC 305 The Graphics Pipeline-1 by Brian Wyvill The University of Victoria Graphics Group Perspective Viewing Transformation l l l Tools for creating and manipulating
More informationChapter 8 Three-Dimensional Viewing Operations
Projections Chapter 8 Three-Dimensional Viewing Operations Figure 8.1 Classification of planar geometric projections Figure 8.2 Planar projection Figure 8.3 Parallel-oblique projection Figure 8.4 Orthographic
More information2D rendering takes a photo of the 2D scene with a virtual camera that selects an axis aligned rectangle from the scene. The photograph is placed into
2D rendering takes a photo of the 2D scene with a virtual camera that selects an axis aligned rectangle from the scene. The photograph is placed into the viewport of the current application window. A pixel
More information3D Viewing. CS 4620 Lecture 8
3D Viewing CS 46 Lecture 8 13 Steve Marschner 1 Viewing, backward and forward So far have used the backward approach to viewing start from pixel ask what part of scene projects to pixel explicitly construct
More informationCamera Placement for Ray Tracing
Camera Placement for Ray Tracing Lecture #3 Tuesday 0/4/4 st Review Camera Placement! The following slides review last Thursday s Lecture on world to camera transforms.! To see shift to raytracing context,
More informationComputer Graphics Viewing
Computer Graphics Viewing What Are Projections? Our 3-D scenes are all specified in 3-D world coordinates To display these we need to generate a 2-D image - project objects onto a picture plane Picture
More informationCS 325 Computer Graphics
CS 325 Computer Graphics 02 / 29 / 2012 Instructor: Michael Eckmann Today s Topics Questions? Comments? Specifying arbitrary views Transforming into Canonical view volume View Volumes Assuming a rectangular
More informationProjection Lecture Series
Projection 25.353 Lecture Series Prof. Gary Wang Department of Mechanical and Manufacturing Engineering The University of Manitoba Overview Coordinate Systems Local Coordinate System (LCS) World Coordinate
More informationChap 7, 2008 Spring Yeong Gil Shin
Three-Dimensional i Viewingi Chap 7, 28 Spring Yeong Gil Shin Viewing i Pipeline H d fi i d? How to define a window? How to project onto the window? Rendering "Create a picture (in a synthetic camera)
More informationComputer Graphics. Lecture 04 3D Projection and Visualization. Edirlei Soares de Lima.
Computer Graphics Lecture 4 3D Projection and Visualization Edirlei Soares de Lima Projection and Visualization An important use of geometric transformations in computer
More informationOverview. Viewing and perspectives. Planar Geometric Projections. Classical Viewing. Classical views Computer viewing Perspective normalization
Overview Viewing and perspectives Classical views Computer viewing Perspective normalization Classical Viewing Viewing requires three basic elements One or more objects A viewer with a projection surface
More information3D Viewing Episode 2
3D Viewing Episode 2 1 Positioning and Orienting the Camera Recall that our projection calculations, whether orthographic or frustum/perspective, were made with the camera at (0, 0, 0) looking down the
More information(Refer Slide Time: 00:01:26)
Computer Graphics Prof. Sukhendu Das Dept. of Computer Science and Engineering Indian Institute of Technology, Madras Lecture - 9 Three Dimensional Graphics Welcome back everybody to the lecture on computer
More informationIntroduction to Computer Graphics 4. Viewing in 3D
Introduction to Computer Graphics 4. Viewing in 3D National Chiao Tung Univ, Taiwan By: I-Chen Lin, Assistant Professor Textbook: E.Angel, Interactive Computer Graphics, 5 th Ed., Addison Wesley Ref: Hearn
More informationViewing. Announcements. A Note About Transformations. Orthographic and Perspective Projection Implementation Vanishing Points
Viewing Announcements. A Note About Transformations. Orthographic and Perspective Projection Implementation Vanishing Points Viewing Announcements. A Note About Transformations. Orthographic and Perspective
More informationCOMP 175 COMPUTER GRAPHICS. Ray Casting. COMP 175: Computer Graphics April 26, Erik Anderson 09 Ray Casting
Ray Casting COMP 175: Computer Graphics April 26, 2018 1/41 Admin } Assignment 4 posted } Picking new partners today for rest of the assignments } Demo in the works } Mac demo may require a new dylib I
More informationVirtual Cameras and The Transformation Pipeline
Virtual Cameras and The Transformation Pipeline Anton Gerdelan gerdela@scss.tcd.ie with content from Rachel McDonnell 13 Oct 2014 Virtual Camera We want to navigate through our scene in 3d Solution = create
More informationCSE 167: Introduction to Computer Graphics Lecture #9: Visibility. Jürgen P. Schulze, Ph.D. University of California, San Diego Fall Quarter 2018
CSE 167: Introduction to Computer Graphics Lecture #9: Visibility Jürgen P. Schulze, Ph.D. University of California, San Diego Fall Quarter 2018 Announcements Midterm Scores are on TritonEd Exams to be
More informationChapter 5. Transforming Shapes
Chapter 5 Transforming Shapes It is difficult to walk through daily life without being able to see geometric transformations in your surroundings. Notice how the leaves of plants, for example, are almost
More informationCITSTUDENTS.IN VIEWING. Computer Graphics and Visualization. Classical and computer viewing. Viewing with a computer. Positioning of the camera
UNIT - 6 7 hrs VIEWING Classical and computer viewing Viewing with a computer Positioning of the camera Simple projections Projections in OpenGL Hiddensurface removal Interactive mesh displays Parallelprojection
More informationThe Graphics Pipeline and OpenGL I: Transformations!
! The Graphics Pipeline and OpenGL I: Transformations! Gordon Wetzstein! Stanford University! EE 267 Virtual Reality! Lecture 2! stanford.edu/class/ee267/!! Albrecht Dürer, Underweysung der Messung mit
More informationGetting Started. Overview (1): Getting Started (1): Getting Started (2): Getting Started (3): COSC 4431/5331 Computer Graphics.
Overview (1): Getting Started Setting up OpenGL/GLUT on Windows/Visual Studio COSC 4431/5331 Computer Graphics Thursday January 22, 2004 Overview Introduction Camera analogy Matrix operations and OpenGL
More informationCSC 470 Computer Graphics. Three Dimensional Viewing
CSC 470 Computer Graphics Three Dimensional Viewing 1 Today s Lecture Three Dimensional Viewing Developing a Camera Fly through a scene Mathematics of Projections Producing Stereo Views 2 Introduction
More informationSingle View Geometry. Camera model & Orientation + Position estimation. Jianbo Shi. What am I? University of Pennsylvania GRASP
Single View Geometry Camera model & Orientation + Position estimation Jianbo Shi What am I? 1 Camera projection model The overall goal is to compute 3D geometry of the scene from just 2D images. We will
More informationComputer Science 426 Midterm 3/11/04, 1:30PM-2:50PM
NAME: Login name: Computer Science 46 Midterm 3//4, :3PM-:5PM This test is 5 questions, of equal weight. Do all of your work on these pages (use the back for scratch space), giving the answer in the space
More informationCSC 470 Computer Graphics
CSC 47 Computer Graphics Three Dimensional Viewing Today s Lecture Three Dimensional Viewing Developing a Camera Fly through a scene Mathematics of Producing Stereo Views 1 2 Introduction We have already
More informationDD2423 Image Analysis and Computer Vision IMAGE FORMATION. Computational Vision and Active Perception School of Computer Science and Communication
DD2423 Image Analysis and Computer Vision IMAGE FORMATION Mårten Björkman Computational Vision and Active Perception School of Computer Science and Communication November 8, 2013 1 Image formation Goal:
More informationMORE OPENGL. Pramook Khungurn CS 4621, Fall 2011
MORE OPENGL Pramook Khungurn CS 4621, Fall 2011 SETTING UP THE CAMERA Recall: OpenGL Vertex Transformations Coordinates specified by glvertex are transformed. End result: window coordinates (in pixels)
More informationThree-Dimensional Graphics III. Guoying Zhao 1 / 67
Computer Graphics Three-Dimensional Graphics III Guoying Zhao 1 / 67 Classical Viewing Guoying Zhao 2 / 67 Objectives Introduce the classical views Compare and contrast image formation by computer with
More informationFachhochschule Regensburg, Germany, February 15, 2017
s Operations Fachhochschule Regensburg, Germany, February 15, 2017 s Motivating Example s Operations To take a photograph of a scene: Set up your tripod and point camera at the scene (Viewing ) Position
More information3D Viewing. CS 4620 Lecture Steve Marschner. Cornell CS4620 Spring 2018 Lecture 9
3D Viewing CS 46 Lecture 9 Cornell CS46 Spring 18 Lecture 9 18 Steve Marschner 1 Viewing, backward and forward So far have used the backward approach to viewing start from pixel ask what part of scene
More information2D and 3D Transformations AUI Course Denbigh Starkey
2D and 3D Transformations AUI Course Denbigh Starkey. Introduction 2 2. 2D transformations using Cartesian coordinates 3 2. Translation 3 2.2 Rotation 4 2.3 Scaling 6 3. Introduction to homogeneous coordinates
More informationMidterm Exam Fundamentals of Computer Graphics (COMP 557) Thurs. Feb. 19, 2015 Professor Michael Langer
Midterm Exam Fundamentals of Computer Graphics (COMP 557) Thurs. Feb. 19, 2015 Professor Michael Langer The exam consists of 10 questions. There are 2 points per question for a total of 20 points. You
More informationCIS 580, Machine Perception, Spring 2016 Homework 2 Due: :59AM
CIS 580, Machine Perception, Spring 2016 Homework 2 Due: 2015.02.24. 11:59AM Instructions. Submit your answers in PDF form to Canvas. This is an individual assignment. 1 Recover camera orientation By observing
More informationCSE 167: Introduction to Computer Graphics Lecture #4: Vertex Transformation
CSE 167: Introduction to Computer Graphics Lecture #4: Vertex Transformation Jürgen P. Schulze, Ph.D. University of California, San Diego Fall Quarter 2013 Announcements Project 2 due Friday, October 11
More informationFigure 1. Lecture 1: Three Dimensional graphics: Projections and Transformations
Lecture 1: Three Dimensional graphics: Projections and Transformations Device Independence We will start with a brief discussion of two dimensional drawing primitives. At the lowest level of an operating
More informationViewing with Computers (OpenGL)
We can now return to three-dimension?', graphics from a computer perspective. Because viewing in computer graphics is based on the synthetic-camera model, we should be able to construct any of the classical
More informationDrawing in 3D (viewing, projection, and the rest of the pipeline)
Drawing in 3D (viewing, projection, and the rest of the pipeline) CS559 Fall 2016 Lecture 6/7 September 26-28 2016 The first 4 Key Ideas 1. Work in convenient coordinate systems. Use transformations to
More informationCOMP Computer Graphics and Image Processing. a6: Projections. In part 2 of our study of Viewing, we ll look at. COMP27112 Toby Howard
Computer Graphics and Image Processing a6: Projections Tob.Howard@manchester.ac.uk Introduction In part 2 of our stud of Viewing, we ll look at The theor of geometrical planar projections Classes of projections
More informationCOMS 4160: Problems on Transformations and OpenGL
COMS 410: Problems on Transformations and OpenGL Ravi Ramamoorthi 1. Write the homogeneous 4x4 matrices for the following transforms: Translate by +5 units in the X direction Rotate by 30 degrees about
More informationCS230 : Computer Graphics Lecture 6: Viewing Transformations. Tamar Shinar Computer Science & Engineering UC Riverside
CS230 : Computer Graphics Lecture 6: Viewing Transformations Tamar Shinar Computer Science & Engineering UC Riverside Rendering approaches 1. image-oriented foreach pixel... 2. object-oriented foreach
More informationCOMP30019 Graphics and Interaction Perspective Geometry
COMP30019 Graphics and Interaction Perspective Geometry Department of Computing and Information Systems The Lecture outline Introduction to perspective geometry Perspective Geometry Virtual camera Centre
More informationTransformations in Ray Tracing. MIT EECS 6.837, Durand and Cutler
Transformations in Ray Tracing Linear Algebra Review Session Tonight! 7:30 9 PM Last Time: Simple Transformations Classes of Transformations Representation homogeneous coordinates Composition not commutative
More informationBasics of Computational Geometry
Basics of Computational Geometry Nadeem Mohsin October 12, 2013 1 Contents This handout covers the basic concepts of computational geometry. Rather than exhaustively covering all the algorithms, it deals
More informationGRAFIKA KOMPUTER. ~ M. Ali Fauzi
GRAFIKA KOMPUTER ~ M. Ali Fauzi Drawing 2D Graphics VIEWPORT TRANSFORMATION Recall :Coordinate System glutreshapefunc(reshape); void reshape(int w, int h) { glviewport(0,0,(glsizei) w, (GLsizei) h); glmatrixmode(gl_projection);
More informationRaycasting. Chapter Raycasting foundations. When you look at an object, like the ball in the picture to the left, what do
Chapter 4 Raycasting 4. Raycasting foundations When you look at an, like the ball in the picture to the left, what do lamp you see? You do not actually see the ball itself. Instead, what you see is the
More informationAgenda. Perspective projection. Rotations. Camera models
Image formation Agenda Perspective projection Rotations Camera models Light as a wave + particle Light as a wave (ignore for now) Refraction Diffraction Image formation Digital Image Film Human eye Pixel
More informationCS464 Oct 3 rd Assignment 3 Due 10/6/2017 Due 10/8/2017 Implementation Outline
CS464 Oct 3 rd 2017 Assignment 3 Due 10/6/2017 Due 10/8/2017 Implementation Outline Assignment 3 Skeleton A good sequence to implement the program 1. Start with a flat terrain sitting at Y=0 and Cam at
More informationN-Views (1) Homographies and Projection
CS 4495 Computer Vision N-Views (1) Homographies and Projection Aaron Bobick School of Interactive Computing Administrivia PS 2: Get SDD and Normalized Correlation working for a given windows size say
More informationPerspective Projection and Texture Mapping
Lecture 7: Perspective Projection and Texture Mapping Computer Graphics CMU 15-462/15-662, Spring 2018 Perspective & Texture PREVIOUSLY: - transformation (how to manipulate primitives in space) - rasterization
More informationCSCI 4620/8626. The 2D Viewing Pipeline
CSCI 4620/8626 Computer Graphics Two-Dimensional Viewing (Chapter 8) Last update: 2016-03-3 The 2D Viewing Pipeline Given a 2D scene, we select the part of it that we wish to see (render, display) using
More informationComputing the 3D Viewing Transformation
Computing the 3D Viewing Transformation John E. Howland Department of Computer Science Trinity University 715 Stadium Drive San Antonio, Texas 78212-7200 Voice: (210) 999-7380 Fax: (210) 999-7477 E-mail:
More information2D/3D Geometric Transformations and Scene Graphs
2D/3D Geometric Transformations and Scene Graphs Week 4 Acknowledgement: The course slides are adapted from the slides prepared by Steve Marschner of Cornell University 1 A little quick math background
More information