2.1 Three Dimensional Concepts


UNIT - II THREE-DIMENSIONAL CONCEPTS
Parallel and Perspective projections - Three-dimensional Object Representations - Polygons, Curved lines, Splines, Quadric Surfaces - Visualization of data sets - Three-Dimensional Transformations - Three-Dimensional Viewing - Visible surface identification.

2.1 Three Dimensional Concepts

Three Dimensional Display Methods:
To obtain a display of a three-dimensional scene that has been modeled in world coordinates, we must set up a coordinate reference for the camera. This coordinate reference defines the position and orientation of the plane of the camera film, which is the plane we want to use to display a view of the objects in the scene. Object descriptions are then transferred to the camera reference coordinates and projected onto the selected display plane. The objects can be displayed in wireframe form, or we can apply lighting and surface rendering techniques to shade the visible surfaces.

Parallel Projection:
Parallel projection is a method for generating a view of a solid object by projecting points on the object surface along parallel lines onto the display plane. In parallel projection, parallel lines in the world coordinate scene project into parallel lines on the two-dimensional display plane. This technique is used in engineering and architectural drawings to represent an object with a set of views that maintain relative proportions of the object. The appearance of the solid object can be reconstructed from the major views.

(Figure: three parallel projection views of an object, showing relative proportions from different viewing positions.)

Perspective Projection:
Perspective projection is a method for generating a view of a three-dimensional scene by projecting points to the display plane along converging paths. This makes objects farther from the viewing position appear smaller than objects of the same size that are nearer to the viewing position. In a perspective projection, parallel lines in a scene that are not parallel to the display plane are projected into converging lines. Scenes displayed using perspective projections appear more realistic, since this is the way that our eyes and a camera lens form images.

Depth Cueing:
Depth information is important for identifying the viewing direction, i.e., which is the front and which is the back of a displayed object. Depth cueing is a method for indicating depth in wireframe displays by varying the intensity of objects according to their distance from the viewing position. Depth cueing is applied by choosing maximum and minimum intensity (or color) values and a range of distances over which the intensities are to vary. (A short code sketch of this computation is given below.)

Visible line and surface identification:
The simplest way to identify visible lines is to highlight them or to display them in a different color. Another method is to display the non-visible lines as dashed lines.
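A minimal sketch of the depth-cueing computation described above, assuming a simple linear attenuation between chosen near and far distances; the function and parameter names are illustrative and not taken from any particular graphics package:

    def depth_cue_intensity(base_intensity, depth, d_min, d_max,
                            i_min=0.2, i_max=1.0):
        """Scale a wireframe intensity by distance from the viewing position.

        Points at d_min (nearest) keep the fraction i_max of their intensity;
        points at d_max (farthest) are dimmed to the fraction i_min.
        """
        # Clamp the depth into the chosen cueing range.
        depth = max(d_min, min(d_max, depth))
        # Interpolation factor: 1.0 at d_min, 0.0 at d_max.
        t = (d_max - depth) / (d_max - d_min)
        return base_intensity * (i_min + (i_max - i_min) * t)

Applying this factor to every projected vertex or line segment makes nearer edges brighter than distant ones, which is exactly the cue described above.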

Surface Rendering:
Surface rendering methods are used to generate a degree of realism in a displayed scene. Realism is attained in displays by setting the surface intensity of objects according to the lighting conditions in the scene and the surface characteristics. Lighting conditions include the intensity and positions of light sources and the background illumination. Surface characteristics include the degree of transparency and how rough or smooth the surfaces are.

Exploded and Cutaway views:
Exploded and cutaway views of objects can be used to show the internal structure and relationships of the object's parts. An alternative to exploding an object into its component parts is the cutaway view, which removes part of the visible surfaces to show the internal structure.

Three-dimensional and Stereoscopic Views:
Three-dimensional views can be obtained by reflecting a raster image from a vibrating flexible mirror. The vibrations of the mirror are synchronized with the display of the scene on the CRT. As the mirror vibrates, the focal length varies so that each point in the scene is projected to a position corresponding to its depth. Stereoscopic devices present two views of a scene: one for the left eye and the other for the right eye. The two views are generated by selecting viewing positions that correspond to the two eye positions of a single viewer. These two views can be displayed on alternate refresh cycles of a raster monitor, and viewed through glasses that alternately darken first one lens and then the other in synchronization with the monitor refresh cycles.

Three Dimensional Graphics Packages
A 3D package must include methods for mapping scene descriptions onto a flat viewing surface. There should be some consideration of how surfaces of solid objects are to be modeled, how visible surfaces can be identified, how transformations of objects are performed in space, and how to describe additional spatial properties. World coordinate descriptions are extended to 3D, and users are provided with output and input routines accessed with specifications such as
o Polyline3(n, WcPoints)
o Fillarea3(n, WcPoints)

o Text3(WcPoint, string)
o Getlocator3(WcPoint)
o Translate3(translateVector, matrixTranslate)
where points and vectors are specified with 3 components and transformation matrices have 4 rows and 4 columns.

2.2 Three Dimensional Object Representations
Representation schemes for solid objects are divided into two categories:
1. Boundary Representations (B-reps): They describe a three-dimensional object as a set of surfaces that separate the object interior from the environment. Examples are polygon facets and spline patches.
2. Space Partitioning Representations: They describe the interior properties by partitioning the spatial region containing an object into a set of small, non-overlapping, contiguous solids (usually cubes). Example: octree representation.

Polygon Surfaces
Polygon surfaces are boundary representations for a 3D graphics object, specified as a set of polygons that enclose the object interior.

Polygon Tables
A polygon surface is specified with a set of vertex coordinates and associated attribute parameters. For each polygon input, the data are placed into tables that are used in the subsequent processing. Polygon data tables can be organized into two groups: geometric tables and attribute tables.
Geometric tables contain vertex coordinates and parameters to identify the spatial orientation of the polygon surfaces.
Attribute tables contain attribute information for an object, such as parameters specifying the degree of transparency of the object and its surface reflectivity and texture characteristics.
A convenient organization for storing geometric data is to create three lists:
1. The Vertex Table: Coordinate values for each vertex in the object are stored in this table.

2. The Edge Table: It contains pointers back into the vertex table to identify the vertices for each polygon edge.
3. The Polygon Table: It contains pointers back into the edge table to identify the edges for each polygon.
This organization is shown below for an object with five vertices, six edges and two polygon surfaces:

    Vertex table           Edge table          Polygon surface table
    V1 : X1, Y1, Z1        E1 : V1, V2         S1 : E1, E2, E3
    V2 : X2, Y2, Z2        E2 : V2, V3         S2 : E3, E4, E5, E6
    V3 : X3, Y3, Z3        E3 : V3, V1
    V4 : X4, Y4, Z4        E4 : V3, V4
    V5 : X5, Y5, Z5        E5 : V4, V5
                           E6 : V5, V1

Listing the geometric data in three tables provides a convenient reference to the individual components (vertices, edges and polygons) of each object. The object can be displayed efficiently by using data from the edge table to draw the component lines.
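A small sketch of how these three tables might be held in a program; the class and field names, and the coordinate values in the example, are illustrative and not from any particular graphics package:

    from dataclasses import dataclass, field
    from typing import List, Tuple

    Vertex = Tuple[float, float, float]          # (x, y, z)

    @dataclass
    class PolygonMesh:
        vertices: List[Vertex] = field(default_factory=list)            # vertex table
        edges: List[Tuple[int, int]] = field(default_factory=list)      # edge table: vertex indices
        polygons: List[List[int]] = field(default_factory=list)         # polygon table: edge indices

    # The two-surface example (S1, S2) from the tables above, with made-up coordinates.
    mesh = PolygonMesh(
        vertices=[(0, 0, 0), (1, 0, 0), (1, 1, 0), (2, 1, 0), (2, 2, 0)],   # V1..V5
        edges=[(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 0)],             # E1..E6
        polygons=[[0, 1, 2], [2, 3, 4, 5]],                                  # S1, S2
    )

With the tables separated like this, a shared edge such as E3 is stored once and referenced by both S1 and S2, which is what makes the expanded edge table with back-pointers to polygons (described next) straightforward to build.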

Extra information can be added to the data tables for faster information extraction. For instance, the edge table can be expanded to include forward pointers into the polygon table, so that common edges between polygons can be identified more rapidly:

    E1 : V1, V2, S1
    E2 : V2, V3, S1
    E3 : V3, V1, S1, S2
    E4 : V3, V4, S2
    E5 : V4, V5, S2
    E6 : V5, V1, S2

This is useful for rendering procedures that must vary surface shading smoothly across the edges from one polygon to the next. Similarly, the vertex table can be expanded so that vertices are cross-referenced to their corresponding edges.
Additional geometric information that can be stored in the data tables includes the slope of each edge and the coordinate extents of each polygon. As vertices are input, we can calculate edge slopes, and we can scan the coordinate values to identify the minimum and maximum x, y and z values for individual polygons. The more information included in the data tables, the easier it is to check for errors. Some of the tests that could be performed by a graphics package are:
1. That every vertex is listed as an endpoint for at least two edges.
2. That every edge is part of at least one polygon.
3. That every polygon is closed.
4. That each polygon has at least one shared edge.
5. That if the edge table contains pointers to polygons, every edge referenced by a polygon pointer has a reciprocal pointer back to the polygon.

Plane Equations
To produce a display of a 3D object, we must process the input data representation for the object through several procedures, such as:
- transformation of the modeling and world coordinate descriptions to viewing coordinates, and then to device coordinates;
- identification of visible surfaces; and
- the application of surface-rendering procedures.
For these processes, we need information about the spatial orientation of the individual surface components of the object.

This information is obtained from the vertex coordinate values and the equations that describe the polygon planes.
The equation for a plane surface is
    Ax + By + Cz + D = 0                                        (1)
where (x, y, z) is any point on the plane and the coefficients A, B, C and D are constants describing the spatial properties of the plane. We can obtain the values of A, B, C and D by solving a set of three plane equations using the coordinate values of three noncollinear points in the plane. For that, we select three successive polygon vertices (x1, y1, z1), (x2, y2, z2) and (x3, y3, z3) and solve the following set of simultaneous linear plane equations for the ratios A/D, B/D and C/D:
    (A/D)xk + (B/D)yk + (C/D)zk = -1,    k = 1, 2, 3            (2)
The solution for this set of equations can be obtained in determinant form, using Cramer's rule, as

        | 1  y1  z1 |        | x1  1  z1 |        | x1  y1  1 |          | x1  y1  z1 |
    A = | 1  y2  z2 |    B = | x2  1  z2 |    C = | x2  y2  1 |    D = - | x2  y2  z2 |     (3)
        | 1  y3  z3 |        | x3  1  z3 |        | x3  y3  1 |          | x3  y3  z3 |

Expanding the determinants, we can write the calculations for the plane coefficients in the form:
    A = y1(z2 - z3) + y2(z3 - z1) + y3(z1 - z2)
    B = z1(x2 - x3) + z2(x3 - x1) + z3(x1 - x2)
    C = x1(y2 - y3) + x2(y3 - y1) + x3(y1 - y2)
    D = -x1(y2 z3 - y3 z2) - x2(y3 z1 - y1 z3) - x3(y1 z2 - y2 z1)                          (4)

As vertex values and other information are entered into the polygon data structure, values for A, B, C and D are computed for each polygon and stored with the other polygon data.
Plane equations are also used to identify the position of spatial points relative to the plane surfaces of an object. For any point (x, y, z) not on a plane with parameters A, B, C, D, we have
    Ax + By + Cz + D ≠ 0
We can identify the point as either inside or outside the plane surface according to the sign (negative or positive) of Ax + By + Cz + D:
    If Ax + By + Cz + D < 0, the point (x, y, z) is inside the surface.
    If Ax + By + Cz + D > 0, the point (x, y, z) is outside the surface.
These inequality tests are valid in a right-handed Cartesian system, provided the plane parameters A, B, C and D were calculated using vertices selected in a counterclockwise order when viewing the surface in an outside-to-inside direction.
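A short sketch, in Python, of computing the plane coefficients of Eq. (4) from three polygon vertices and applying the sign test above to classify a point; the function names are illustrative:

    def plane_coefficients(p1, p2, p3):
        """Return (A, B, C, D) for the plane through three noncollinear points, per Eq. (4)."""
        (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = p1, p2, p3
        A = y1 * (z2 - z3) + y2 * (z3 - z1) + y3 * (z1 - z2)
        B = z1 * (x2 - x3) + z2 * (x3 - x1) + z3 * (x1 - x2)
        C = x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)
        D = (-x1 * (y2 * z3 - y3 * z2)
             - x2 * (y3 * z1 - y1 * z3)
             - x3 * (y1 * z2 - y2 * z1))
        return A, B, C, D

    def classify_point(point, plane):
        """Negative: inside the surface; positive: outside (vertices given counterclockwise)."""
        x, y, z = point
        A, B, C, D = plane
        return A * x + B * y + C * z + D

    # Example: the plane z = 0 through three vertices listed counterclockwise (seen from +z).
    plane = plane_coefficients((0, 0, 0), (1, 0, 0), (0, 1, 0))
    print(plane)                                # (0, 0, 1, 0)
    print(classify_point((0, 0, -2), plane))    # negative -> inside the surface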

Polygon Meshes
A single plane surface can be specified with a function such as fillarea. But when object surfaces are to be tiled, it is more convenient to specify the surface facets with a mesh function.
One type of polygon mesh is the triangle strip. This function produces n-2 connected triangles given the coordinates for n vertices; for example, a triangle strip formed with 11 triangles connects 13 vertices. Another similar function is the quadrilateral mesh, which generates a mesh of (n-1) by (m-1) quadrilaterals, given the coordinates for an n by m array of vertices. The figure shows 20 vertices forming a mesh of 12 quadrilaterals.

Curved Lines and Surfaces
Displays of three-dimensional curved lines and surfaces can be generated from an input set of mathematical functions defining the objects or from a set of user-specified data points.

When functions are specified, a package can project the defining equations for a curve onto the display plane and plot pixel positions along the path of the projected function. For surfaces, a functional description is typically tessellated to produce a polygon-mesh approximation to the surface.

Quadric Surfaces
The quadric surfaces are described with second-degree equations (quadratics). They include spheres, ellipsoids, tori, paraboloids and hyperboloids.

Sphere
In Cartesian coordinates, a spherical surface with radius r centered on the coordinate origin is defined as the set of points (x, y, z) that satisfy the equation
    x² + y² + z² = r²                                           (1)
The spherical surface can be represented in parametric form by using latitude and longitude angles:
    x = r cosφ cosθ,     -π/2 <= φ <= π/2
    y = r cosφ sinθ,     -π <= θ <= π                           (2)
    z = r sinφ
The parametric representation in Eq. (2) provides a symmetric range for the angular parameters θ and φ.
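A brief sketch of generating points on a sphere from the parametric equations (2), as would be done to build a polygon-mesh approximation; the mesh resolution and names are illustrative. Extending it to the ellipsoid described next only requires replacing r with rx, ry, rz in the three coordinate lines:

    import math

    def sphere_points(r, n_lat=8, n_lon=16):
        """Sample the parametric sphere of Eq. (2) on an (n_lat+1) x (n_lon+1) grid."""
        points = []
        for i in range(n_lat + 1):
            phi = -math.pi / 2 + math.pi * i / n_lat           # latitude in [-pi/2, pi/2]
            for j in range(n_lon + 1):
                theta = -math.pi + 2 * math.pi * j / n_lon     # longitude in [-pi, pi]
                x = r * math.cos(phi) * math.cos(theta)
                y = r * math.cos(phi) * math.sin(theta)
                z = r * math.sin(phi)
                points.append((x, y, z))
        return points

Adjacent grid points can then be connected into quadrilaterals or triangles to form the surface mesh.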

Ellipsoid
An ellipsoid surface is an extension of a spherical surface in which the radii in three mutually perpendicular directions can have different values. The Cartesian representation for points over the surface of an ellipsoid centered on the origin is
    (x/rx)² + (y/ry)² + (z/rz)² = 1
The parametric representation for the ellipsoid in terms of the latitude angle φ and the longitude angle θ is
    x = rx cosφ cosθ,    -π/2 <= φ <= π/2
    y = ry cosφ sinθ,    -π <= θ <= π
    z = rz sinφ

Torus
A torus is a doughnut-shaped object. It can be generated by rotating a circle or other conic about a specified axis. (Figure: a torus with a circular cross section centered on the coordinate origin.)

The Cartesian representation for points over the surface of a torus can be written in the form
    [ r - sqrt( (x/rx)² + (y/ry)² ) ]² + (z/rz)² = 1
where r is any given offset value. The parametric representation for a torus is similar to that for an ellipse, except that angle φ extends over 360°. Using latitude and longitude angles φ and θ, we can describe the torus surface as the set of points that satisfy
    x = rx (r + cosφ) cosθ,    -π <= φ <= π
    y = ry (r + cosφ) sinθ,    -π <= θ <= π
    z = rz sinφ

Spline Representations
A spline is a flexible strip used to produce a smooth curve through a designated set of points. Several small weights are distributed along the length of the strip to hold it in position on the drafting table as the curve is drawn. The term spline curve refers to any curve formed with polynomial sections satisfying specified continuity conditions at the boundaries of the pieces. A spline surface can be described with two sets of orthogonal spline curves.
Splines are used in graphics applications to design curve and surface shapes, to digitize drawings for computer storage, and to

specify animation paths for the objects or the camera in a scene. CAD applications for splines include the design of automobile bodies, aircraft and spacecraft surfaces, and ship hulls.

Interpolation and Approximation Splines
A spline curve is specified by a set of coordinate positions, called control points, which indicate the general shape of the curve. These control points are then fitted with piecewise continuous parametric polynomial functions in one of two ways:
1. When the polynomial sections are fitted so that the curve passes through each control point, the resulting curve is said to interpolate the set of control points. (Figure: a set of six control points interpolated with piecewise continuous polynomial sections.)
2. When the polynomials are fitted to the general control-point path without necessarily passing through any control point, the resulting curve is said to approximate the set of control points. (Figure: a set of six control points approximated with piecewise continuous polynomial sections.)
Interpolation curves are used to digitize drawings or to specify animation paths. Approximation curves are used as design tools to structure object surfaces.

A spline curve is designed, modified and manipulated with operations on the control points. The curve can be translated, rotated or scaled with transformations applied to the control points.
The convex polygon boundary that encloses a set of control points is called the convex hull. One way to envision the shape of the convex hull is to imagine a rubber band stretched around the positions of the control points, so that each control point is either on the perimeter of the hull or inside it. (Figure: convex hull shapes, shown with dashed lines, for two sets of control points.)

Parametric Continuity Conditions
For a smooth transition from one section of a piecewise parametric curve to the next, various continuity conditions are imposed at the connection points. If each section of a spline is described with a set of parametric coordinate functions of the form
    x = x(u),  y = y(u),  z = z(u),    u1 <= u <= u2            (a)
we set parametric continuity by matching the parametric derivatives of adjoining curve sections at their common boundary.
Zero-order parametric continuity, referred to as C0 continuity, means that the curves meet, i.e., the values of x, y and z evaluated at u2 for the first curve section are equal, respectively, to the values of x, y and z evaluated at u1 for the next curve section.
First-order parametric continuity, referred to as C1 continuity, means that the first parametric derivatives of the coordinate functions in equation (a) for two successive curve sections are equal at their joining point.
Second-order parametric continuity, or C2 continuity, means that both the first and second parametric derivatives of the two curve sections are equal at their intersection.

Higher-order parametric continuity conditions are defined similarly.
(Figure: piecewise construction of a curve by joining two curve segments using different orders of continuity: a) zero-order continuity only, b) first-order continuity only, c) second-order continuity only.)

Geometric Continuity Conditions
An alternate method for joining two successive curve sections is to specify conditions for geometric continuity. In this case the parametric derivatives of the two sections are required to be proportional to each other at their common boundary, instead of equal to each other.
Zero-order geometric continuity, referred to as G0 continuity, means that the two curve sections must have the same coordinate position at the boundary point.
First-order geometric continuity, referred to as G1 continuity, means that the parametric first derivatives are proportional at the intersection of two successive sections.
Second-order geometric continuity, referred to as G2 continuity, means that both the first and second parametric derivatives of the two curve sections are proportional at their boundary. Here the curvatures of the two sections match at the joining position.

(Figure: three control points fitted with two curve sections joined with a) parametric continuity and b) geometric continuity, where the tangent vector of curve C3 at point p1 has a greater magnitude than the tangent vector of curve C1 at p1.)

Spline Specifications
There are three methods for specifying a particular spline representation:
1. We can state the set of boundary conditions that are imposed on the spline; or
2. We can state the matrix that characterizes the spline; or
3. We can state the set of blending functions that determine how specified geometric constraints on the curve are combined to calculate positions along the curve path.
To illustrate these three equivalent specifications, suppose we have the following parametric cubic polynomial representation for the x coordinate along the path of a spline section:
    x(u) = ax u³ + bx u² + cx u + dx,    0 <= u <= 1            (1)
Boundary conditions for this curve might be set on the endpoint coordinates x(0) and x(1) and on the parametric first derivatives at the endpoints, x'(0) and x'(1). These four boundary conditions are sufficient to determine the values of the four coefficients ax, bx, cx and dx.
From the boundary conditions we can obtain the matrix that characterizes this spline curve by first rewriting Eq. (1) as the matrix product

    x(u) = [u³  u²  u  1] . | ax |
                            | bx |                              (2)
                            | cx |
                            | dx |
         = U . C
where U is the row matrix of powers of the parameter u and C is the coefficient column matrix. Using equation (2) we can write the boundary conditions in matrix form and solve for the coefficient matrix C as
    C = Mspline . Mgeom                                         (3)
where Mgeom is a four-element column matrix containing the geometric constraint values on the spline, and Mspline is the 4-by-4 matrix that transforms the geometric constraint values to the polynomial coefficients and provides a characterization for the spline curve. Matrix Mgeom contains control point coordinate values and other geometric constraints.
We can substitute the matrix representation for C into equation (2) to obtain
    x(u) = U . Mspline . Mgeom                                  (4)
The matrix Mspline, characterizing a spline representation, is called the basis matrix and is useful for transforming from one spline representation to another.
Finally, we can expand equation (4) to obtain a polynomial representation for coordinate x in terms of the geometric constraint parameters:
    x(u) = Σ gk . BFk(u)
where gk are the constraint parameters, such as the control point coordinates and the slope of the curve at the control points, and BFk(u) are the polynomial blending functions.

2.3 Visualization of Data Sets
The use of graphical methods as an aid in scientific and engineering analysis is commonly referred to as scientific visualization. This involves the visualization of data sets and processes that may be difficult or impossible to analyze without graphical methods. Examples are data from medical scanners and from satellite and spacecraft scanners.

Visualization techniques are also useful for analyzing processes that occur over a long period of time or that cannot be observed directly, for example quantum-mechanical phenomena and special-relativity effects produced by objects traveling near the speed of light.
Scientific visualization is used to visually display, enhance and manipulate information to allow better understanding of the data. Similar methods employed by commerce, industry and other nonscientific areas are sometimes referred to as business visualization.
Data sets are classified according to their spatial distribution (2D or 3D) and according to data type (scalars, vectors, tensors and multivariate data).

Visual Representations for Scalar Fields
A scalar quantity is one that has a single value. Scalar data sets contain values that may be distributed in time as well as over spatial positions, and the values may also be functions of other scalar parameters. Examples of physical scalar quantities are energy, density, mass, temperature and water content.
A common method for visualizing a scalar data set is to use graphs or charts that show the distribution of data values as a function of other parameters, such as position and time.
Pseudo-color methods are also used to distinguish different values in a scalar data set, and color-coding techniques can be combined with graph and chart models. To color-code a scalar data set, we choose a range of colors and map the range of data values to the color range. Color coding a data set can be tricky, because some color combinations can lead to misinterpretations of the data.
Contour plots are used to display isolines (lines of constant scalar value) for a data set distributed over a surface. The isolines are spaced at some convenient interval to show the range and variation of the data values over the region of space. Contouring methods are applied to a set of data values that is distributed over a regular grid. A 2D contouring algorithm traces the isolines from cell to cell within the grid by checking the four corners of the grid cells to determine which cell edges are crossed by a particular isoline. (Figure: the path of an isoline across five grid cells.)

Sometimes isolines are plotted with spline curves, but spline fitting can lead to misinterpretation of the data sets; for example, two spline isolines could cross, or curved isoline paths might not be a true indicator of the data trends, since the data values are known only at the cell corners.
For 3D scalar data fields we can take cross-sectional slices and display the 2D data distributions over the slices. Visualization packages provide a slicer routine that allows cross sections to be taken at any angle.
Instead of looking at 2D cross sections, we can plot one or more isosurfaces, which are simply 3D contour plots. When two overlapping isosurfaces are displayed, the outer surface is made transparent so that we can view the shape of both isosurfaces.
Volume rendering, which is like an X-ray picture, is another method for visualizing a 3D data set. The interior information about a data set is projected to a display screen using the ray-casting method, in which interior data values are examined along the ray path from each screen pixel. (Figure: volume visualization of a regular, Cartesian data grid using ray casting to examine interior data values.) Data values at the grid positions are averaged so that one value is stored for each voxel of the data space. How the data are encoded for display depends on the application.

For this volume visualization, a color-coded plot of the distance to the maximum voxel value along each pixel ray was displayed.

Visual Representations for Vector Fields
A vector quantity V in three-dimensional space has three scalar values (Vx, Vy, Vz), one for each coordinate direction, and a two-dimensional vector has two components (Vx, Vy). Another way to describe a vector quantity is by giving its magnitude |V| and its direction as a unit vector u. As with scalars, vector quantities may be functions of position, time, and other parameters. Some examples of physical vector quantities are velocity, acceleration, force, electric fields, magnetic fields, gravitational fields, and electric current.
One way to visualize a vector field is to plot each data point as a small arrow that shows the magnitude and direction of the vector. This method is most often used with cross-sectional slices, since it can be difficult to see the trends in a three-dimensional region cluttered with overlapping arrows. Magnitudes of the vector values can be shown by varying the lengths of the arrows.
Vector values are also represented by plotting field lines or streamlines. Field lines are commonly used for electric, magnetic and gravitational fields. The magnitude of the vector values is indicated by the spacing between field lines, and the direction is the tangent to the field lines. (Figure: field line representation for a vector data set.)

Visual Representations for Tensor Fields
A tensor quantity in three-dimensional space has nine components and can be represented with a 3-by-3 matrix. This representation is used for a second-order tensor, and higher-order tensors do occur in some applications. Some examples of physical second-order tensors are stress and strain in a material subjected to external forces, conductivity of an electrical conductor, and the metric tensor, which gives the properties of a particular coordinate space.

The stress tensor in Cartesian coordinates, for example, can be represented as a 3-by-3 matrix of stress components (the matrix is shown in the accompanying figure).
Tensor quantities are frequently encountered in anisotropic materials, which have different properties in different directions. The xx, xy and xz elements of the conductivity tensor, for example, describe the contributions of electric field components in the x, y and z directions to the current in the x direction. Usually, physical tensor quantities are symmetric, so that the tensor has only six distinct values. Visualization schemes for representing all six components of a second-order tensor quantity are based on devising shapes that have six parameters.
Instead of trying to visualize all six components of a tensor quantity, we can reduce the tensor to a vector or a scalar; by applying tensor-contraction operations, we can obtain a scalar representation.

Visual Representations for Multivariate Data Fields
In some applications, at each grid position over some region of space, we may have multiple data values, which can be a mixture of scalar, vector and even tensor values. A method for displaying multivariate data fields is to construct graphical objects, sometimes referred to as glyphs, with multiple parts. Each part of a glyph represents a physical quantity. The size and color of each part can be used to display information about scalar magnitudes. To give directional information for a vector field, we can use a wedge, a cone, or some other pointing shape for the glyph part representing the vector.

2.4 Three Dimensional Geometric and Modeling Transformations
Geometric transformations and object modeling in three dimensions are extended from two-dimensional methods by including considerations for the z coordinate.

Translation

In a three-dimensional homogeneous coordinate representation, a point or an object is translated from position P = (x, y, z) to position P' = (x', y', z') with the matrix operation

    | x' |   | 1  0  0  tx |   | x |
    | y' | = | 0  1  0  ty | . | y |                            (1)
    | z' |   | 0  0  1  tz |   | z |
    | 1  |   | 0  0  0  1  |   | 1 |

or
    P' = T . P                                                  (2)
Parameters tx, ty and tz, specifying translation distances for the coordinate directions x, y and z, can be assigned any real values. The matrix representation in equation (1) is equivalent to the three equations
    x' = x + tx
    y' = y + ty                                                 (3)
    z' = z + tz
(Figure: translating a point with translation vector T = (tx, ty, tz).)
The inverse of the translation matrix in equation (1) can be obtained by negating the translation distances tx, ty and tz. This produces a translation in the opposite direction, and the product of a translation matrix and its inverse produces the identity matrix.
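A minimal sketch of 3D translation using 4-by-4 homogeneous matrices as in Eq. (1); numpy is used for the matrix product, and the names are illustrative:

    import numpy as np

    def translation_matrix(tx, ty, tz):
        """Build the 4x4 homogeneous translation matrix T of Eq. (1)."""
        T = np.identity(4)
        T[:3, 3] = [tx, ty, tz]
        return T

    # Translate the point (1, 2, 3) by (5, -1, 2).
    P = np.array([1.0, 2.0, 3.0, 1.0])              # homogeneous column vector
    T = translation_matrix(5, -1, 2)
    print(T @ P)                                    # [6. 1. 5. 1.]
    # The inverse translation (negated distances) undoes it:
    print(translation_matrix(-5, 1, -2) @ T @ P)    # [1. 2. 3. 1.]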

Rotation
To generate a rotation transformation for an object, an axis of rotation must be designated, about which the object is to be rotated, and the amount of angular rotation must also be specified. Positive rotation angles produce counterclockwise rotations about a coordinate axis.

Coordinate-Axis Rotations
The 2D z-axis rotation equations are easily extended to 3D:
    x' = x cosθ - y sinθ
    y' = x sinθ + y cosθ                                        (2)
    z' = z
Parameter θ specifies the rotation angle. In homogeneous coordinate form, the 3D z-axis rotation equations are expressed as

    | x' |   | cosθ  -sinθ  0  0 |   | x |
    | y' | = | sinθ   cosθ  0  0 | . | y |                      (3)
    | z' |   |  0       0   1  0 |   | z |
    | 1  |   |  0       0   0  1 |   | 1 |

which we can write more compactly as
    P' = Rz(θ) . P                                              (4)
The figure below illustrates rotation of an object about the z axis. Transformation equations for rotation about the other two coordinate axes can be obtained with a cyclic permutation of the

coordinate parameters x, y and z in equation (2), i.e., we use the replacements
    x -> y -> z -> x                                            (5)
Substituting the permutations (5) in equation (2), we get the equations for an x-axis rotation:
    y' = y cosθ - z sinθ
    z' = y sinθ + z cosθ                                        (6)
    x' = x
which can be written in homogeneous coordinate form as

    | x' |   | 1    0      0    0 |   | x |
    | y' | = | 0  cosθ  -sinθ   0 | . | y |                     (7)
    | z' |   | 0  sinθ   cosθ   0 |   | z |
    | 1  |   | 0    0      0    1 |   | 1 |

or
    P' = Rx(θ) . P                                              (8)
Rotation of an object around the x axis is demonstrated in the figure below.
Cyclically permuting the coordinates in equation (6) gives the transformation equations for a y-axis rotation:
    z' = z cosθ - x sinθ
    x' = z sinθ + x cosθ                                        (9)
    y' = y
The matrix representation for the y-axis rotation is

    | x' |   |  cosθ  0  sinθ  0 |   | x |
    | y' | = |   0    1    0   0 | . | y |                      (10)
    | z' |   | -sinθ  0  cosθ  0 |   | z |
    | 1  |   |   0    0    0   1 |   | 1 |

or
    P' = Ry(θ) . P                                              (11)
An example of y-axis rotation is shown in the figure below.
An inverse rotation matrix is formed by replacing the rotation angle θ by -θ. Negative values for rotation angles generate rotations in a clockwise direction, so the identity matrix is produced when any rotation matrix is multiplied by its inverse. Since only the sine function is affected by the change in sign of the rotation angle, the inverse matrix can also be obtained by interchanging rows and columns; that is, we can calculate the inverse of any rotation matrix R by evaluating its transpose (R⁻¹ = Rᵀ).

General Three-Dimensional Rotations
A rotation matrix for any axis that does not coincide with a coordinate axis can be set up as a composite transformation involving combinations of translations and coordinate-axis rotations. We obtain the required composite matrix by
1. setting up the transformation sequence that moves the selected rotation axis onto one of the coordinate axes,
2. setting up the rotation matrix about that coordinate axis for the specified rotation angle, and

3. obtaining the inverse transformation sequence that returns the rotation axis to its original position.
In the special case where an object is to be rotated about an axis that is parallel to one of the coordinate axes, we can attain the desired rotation with the following transformation sequence:
1. Translate the object so that the rotation axis coincides with the parallel coordinate axis.
2. Perform the specified rotation about that axis.
3. Translate the object so that the rotation axis is moved back to its original position.
When an object is to be rotated about an axis that is not parallel to one of the coordinate axes, we need to perform some additional transformations: rotations to align the axis with a selected coordinate axis and, afterwards, rotations to bring the axis back to its original orientation. Given the specifications for the rotation axis and the rotation angle, we can accomplish the required rotation in five steps:
1. Translate the object so that the rotation axis passes through the coordinate origin.
2. Rotate the object so that the axis of rotation coincides with one of the coordinate axes.
3. Perform the specified rotation about that coordinate axis.
4. Apply inverse rotations to bring the rotation axis back to its original orientation.
5. Apply the inverse translation to bring the rotation axis back to its original position.
(Figure: the five transformation steps.)
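A compact sketch of the z-axis rotation matrix of Eq. (3) and of the five-step rotation about an arbitrary axis just listed; the helper names are illustrative, and numpy is assumed. The x- and y-axis matrices of Eqs. (7) and (10) follow by the same cyclic permutation used in the text:

    import numpy as np

    def rotate_z(theta):
        """Homogeneous z-axis rotation matrix, Eq. (3)."""
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s, 0, 0],
                         [s,  c, 0, 0],
                         [0,  0, 1, 0],
                         [0,  0, 0, 1]])

    def translate(tx, ty, tz):
        T = np.identity(4)
        T[:3, 3] = [tx, ty, tz]
        return T

    def rotate_about_axis(p1, p2, theta):
        """Five-step rotation by theta about the axis through points p1 and p2."""
        p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
        # Step 1: translate so the rotation axis passes through the origin.
        T = translate(*(-p1))
        # Step 2: build a rotation whose rows (u, v, w) align the axis with the z axis.
        w = (p2 - p1) / np.linalg.norm(p2 - p1)             # unit axis vector
        a = np.array([1.0, 0, 0]) if abs(w[0]) < 0.9 else np.array([0, 1.0, 0])
        u = np.cross(a, w); u /= np.linalg.norm(u)
        v = np.cross(w, u)
        R_align = np.identity(4)
        R_align[:3, :3] = np.vstack([u, v, w])
        # Steps 3-5: rotate about z, then undo the alignment and the translation.
        return translate(*p1) @ R_align.T @ rotate_z(theta) @ R_align @ T

    # Rotate 90 degrees about the axis through (1,0,0) and (1,0,1) (parallel to the z axis).
    M = rotate_about_axis((1, 0, 0), (1, 0, 1), np.pi / 2)
    print(np.round(M @ np.array([2, 0, 0, 1]), 3))          # [1. 1. 0. 1.]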

2.4.3 Scaling
The matrix expression for the scaling transformation of a position P = (x, y, z) relative to the coordinate origin can be written as

    | x' |   | sx  0   0   0 |   | x |
    | y' | = | 0   sy  0   0 | . | y |                          (11)
    | z' |   | 0   0   sz  0 |   | z |
    | 1  |   | 0   0   0   1 |   | 1 |

or
    P' = S . P                                                  (12)
where the scaling parameters sx, sy and sz can be assigned any positive values. Explicit expressions for the coordinate transformations for scaling relative to the origin are
    x' = x . sx
    y' = y . sy                                                 (13)
    z' = z . sz
Scaling an object changes the size of the object and repositions the object relative to the coordinate origin. If the transformation parameters are not all equal, relative dimensions in the object are changed. The original shape of the object is preserved with a uniform scaling (sx = sy = sz).
Scaling with respect to a selected fixed position (xf, yf, zf) can be represented with the following transformation sequence:
1. Translate the fixed point to the origin.
2. Scale the object relative to the coordinate origin using Eq. (11).

3. Translate the fixed point back to its original position.
This sequence of transformations is shown in the figure below. The matrix representation for an arbitrary fixed-point scaling can be expressed as the concatenation of these translate-scale-translate transformations:

    T(xf, yf, zf) . S(sx, sy, sz) . T(-xf, -yf, -zf) = | sx  0   0   (1-sx)xf |
                                                       | 0   sy  0   (1-sy)yf |        (14)
                                                       | 0   0   sz  (1-sz)zf |
                                                       | 0   0   0       1    |

The inverse scaling matrix is formed by replacing the scaling parameters sx, sy and sz with their reciprocals. The inverse matrix generates an opposite scaling transformation, so the concatenation of any scaling matrix and its inverse produces the identity matrix.
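A small sketch of fixed-point scaling as the translate-scale-translate concatenation of Eq. (14); numpy and the function name are illustrative:

    import numpy as np

    def scale_about_point(sx, sy, sz, xf, yf, zf):
        """Return T(xf,yf,zf) . S(sx,sy,sz) . T(-xf,-yf,-zf), as in Eq. (14)."""
        S = np.diag([sx, sy, sz, 1.0])
        T_to_origin = np.identity(4); T_to_origin[:3, 3] = [-xf, -yf, -zf]
        T_back      = np.identity(4); T_back[:3, 3]      = [ xf,  yf,  zf]
        return T_back @ S @ T_to_origin

    # Doubling an object about the fixed point (1, 1, 1): the fixed point stays put.
    M = scale_about_point(2, 2, 2, 1, 1, 1)
    print(M @ np.array([1, 1, 1, 1]))       # [1. 1. 1. 1.]
    print(M @ np.array([2, 1, 1, 1]))       # [3. 1. 1. 1.]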

Other Transformations

Reflections
A 3D reflection can be performed relative to a selected reflection axis or with respect to a selected reflection plane. Reflections relative to a given axis are equivalent to rotations about that axis; reflections relative to a plane are equivalent to rotations in 4D space. When the reflection plane is a coordinate plane (either xy, xz or yz), the transformation can be thought of as a conversion between left-handed and right-handed systems.
An example of a reflection that converts coordinate specifications from a right-handed system to a left-handed system is shown in the figure. This transformation changes the sign of the z coordinates and leaves the x and y coordinate values unchanged. The matrix representation for this reflection of points relative to the xy plane is

    RFz = | 1  0   0  0 |
          | 0  1   0  0 |
          | 0  0  -1  0 |
          | 0  0   0  1 |

Reflections about other planes can be obtained as a combination of rotations and coordinate-plane reflections.

Shears
Shearing transformations are used to modify object shapes. They are also used in three-dimensional viewing for obtaining general projection transformations. The following transformation produces a z-axis shear:

    SHz = | 1  0  a  0 |
          | 0  1  b  0 |
          | 0  0  1  0 |
          | 0  0  0  1 |

Parameters a and b can be assigned any real values.

This transformation matrix is used to alter the x and y coordinate values by an amount that is proportional to the z value, while the z coordinate is left unchanged. Boundaries of planes that are perpendicular to the z axis are thus shifted by an amount proportional to z. The figure shows the effect of this shearing matrix on a unit cube.

Composite Transformation
Composite three-dimensional transformations can be formed by multiplying the matrix representations for the individual operations in the transformation sequence. This concatenation is carried out from right to left, where the rightmost matrix is the first transformation to be applied to an object and the leftmost matrix is the last transformation. A sequence of basic three-dimensional geometric transformations is combined to produce a single composite transformation, which can then be applied to the coordinate definition of an object.

Three Dimensional Transformation Functions
Some of the basic 3D transformation functions are:
    translate (translateVector, matrixTranslate)
    rotatex (thetax, xMatrixRotate)
    rotatey (thetay, yMatrixRotate)
    rotatez (thetaz, zMatrixRotate)
    scale3 (scaleVector, matrixScale)
Each of these functions produces a 4-by-4 transformation matrix that can be used to transform coordinate positions expressed as homogeneous column vectors. Parameter translateVector is a pointer to the list of translation distances tx, ty and tz.

Parameter scaleVector specifies the three scaling parameters sx, sy and sz. The rotate and scale matrices transform objects with respect to the coordinate origin.
Composite transformations can be constructed with the following functions:
    composematrix3
    buildtransformationmatrix3
    composetransformationmatrix3
The order of the transformation sequence for the buildtransformationmatrix3 and composetransformationmatrix3 functions is the same as in two dimensions:
1. scale
2. rotate
3. translate
Once a transformation matrix is specified, the matrix can be applied to specified points with
    transformpoint3 (inpoint, matrix, outpoint)
The transformations for hierarchical construction can be set using structures with the function
    setlocaltransformation3 (matrix, type)
where parameter matrix specifies the elements of a 4-by-4 transformation matrix and parameter type can be assigned one of the values preconcatenate, postconcatenate, or replace.

Modeling and Coordinate Transformations
In modeling, objects are described in a local (modeling) coordinate reference frame and are then repositioned into a world coordinate scene. For instance, tables, chairs and other furniture, each defined in a local coordinate system, can be placed into the description of a room defined in another reference frame by transforming the furniture coordinates to room coordinates. Then the room might be transformed into a larger scene constructed in world coordinates.
Three-dimensional objects and scenes are constructed using structure operations. Object descriptions are transformed from modeling coordinates to world coordinates or to another system in the hierarchy.

Coordinate descriptions of objects are transferred from one system to another with the same procedures used to obtain two-dimensional coordinate transformations. A transformation matrix has to be set up to bring the two coordinate systems into alignment:
- First, a translation is set up to bring the new coordinate origin to the position of the other coordinate origin.
- Then a sequence of rotations is made to align the corresponding coordinate axes.
- If different scales are used in the two coordinate systems, a scaling transformation may also be necessary to compensate for the differences in coordinate intervals.
If a second coordinate system is defined with origin (x0, y0, z0) and unit axis vectors ux, uy and uz relative to an existing Cartesian reference frame, we first construct the translation matrix T(-x0, -y0, -z0); then we can use the unit axis vectors to form the coordinate rotation matrix

    R = | ux1  ux2  ux3  0 |
        | uy1  uy2  uy3  0 |
        | uz1  uz2  uz3  0 |
        |  0    0    0   1 |

which transforms the unit vectors ux, uy and uz onto the x, y and z axes, respectively. (Figure: transformation of an object description from one coordinate system to another.)
The complete coordinate-transformation sequence is given by the composite matrix R . T. This matrix correctly transforms coordinate descriptions from one Cartesian system to another, even if one system is left-handed and the other is right-handed.
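A small sketch of the R . T construction above: given a second system's origin and its unit axis vectors expressed in the current frame, the composite matrix re-expresses coordinates of the current frame in the new one. numpy is used, and the example axes are illustrative:

    import numpy as np

    def world_to_local(origin, ux, uy, uz):
        """Build R . T for the frame with the given origin and unit axis vectors."""
        T = np.identity(4)
        T[:3, 3] = -np.asarray(origin, float)          # T(-x0, -y0, -z0)
        R = np.identity(4)
        R[:3, :3] = np.vstack([ux, uy, uz])            # rows are the unit axis vectors
        return R @ T

    # A frame at origin (2, 0, 0), rotated 90 degrees about z relative to the world axes:
    M = world_to_local((2, 0, 0), ux=(0, 1, 0), uy=(-1, 0, 0), uz=(0, 0, 1))
    print(M @ np.array([2, 1, 0, 1]))      # [1. 0. 0. 1.]: one unit along the new x axis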

2.5 Three-Dimensional Viewing
In three-dimensional graphics applications,
- we can view an object from any spatial position: from the front, from above, or from the back;
- we could generate a view of what we would see if we were standing in the middle of a group of objects or inside a single object, such as a building.

Viewing Pipeline:
To take a snapshot of a three-dimensional scene, we need to do the following steps:
1. Position the camera at a particular point in space.
2. Decide the camera orientation, i.e., point the camera and rotate it around the line of sight to set up the direction for the picture.
3. When we snap the shutter, the scene is cropped to the size of the window of the camera, and light from the visible surfaces is projected onto the camera film.
The three-dimensional transformation pipeline proceeds in an analogous way, from modeling coordinates to final device coordinates:
    Modeling coordinates -> [modeling transformation] -> World coordinates -> [viewing transformation] -> Viewing coordinates -> [projection transformation] -> Projection coordinates -> [workstation transformation] -> Device coordinates

Processing Steps
1. Once the scene has been modeled, world coordinate positions are converted to viewing coordinates.
2. The viewing coordinate system is used in graphics packages as a reference for specifying the observer viewing position and the position of the projection plane.
3. Projection operations are performed to convert the viewing coordinate description of the scene to coordinate positions on the projection plane, which will then be mapped to the output device.

4. Objects outside the viewing limits are clipped from further consideration, and the remaining objects are processed through visible-surface identification and surface-rendering procedures to produce the display within the device viewport.

Viewing Coordinates

Specifying the view plane
The view for a scene is chosen by establishing the viewing coordinate system, also called the view reference coordinate system. A view plane, or projection plane, is set up perpendicular to the viewing zv axis. World coordinate positions in the scene are transformed to viewing coordinates, and then the viewing coordinates are projected onto the view plane.
The view reference point is a world coordinate position that serves as the origin of the viewing coordinate system. It is chosen to be close to or on the surface of some object in the scene. Then we select the positive direction for the viewing zv axis, and the orientation of the view plane, by specifying the view-plane normal vector N. Here, a world coordinate position establishes the direction for N relative either to the world origin or to the viewing coordinate origin.
Finally, we select the up direction for the view by specifying a vector V, called the view-up vector. This vector is used to establish the positive direction for the yv axis.

(Figure: specifying the view-up vector with a twist angle θt.)

Transformation from world to viewing coordinates
Before object descriptions can be projected onto the view plane, they must be transferred to viewing coordinates. This transformation sequence is:
1. Translate the view reference point to the origin of the world coordinate system.
2. Apply rotations to align the xv, yv and zv axes with the world xw, yw and zw axes, respectively.
If the view reference point is specified at world position (x0, y0, z0), this point is translated to the world origin with the matrix transformation

    T = | 1  0  0  -x0 |
        | 0  1  0  -y0 |
        | 0  0  1  -z0 |
        | 0  0  0   1  |

The rotation sequence can require up to three coordinate-axis rotations, depending on the direction chosen for N. (Figure: aligning a viewing system with the world coordinate axes using a sequence of translate-rotate transformations.)
Another method for generating the rotation transformation matrix is to calculate the unit uvn vectors and form the composite rotation matrix directly. Given vectors N and V, these unit vectors are calculated as
    n = N / |N| = (n1, n2, n3)

    u = (V × N) / |V × N| = (u1, u2, u3)
    v = n × u = (v1, v2, v3)
This method automatically adjusts the direction for V so that v is perpendicular to n. The composite rotation matrix for the viewing transformation is then

    R = | u1  u2  u3  0 |
        | v1  v2  v3  0 |
        | n1  n2  n3  0 |
        | 0   0   0   1 |

which transforms u onto the world xw axis, v onto the yw axis, and n onto the zw axis.
The complete world-to-viewing transformation matrix is obtained as the matrix product
    Mwc,vc = R . T
This transformation is applied to the coordinate descriptions of objects in the scene to transfer them to the viewing reference frame.
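A brief sketch of assembling Mwc,vc = R . T from a view reference point, a view-plane normal N and a view-up vector V, following the uvn construction above; numpy is used, and the vectors in the example are illustrative:

    import numpy as np

    def viewing_matrix(view_ref_point, N, V):
        """World-to-viewing matrix Mwc,vc = R . T from the uvn construction."""
        n = np.asarray(N, float); n /= np.linalg.norm(n)
        u = np.cross(V, n);       u /= np.linalg.norm(u)
        v = np.cross(n, u)                                   # already a unit vector
        R = np.identity(4)
        R[:3, :3] = np.vstack([u, v, n])                     # rows u, v, n
        T = np.identity(4)
        T[:3, 3] = -np.asarray(view_ref_point, float)        # T(-x0, -y0, -z0)
        return R @ T

    # View reference point at (0, 0, 5), N pointing along +z toward the viewer, V along +y:
    M = viewing_matrix((0, 0, 5), N=(0, 0, 1), V=(0, 1, 0))
    print(M @ np.array([0, 0, 0, 1]))    # [ 0.  0. -5.  1.]: the world origin at zv = -5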

2.5 Projections
Once the world coordinate descriptions of the objects are converted to viewing coordinates, we can project the three-dimensional objects onto the two-dimensional view plane. There are two basic types of projection:
1. Parallel Projection: the coordinate positions are transformed to the view plane along parallel lines. (Figure: parallel projection of an object to the view plane.)
2. Perspective Projection: object positions are transformed to the view plane along lines that converge to a point called the projection reference point. (Figure: perspective projection of an object to the view plane.)

Parallel Projections
Parallel projections are specified with a projection vector that defines the direction for the projection lines. When the projection is perpendicular to the view plane, it is said to be an orthographic parallel projection; otherwise, it is said to be an oblique parallel projection. (Figure: orientation of the projection vector Vp to produce an orthographic projection (a) and an oblique projection (b).)

Orthographic Projection
Orthographic projections are used to produce the front, side and top views of an object. Front, side and rear orthographic projections of an object are called elevations; a top orthographic projection is called a plan view. This projection gives the measurements of lengths and angles accurately.

(Figure: orthographic projections of an object, displaying plan and elevation views.)
An orthographic projection that displays more than one face of an object is called an axonometric orthographic projection. The most commonly used axonometric projection is the isometric projection. It can be generated by aligning the projection plane so that it intersects each coordinate axis in which the object is defined at the same distance from the origin. (Figure: isometric projection of a cube.)
Transformation equations for an orthographic parallel projection are straightforward.

If the view plane is placed at position zvp along the zv axis, then any point (x, y, z) in viewing coordinates is transformed to projection coordinates as
    xp = x,    yp = y
where the original z-coordinate value is kept for the depth information needed in depth cueing and visible-surface determination procedures.

Oblique Projection
An oblique projection is obtained by projecting points along parallel lines that are not perpendicular to the projection plane. In the figure below, the oblique projection is specified by two angles, α and φ. Point (x, y, z) is projected to position (xp, yp) on the view plane. The oblique projection line from (x, y, z) to (xp, yp) makes an angle α with the line on the projection plane that joins (xp, yp) and (x, y). This line, of length L, is at an angle φ with the horizontal direction in the projection plane. The projection coordinates are expressed in terms of x, y, L and φ as
    xp = x + L cosφ                                             (1)
    yp = y + L sinφ
Length L depends on the angle α and the z coordinate of the point to be projected:
    tanα = z / L
thus
    L = z / tanα = z L1
where L1 is the inverse of tanα, which is also the value of L when z = 1.

The oblique projection equations (1) can then be written as
    xp = x + z (L1 cosφ)
    yp = y + z (L1 sinφ)
The transformation matrix for producing any parallel projection onto the xv yv plane is

    Mparallel = | 1  0  L1 cosφ  0 |
                | 0  1  L1 sinφ  0 |
                | 0  0     1     0 |
                | 0  0     0     1 |

An orthographic projection is obtained when L1 = 0 (which occurs at a projection angle α of 90°). Oblique projections are generated with nonzero values for L1.

Perspective Projections
To obtain a perspective projection of a 3D object, we transform points along projection lines that meet at the projection reference point. If the projection reference point is set at position zprp along the zv axis, and the view plane is placed at zvp as in the figure, we can write equations describing coordinate positions along this perspective projection line in parametric form as
    x' = x - x u
    y' = y - y u
    z' = z - (z - zprp) u
(Figure: perspective projection of a point P with coordinates (x, y, z) to position (xp, yp, zvp) on the view plane.)
Parameter u takes values from 0 to 1, and coordinate position (x', y', z') represents any point along the projection line.

When u = 0, we are at point P = (x, y, z). At the other end of the line, u = 1 and we have the projection reference point coordinates (0, 0, zprp).
On the view plane, z' = zvp, and we can solve for parameter u at this position along the projection line:
    u = (zvp - z) / (zprp - z)
Substituting this value of u into the equations for x' and y', we obtain the perspective transformation equations
    xp = x ((zprp - zvp) / (zprp - z)) = x (dp / (zprp - z))
    yp = y ((zprp - zvp) / (zprp - z)) = y (dp / (zprp - z))                (2)
where dp = zprp - zvp is the distance of the view plane from the projection reference point.
Using a 3D homogeneous coordinate representation, we can write the perspective projection transformation (2) in matrix form as

    | xh |   | 1  0      0            0        |   | x |
    | yh | = | 0  1      0            0        | . | y |                    (3)
    | zh |   | 0  0  -(zvp/dp)   zvp(zprp/dp)  |   | z |
    | h  |   | 0  0    -1/dp       zprp/dp     |   | 1 |

In this representation, the homogeneous factor is
    h = (zprp - z) / dp                                                     (4)
and the projection coordinates on the view plane are calculated from the homogeneous coordinates as
    xp = xh / h
    yp = yh / h                                                             (5)
where the original z-coordinate value is retained in the projection coordinates for depth processing.
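A short sketch of applying the perspective matrix of Eq. (3) and the homogeneous divide of Eq. (5) to a point; numpy is used, and the numeric values of zprp and zvp are illustrative:

    import numpy as np

    def perspective_matrix(zprp, zvp):
        """Perspective projection matrix of Eq. (3); dp = zprp - zvp."""
        dp = zprp - zvp
        return np.array([[1, 0, 0,          0],
                         [0, 1, 0,          0],
                         [0, 0, -zvp / dp,  zvp * zprp / dp],
                         [0, 0, -1 / dp,    zprp / dp]])

    def project(point, M):
        """Multiply by M and divide by the homogeneous factor h."""
        x, y, z = point
        xh, yh, zh, h = M @ np.array([x, y, z, 1.0])
        return xh / h, yh / h, zh / h              # (xp, yp, zvp)

    # Projection reference point at zprp = 10, view plane at zvp = 0:
    M = perspective_matrix(10.0, 0.0)
    print(project((4.0, 2.0, -10.0), M))           # (2.0, 1.0, 0.0): farther points shrink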

2.6 CLIPPING
An algorithm for three-dimensional clipping identifies and saves all surface segments within the view volume for display on the output device. All parts of objects that are outside the view volume are discarded. Instead of clipping against straight-line window boundaries, we now clip objects against the boundary planes of the view volume.
To clip a line segment against the view volume, we need to test the relative position of the line using the view volume's boundary plane equations. By substituting the line endpoint coordinates into the plane equation of each boundary in turn, we can determine whether the endpoint is inside or outside that boundary. An endpoint (x, y, z) of a line segment is outside a boundary plane if Ax + By + Cz + D > 0, where A, B, C and D are the plane parameters for that boundary. Similarly, the point is inside the boundary if Ax + By + Cz + D < 0. Lines with both endpoints outside a boundary plane are discarded, and those with both endpoints inside all boundary planes are saved. The intersection of a line with a boundary is found using the line equations along with the plane equation: intersection coordinates (x1, y1, z1) are values that are on the line and that satisfy the plane equation Ax1 + By1 + Cz1 + D = 0.
To clip a polygon surface, we can clip the individual polygon edges. First, we could test the coordinate extents against each boundary of the view volume to determine whether the object is completely inside or completely outside that boundary. If the coordinate extents of the object are inside all boundaries, we save it. If the coordinate extents are outside all boundaries, we discard it. Otherwise, we need to apply the intersection calculations.

Viewport Clipping
Lines and polygon surfaces in a scene can be clipped against the viewport boundaries with procedures similar to those used for two dimensions, except that objects are now processed against clipping planes instead of clipping edges. The two-dimensional concept of region codes can be extended to three dimensions by considering positions in front of and behind the three-dimensional viewport, as well as positions that are left, right, below or above the volume. For three-dimensional points, we need to expand the region code to six bits. Each point in the description of a scene is then assigned a six-bit region code that identifies the relative position of the point with respect to the viewport. For a line endpoint at position (x, y, z), we assign the bit positions in the region code from right to left as
    bit 1 = 1, if x < xvmin (left)
    bit 2 = 1, if x > xvmax (right)
    bit 3 = 1, if y < yvmin (below)
    bit 4 = 1, if y > yvmax (above)
    bit 5 = 1, if z < zvmin (front)
    bit 6 = 1, if z > zvmax (back)

For example, a region code of 101000 identifies a point as above and behind the viewport, and the region code 000000 indicates a point within the volume.
A line segment can be immediately identified as completely within the viewport if both endpoints have a region code of 000000. If either endpoint of a line segment does not have a region code of 000000, we perform the logical AND operation on the two endpoint codes. The result of this AND operation will be nonzero for any line segment that has both endpoints in one of the six outside regions.
As in two-dimensional line clipping, we use the calculated intersection of a line with a viewport plane to determine how much of the line can be thrown away. The two-dimensional parametric clipping methods of Cyrus-Beck or Liang-Barsky can be extended to three-dimensional scenes.
For a line segment with endpoints P1 = (x1, y1, z1) and P2 = (x2, y2, z2), we can write the parametric line equations as
    x = x1 + (x2 - x1) u
    y = y1 + (y2 - y1) u        0 <= u <= 1                     (1)
    z = z1 + (z2 - z1) u
Coordinates (x, y, z) represent any point on the line between the two endpoints. At u = 0, we have the point P1, and u = 1 puts us at P2.
To find the intersection of a line with a plane of the viewport, we substitute the coordinate value for that plane into the appropriate parametric expression of Eq. (1) and solve for u. For instance, suppose we are testing a line against the zvmin plane of the viewport. Then
    u = (zvmin - z1) / (z2 - z1)                                (2)
When the calculated value for u is not in the range from 0 to 1, the line segment does not intersect the plane under consideration at any point between the endpoints P1 and P2 (line A in the figure). If the calculated value for u in Eq. (2) is in the interval from 0 to 1, we calculate the intersection's x and y coordinates as
    x1' = x1 + (x2 - x1) (zvmin - z1) / (z2 - z1)

    y1' = y1 + (y2 - y1) (zvmin - z1) / (z2 - z1)
If either x1' or y1' is not in the range of the boundaries of the viewport, then this line intersects the front plane beyond the boundaries of the volume (line B in the figure).

2.7 Three Dimensional Viewing Functions
1. With parameters specified in world coordinates, the elements of the matrix for transforming world coordinate descriptions to the viewing reference frame are calculated with the function
    EvaluateViewOrientationMatrix3 (x0, y0, z0, xN, yN, zN, xV, yV, zV, error, viewmatrix)
- This function creates the viewmatrix from input coordinates defining the viewing system.
- Parameters x0, y0, z0 specify the origin of the viewing system.
- World coordinate vector (xN, yN, zN) defines the normal to the view plane and the direction of the positive zv viewing axis.
- World coordinate vector (xV, yV, zV) gives the elements of the view-up vector.
- An integer error code is generated in parameter error if input values are not specified correctly.
2. The matrix projmatrix for transforming viewing coordinates to normalized projection coordinates is created with the function
    EvaluateViewMappingMatrix3 (xwmin, xwmax, ywmin, ywmax, xvmin, xvmax, yvmin, yvmax, zvmin, zvmax, projtype, xprojref, yprojref, zprojref, zview, zback, zfront, error, projmatrix)

- Window limits on the view plane are given in viewing coordinates with parameters xwmin, xwmax, ywmin and ywmax.
- Limits of the 3D viewport within the unit cube are set with normalized coordinates xvmin, xvmax, yvmin, yvmax, zvmin and zvmax.
- Parameter projtype is used to choose the projection type, either parallel or perspective.
- Coordinate position (xprojref, yprojref, zprojref) sets the projection reference point. This point is used as the center of projection if projtype is set to perspective; otherwise, this point and the center of the view-plane window define the parallel projection vector.
- The position of the view plane along the viewing zv axis is set with parameter zview.
- Positions along the viewing zv axis for the front and back planes of the view volume are given with parameters zfront and zback.
- The error parameter returns an integer error code indicating erroneous input data.

2.8 VISIBLE SURFACE IDENTIFICATION
A major consideration in the generation of realistic graphics displays is identifying those parts of a scene that are visible from a chosen viewing position.

Classification of Visible Surface Detection Algorithms
Visible surface detection algorithms are classified into two types, based on whether they deal with object definitions directly or with their projected images:
1. Object-space methods compare objects and parts of objects to each other within the scene definition to determine which surfaces, as a whole, we should label as visible.
2. Image-space methods decide visibility point by point at each pixel position on the projection plane.
Most visible surface detection algorithms use image-space methods.

Back-Face Detection
A point (x, y, z) is "inside" a polygon surface with plane parameters A, B, C and D if
    Ax + By + Cz + D < 0                                        (1)
When an inside point is along the line of sight to the surface, the polygon must be a back face.

2.8 VISIBLE SURFACE IDENTIFICATION

A major consideration in the generation of realistic graphics displays is identifying those parts of a scene that are visible from a chosen viewing position.

Classification of Visible Surface Detection Algorithms

These are classified into two types based on whether they deal with object definitions directly or with their projected images:
1. Object space methods: compare objects and parts of objects to each other within the scene definition to determine which surfaces, as a whole, we should label as visible.
2. Image space methods: visibility is decided point by point at each pixel position on the projection plane.
Most visible surface detection algorithms use image space methods.

Back Face Detection

A point (x, y, z) is "inside" a polygon surface with plane parameters A, B, C, and D if

    Ax + By + Cz + D < 0        (1)

When an inside point is along the line of sight to the surface, the polygon must be a back face. We can simplify this test by considering the normal vector N to a polygon surface, which has Cartesian components (A, B, C). In general, if V is a vector in the viewing direction from the eye position, as shown in Fig., then this polygon is a back face if

    V . N > 0

Furthermore, if object descriptions have been converted to projection coordinates and our viewing direction is parallel to the viewing zv axis, then V = (0, 0, Vz) and

    V . N = Vz C

so that we only need to consider the sign of C, the z component of the normal vector N. In a right-handed viewing system with viewing direction along the negative zv axis (see Fig.), the polygon is a back face if C < 0. Thus, in general, we can label any polygon as a back face if its normal vector has a z component value

    C <= 0

By examining parameter C for the different planes defining an object, we can immediately identify all the back faces.
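Because the back-face test reduces to one dot product per polygon, it is simple to code. The sketch below derives the surface normal N = (A, B, C) from three polygon vertices and applies the V . N test; the function names, and the assumption that vertices are listed counter-clockwise when seen from outside the object, are illustrative.

```python
import numpy as np

def polygon_normal(vertices):
    """Normal N = (A, B, C) of a planar polygon from its first three vertices,
    assuming the vertices are ordered counter-clockwise when viewed from
    outside the object (a common modelling convention)."""
    v0, v1, v2 = (np.asarray(p, dtype=float) for p in vertices[:3])
    return np.cross(v1 - v0, v2 - v0)

def is_back_face(vertices, view_dir=(0.0, 0.0, -1.0)):
    """A polygon is treated as a back face if V . N >= 0; with the viewing
    direction along the negative zv axis this is the C <= 0 criterion."""
    return float(np.dot(view_dir, polygon_normal(vertices))) >= 0.0

# Example: a square facing the viewer (normal along +z) is not a back face
square = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
print(is_back_face(square))   # expected: False
```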

Depth Buffer Method

A commonly used image-space approach to detecting visible surfaces is the depth-buffer method, which compares surface depths at each pixel position on the projection plane. This procedure is also referred to as the z-buffer method. Each surface of a scene is processed separately, one point at a time across the surface. The method is usually applied to scenes containing only polygon surfaces, because depth values can be computed very quickly and the method is easy to implement. But the method can also be applied to nonplanar surfaces.

With object descriptions converted to projection coordinates, each (x, y, z) position on a polygon surface corresponds to the orthographic projection point (x, y) on the view plane. Therefore, for each pixel position (x, y) on the view plane, object depths can be compared by comparing z values. The figure shows three surfaces at varying distances along the orthographic projection line from position (x, y) in a view plane taken as the (xv, yv) plane. Surface S1 is closest at this position, so its surface intensity value at (x, y) is saved.

We can implement the depth-buffer algorithm in normalized coordinates, so that z values range from 0 at the back clipping plane to Zmax at the front clipping plane. Two buffer areas are required. A depth buffer is used to store depth values for each (x, y) position as surfaces are processed, and the refresh buffer stores the intensity values for each position. Initially, all positions in the depth buffer are set to 0 (minimum depth), and the refresh buffer is initialized to the background intensity.

We summarize the steps of a depth-buffer algorithm as follows:
1. Initialize the depth buffer and refresh buffer so that for all buffer positions (x, y),
       depth(x, y) = 0,    refresh(x, y) = Ibackgnd
2. For each position on each polygon surface, compare depth values to previously stored values in the depth buffer to determine visibility. Calculate the depth z for each (x, y) position on the polygon. If z > depth(x, y), then set
       depth(x, y) = z,    refresh(x, y) = Isurf(x, y)
   where Ibackgnd is the value for the background intensity, and Isurf(x, y) is the projected intensity value for the surface at pixel position (x, y).

After all surfaces have been processed, the depth buffer contains depth values for the visible surfaces and the refresh buffer contains the corresponding intensity values for those surfaces.
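A minimal sketch of the two-buffer bookkeeping in steps 1 and 2 is shown below. The representation of a surface as a list of already-projected (x, y, z) samples with a single intensity is an assumption made only to keep the example short.

```python
def depth_buffer_render(width, height, surfaces, background=0.0):
    """Depth-buffer (z-buffer) sketch following the steps listed above.

    surfaces is assumed to be a list of (pixels, intensity) pairs, where
    pixels is an iterable of (x, y, z) samples already projected onto the
    view plane. Depths are assumed normalized so that larger z = closer.
    """
    # Step 1: initialize depth buffer to 0 and refresh buffer to background
    depth = [[0.0] * width for _ in range(height)]
    refresh = [[background] * width for _ in range(height)]

    # Step 2: visibility test, one surface sample at a time
    for pixels, intensity in surfaces:
        for x, y, z in pixels:
            if z > depth[y][x]:           # this sample is closer than the stored one
                depth[y][x] = z
                refresh[y][x] = intensity

    return refresh                         # intensities of the visible surfaces

# Example: two overlapping one-pixel "surfaces"; the closer one (z = 0.8) wins
image = depth_buffer_render(2, 2, [([(0, 0, 0.5)], 1.0), ([(0, 0, 0.8)], 2.0)])
print(image)   # [[2.0, 0.0], [0.0, 0.0]]
```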

Depth values for a surface position (x, y) are calculated from the plane equation for each surface:

    z = (-Ax - By - D) / C        (1)

For any scan line, adjacent horizontal positions across the line differ by 1, and a vertical y value on an adjacent scan line differs by 1. If the depth of position (x, y) has been determined to be z, then the depth z' of the next position (x + 1, y) along the scan line is obtained from Eq. (1) as

    z' = (-A(x + 1) - By - D) / C        (2)

or

    z' = z - A/C        (3)

The ratio A/C is constant for each surface, so succeeding depth values across a scan line are obtained from preceding values with a single subtraction. On each scan line, we start by calculating the depth on a left edge of the polygon that intersects that scan line (see Fig.: scan lines intersecting a polygon surface). Depth values at each successive position across the scan line are then calculated by Eq. (3).
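The saving promised by Eq. (3) is visible in code: once the depth at the left edge of a scan line is known, each step to the right costs one subtraction. The plane coefficients A, B, C, D are assumed to be known for the polygon; the function names are illustrative.

```python
def depth_from_plane(A, B, C, D, x, y):
    """Depth at (x, y) from the plane equation Ax + By + Cz + D = 0, Eq. (1)."""
    return (-A * x - B * y - D) / C

def scanline_depths(A, B, C, D, x_left, x_right, y):
    """Depths across one scan line using the incremental form of Eq. (3):
    z' = z - A/C for each unit step in x."""
    z = depth_from_plane(A, B, C, D, x_left, y)   # depth at the left edge
    step = A / C                                  # constant per surface
    depths = []
    for _ in range(x_left, x_right + 1):
        depths.append(z)
        z -= step                                 # Eq. (3)
    return depths

# Example: plane x + 2y + 4z - 8 = 0 (A=1, B=2, C=4, D=-8), scan line y = 1
print(scanline_depths(1, 2, 4, -8, 0, 3, 1))      # [1.5, 1.25, 1.0, 0.75]
```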

We first determine the y-coordinate extents of each polygon, and process the surface from the topmost scan line to the bottom scan line. Starting at a top vertex, we can recursively calculate x positions down a left edge of the polygon as x' = x - 1/m, where m is the slope of the edge. Depth values down the edge are then obtained recursively as

    z' = z + (A/m + B) / C        (4)

(Fig.: intersection positions on successive scan lines along a left polygon edge.)

If we are processing down a vertical edge, the slope is infinite and the recursive calculations reduce to

    z' = z + B/C        (5)

An alternate approach is to use a midpoint method or Bresenham-type algorithm for determining x values on left edges for each scan line. The method can also be applied to curved surfaces by determining depth and intensity values at each surface projection point.

For polygon surfaces, the depth-buffer method is very easy to implement, and it requires no sorting of the surfaces in a scene. But it does require the availability of a second buffer in addition to the refresh buffer.

A-BUFFER METHOD

An extension of the ideas in the depth-buffer method is the A-buffer method. The A-buffer method represents an antialiased, area-averaged, accumulation-buffer method developed by Lucasfilm for implementation in the surface-rendering system called REYES (an acronym for "Renders Everything You Ever Saw").
