View-Independent Non-photorealistic Real-time Rendering for Terrain
Mitchell Walker
Motivation

How much information is in a square meter of terrain? In a square kilometer? Depending on how closely one looks, even a small patch of ground has an incredible and daunting level of complexity. Contemporary technology allows us to sample elevation on the surface of the Earth at one-meter increments, so one square kilometer provides more than a million points of data. Ironically, that very capacity for high resolution can hinder the viewer from comprehending the greatest trends in the data. Furthermore, rendering all one million points (besides being computationally problematic) borders on an attempt to attain realism, and the human eye is never more critical than when it sees a representation that strives to be realistic but, nevertheless, fails.

However, that same critical eye becomes far more permissive when confronted by artistic renderings. Artistic renderings of real objects create a certain amount of latitude in the viewer: human perception cavalierly accepts far rougher approximations and stylizations than would ever be found in reality. An accomplished artist can capture the trends and major features of an object in a few strokes. The viewer is not inundated with a superfluity of visual data; instead, they are given an effective visual summary of the object, in which the most critical data concerning the object are communicated in an efficient and powerful manner.

A Turner landscape may be a powerful means to present terrain, but it has a significant weakness: it is impossible for the viewer to change his viewpoint. One cannot enter the painting and look back out, or see one hill in the painting from the top of a neighboring hill. Yet that ability is one of the greatest devices for fully comprehending a piece of terrain.
The ability to interact with the presentation of the terrain, to see it from different viewpoints, greatly increases the viewer's understanding and, from an aesthetic standpoint, appreciation of the terrain. This level of interaction requires that any effort to present terrain data be couched in an interactive, real-time context.

There is more than one way to present terrain that satisfies both the need for interactivity and the appeal and effect of an artistic rendering style. This paper focuses on one method in particular: a view-independent method. The advantage of this method is that the brunt of the calculation workload is performed once, and the results of these computations can be blindly pushed through a real-time graphics pipeline, regardless of viewport orientation. As long as the means of accomplishing the artistic rendering style are not particularly troublesome for the graphics pipeline, this can lead to some incredibly high rendering rates.

Background

The fundamental question that must be answered in trying to build a view-independent system is: which features are so significant that they should be drawn no matter what? Before a single frame is ever drawn, decisions must be made concerning what is to be drawn and what is to be left out, to say nothing of how it is to be represented.
It seems apparent, when considering terrain, that there are several features that carry that significance: ridges, valleys, creases, and so on; or, to put it another way, areas of rapid change. At first, my thoughts tended toward a simple analysis of local extrema, somehow linking these extrema into coherent lines or contours. However, this approach has several obvious flaws. First, interesting trends in the data which aren't explicitly extrema fall through the cracks, like a slope ending in a flat plane, such as a coastline where water meets land. Second, finding local extrema doesn't give any context for how the extrema should be connected. Both problems could be dealt with, but the solutions would extend the algorithm in unwanted directions and with increased complexity, removing it far from the original, simple algorithm for handling local extrema.

A second, and far superior, approach presented itself after reading a paper on suggestive contours (DeCarlo et al.). Although suggestive contours are view-dependent, the idea of using the curvature of a function to identify features seemed to solve both of the problems that the extrema approach left unresolved. First, the case of one region of basically uniform slope quickly transitioning into a different uniform slope, but without any significant local extrema, can be resolved with curvature. This case can be thought of as a crease. Away from the crease, the surfaces forming it will have relatively low curvature; at the crease, the curvature increases in magnitude. So, the points along the transition will have a greater curvature value than their immediate neighbors, and this delineation can be detected and used as the basis for a method of artistic rendering. Second, curvature analysis implies a means for determining connectivity. The curvature is calculated on a function in 2-space created by a plane intersecting the terrain data.
For a given vertex in the terrain, there is an infinite number of planes that can intersect the terrain and pass through that vertex. In general, we chose to define the intersecting plane as the plane spanned by the normal of the vertex in question and the projection of the vector from the vertex to the viewpoint onto the vertex's tangent plane. What this means is that the curvature value is associated with an explicit direction. With the vertex normal held constant, varying the second vector will cause a different plane to intersect the terrain, and that intersection will form a different polyline on the plane. The polyline is an approximation of the space curve whose curvature is to be determined.

Picture a piece of terrain with a ridge running parallel to the x-axis. If we choose an intersection plane that is parallel to the x-axis and upon which the ridge lies, the resultant intersecting space curve will have almost no curvature. However, if we rotate the plane ninety degrees around the vertical axis, the intersecting space curve will have a significantly greater curvature value. Quite simply, a vertex will likely have one direction of maximum curvature. The neighboring vertices it would most likely be connected to, forming some sort of contour, lie in the directions perpendicular to the direction of maximum curvature. So, knowing what the maximum curvature is and in which direction it occurs implies a viable connection to some subset of its neighboring vertices. Curvature analysis thus seems to provide one of the strongest foundations for meaningful analysis of terrain's most significant features.
Approach

Turning the principle of curvature analysis into finished images is a two-stage effort. The first stage is the actual analysis; the second is utilizing the analysis to create some form of detail-suggestive image.

Determining curvature

The simple reality is that terrain data is not made up of mathematical functions; it is made up of discrete points. So, there is no inherent mathematical function to determine the curvature of the terrain at any point, and approximations are unavoidable. There exist methods for extracting curvature from arbitrary polygonal meshes by curve approximation, curve interpolation and similar means. But terrain representations usually encompass a very small, structured subset of polygonal geometry, and the very nature of the terrain suggests a means of approximation. First, raw terrain data is very regular. All points are spaced evenly on a regular, axis-aligned grid. The distances from one point to its neighbors are known and constant. Only the y-value varies from point to point; it is, after all, a height field. This suggests that curvature can most easily be approximated in four discrete directions: along the terrain's x-axis, along the z-axis, along the axis 45 degrees between the positive x- and z-axes, and along the axis 45 degrees between the positive x- and negative z-axes. The opposite directions are unnecessary; the curvature would be the same whether looking at it from the front or the back. Certainly more directions would provide a more complete understanding of the contours of the terrain, but only these four directions can be directly derived from the source data. Any direction between these cardinal orientations would require us to infer information from the height field, and if we were to do this, we might as well use more complex methods directly to obtain the curvature. At this stage, the goal is to find a reasonable, low-cost approximation.
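The four analysis directions, and the extraction of a height profile along one of them, can be sketched as follows. This is an illustrative helper, not code from the paper; the function name and the h[z][x] indexing convention are my own assumptions.

```python
def profile_along(h, x, z, direction, reach=1):
    # Extract the 1-D height profile through (x, z) along one of the
    # four grid directions; this is the cross-section whose curvature
    # is later measured.  Note that along the diagonals the sample
    # spacing grows by a factor of sqrt(2).
    dx, dz = direction
    return [h[z + i * dz][x + i * dx] for i in range(-reach, reach + 1)]

# The four analysis directions (their opposites are redundant, since
# curvature is the same seen from either side).
DIRECTIONS = [(1, 0),   # along x
              (0, 1),   # along z
              (1, 1),   # 45 degrees between +x and +z
              (1, -1)]  # 45 degrees between +x and -z

h = [[float(x) for x in range(3)] for _ in range(3)]  # a uniform ramp in x
print(profile_along(h, 1, 1, (1, 0)))  # [0.0, 1.0, 2.0]
print(profile_along(h, 1, 1, (0, 1)))  # [1.0, 1.0, 1.0]
```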
So, whatever means is used to calculate the actual curvature value, it should be performed along those four discrete axes. Later, choices can be made regarding the utilization of those quantized values. Four directions implies four planes, one plane with a normal in each of the four directions. The intersection of each plane with the terrain forms an approximate space curve. Again, this curve is in 2-space. So, for a simple function expressed in terms of x, the curvature at a point p can be calculated with the following equation:

    κ(p) = |y''(p)| / (1 + y'(p)^2)^(3/2)        (Equation 1)

The challenge now becomes deriving values for the first and second derivatives at p. For my research I implemented four different methods of approximating the value, each with its own particular resultant characteristics. Two are based on a derived quantity, specifically derived vertex normals, which I refer to as normal-based approximations. The other two are based strictly on the actual terrain dataset and are referred to as geometry-based approximations.

Normal-based approximations

Because these approximations are based on vertex normals, the actual method of deriving those normals plays a great role. I tested three different means: edge-based,
triangle-based and quad-based. Edge-based normals look at the edges radiating from the vertex in pairs. The slopes of the edges to the neighbors in one direction are calculated independently and averaged together, and then the same is done with the slopes of the edges in the orthogonal direction. These two slopes are converted to vectors, added together and normalized to create the vertex normal. The triangle-based method uses the same neighboring vertices as the edge-based method, but instead of examining edges, four imaginary triangles are constructed from the five vertices (the four neighbors plus the center vertex, creating an open pyramid). Each triangle's normal is calculated and the four normals are averaged to create the vertex normal. The quad-based method uses the eight nearest neighbors of the target vertex. Instead of four imaginary triangles, four quads are constructed. Their normals are evaluated and then averaged together to create the vertex normal. If a quad itself is not planar, then its normal is the average of the two triangles that form the quad.

The results of the edge-based and the triangle-based methods seem to be basically the same. However, the quad-based method yields slightly different results. Figure 1 shows the same terrain smoothed by each of the algorithms. The edge-based (a) and triangle-based (b) methods both have the same kind of noisy patterns in the high-contrast region on the left side of the ridge (the side in shadow). The quad-based image has a smoother look. The intuitive reason should be clear: the quad-based method takes into account twice as many vertices, so the resultant normal has a more averaged quality. Obviously, a noisier distribution of normals will lead to a noisier curvature calculation. Examinations of curvature algorithms will all use the quad-based smoothing.

Figure 1: (a) edge-based, (b) triangle-based, (c) quad-based.

Equation 1 will always yield non-negative values.
It is strictly a measurement of curvature and doesn't care whether the surface is curving inward or outward. In principle, the range of the function spans [0, ∞). In practice, the actual range can be much, much smaller; experimental evidence has shown something in the range of [0, 2] to be more realistic, with the majority of the values landing below one. Currently, each of the curvature algorithms accepts a scaling parameter which will evenly scale all of the curvature values. This simplistic scaling can (and should) later be replaced with some means of level control, to create a greater distinction between the meaningful curvature values and the meaningless, noisy curvatures. But for now, scaling is sufficient.
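As a concrete illustration of Equation 1 and its range, a plain finite-difference version can be written as follows. This is a generic stand-in, not one of the four methods described below; the function name and sample profiles are mine.

```python
def curvature_1d(profile, i, spacing=1.0):
    """Approximate kappa = |y''| / (1 + y'^2)^(3/2) at sample i of a
    1-D height profile, using central finite differences."""
    y_prev, y, y_next = profile[i - 1], profile[i], profile[i + 1]
    dy = (y_next - y_prev) / (2.0 * spacing)          # first derivative
    d2y = (y_next - 2.0 * y + y_prev) / spacing ** 2  # second derivative
    return abs(d2y) / (1.0 + dy * dy) ** 1.5

# A uniform ramp has zero curvature; a crease does not.
ramp = [0.0, 1.0, 2.0, 3.0]
crease = [0.0, 1.0, 1.0, 1.0]  # slope 1 meeting a flat plateau
print(curvature_1d(ramp, 1))    # 0.0: a uniform slope has no curvature
print(curvature_1d(crease, 1))  # about 0.72: the crease registers
```

Note that the values stay well inside the observed [0, 2] range for gentle terrain; only spike-like features push the formula higher.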
Two-normal

The first and, in many ways, simplest approach utilizes only two normals: the normal of the vertex in question and the normal of the immediate neighbor in the direction of calculation (i.e., one of the four directions listed earlier). The first and second derivatives arise naturally from the usage of normals. The first derivative is merely the slope of the tangent through the vertex; the tangent is perpendicular to the normal, so, given the normal, obtaining the slope of the tangent is trivial. The second derivative is merely the change of slopes. The slope is calculated for the target vertex and for the neighbor nearest to it in the direction of analysis. The average change in slope is calculated by taking their difference and dividing by the distance between the points. Then the curvature equation above is applied and the curvature for the given direction is stored. The process is repeated for each of the three other directions.

Three-normal

The second method is very similar to the first, with two differences. The first is implied by the name: the normals from three vertices are used. The third normal comes from the neighboring vertex in the opposite direction from the direction of analysis. The curvature is calculated for both pairs of vertices (previous-target and target-next), and the results are averaged together to generate the curvature value.

The three-normal approach also has a second layer of utility, which takes advantage of a unique characteristic of the terrain dataset: it is a trivial task to select the n-th neighbor in any of the eight compass directions. Because of this, the three-normal approach can also examine the target vertex with a second set of neighbors some arbitrary distance away. The curvature from these three vertices is calculated as it was for the immediate neighbors, and then the result is multiplied by the previous curvature result.
This has the effect of pushing curvature values away from 1.0: numbers smaller than 1.0 get even smaller and numbers bigger than 1.0 get even bigger. In effect, it takes the local curvature phenomenon, compares it with a bigger picture and reconciles the two. So, a tight ridge becomes tighter and noise gets diminished.

Geometry-based approximations

Three-vertex

The first of the two geometry-based approximations is the three-vertex approach. As its name implies, it uses three vertices to calculate the curvature: the target vertex and its two immediate neighbors along the direction of analysis. It calculates the slopes of the two edges connecting those three points, then takes the second derivative to be the difference of the two slopes divided by the point distance, and the first derivative to be the average of the two slopes. This approach could also use a secondary set of vertices at some specified distance, d > 1, to offer the same kind of reinforcement as the three-normal technique uses, but this has yet to be implemented.
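The distance-reinforcement idea, applied here to the geometry-based measure as the three-vertex section suggests, can be sketched as follows. The function, its parameters and the use of a plain finite-difference curvature are illustrative assumptions, not the paper's implementation.

```python
def reinforced_curvature(profile, i, wide=2):
    """Reinforcement sketch: multiply the curvature from the immediate
    neighbours by the curvature from neighbours `wide` samples away.
    Values above 1 grow and values below 1 shrink, so tight ridges
    tighten and noise fades."""
    def curv(d):
        s_left = (profile[i] - profile[i - d]) / d
        s_right = (profile[i + d] - profile[i]) / d
        dy = (s_left + s_right) / 2.0
        return abs((s_right - s_left) / d) / (1.0 + dy * dy) ** 1.5
    return curv(1) * curv(wide)

ridge = [0.0, 0.0, 2.0, 0.0, 0.0]      # a sharp spike
noise = [0.0, 0.0, 0.2, 0.0, 0.0]      # a small bump
print(reinforced_curvature(ridge, 2))  # large value: the ridge survives
print(reinforced_curvature(noise, 2))  # near zero: the bump fades
```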
Edge-angle

The last of the curvature algorithms doesn't actually use Equation 1. Instead, it uses the inner product of the edges extending from the target vertex to its neighbors to calculate the cosine of the angle between them. The range of this function is [-1, 1], but in practice the range is [-1, 0], because very little terrain data (particularly high-resolution terrain data) has angles between adjacent edges less than 90 degrees. The magnitude of the cosine is then subtracted from 1 (equivalently, since the cosine is non-positive here, the cosine is added to 1) and the result is treated as a curvature value. This has several significant differences from the previous three approaches. First, its output values are confined to the range [0, 1], whereas Equation 1 has no actual upper bound. Second, the function is strangely attuned to the type of curvature we're seeking: its slope is 1 at 90 degrees and gradually falls from 1 toward 0 as the angle approaches 180 degrees. This means the function is most sensitive at differentiating curvature nearer 90 degrees than 180 degrees. This is ideal, because if two edges are 180 degrees away from each other, nothing of significance is happening there and the point can happily be ignored. Finally, the same distance reinforcement could be applied to this algorithm as has been applied to the three-normal algorithm.

To help visualize the results and characteristics of the algorithms, the mesh has been colored to represent curvature data. For each vertex, the greatest curvature value is mapped as an intensity level (with curvature values above 1.0 clamped to 1.0). The following figure illustrates the characteristics of each algorithm.
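A minimal sketch of the edge-angle measure, assuming the non-positive cosine is mapped into [0, 1] by adding 1; the function name and the 2-D cross-section setup are mine.

```python
import math

def edge_angle_curvature(p_prev, p, p_next):
    """Edge-angle curvature sketch: the cosine of the angle between the
    two edges radiating from the target vertex p, mapped so that a
    straight-through pair of edges (180 degrees) gives 0 and a right
    angle gives 1."""
    e1 = (p_prev[0] - p[0], p_prev[1] - p[1])
    e2 = (p_next[0] - p[0], p_next[1] - p[1])
    dot = e1[0] * e2[0] + e1[1] * e2[1]
    cos_angle = dot / (math.hypot(*e1) * math.hypot(*e2))
    return 1.0 + cos_angle

print(edge_angle_curvature((-1.0, 0.0), (0.0, 0.0), (1.0, 0.0)))  # 0.0: collinear, nothing happening
print(edge_angle_curvature((-1.0, 0.0), (0.0, 1.0), (1.0, 0.0)))  # 1.0: a right-angle ridge
```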
Figure 2: Each algorithm, with results scaled 100% and results scaled 400% (the primed letter). (a) two-normal. (b) three-normal, no reinforcement. (c) three-normal, reinforcement distance = 2. (d) three-vertex. (e) edge-angle.

The differences between Figures 2(b) and 2(c) clearly illustrate the reinforcement of the second set of calculations. On the whole, (b) has less noise than (a), which is to be expected because of the greater sampling. (c) and (e) offer the highest contrast. (d) seems similar to (a) and (b), but tends toward higher values. There are other, more subtle differences not immediately apparent in these small images. For example, (c) creates a narrower band of high curvature along the upper-right coastline than (a), (b) or (d).

Utilizing Curvature

Utilizing curvature is otherwise known as drawing lines. Having analyzed and cached the curvature information for the terrain, it behooves us to put it to work. This paper discusses two basic approaches to using the curvature (and suggests several variations). The two approaches are called line shading and hatch shading; others are certainly possible. Although the focus of this research is view-independent rendering, these approaches are consistent and compatible with many view-dependent methods. The obvious complement to the following shading algorithms is a simple view-dependent contour algorithm which draws contour edges.

Line shading

The principles behind line shading arise naturally from the issues raised in the background, which led to the use of curvature analysis: the desire to connect significant
points. Line shading attempts to trace paths through points of high curvature. The direction a path takes is determined by the direction of the greatest curvature value associated with a vertex. The union of all these paths creates a static framework which can be drawn, view-independently, to represent the terrain. Several parameters control the overall quality and character of the resultant image.

The algorithm, in its simplest incarnation, is straightforward. For each point, the curvature is tested. If the curvature exceeds a certain threshold, the point is added to a list, and the two vertices in the directions perpendicular to the direction of maximum curvature are tested as well. This continues in both directions until each end hits a vertex with insufficient curvature to pass the threshold test. The ordered list of vertices is then tested for a minimum length; if it is long enough, a line strip is drawn. Each vertex in the line strip is colored black but given an alpha equal to its curvature, so more significant vertices (i.e., those with higher curvature) appear darker.

Currently, this algorithm only follows paths in discrete directions (one of the four tested directions). One possible refinement is to blend the four discrete directions together, weighting the unit vectors according to the curvature in each direction to find an average direction, and follow the path in that direction. Points in the path would no longer simply be points from the height field, and curvature values would have to be interpolated for these new points from their neighbors. The four-direction path algorithm can recognize when one path merges with another and can eliminate redundancy in drawing paths. When paths can follow arbitrary directions and pass through unique points, paths that might, in principle, merge could end up lying next to each other. Special care must be taken in drawing paths in these blended directions.
Perhaps blending them in discrete amounts would balance this difficulty with the ease of path administration. Combinations of curvature scaling, line widths, minimum path lengths and minimum thresholds can create vastly different rendering styles. The following figure shows several variations of the line shading style with varying parameters.
Figure 3: Renderings of terrain with varying parameter settings for line shading. Panels (a) through (n), plus a conventionally shaded reference.

    Fig  Style   Dist.  Scale  Contours  Min. len.  Line width
    a    3-norm  2      6      no        6          1
    b    3-norm  2      6      yes       6          1
    c    3-norm  2      11     yes       2          1
    d    2-norm  n/a    2      no        5          1
    e    2-norm  n/a    11     no        2          1
    f    2-norm  n/a    6      no        2          1
    g    3-norm  2      3      yes       5          1
    h    2-norm  n/a    3      no        5          1
    i    edge    n/a    2      yes       5          1
    j    3-vert  n/a    2      no        5          1
    k    2-norm  n/a    1      no        5          1
    l    edge    n/a    1      yes       5          1
    m    3-vert  n/a    1      no        5          1
    n    3-norm  -      -      no        2          6
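The threshold-and-extend loop described for line shading can be sketched as follows. The curvature lookup, the seed choice, the fixed grid step and the parameter values are illustrative assumptions, not the paper's code.

```python
def trace_path(curv, seed, step, threshold=0.3, min_len=3):
    """Line-shading sketch: from a seed vertex, extend a path in both
    directions along `step` (the direction perpendicular to the maximum
    curvature) while vertices stay above the curvature threshold.
    `curv` maps (x, z) -> curvature value."""
    if curv.get(seed, 0.0) < threshold:
        return None
    forward, backward = [], []
    for sign, out in ((1, forward), (-1, backward)):
        x, z = seed
        while True:
            x, z = x + sign * step[0], z + sign * step[1]
            if curv.get((x, z), 0.0) < threshold:
                break
            out.append((x, z))
    path = backward[::-1] + [seed] + forward
    # Only paths meeting the minimum length are worth drawing.
    return path if len(path) >= min_len else None

# A ridge of high curvature running along x at z = 1.
curv = {(x, 1): 0.8 for x in range(5)}
print(trace_path(curv, (2, 1), step=(1, 0)))  # the whole ridge line
print(trace_path(curv, (2, 0), step=(1, 0)))  # None: seed below threshold
```

In a renderer, each returned path would be drawn as a line strip with per-vertex alpha taken from the curvature, as the text describes.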
Hatch shading

Hatch shading is another means of using the curvature analysis. Rather than trying to create continuous contours, hatch shading focuses on each point individually. It relies on a principle similar to pointillism: the idea that the effect of light at discrete, regular locations can combine to form a coherent image of the whole object. In its first and, again, simplest incarnation, the algorithm iterates through each of the points, evaluates its curvature value and draws a stroke of parameter-determined length. As in line shading, the color of the stroke is black and its opacity is determined by the magnitude of the greatest curvature at that point. The stroke is generated on the tangent plane at the vertex. Its direction is determined by the projection onto the tangent plane of the weighted curvatures: the four directional vectors are summed, each multiplied by the curvature value in that direction, and the resultant vector determines the direction of the stroke. The stroke can be drawn either in the direction of greatest curvature or perpendicular to it. Varying stroke length, stroke width and curvature values yields a wide range of different rendering styles (as seen in Figure 4).

What's next?

This is, obviously, only a beginning, and even then, the beginning of only a single technique for the NPR representation of terrain. The present results are, indeed, promising, but more can be done. The algorithms presented should be further refined. Distance reinforcement for the curvature calculations should be added to the three curvature approximations currently lacking it (two-normal, three-vertex and edge-angle). The line-shading algorithm needs to allow lines to go in directions beyond the eight compass directions by weighting curvature directions (as the hatch shading has done). After even a small time interacting with terrain rendered in this manner, one simple principle, currently violated, becomes painfully clear: line density.
The effectiveness of the line shader stands in direct proportion to the density of lines. If the camera draws too close to the surface, the lines pull so far apart that they fail to communicate anything about the contour; too far, and they merge into an incomprehensible mess. Some means of controlling line density needs to be applied. One approach is to use interpolated starting points. The current algorithm iterates through actual vertex locations as starting points for the lines. If the spaces between those vertices were subdivided, an arbitrary number of additional sets of lines could be created; they could be drawn or excluded depending on the distance of the surface from the camera.

This, of course, suggests one of the most obvious directions in which research can, and should, continue: view-dependent modifications. Increasing and decreasing the number of lines drawn would be one such example. Others could include taking a heavily rendered image (like those in Figures 3(e) or 4(d)) and actually lighting the vertices to help determine color, adding additional, more sophisticated contours (such as suggestive contours), replacing simple line segments with other primitives, like polygons with brush-stroke textures, and so on. The inclusion of view-dependent concepts further implies research into accelerating these algorithms through the use of modern GPUs.
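The interpolated-starting-point idea for density control can be sketched as a simple subdivision helper; the name and signature are hypothetical, not from the paper.

```python
def seed_points(v0, v1, subdivisions):
    """Density-control sketch: subdivide the span between two grid
    vertices into extra line-seed positions that a renderer could draw
    or skip depending on camera distance."""
    n = subdivisions + 1
    return [tuple(a + (b - a) * i / n for a, b in zip(v0, v1))
            for i in range(1, n)]

print(seed_points((0.0, 0.0), (1.0, 0.0), 1))  # [(0.5, 0.0)]
print(seed_points((0.0, 0.0), (1.0, 0.0), 3))  # quarter-spaced seeds
```

Curvature values at these new seeds would have to be interpolated from the surrounding vertices, as the text notes for blended path directions.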
Figure 4: Renderings of terrain with varying parameter settings for hatch shading. Panels (a) through (h), plus a conventionally shaded reference. Hatch dir. controls whether the hatches are drawn in the direction of greatest curvature or perpendicular to it.

    Fig  Style   Dist.  Scale  Contours  Hatch len.  Line width  Hatch dir.
    a    3-norm  2      4      no        -           -           with
    b    3-norm  1      4      no        -           -           with
    c    3-norm  1      4      no        -           -           perp.
    d    2-norm  1      4      no        -           -           perp.
    e    2-norm  1      8      no        -           -           perp.
    f    3-norm  2      8      yes       -           -           perp.
    g    3-norm  1      3      no        -           -           perp.
    h    3-vert  n/a    2      no        -           -           with
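The weighted stroke direction used by the hatch shader can be sketched as follows; the ordering of the four curvature values and the function name are my conventions, not the paper's.

```python
import math

def stroke_direction(curvatures):
    """Hatch-shading sketch: sum the four analysis directions (x, z and
    the two diagonals), each weighted by its curvature value, then
    normalise.  Input order is assumed to be [x, z, +x+z, +x-z]."""
    s = math.sqrt(0.5)
    dirs = [(1.0, 0.0), (0.0, 1.0), (s, s), (s, -s)]
    vx = sum(k * d[0] for k, d in zip(curvatures, dirs))
    vz = sum(k * d[1] for k, d in zip(curvatures, dirs))
    length = math.hypot(vx, vz) or 1.0  # avoid dividing by zero
    return (vx / length, vz / length)

# Curvature dominated by the x direction: the stroke follows x.
print(stroke_direction([1.0, 0.1, 0.0, 0.0]))  # roughly (0.995, 0.0995)
```

Drawing the stroke perpendicular to this vector instead gives the "perp." variants shown in Figure 4.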
These images are all rendered in such a way as to hint at pencil-and-ink drawings on paper. It is not strictly necessary for the occluding surface in the hidden-line renderings to be paper-colored; combinations of textured surfaces and lines could yield very compelling images.

Mapmakers throughout history have understood the necessity and value of simplifying representations of terrain: extraneous data can be discarded and the viewer can be led to focus on specific features more effectively. With the advent of the internet, more data is publicly available than ever before, and much of this data can be tied together based on geographical location. Efficient and compelling non-photorealistic, interactive renderings of terrain are the first step toward a new, compelling interface for accessing geosensitive data.

References

DeCarlo, D., Finkelstein, A., Rusinkiewicz, S., and Santella, A. Suggestive Contours for Conveying Shape. ACM Transactions on Graphics 22, 3 (July 2003).
More informationCMSC 491A/691A Artistic Rendering. Announcements
CMSC 491A/691A Artistic Rendering Penny Rheingans UMBC Announcements Lab meeting: Tues 2pm, ITE 352, starting next week Proposal due Thurs 1 Shape Cues: Outlines Outline flat parts Outline important boundaries
More informationImages from 3D Creative Magazine. 3D Modelling Systems
Images from 3D Creative Magazine 3D Modelling Systems Contents Reference & Accuracy 3D Primitives Transforms Move (Translate) Rotate Scale Mirror Align 3D Booleans Deforms Bend Taper Skew Twist Squash
More informationLabeling a Molecular Triangle Meshes
Labeling a Molecular Triangle Meshes Cody Robson Abstract This project addresses the problem of molecular visualization and the application of decals or labels onto a molecule mesh. Often biochemists will
More informationTerrain rendering (part 1) Due: Monday, March 10, 10pm
CMSC 3700 Winter 014 Introduction to Computer Graphics Project 4 February 5 Terrain rendering (part 1) Due: Monday, March 10, 10pm 1 Summary The final two projects involves rendering large-scale outdoor
More informationComputer Graphics 1. Chapter 2 (May 19th, 2011, 2-4pm): 3D Modeling. LMU München Medieninformatik Andreas Butz Computergraphik 1 SS2011
Computer Graphics 1 Chapter 2 (May 19th, 2011, 2-4pm): 3D Modeling 1 The 3D rendering pipeline (our version for this class) 3D models in model coordinates 3D models in world coordinates 2D Polygons in
More informationGame Architecture. 2/19/16: Rasterization
Game Architecture 2/19/16: Rasterization Viewing To render a scene, need to know Where am I and What am I looking at The view transform is the matrix that does this Maps a standard view space into world
More informationPoint based global illumination is now a standard tool for film quality renderers. Since it started out as a real time technique it is only natural
1 Point based global illumination is now a standard tool for film quality renderers. Since it started out as a real time technique it is only natural to consider using it in video games too. 2 I hope that
More informationArtistic Rendering of Function-based Shape Models
Artistic Rendering of Function-based Shape Models by Shunsuke Suzuki Faculty of Computer and Information Science Hosei University n00k1021@k.hosei.ac.jp Supervisor: Alexander Pasko March 2004 1 Abstract
More informationComputer Graphics 1. Chapter 7 (June 17th, 2010, 2-4pm): Shading and rendering. LMU München Medieninformatik Andreas Butz Computergraphik 1 SS2010
Computer Graphics 1 Chapter 7 (June 17th, 2010, 2-4pm): Shading and rendering 1 The 3D rendering pipeline (our version for this class) 3D models in model coordinates 3D models in world coordinates 2D Polygons
More informationUnderstanding Geospatial Data Models
Understanding Geospatial Data Models 1 A geospatial data model is a formal means of representing spatially referenced information. It is a simplified view of physical entities and a conceptualization of
More informationReal Time Rendering of Complex Height Maps Walking an infinite realistic landscape By: Jeffrey Riaboy Written 9/7/03
1 Real Time Rendering of Complex Height Maps Walking an infinite realistic landscape By: Jeffrey Riaboy Written 9/7/03 Table of Contents 1 I. Overview 2 II. Creation of the landscape using fractals 3 A.
More informationGeometry and Gravitation
Chapter 15 Geometry and Gravitation 15.1 Introduction to Geometry Geometry is one of the oldest branches of mathematics, competing with number theory for historical primacy. Like all good science, its
More informationLight: Geometric Optics
Light: Geometric Optics Regular and Diffuse Reflection Sections 23-1 to 23-2. How We See Weseebecauselightreachesoureyes. There are two ways, therefore, in which we see: (1) light from a luminous object
More informationCS 354R: Computer Game Technology
CS 354R: Computer Game Technology Texture and Environment Maps Fall 2018 Texture Mapping Problem: colors, normals, etc. are only specified at vertices How do we add detail between vertices without incurring
More informationCOMP30019 Graphics and Interaction Scan Converting Polygons and Lines
COMP30019 Graphics and Interaction Scan Converting Polygons and Lines Department of Computer Science and Software Engineering The Lecture outline Introduction Scan conversion Scan-line algorithm Edge coherence
More informationData Representation in Visualisation
Data Representation in Visualisation Visualisation Lecture 4 Taku Komura Institute for Perception, Action & Behaviour School of Informatics Taku Komura Data Representation 1 Data Representation We have
More informationUsing Perspective Rays and Symmetry to Model Duality
Using Perspective Rays and Symmetry to Model Duality Alex Wang Electrical Engineering and Computer Sciences University of California at Berkeley Technical Report No. UCB/EECS-2016-13 http://www.eecs.berkeley.edu/pubs/techrpts/2016/eecs-2016-13.html
More informationPipeline Operations. CS 4620 Lecture Steve Marschner. Cornell CS4620 Spring 2018 Lecture 11
Pipeline Operations CS 4620 Lecture 11 1 Pipeline you are here APPLICATION COMMAND STREAM 3D transformations; shading VERTEX PROCESSING TRANSFORMED GEOMETRY conversion of primitives to pixels RASTERIZATION
More informationOrthogonal Projection Matrices. Angel and Shreiner: Interactive Computer Graphics 7E Addison-Wesley 2015
Orthogonal Projection Matrices 1 Objectives Derive the projection matrices used for standard orthogonal projections Introduce oblique projections Introduce projection normalization 2 Normalization Rather
More informationWho has worked on a voxel engine before? Who wants to? My goal is to give the talk I wish I would have had before I started on our procedural engine.
1 Who has worked on a voxel engine before? Who wants to? My goal is to give the talk I wish I would have had before I started on our procedural engine. Three parts to this talk. A lot of content, so I
More informationScalar Visualization
Scalar Visualization 5-1 Motivation Visualizing scalar data is frequently encountered in science, engineering, and medicine, but also in daily life. Recalling from earlier, scalar datasets, or scalar fields,
More informationIntroduction Rasterization Z-buffering Shading. Graphics 2012/2013, 4th quarter. Lecture 09: graphics pipeline (rasterization and shading)
Lecture 9 Graphics pipeline (rasterization and shading) Graphics pipeline - part 1 (recap) Perspective projection by matrix multiplication: x pixel y pixel z canonical 1 x = M vpm per M cam y z 1 This
More informationScreen Space Ambient Occlusion TSBK03: Advanced Game Programming
Screen Space Ambient Occlusion TSBK03: Advanced Game Programming August Nam-Ki Ek, Oscar Johnson and Ramin Assadi March 5, 2015 This project report discusses our approach of implementing Screen Space Ambient
More informationNon-Photorealistic Rendering
15-462 Computer Graphics I Lecture 22 Non-Photorealistic Rendering November 18, 2003 Doug James Carnegie Mellon University http://www.cs.cmu.edu/~djames/15-462/fall03 Pen-and-Ink Illustrations Painterly
More informationThere we are; that's got the 3D screen and mouse sorted out.
Introduction to 3D To all intents and purposes, the world we live in is three dimensional. Therefore, if we want to construct a realistic computer model of it, the model should be three dimensional as
More informationGeometric Computations for Simulation
1 Geometric Computations for Simulation David E. Johnson I. INTRODUCTION A static virtual world would be boring and unlikely to draw in a user enough to create a sense of immersion. Simulation allows things
More informationHow to draw and create shapes
Adobe Flash Professional Guide How to draw and create shapes You can add artwork to your Adobe Flash Professional documents in two ways: You can import images or draw original artwork in Flash by using
More informationLecture 17: Recursive Ray Tracing. Where is the way where light dwelleth? Job 38:19
Lecture 17: Recursive Ray Tracing Where is the way where light dwelleth? Job 38:19 1. Raster Graphics Typical graphics terminals today are raster displays. A raster display renders a picture scan line
More informationExaggerated Shading for Depicting Shape and Detail. Szymon Rusinkiewicz Michael Burns Doug DeCarlo
Exaggerated Shading for Depicting Shape and Detail Szymon Rusinkiewicz Michael Burns Doug DeCarlo Motivation Style of technical, medical, and topographic illustrations is designed to communicate surface
More informationAn Efficient Single Chord-based Accumulation Technique (SCA) to Detect More Reliable Corners
An Efficient Single Chord-based Accumulation Technique (SCA) to Detect More Reliable Corners Mohammad Asiful Hossain, Abdul Kawsar Tushar, and Shofiullah Babor Computer Science and Engineering Department,
More informationOn the Visibility of the Shroud Image. Author: J. Dee German ABSTRACT
On the Visibility of the Shroud Image Author: J. Dee German ABSTRACT During the 1978 STURP tests on the Shroud of Turin, experimenters observed an interesting phenomenon: the contrast between the image
More informationChapter 7 - Light, Materials, Appearance
Chapter 7 - Light, Materials, Appearance Types of light in nature and in CG Shadows Using lights in CG Illumination models Textures and maps Procedural surface descriptions Literature: E. Angel/D. Shreiner,
More informationShading. Introduction to Computer Graphics Torsten Möller. Machiraju/Zhang/Möller/Fuhrmann
Shading Introduction to Computer Graphics Torsten Möller Machiraju/Zhang/Möller/Fuhrmann Reading Chapter 5.5 - Angel Chapter 6.3 - Hughes, van Dam, et al Machiraju/Zhang/Möller/Fuhrmann 2 Shading Illumination
More informationVirtual Reality for Human Computer Interaction
Virtual Reality for Human Computer Interaction Appearance: Lighting Representation of Light and Color Do we need to represent all I! to represent a color C(I)? No we can approximate using a three-color
More informationLets assume each object has a defined colour. Hence our illumination model is looks unrealistic.
Shading Models There are two main types of rendering that we cover, polygon rendering ray tracing Polygon rendering is used to apply illumination models to polygons, whereas ray tracing applies to arbitrary
More information2D/3D Geometric Transformations and Scene Graphs
2D/3D Geometric Transformations and Scene Graphs Week 4 Acknowledgement: The course slides are adapted from the slides prepared by Steve Marschner of Cornell University 1 A little quick math background
More informationCharacter Modeling COPYRIGHTED MATERIAL
38 Character Modeling p a r t _ 1 COPYRIGHTED MATERIAL 39 Character Modeling Character Modeling 40 1Subdivision & Polygon Modeling Many of Maya's features have seen great improvements in recent updates
More informationCS4620/5620: Lecture 14 Pipeline
CS4620/5620: Lecture 14 Pipeline 1 Rasterizing triangles Summary 1! evaluation of linear functions on pixel grid 2! functions defined by parameter values at vertices 3! using extra parameters to determine
More informationCHAPTER 1 Graphics Systems and Models 3
?????? 1 CHAPTER 1 Graphics Systems and Models 3 1.1 Applications of Computer Graphics 4 1.1.1 Display of Information............. 4 1.1.2 Design.................... 5 1.1.3 Simulation and Animation...........
More informationChapter 5. Projections and Rendering
Chapter 5 Projections and Rendering Topics: Perspective Projections The rendering pipeline In order to view manipulate and view a graphics object we must find ways of storing it a computer-compatible way.
More informationv Mesh Generation SMS Tutorials Prerequisites Requirements Time Objectives
v. 12.3 SMS 12.3 Tutorial Mesh Generation Objectives This tutorial demostrates the fundamental tools used to generate a mesh in the SMS. Prerequisites SMS Overview SMS Map Module Requirements Mesh Module
More informationPetrel TIPS&TRICKS from SCM
Petrel TIPS&TRICKS from SCM Knowledge Worth Sharing Merging Overlapping Files into One 2D Grid Often several files (grids or data) covering adjacent and overlapping areas must be combined into one 2D Grid.
More informationCS 465 Program 4: Modeller
CS 465 Program 4: Modeller out: 30 October 2004 due: 16 November 2004 1 Introduction In this assignment you will work on a simple 3D modelling system that uses simple primitives and curved surfaces organized
More informationMidterm Exam Fundamentals of Computer Graphics (COMP 557) Thurs. Feb. 19, 2015 Professor Michael Langer
Midterm Exam Fundamentals of Computer Graphics (COMP 557) Thurs. Feb. 19, 2015 Professor Michael Langer The exam consists of 10 questions. There are 2 points per question for a total of 20 points. You
More informationcoding of various parts showing different features, the possibility of rotation or of hiding covering parts of the object's surface to gain an insight
Three-Dimensional Object Reconstruction from Layered Spatial Data Michael Dangl and Robert Sablatnig Vienna University of Technology, Institute of Computer Aided Automation, Pattern Recognition and Image
More informationThis work is about a new method for generating diffusion curve style images. Although this topic is dealing with non-photorealistic rendering, as you
This work is about a new method for generating diffusion curve style images. Although this topic is dealing with non-photorealistic rendering, as you will see our underlying solution is based on two-dimensional
More informationDirect Rendering of Trimmed NURBS Surfaces
Direct Rendering of Trimmed NURBS Surfaces Hardware Graphics Pipeline 2/ 81 Hardware Graphics Pipeline GPU Video Memory CPU Vertex Processor Raster Unit Fragment Processor Render Target Screen Extended
More informationThe Problem of Calculating Vertex Normals for Unevenly Subdivided Smooth Surfaces
The Problem of Calculating Vertex Normals for Unevenly Subdivided Smooth Surfaces Ted Schundler tschundler (a) gmail _ com Abstract: Simply averaging normals of the faces sharing a vertex does not produce
More informationBarycentric Coordinates and Parameterization
Barycentric Coordinates and Parameterization Center of Mass Geometric center of object Center of Mass Geometric center of object Object can be balanced on CoM How to calculate? Finding the Center of Mass
More informationGeometric Features for Non-photorealistiic Rendering
CS348a: Computer Graphics Handout # 6 Geometric Modeling and Processing Stanford University Monday, 27 February 2017 Homework #4: Due Date: Mesh simplification and expressive rendering [95 points] Wednesday,
More informationA simple problem that has a solution that is far deeper than expected!
The Water, Gas, Electricity Problem A simple problem that has a solution that is far deeper than expected! Consider the diagram below of three houses and three utilities: water, gas, and electricity. Each
More informationO Hailey: Chapter 3 Bonus Materials
O Hailey: Chapter 3 Bonus Materials Maya s Toon Line For those familiar with toon lines in Maya, you may skip ahead past this section. Those not familiar might find it useful to understand the basics of
More informationShadows in the graphics pipeline
Shadows in the graphics pipeline Steve Marschner Cornell University CS 569 Spring 2008, 19 February There are a number of visual cues that help let the viewer know about the 3D relationships between objects
More informationImage Precision Silhouette Edges
Image Precision Silhouette Edges by Ramesh Raskar and Michael Cohen Presented at I3D 1999 Presented by Melanie Coggan Outline Motivation Previous Work Method Results Conclusions Outline Motivation Previous
More information0. Introduction: What is Computer Graphics? 1. Basics of scan conversion (line drawing) 2. Representing 2D curves
CSC 418/2504: Computer Graphics Course web site (includes course information sheet): http://www.dgp.toronto.edu/~elf Instructor: Eugene Fiume Office: BA 5266 Phone: 416 978 5472 (not a reliable way) Email:
More informationModule Contact: Dr Stephen Laycock, CMP Copyright of the University of East Anglia Version 1
UNIVERSITY OF EAST ANGLIA School of Computing Sciences Main Series PG Examination 2013-14 COMPUTER GAMES DEVELOPMENT CMPSME27 Time allowed: 2 hours Answer any THREE questions. (40 marks each) Notes are
More informationVolumetric Particle Separating Planes for Collision Detection
Volumetric Particle Separating Planes for Collision Detection by Brent M. Dingle Fall 2004 Texas A&M University Abstract In this paper we describe a method of determining the separation plane of two objects
More informationLesson 01 Polygon Basics 17. Lesson 02 Modeling a Body 27. Lesson 03 Modeling a Head 63. Lesson 04 Polygon Texturing 87. Lesson 05 NURBS Basics 117
Table of Contents Project 01 Lesson 01 Polygon Basics 17 Lesson 02 Modeling a Body 27 Lesson 03 Modeling a Head 63 Lesson 04 Polygon Texturing 87 Project 02 Lesson 05 NURBS Basics 117 Lesson 06 Modeling
More information(Refer Slide Time: 0:32)
Digital Image Processing. Professor P. K. Biswas. Department of Electronics and Electrical Communication Engineering. Indian Institute of Technology, Kharagpur. Lecture-57. Image Segmentation: Global Processing
More informationFilling Space with Random Line Segments
Filling Space with Random Line Segments John Shier Abstract. The use of a nonintersecting random search algorithm with objects having zero width ("measure zero") is explored. The line length in the units
More informationParameterization. Michael S. Floater. November 10, 2011
Parameterization Michael S. Floater November 10, 2011 Triangular meshes are often used to represent surfaces, at least initially, one reason being that meshes are relatively easy to generate from point
More informationPipeline Operations. CS 4620 Lecture 10
Pipeline Operations CS 4620 Lecture 10 2008 Steve Marschner 1 Hidden surface elimination Goal is to figure out which color to make the pixels based on what s in front of what. Hidden surface elimination
More informationChapter 1. Introduction
Introduction 1 Chapter 1. Introduction We live in a three-dimensional world. Inevitably, any application that analyzes or visualizes this world relies on three-dimensional data. Inherent characteristics
More informationMidterm Exam CS 184: Foundations of Computer Graphics page 1 of 11
Midterm Exam CS 184: Foundations of Computer Graphics page 1 of 11 Student Name: Class Account Username: Instructions: Read them carefully! The exam begins at 2:40pm and ends at 4:00pm. You must turn your
More informationDigital Image Processing Fundamentals
Ioannis Pitas Digital Image Processing Fundamentals Chapter 7 Shape Description Answers to the Chapter Questions Thessaloniki 1998 Chapter 7: Shape description 7.1 Introduction 1. Why is invariance to
More informationVolume Rendering. Computer Animation and Visualisation Lecture 9. Taku Komura. Institute for Perception, Action & Behaviour School of Informatics
Volume Rendering Computer Animation and Visualisation Lecture 9 Taku Komura Institute for Perception, Action & Behaviour School of Informatics Volume Rendering 1 Volume Data Usually, a data uniformly distributed
More informationVANSTEENKISTE LEO DAE GD ENG UNFOLD SHADER. Introduction
VANSTEENKISTE LEO 2015 G E O M E T RY S H A D E R 2 DAE GD ENG UNFOLD SHADER Introduction Geometry shaders are a powerful tool for technical artists, but they always seem to be used for the same kind of
More informationDesign and Analysis of Algorithms Prof. Madhavan Mukund Chennai Mathematical Institute. Week 02 Module 06 Lecture - 14 Merge Sort: Analysis
Design and Analysis of Algorithms Prof. Madhavan Mukund Chennai Mathematical Institute Week 02 Module 06 Lecture - 14 Merge Sort: Analysis So, we have seen how to use a divide and conquer strategy, we
More informationSimple Silhouettes for Complex Surfaces
Eurographics Symposium on Geometry Processing(2003) L. Kobbelt, P. Schröder, H. Hoppe (Editors) Simple Silhouettes for Complex Surfaces D. Kirsanov, P. V. Sander, and S. J. Gortler Harvard University Abstract
More information4.5 VISIBLE SURFACE DETECTION METHODES
4.5 VISIBLE SURFACE DETECTION METHODES A major consideration in the generation of realistic graphics displays is identifying those parts of a scene that are visible from a chosen viewing position. There
More informationSynthesis of Textures with Intricate Geometries using BTF and Large Number of Textured Micropolygons. Abstract. 2. Related studies. 1.
Synthesis of Textures with Intricate Geometries using BTF and Large Number of Textured Micropolygons sub047 Abstract BTF has been studied extensively and much progress has been done for measurements, compression
More informationCSE528 Computer Graphics: Theory, Algorithms, and Applications
CSE528 Computer Graphics: Theory, Algorithms, and Applications Hong Qin State University of New York at Stony Brook (Stony Brook University) Stony Brook, New York 11794--4400 Tel: (631)632-8450; Fax: (631)632-8334
More information