MESH DECIMATION: A Major Qualifying Project Report submitted to the Faculty of the WORCESTER POLYTECHNIC INSTITUTE


Project Number: MOW-2733

MESH DECIMATION

A Major Qualifying Project Report submitted to the Faculty of the WORCESTER POLYTECHNIC INSTITUTE in partial fulfillment of the requirements for the Degree of Bachelor of Science

by

Tim Garthwaite
Jason Reposa

Date: January 8, 2002

Keywords: 1. decimation 2. mesh simplification 3. rendering

Approved: Professor Matt Ward, Major Advisor

Chapter 0 Preliminaries

0.1 Preface

This report is the documentation of our project. It serves as a history of our process in creating the resulting computer program. It is best used as a reference for the process of creating a computer program that uses a decimation algorithm in OpenGL on Linux. It should be used by any party seeking documentation of a practical process that results in a decimating computer program.

0.2 Acknowledgments

We would like to thank Professor Matt O. Ward, Ph.D. and Professor Mark Stevens, Ph.D. for their continued support and confidence in our work, and Worcester Polytechnic Institute for providing the resources for Major Qualifying Projects such as this one: the project was exciting and demanding. We would also like to acknowledge Bjarne Stroustrup for his impressive guide The C++ Programming Language, Special Edition. The book was not only our expert reference, it was also the ultimate resource for using C++ effectively[33]. We would also like to acknowledge the two books that helped us learn OpenGL, the Red Book and the SuperBible. Jason Reposa would like to thank his family and friends for putting up with his attitude and sometimes extreme personality from spending hours on the computer. Tim Garthwaite would like to thank Mike Gesner and his other friends in the WPI Game Development Club for their support during months of nearly absent leadership.

Contents

0 Preliminaries
  0.1 Preface
  0.2 Acknowledgments
  0.3 Abstract
1 Introduction
2 Literature Review
  2.1 Mesh Simplification
    2.1.1 Decimation of Triangle Meshes
    2.1.2 Simplifying Surfaces with Color and Texture Using Quadric Error Metrics
    2.1.3 Mesh Optimization
    2.1.4 Progressive Meshes
    2.1.5 Fast and Memory Efficient Polygonal Simplification
    2.1.6 A Unified Approach for Simplifying Polygonal and Spline Models
  2.2 Analyzing Geometric Error
    2.2.1 Metro: Measuring Error on Simplified Surfaces
    2.2.2 Evaluation of Memoryless Simplification
3 The Decimation Program
  Design
    Data Structures
    Algorithm
    Brief Algorithm Comparison
    Interface
  Implementation
    Standards
    Target Platforms
    Code Organization
    Code Walk
  Deployment

4 Results
  Visual Comparison
  Geometric Error
  Time Taken to Decimate
    Computers Used
    Time Test Results
5 Analysis
  Visual Analysis
    Edge Detection
    Simplifying Models
    Smooth Shading
  Geometric Error
  Computing Platform
  Computational Complexity
  Final Analysis
6 Conclusions and Recommendations
  Conclusions
    Design
    Implementation and Deployment
  Recommendations
A Glossary

List of Figures

1.1 This example is a torso of a person. This image was produced from a method from the field of medical imagery[25].
1.2 This example is from the movie Final Fantasy: The Spirits Within. All the computer processing needed to make this movie was done before the movie was viewed, so the models were created with the highest feasible level of detail[34].
1.3 This is a monster from the popular game Quake 2. Because the game is interactive, the computer must render 2D images from the 3D model in real time, so this model has less detail to make this possible[31].
1.4 This figure shows a model of Yoda on the left and a wireframe view of the same model, showing the model's mesh, on the right. The image was generated with our program. The model was retrieved from 3DCafe.com[15].
2.1 The vertex categories. Reproduced from Schroeder et al.
2.2 The distance to the average plane. Reproduced from Schroeder et al.
2.3 The split-plane algorithm. The average plane of the removed vertex's neighbors is shown, along with the split plane, shaded, that is orthogonal to it, and the split line, bold, which will become a new edge in the model. The three lines connected by two points in the average plane of the picture are the edges of the hole that is being filled in. Reproduced from Schroeder et al.
3.1 The Triangle data structure has a list of 3 Vertex objects.
3.2 The Vertex data structure has a list of Triangle objects.
3.3 The distance to plane formula.
3.4 Find the average plane.
3.5 The average plane in point-normal form.
3.6 Calculate the distance from the vertex to the plane.
3.7 Candidate for vertex removal.
3.8 Locate the neighbors.
3.9 Retriangulate neighbors, pointing to one neighbor and removing some triangles.
3.10 Remove the vertex.
4.1 The model test001 rendered solid.
4.2 The model test001 rendered solid with wireframe lines to show edges.
4.3 The model test001 rendered with various options enabled.
4.4 The model test002 rendered solid.
4.5 The model test002 rendered solid with wireframe lines to show edges.

4.6 The model test002 rendered in points.
4.7 The model test003 rendered solid.
4.8 The model test003 rendered solid with wireframe lines to show edges.
4.9 The model test003 rendered in points.
4.10 The model test004 rendered solid.
4.11 The model test004 rendered solid with wireframe lines to show edges.
4.12 The model test004 rendered in points.
4.13 The model test005 rendered solid.
4.14 The model test005 rendered solid with wireframe lines to show edges.
4.15 The model test005 rendered in points.
4.16 The model test006 rendered solid.
4.17 The model test006 rendered solid with wireframe lines to show edges.
4.18 The model test006 rendered in points.
4.19 The model test007 rendered solid.
4.20 The model test007 rendered solid with wireframe lines to show edges.
4.21 The model test007 rendered in points.
4.22 The model test008 rendered solid.
4.23 The model test008 rendered solid with wireframe lines to show edges.
4.24 The model test008 rendered in points.
4.25 The model test009 rendered solid.
4.26 The model test009 rendered solid with OpenGL point smoothing enabled.
4.27 The model test009 rendered solid with wireframe lines to show edges.
4.28 The model test009 rendered in points.
4.29 The model test009 rendered solid (detail).
4.30 The model test009 rendered solid with OpenGL point smoothing enabled (detail).
4.31 The model test009 rendered solid with wireframe lines to show edges (detail).
4.32 The original bunny model rendered solid and with only points.
4.33 The bunny model rendered solid in eight levels of detail.
4.34 The bunny model rendered showing only points in eight levels of detail.
4.35 The results of each computer for three levels of detail (60, 80, 90 percent) for model bunny.
4.36 The results of each computer for three levels of detail (60, 80, 90 percent) for model test.
4.37 The results of each computer for three levels of detail (60, 80, 90 percent) for model hand.
4.38 The time comparison for each computer across the six largest test models for the sixty percent level of detail.
4.39 The time comparison for each computer across the six largest test models for the eighty percent level of detail.
4.40 The time comparison for each computer across the six largest test models for the ninety percent level of detail.
5.1 The model test002 rendered solid and with wireframe.

5.2 The model test002 fully decimated, rendered solid and with wireframe.
5.3 The model test003 rendered solid and with wireframe.
5.4 The model test003 fully decimated, rendered solid and with wireframe.
5.5 The model test004 rendered solid and with wireframe.
5.6 The model test004 fully decimated, rendered solid and with wireframe.
5.7 The model test005 rendered solid and with wireframe.
5.8 The model test005 decimated to eighty percent, rendered solid and with wireframe.
5.9 The model test009 rendered solid and with flat shading.
5.10 The model test009 rendered solid and with smooth shading enabled.
5.11 The decimated model rendered solid and with flat shading.
5.12 The decimated model rendered solid and with smooth shading enabled.
5.13 The model test009 rendered with flat shading on the top and smooth shading on the bottom. The two levels of detail are the original model on the left, and the decimated model on the right.
5.14 Comparison of the mean geometric error for eight levels of detail among seven mesh simplification programs[14, 13, 5, 10, 19, 3].
5.15 Comparison of the maximum geometric error for eight levels of detail among seven mesh simplification programs[14, 13, 5, 10, 19, 3].
5.16 The time taken on each test computer for ninety percent removal on the largest six test models.
5.17 The number of vertices removed per second for each of the largest six test models.

List of Tables

4.1 The maximum and mean geometric error as calculated by Metro between the original Stanford bunny model and eight progressively simplified levels of detail generated with our program.

0.3 Abstract

Rendering a three-dimensional mesh with a large number of polygons requires an immense amount of computing power. Decimation is the process of removing a large number of those polygons from the mesh in order to reduce the work a computer must do to render the model. Our project is the implementation and evaluation of a computer program that uses a non-trivial decimation algorithm to reduce the polygon count of a three-dimensional mesh.

Chapter 1 Introduction

Computer graphics has become an important aspect of many different fields, ranging from medicine and engineering to marketing and entertainment. 3D computer graphics are created in many different ways and are used for a variety of purposes. Some 3D graphics are created with digital cameras or sonography equipment by utilizing the techniques of computer vision, while others are created by artists completely from their imagination. In medicine, 3D models of human organs are created using data collected from sonography and other medical imaging devices to aid in surgery[2]. In engineering, real-world objects that are yet to be created are modeled first on computers[24]. The marketing and film industries create unique 3D models that do not exist and never will except on a computer screen[8]. In the field of computer graphics there is an ever-present concern for graphics professionals to give viewers the best representation of an object. Of course, the more detail a 3D model of an object has, the more information is needed to represent the object in a computer's memory, and the more time it will take the

Figure 1.1: This example is a torso of a person. This image was produced from a method from the field of medical imagery[25].

Figure 1.2: This example is from the movie Final Fantasy: The Spirits Within. All the computer processing needed to make this movie was done before the movie was viewed, so the models were created with the highest feasible level of detail[34].

Figure 1.3: This is a monster from the popular game Quake 2. Because the game is interactive, the computer must render 2D images from the 3D model in real time, so this model has less detail to make this possible[31].

computer to process that information and create, or render, a 2D image that can be viewed. Examples of objects could be a model made from the torso of a real person (see Figure 1.1), a character in an animated movie (see Figure 1.2), or even a monster from a video game (see Figure 1.3). All of these examples were made to give the viewer the best visual appearance possible given the constraints under which the images need to be rendered. 3D computer graphics lends itself to a hierarchical approach. When viewing in 3D, what a computer program user is viewing is called a Scene, which defines the scope of what the user can view. A Scene is made up of one or more Models, which is the term professionals in the field use to refer to any discrete object in a Scene. Models are represented in most computer programs with Meshes of polygons, usually triangles. The more triangles that make up the

Figure 1.4: This figure shows a model of Yoda on the left and a wireframe view of the same model, showing the model's mesh, on the right. The image was generated with our program. The model was retrieved from 3DCafe.com[15].

surface of a mesh, the more realistic it appears to a user. A balance must be kept between realism and performance, however, because it takes the computer longer to render an image of a model that has more triangles in its mesh. The goal of this project is to research, design, implement, and evaluate a non-trivial cross-platform program that reduces the polygon count of the three-dimensional triangular meshes of all models in a scene. The program should be released to the public as an open-source program, and be easily extensible to include other mesh-altering or mesh-utilizing algorithms and to import and export various data formats. Lowering the polygon count of meshes allows computers to render them faster, since fewer computations are needed. The method used to decrease the polygon count is known as mesh simplification or decimation. An early article documenting a decimation algorithm was written by Schroeder, Zarge, and Lorensen in the July 1992 issue of Computer Graphics, a publication

of ACM SIGGRAPH[27]. This paper has been the basis for most of the work we have done in implementing a decimating program. Mesh simplification is defined in their paper as the problem of reducing the number of faces in a dense mesh while minimally perturbing the shape. For our project we created an OpenGL[1] application that runs on Windows or Linux. The program has been released under the terms of the GNU General Public License[32]. The program is capable of rendering meshes in VRML or Inventor 2.x format.

Chapter 2 Literature Review

2.1 Mesh Simplification

2.1.1 Decimation of Triangle Meshes

William J. Schroeder, Jonathan A. Zarge, and William E. Lorensen[27]

This paper is a technical report on the decimation algorithm. The paper discusses the evaluation and implementation of the algorithm. It also defines the goal of decimation: "to reduce the total number of triangles in a triangle mesh, while preserving as accurately as possible important features."[27] This paper outlines three steps in the decimation process:

1. characterize the local vertex geometry and topology
2. evaluate the decimation criteria
3. triangulate the resulting hole

Figure 2.1: The vertex categories. Reproduced from Schroeder et al.

Figure 2.2: The distance to the average plane. Reproduced from Schroeder et al.

The paper characterizes the local geometry and topology with five different categories of vertices: simple, complex, boundary, interior edge, and corner. Simple vertices share two triangles with each of their neighbors (neighbors are vertices that are members of a triangle that the vertex under scrutiny is also a member of), and are not interior edge or corner vertices. Complex vertices have neighbors with which they do not share two triangles, but are not boundary vertices. Boundary vertices do not share two triangles with each neighbor because they lie at the edge of a mesh. Interior edge vertices are members of two edges that form sharp angles between the triangles on either side of them. Corner vertices are members of three such edges (see Figure 2.1). Only vertices of types simple and interior edge are possible candidates for removal.

Figure 2.3: The split-plane algorithm. The average plane of the removed vertex's neighbors is shown, along with the split plane, shaded, that is orthogonal to it, and the split line, bold, which will become a new edge in the model. The three lines connected by two points in the average plane of the picture are the edges of the hole that is being filled in. Reproduced from Schroeder et al.

The paper simplifies the issues of vertex removal and vertex selection. In particular, it shows that finding the vertex to remove is an iterative process and that selection is performed by finding the vertex with the smallest value of d using a distance formula. The value d of a vertex is the distance from the vertex to the best-fit average plane of the vertex's neighboring vertices. In Schroeder's and his colleagues' work, if d is less than a given threshold, the vertex is removed. The last step in the decimation process is the triangulation of the hole that is left by the vertex removal. This paper suggests using a split-plane method. The method is a recursive process that finds the neighboring vertex which is the best candidate for retriangulation (that which would produce the most uniform and least skinny triangles) and splits the hole by a plane orthogonal to the average plane that joins the starting vertex and best candidate. It repeats with another split on each side of the previous split. The recursion is defined to stop when there are only three vertices remaining on a split, since three vertices create a triangle. This method guarantees that, when successful, it will produce a

triangulation where no triangles overlap, and the hole is completely filled. See Figure 2.3. Our program implements a subset of the decimation algorithm defined in this paper. Specifically, it uses the distance-to-plane formula and does not detect complex vertices. Our program has a simple edge and corner detector that is not robust or optimal.

2.1.2 Simplifying Surfaces with Color and Texture Using Quadric Error Metrics

Michael Garland and Paul S. Heckbert, Carnegie Mellon University[10]

This paper presents a robust algorithm used for model decimation, produced by Michael Garland and Paul Heckbert of Carnegie Mellon University. The algorithm selects vertices based on the quadric error at each vertex, which can be thought of as representing how creased the surface is at the vertex. More precisely, the quadric error is the sum of squared distances to a set of planes that are initially the faces incident to the vertex, but which may not remain incident upon it as the algorithm proceeds. The algorithm also takes into account the color at each vertex or the texture used with the model. It does not necessarily, however, generate manifold surfaces (fully closed surfaces with only simple, interior edge, and corner vertices), and is fairly expensive in memory usage and speed (see the Lindstrom and Turk paper in our literature review).

The quadric error at each vertex is found by summing the squares of the distances between the vertex and the planes that make up the triangles surrounding the vertex. The algorithm performs decimation by joining pairs of vertices into one vertex, choosing the position of the new vertex between the two points that minimizes the quadric error. First, a list of pairs is generated by selecting pairs of vertices which are said to be valid for contraction. A pair is valid if the vertices in the pair represent an edge or are sufficiently close together (given a threshold parameter). The cost of contracting each pair is simply the sum of the quadric error at each point. The algorithm iteratively contracts the pair with the smallest cost, replacing occurrences of each vertex from the contracted pair with the new vertex in every pair in the list of pairs. The algorithm was extended to incorporate major differences in vertex color between vertices or texture color near the vertices into the cost calculated for each pair. The work done by Garland and Heckbert uses a different approach to mesh simplification than that of Schroeder et al. Specifically, rather than removing a vertex from the model outright, their method merges two vertices into one new vertex which may not be (and usually isn't) in the same location as either original vertex, but is instead at a new location in space that is chosen intelligently so as to minimize the total geometric error for the model overall. Choosing this location is an optimization problem in its own right.

2.1.3 Mesh Optimization

Hugues Hoppe, Tony DeRose, Tom Duchamp, John McDonald, and Werner Stuetzle, University of Washington[14]

This article describes a mesh simplification method that represents the problem of simplifying a mesh as the optimization of an energy function, resulting from research by Hugues Hoppe and his colleagues at the University of Washington. They define this energy function as the sum of three terms that help to balance two competing goals (fewer vertices and results that closely resemble the original model) while keeping changes confined to affecting the mesh only locally. The energy function is defined as:

E(K, V) = E_dist(K, V) + E_rep(K) + E_spring(K, V)

The term V is a representation in the computer's memory of the positions of the vertices in the model. The term K is a representation in the computer's memory of the relationships between those vertices as edges and faces. Their program can add or remove vertices, as will be described later, to find a minimal value of the sum of these terms, which are described here.

The first term in the energy function is the distance energy E_dist, defined as the sum of squared distances from the points X = {x_1, ..., x_n} to the mesh:

E_dist(K, V) = \sum_{i=1}^{n} d^2(x_i, \phi_V(|K|))

This value increases as the current representation moves further away from its original form. This means that the program must keep the original mesh in memory so that it can iteratively compare the current representation to the original one input by the user. The second term in the energy function is the representation energy E_rep, defined as:

E_rep(K) = c_rep m

where m is the number of vertices in the mesh and c_rep is a user-selected threshold value that determines how much this term affects the result, and thus how important the concern for a compact representation is to the user. The third and last term, E_spring, the spring energy of the mesh, with a spring of rest length zero along each edge, is defined as:

E_spring(K, V) = \sum_{\{j,k\} \in K} \kappa \|v_j - v_k\|^2

This ensures that sharp areas of the mesh are not penalized (and thus oversimplified) and helps to guide the optimization to a desirable local minimum. The value κ is set by the program to a small value. The entire optimization is done iteratively with a decreased value of κ for each iteration, fine-tuning the result when fewer calculations will be necessary. The problem was broken down into two nested subproblems: an inner problem of reducing the representation energy by removing or replacing vertices, and an outer problem of reducing the distance energy by choosing the best of a few courses of action selected by the inner problem. Vertices are removed, added, or replaced using one of three moves from one iteration to the next: edge collapse, which merges two vertices along an edge into one vertex, thus removing one vertex, one edge, and two triangles; edge split, which splits a vertex into two vertices, adding one vertex, one edge, and two triangles; and edge swap, which moves an edge from between two vertices to between two of their neighbors, thus re-orienting two triangles in the mesh and keeping the numbers of triangles, edges, and vertices the same. It is interesting to note that edge split and edge swap do not reduce the representation energy, but may reduce the distance energy or spring energy. For example, an edge swap may reorient long triangles in the direction in which the mesh is smoother, a result that would relieve some spring energy, which was identified by Hoppe et al. as desirable. This could result in a lower total energy for the mesh.

There are a number of issues that this method does not address, as identified in Lindstrom and Turk's most recent work[19]. First, the evaluation of the energy for the current mesh representation is costly, and the need to perform the entire operation multiple times, each time with a reduced spring constant, means the entire operation cannot be done in linear time. Second, the method used to select which move to make is unintelligent: rather than guessing that smooth areas should be reduced, moves are selected purely at random and are accepted if they decrease the total energy or rejected if they increase it. Third, the program does not choose the optimal placement for a vertex position during the edge collapse or edge split operation; instead it places the new vertex at the position of one of the two endpoints of the edge (the equivalent of a vertex removal) or at its midpoint, choosing whichever of the three placements produces the lowest total energy. Finally, the method consumes more memory than necessary by keeping a copy of the original mesh in memory for comparison to the current mesh at each iteration. Although logically it would seem that this would yield better results than comparing to the previous iteration's representation, it appears that in practice, finding good placement for a vertex that results from an edge collapse produces similar results to this mesh optimization method in far less time. The next review shows more mature methods of placing new vertices.

2.1.4 Progressive Meshes

Hugues Hoppe, Microsoft Research[12]

With the growing expectation for realistic visual representations, highly detailed geometric models are necessary. As previously stated, highly detailed geometric models imply complex meshes that have the disadvantage of being expensive to store, transmit, and render. This expectation of realism has led Hoppe to develop a new representation for triangle meshes that minimizes these disadvantages. The Progressive Mesh (PM) representation benefits greatly over common mesh representations, such as VRML[6]. We define a common mesh representation as a file that contains vertex information in 3D space and a triangle list that is an index into the vertex information. A Progressive Mesh is a mesh representation that can progress through many levels of detail seamlessly. To do this it uses two basic mesh transformations. A PM uses a previously defined mesh transformation called an edge collapse[14]. We discussed edge collapses in the previous paper we reviewed. An edge collapse is what occurs when two neighboring vertices combine to make one vertex. An algorithm that uses the PM representation implements edge collapses to optimize a mesh. However, making the PM representation progressive implies that the algorithm must also implement the inverse of an edge collapse. A vertex split has the exact opposite effect of an edge collapse: it creates two vertices from a single vertex. These two mesh transformations are

the foundation for a PM representation. Mesh optimization occurs by using a combination of edge collapses. Mesh rebuilding occurs by using a combination of vertex splits. A PM is a mesh data representation that is capable of optimizing or rebuilding itself. Optimizing and rebuilding a mesh is one of the major achievements over common mesh representations, since a PM can recreate the original model from any level of detail. This means that this data structure creates a lossless representation. This is a benefit in two ways. First, a mesh can be compressed to an extremely low level of detail, but can be rebuilt back to the original mesh with a combination of vertex splits. The second benefit is that the extremely low level of detail model can be transmitted across data lines to a remote device (most likely another computer that recognizes the data as a polygonal mesh), and with a combination of vertex splits it can be rebuilt on the remote computer. This benefit is what Hoppe defines as Progressive Transmission. The second major achievement is that the paper introduces a new simplification method that is implemented using the Progressive Mesh data representation. This new procedure simplifies based on overall appearance, not just the geometry of the mesh surface. It is based on an earlier paper by Hoppe, Mesh Optimization[14]. Similar to Mesh Optimization, it uses a combination of energy functions to create an energy metric that is used to measure the accuracy of a simplified mesh.

The simplification algorithm applies a sequence of edge collapse transformations to the original mesh. For each edge collapse, the program writes the corresponding vertex split to a file. In addition, after it has fully optimized the mesh, it writes the base mesh to a file. Using that file, it can recreate the original mesh by applying the vertex splits that were recorded. We see this new simplification method as an example for creating better algorithms that use Progressive Meshes. The third major achievement is the ingenious use of Progressive Meshes for Geomorphing. A Geomorph is a smooth visual transition between two meshes. When a mesh is modified by a vertex split or edge collapse, it can be restored to its previous state by using the inverse transformation: each single transformation has a single corresponding undo transformation. Since each individual transformation can be transitioned smoothly, so can the composition of any sequence of them. Because a PM is a sequence of edge collapses or vertex splits, any two meshes of a PM representation can be used to make visually smooth transitions.

2.1.5 Fast and Memory Efficient Polygonal Simplification

Peter Lindstrom and Greg Turk, Georgia Institute of Technology, Atlanta[19]

This article describes the work done by Peter Lindstrom and Greg Turk at the Georgia Institute of Technology on polygonal model simplification. Their algorithm calculates error metrics for all neighboring pairs of vertices (edges) and collapses the edge with the lowest error into a single vertex. This vertex is placed in a position that keeps the total volume of the model constant and satisfies a number of other error-reducing optimizations that reduce mean geometric error. What makes their work unique in the field is that their algorithm, while remaining very accurate compared to peer work, makes greedy decisions for each edge collapse and vertex placement, and does not keep track of any information about what the model used to look like, making it extremely fast and memory efficient. Edge collapse is argued to be more attractive than vertex removal since it can preserve the geometry of the model better and does not require the use of triangulation algorithms, since it does not create holes in the model. We argue that without a further look at the resulting triangles at each collapse, some vertices may be ordered incorrectly, generating models which cannot be backface-culled (and thus are not as efficient as they could be with a more careful approach).

This paper is the only one in the literature we have seen that shows a specific error quantification method for mesh simplification. We were introduced to a program called Metro that estimates mean geometric error among levels of detail for 3D models, a useful comparison tool.

2.1.6 A Unified Approach for Simplifying Polygonal and Spline Models

M. Gopi and D. Manocha, University of North Carolina at Chapel Hill[11]

This article overviews the work done by M. Gopi and D. Manocha, in the Department of Computer Science at the University of North Carolina at Chapel Hill, in model simplification. They first created a method of simplifying B-spline models, and then generalized the use of their algorithm by creating a multi-pass method for simplifying any 3D model. Gopi and Manocha's program runs in three passes. The first pass creates an intermediate C-Model representation from a polygonal mesh, triangular mesh, or a tensor product patch model. This C-Model representation is a collection of triangular spline patches. The second pass merges edges of the patches and then performs further simplification using pattern matching of patch adjacencies from a lookup table of known simplifications that are safe to perform. The final pass performs tessellations to revert the simplified model to its original form. The approach appears to be very flexible and fairly safe, but not overly efficient

with time or space. Gopi and Manocha's implementation was written in C++ using arrays. Theirs is a work in progress, citing work that will influence the next iteration of development.

2.2 Analyzing Geometric Error

2.2.1 Metro: Measuring Error on Simplified Surfaces

P. Cignoni, C. Rocchini, and R. Scopigno[4]

Metro is a tool that compares two levels of detail of a mesh by calculating the volume between the two meshes. The need for this tool is stated by the authors: "The field of surface simplification still lacks a formal and universally acknowledged definition of error." The paper gives an overview of the reasoning behind this tool, and it details the ideas employed to create it. Metro is a necessity for authors of simplification algorithms. The ultimate reason for proclaiming this tool a necessity is that it produces visual and numerical output that describes comprehensible differences between an original and a simplified mesh. Metro compares the output of a simplification program to the original mesh, without taking into account the type of algorithm the program uses. Thus, the use of the Metro tool can help to eliminate bias about which simplification method was used[19].

Metro uses approximate distance to evaluate the difference between two meshes, defined as

    e(p, S) = min_{p' in S} d(p, p')

where p is a point on one mesh, S is the surface of the other mesh, and d() is the distance between two points. To find the approximate distance, Metro takes a user-specified sampling step, which produces a number of sample points p on the surface of the original mesh S. At the user's option, these are distributed either uniformly, using scan conversion of the triangular faces, or randomly over the surface to produce a mean result with Monte Carlo integration (the two yield similar results in most cases). These points are used to get a precise approximation of the original mesh surface. The approximate distance is the distance from one of these sampled points to the closest face on the simplified mesh. This distance is minimal with respect to the original mesh. The program outputs two values. The first is the maximum distance E, defined as

    E(S_1, S_2) = max_{p in S_1} e(p, S_2)

the maximum over all the calculated distances between each sample point and the approximated surface. The second is the mean distance, defined as the surface integral of the distance divided by the area of the original surface:

    E_m(S_1, S_2) = (1 / |S_1|) * integral over S_1 of e(p, S_2) ds
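As a hedged illustration of the two output values above - not Metro's actual implementation, which computes exact point-to-triangle distances - the measures can be sketched in C++ by approximating each surface with a dense set of sample points. The `Point`, `approxDistance`, `maxDistance`, and `meanDistance` names are ours, chosen for this sketch:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// A point in 3-space.
struct Point { double x, y, z; };

// d(): Euclidean distance between two points.
double dist(const Point& a, const Point& b) {
    return std::sqrt((a.x - b.x) * (a.x - b.x) +
                     (a.y - b.y) * (a.y - b.y) +
                     (a.z - b.z) * (a.z - b.z));
}

// e(p, S): distance from p to the nearest sample of surface S.
// Here S is crudely approximated by a set of sample points rather
// than by exact point-to-triangle distances, as Metro uses.
double approxDistance(const Point& p, const std::vector<Point>& surface) {
    double best = dist(p, surface.front());
    for (const Point& q : surface)
        best = std::min(best, dist(p, q));
    return best;
}

// E(S1, S2): maximum of e(p, S2) over all samples p of S1.
double maxDistance(const std::vector<Point>& s1, const std::vector<Point>& s2) {
    double worst = 0.0;
    for (const Point& p : s1)
        worst = std::max(worst, approxDistance(p, s2));
    return worst;
}

// E_m(S1, S2): mean of e(p, S2), a discrete stand-in for the
// surface integral divided by the area of S1.
double meanDistance(const std::vector<Point>& s1, const std::vector<Point>& s2) {
    double sum = 0.0;
    for (const Point& p : s1)
        sum += approxDistance(p, s2);
    return sum / s1.size();
}
```

With denser sampling, the point-set approximation converges toward the surface-based definitions Metro actually evaluates.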

Metro also uses the approximate distance to calculate several important characteristics of a mesh that numerically show differences in a simplified mesh's topology, its handling of boundaries (by comparing boundary edge totals between the two meshes), and the total volume between the two meshes.

Evaluation of Memoryless Simplification
Peter Lindstrom and Greg Turk, Georgia Institute of Technology, Atlanta[20]

This paper compares the results of the memoryless simplification algorithm developed by its authors in [19] with five other simplification methods that retain geometric history. Memoryless simplification makes no comparisons between the partially simplified model and the original model while simplification is under way. The authors demonstrate that geometric history (the original model) need not be retained in memory to deliver good approximations (a philosophy we adopted in our algorithm). The advantages over more complicated approaches are smaller memory requirements and (generally) faster simplification. To compare the results of their algorithm with those of their peers, they opted to use the Metro[4] tool and public-domain implementations of the other algorithms (rather than writing their own implementations). They hoped to remove some bias by using tools and programs they did not write, so that the evaluation tool was written by authors who are not counted in the evaluation,

and the other programs have (hopefully) been optimized by their authors better than the memoryless simplification authors could have optimized them. They also point out that by using a tool in the public domain, others can compare their own programs to these results. In their comparison, they created eight levels of detail of four models with each of the six programs. Two models were completely closed (a horse model and a sphere), and two contained boundaries (the Stanford bunny and a portion of the sphere). Next, they ran the Metro program to compare the original model to each level of detail, for each of the four models and each of the six programs. Finally, they compared the results of this Metro evaluation using line graphs with a logarithmic scale to accommodate the widely varying errors. We ran the Metro program on the bunny model for eight levels of detail using our own program (see the Results chapter) and borrowed the data from this reviewed paper for use in our own evaluation (see the Analysis chapter).

Chapter 3 The Decimation Program

3.1 Design

Data Structures

The easiest way to implement the decimation algorithm is to create a ring structure[27]. This consists of a data structure in which triangles know which vertices they use (Figure 3.1), and vertices know which triangles they are a part of (Figure 3.2). This ring structure allows the decimation algorithm to remove vertices and reconstruct the triangle mesh with ease.

Algorithm

Our decimation algorithm works by removing vertices from areas of the mesh that are relatively flat. Each vertex has a list of vertices called neighbors. We define neighbors as the other vertices in the vertex list that are connected to the vertex by an edge of a triangle. An average plane can be calculated from these neighbors. This is the plane that is the closest approximation to all of the

Figure 3.1: The Triangle data structure has a list of 3 Vertex objects.

Figure 3.2: The Vertex data structure has a list of Triangle objects.

    d = |(V - p) . N|     (3.1)

Figure 3.3: The distance-to-plane formula.

neighbors in point-normal form. The point is the average point of the neighbors (p), and the normal is the average of their normals (N). p is found by averaging each component (x, y, and z) of the neighbors' positions; N is found by averaging each component of their normals. After the average plane has been found, the distance between the removal candidate vertex (from which the neighbors were found, and which the average plane did not take into account) and the plane is calculated. If the distance is within a threshold range, the vertex is removed. The distance is found using the formula in Equation 3.1, where V is the candidate vertex for removal, p is the average point on the plane, and N is the unit normal of the plane.

Diagrams

Following is a series of images that depict how the distance to a plane is calculated.

Step 1, Figure 3.4: Find the average plane, in point-normal form (Figure 3.5).

Step 2, Figure 3.6: Find the distance from the vertex to the plane. This distance can be expressed as Equation 3.1.
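The average-plane construction and Equation 3.1 can be sketched as follows. The `Vec3` type and function names here are illustrative stand-ins, not the actual classes from the program:

```cpp
#include <cmath>
#include <vector>

// A 3-component vector, used for both positions and normals.
struct Vec3 { double x, y, z; };

// Component-wise average of a list of vectors: gives p from the
// neighbors' positions, or the unnormalized N from their normals.
Vec3 average(const std::vector<Vec3>& pts) {
    Vec3 a{0, 0, 0};
    for (const Vec3& v : pts) { a.x += v.x; a.y += v.y; a.z += v.z; }
    a.x /= pts.size(); a.y /= pts.size(); a.z /= pts.size();
    return a;
}

// Scale a vector to unit length (assumes nonzero length).
Vec3 normalize(Vec3 v) {
    double len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / len, v.y / len, v.z / len};
}

// Equation 3.1: distance from candidate vertex V to the average
// plane through point p with unit normal N, d = |(V - p) . N|.
double distanceToPlane(const Vec3& V, const Vec3& p, const Vec3& N) {
    return std::fabs((V.x - p.x) * N.x +
                     (V.y - p.y) * N.y +
                     (V.z - p.z) * N.z);
}
```

For a candidate vertex whose neighbors all lie in a flat region, d is near zero, which is exactly why the algorithm prefers such vertices for removal.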

Figure 3.4: Find the average plane.

Figure 3.5: The average plane in point-normal form.

Figure 3.6: Calculate the distance from the vertex to the plane.

Our algorithm calculates the distance to the plane for each vertex when the file is first loaded, and again whenever one of the vertex's neighbors is removed. Whenever a neighboring vertex is removed from the list of vertices, the average plane changes, so the distance must be recalculated using the new average plane. We recalculate the distance on each neighbor removal so that the distance is always current.

The following is a series of images that depict the method used to remove a single vertex from the list of vertices.

Step 1, Figure 3.7: Find the best candidate in the model. Our algorithm iterates through the entire list of vertices and finds the Vertex that has the smallest distance to the plane, which we previously defined as d.
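Step 1 amounts to a linear scan for the smallest d. A minimal sketch follows; the `Vertex` fields here are hypothetical stand-ins for the real data structure, which also tracks neighbors and incident triangles:

```cpp
#include <limits>
#include <vector>

// Hypothetical slice of the Vertex data structure: just the cached
// distance-to-average-plane and a validity flag.
struct Vertex {
    double d;        // distance to the average plane of its neighbors
    bool removable;  // false for corners and other invalid candidates
};

// Scan the whole vertex list for the removable vertex with the
// smallest d value; returns its index, or -1 if none remains.
int bestCandidate(const std::vector<Vertex>& vertices) {
    int best = -1;
    double bestD = std::numeric_limits<double>::max();
    for (int i = 0; i < (int)vertices.size(); ++i) {
        if (vertices[i].removable && vertices[i].d < bestD) {
            bestD = vertices[i].d;
            best = i;
        }
    }
    return best;
}
```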

Figure 3.7: Candidate for vertex removal.

Figure 3.8: Locate the neighbors.

Figure 3.9: Retriangulate the neighbors, pointing them to one neighbor and removing some triangles.

Step 2, Figure 3.8: Get the list of neighbors of the best candidate vertex. This is easy, since our Vertex data structure keeps a current list of its neighbors in an easily iterated STL set.

Step 3, Figure 3.9: Retriangulate. Our retriangulation algorithm is basic. It exploits a property of manifold models: if you replace the best candidate vertex with one of its neighbors in all of the triangles that use it, two things happen. The model remains manifold, and two triangles are removed. In the previous image there were five triangles; in this image there are three.

Step 4, Figure 3.10: Finally, we remove the vertex from the model. The model now has one fewer vertex and two fewer triangles.
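The replacement retriangulation of Step 3 can be sketched with triangles stored as index triples. This illustrative version omits the overlap and manifold checks the real program performs when choosing the replacement neighbor:

```cpp
#include <array>
#include <vector>

using Triangle = std::array<int, 3>;  // indices into the vertex list

// Replace the removed vertex with the chosen replacement neighbor
// in every triangle; triangles that end up referencing that neighbor
// twice are degenerate and are dropped (two of them, for a closed
// fan around the removed vertex).
std::vector<Triangle> retriangulate(const std::vector<Triangle>& tris,
                                    int removed, int replacement) {
    std::vector<Triangle> out;
    for (Triangle t : tris) {
        for (int& v : t)
            if (v == removed) v = replacement;
        // keep only non-degenerate triangles
        if (t[0] != t[1] && t[1] != t[2] && t[0] != t[2])
            out.push_back(t);
    }
    return out;
}
```

Running this on a five-triangle fan around the removed vertex leaves three triangles, matching Figures 3.8 and 3.9.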

Figure 3.10: Remove the vertex.

The steps depicted above form one complete cycle. This cycle is repeated for the number of vertices the user wants to remove. The repetition is known as decimation.

Brief Algorithm Comparison

Unlike previous methods discussed in our literature review[12], our program is lossy: no information about removed vertices or triangles is maintained. Our algorithm chooses the best candidate based solely on the distance to the average plane. We would have liked to incorporate appearance-based vertex removal[12] in our algorithm had time permitted. Other algorithms we researched based vertex removal on volume[10, 14]. One algorithm incorporated textures to aid in the viewing of an optimized

mesh[10], and some alter the positions of other vertices to make the removal appear less severe[10, 14, 19].

Interface

The command line interface (CLI) gives the user two options for decimation. We designed the CLI so that a user can run the program without a graphical user interface or user input. This means it can be run from a terminal, and all testing can be automated with a script. The two options are natural for an optimization program: if the user wants to remove a specific number of vertices, the user can tell the program that number explicitly; if the user is unsure of the number but wants to remove a specific percentage, our program satisfies that requirement as well.

We also designed a graphical user interface (GUI) for the program. This interface allows a user to use keyboard commands and a pointing device such as a mouse to view a visual representation of the model before and after decimation. The GUI was designed with only one option for decimating the models in a scene because of a limitation in the library we used to implement it (see the next section). This option is equivalent to the command line option for decimating to a specific percentage. In this case, the algorithm runs until the given percentage of vertices has been removed

from the model. Our program offers many features that allow the user to analyze the program graphically. Using the mouse, a user can change three other sets of options.

The first set contains settings for the model. It includes options for viewing the normals at the center of each triangle or at each vertex in the mesh, which is helpful for seeing where the triangles are and how many there are. The user can turn lighting on or off, which can make the normals easier to see. There are also options to toggle certain features of the model. Any model has three distinct features: triangles, edges, and points. Correspondingly, there are options to render each one in any combination. This is useful when a user needs to see whether certain vertices have been removed; in that case the user would select render points and de-select the others. If the user then wants to view the mesh in wireframe mode, the user can de-select render points and select render edges.

The second set of options adds or removes viewports. When you add a viewport, the program displays the original mesh in the same window and automatically resizes to fit both viewports. At most four viewports can be displayed at any one time. Multiple viewports are useful for a before-and-after view (using the model options described above) or for seeing the model from more than one viewpoint. In this set of options the user can

also remove viewports; this only works if there is more than one viewport.

The third set of options controls navigation. The default is model control, which is analogous to grabbing an object and twisting it around its center. The other option is eye control, which is similar to a first-person-shooter computer game: you are the navigator and move around the model, as you would walk around a real-world object. This option is a little harder to control but allows a better perspective in some cases.

3.2 Implementation

Standards

OpenGL[1, 26]

We chose OpenGL for its wide availability and support. As the name implies, it is an open standard, amended by a board of leading graphics industry professionals. We chose it partly for its easy learning curve and because it is what the WPI graphics course used, and partly because any graphics hardware can run the program as long as the operating system supports the standard set of OpenGL calls: if the hardware cannot handle a call, the operating system invokes a software implementation instead. This fallback is slower and can result in poor performance, but most newer graphics cards have

at least full OpenGL 1.1 compliance. We also chose it because we wanted the program to run on Linux and did not want to write our own graphics functions.

OpenGL Utility Toolkit (GLUT)[1, 18]

The OpenGL Utility Toolkit (GLUT) is a standard used most often in educational settings or for prototyping, due to its very small feature set (no graphical user interface widgets such as buttons, file menus, or dialog boxes) and relative slowness (direct window manager routine calls are faster than translation from GLUT calls at run time). For these reasons, it is not often used in industry. However, there are compelling reasons to use it in research. Programs written using GLUT can run on many different platforms without modification. The most important aspect of cross-platform design is avoiding dependency on a platform-specific toolkit such as Windows's Microsoft Foundation Classes, the Gimp Tool Kit (used in GNOME-based window managers), the Qt libraries (used in the KDE window manager), or any other graphical toolkit library. GLUT also allows very rapid development of a user interface for OpenGL programs. GLUT is limited in that it recently dropped support for dialog boxes. This means that functions for opening and saving VRML files under a file name entered in the user interface are not supported in our program. The limitations of GLUT are apparent in our GUI design: the lack of dialog boxes and text boxes prevents our GUI from offering the option

to decimate to a certain number of vertices. This is a consequence of our own desire to make the program cross-platform. If we had designed our interface for a specific platform, we could have avoided this problem, but our program would not be immediately available to those on other platforms or compilers, such as users of a Microsoft Windows[7] based operating system.

C++ and the Standard Template Library[33]

We were determined to use the most compatible standards. By using C++ and the Standard Template Library (STL), we were able to port the code to Microsoft Visual C++ with minimal additional code. This allows the code to be used on virtually any configuration, given a few prerequisites: the code should compile and run on any computer with a compiler that conforms to the latest ISO C++ standard, has OpenGL and GLUT support, and provides a good STL implementation. OpenGL and GLUT are included with many commercial systems.

VRML and Open Inventor[6]

Virtual Reality Modeling Language (VRML) is a standard for highly portable model files. It has been used for many web applications and has proven a successful standard. It also has the benefit of being a text format, so anyone who wants to change or read a model file can do so with ease. The standard is heavily documented and is used in many professional modeling packages. Open Inventor is a closely related format that meets

more general-purpose demands. It can handle much more modeling detail, such as environment mapping and bump mapping, and can be used in a compressed format that saves disk space and memory. We chose to support VRML 1.0 and 2.0 and the ASCII version of the Open Inventor standard (which is fundamentally similar to VRML 2.0) for input files because of the wide availability of good test data for our algorithm, the ease of development, and the human-readable, editable data, in case we wanted to tweak the models while debugging the program.

Target Platforms

Linux

We chose the Linux[9] operating system as our main development platform for many reasons. It is robust in development features such as version control with the Revision Control System (RCS) and its front end, the Concurrent Versions System (CVS). It runs on hardware that is available to us, and it offers full memory protection, unlike Microsoft Windows 98[7] and earlier versions. It was more stable than the other operating systems we tried while our programs were under development. We are also familiar with its development features and debugging tools. Another benefit of using Linux as the main development platform is the ease of porting to other operating systems: since we used open standards, free development environments with the same tools we use (Makefiles, cvs, gcc, and g++) are available for many operating systems, including the GNU tools for

other Unix variants and for Microsoft Windows through Cygwin[16]. Linux and its GNU tools (including full-featured development programs, compilers, and debuggers) are also free - a major benefit. They are not only free in price: since the source code is available for Linux and all GNU tools, users can assure themselves that the programs perform exactly as their authors promise, and they are extended further freedoms because of this. If anything does not work to the user's satisfaction, any of the programs can be altered directly. Finally, if users wish to understand more about how the implementations of the open standards they use actually work (such as STL containers), they can consult the implementation source code directly to resolve discrepancies or ambiguities in the standards' descriptions (such as whether those containers store references to, or copies of, the data passed into them).

Microsoft Windows

Microsoft Windows[7] is the most commonly used platform for desktop and workstation computers. By ensuring that our program compiles and runs at full speed on this platform, we reach a larger audience than we would by supporting only Unix variants. We made sure that the source code for our program compiles both in Microsoft Visual C++ and with Cygwin[16] (an emulated Linux build environment that creates Win32 binaries) with the proper libraries installed (OpenGL[1] and its utilities, and a better STL than Visual C++ supplies).

3.2.3 Code Organization

We organized the code in a logical manner, structuring the classes we created as unique, individual entities. First we created the simple data structure classes: Color, Normal, Point, Vector, Vertex, and Light. These classes are simple in that each correlates directly to a simple graphical structure. They are also the most generic, in that they can be reused in other applications; a graphing calculator, for example, could use any one of them. Next we created the more complex data structure classes: Triangle, Model, Scene, Viewport, and ViewportController. These classes are more specific to the task of our program (mesh decimation) but could still be used in applications where similar concepts are needed, such as a rendering engine for an interactive entertainment application. Note that these classes depend on the implementation of the simple data structures. Finally, we created the single program code block, main. This code determines the function of the program as a whole: setting up instances of the VRML parser, setting up the user interface, and handing off control of that interface to GLUT.

Code Walk

Initialization

This decimation program is fairly complex, but it can be understood as a set of logical steps. This section is useful for anyone interested in a more in-depth explanation than what has previously been offered in this paper.

There are six main steps that occur when the program starts. The first step is the parameter check: the program determines what it has been passed on the command line. There are several options a user can specify. The set of acceptable parameters is called the usage. The usage of our program is as follows:

    algo filename [action] [action-options]

    -h              prints this message
    --help          prints this message
    filename        the VRML file to open

    [action]
    gui             DEFAULT
    nogui

    [action-options] (options for the "nogui" action)
    -o filename     the name of the VRML file to save as
                    DEFAULT: out.wrl
    One of the following is MANDATORY:
    -p number       percent to decimate to
    -v number       vertex count to decimate to

Typical usage would be:

    algo model.wrl

Typical usage for nogui would be:

    algo model.wrl nogui -p 100 -o output.wrl

The user must specify the filename that contains the VRML or Open Inventor ASCII code. This launches the program in GUI mode using GLUT, with interactive controls. If the user specifies NoGUI mode, the program

will decimate the model under the constraints given on the command line and write the output to a file, whose name can also be given on the command line.

The second step is to parse the VRML or Open Inventor file. Before parsing takes place, we must first find the file, or at least see whether it exists. Given that the file exists and follows one of the standards, the parsing process is simple and direct. The most important information in the file is the vertices and triangles. Any other information absent from the file is unimportant and is calculated by our program from a set of defaults. For instance, if no vertex ordering is specified, we assume counter-clockwise; if no normals are given, the program calculates them in the next step.

The third step is the creation of the model. While reading the input file, the previous step fills in temporary data structures that are used in this step. Here, anything missing from the file, such as vertex normals or colors, is calculated or given default values, so there is at least the minimal set of data for the model. In this process of filling in the blanks, the data structures found in the Model class are created.

The fourth and fifth steps are simple but must be done in order for GLUT to work. The fourth step is the creation of the user menus. The menus are the command interface for the user in GUI (interactive) mode. GLUT provides several functions to create these menus directly. To create a menu, a programmer calls a combination of glutAddMenuEntry(), glutAddSubMenu(), and finally a

call to glutAttachMenu(). The first two functions define the structure of the menu, while the last attaches the menu to a mouse button. For instance, after the call glutAttachMenu(GLUT_RIGHT_BUTTON), the menu pops up when the right mouse button is clicked in the program window. This is what our program does.

The fifth step is the registration of callback functions. A callback function is a function that allows an external library, in this case GLUT, to link to functions in this program, so the library can control event handling. To use callbacks, you create a function with a special purpose, for instance rendering to the display. After creating this function, you register it with GLUT to indicate that you have created a function for the specific purpose of rendering; the GLUT function you call is glutDisplayFunc(myRenderFunction). GLUT provides several types of callback functions. Some examples are glutReshapeFunc() for when the program window is resized, glutKeyboardFunc() for when the user presses a key, and glutMouseFunc() for handling mouse events.

The sixth and last step is the most important when using GLUT: without it, event handling would not occur. Simply put, this step gives control to GLUT, which is accomplished by calling glutMainLoop(). In our program we handle several events. Since our program can be controlled through the menu or keyboard, we have mouse and keyboard events. There are also two less obvious events that need to be defined: the events for rendering. They are

used in two different situations. The first is obvious: the graphics need to be displayed whenever the model is updated or moved, or when the window needs to be redrawn because, for instance, another window was dragged over it and off again. For this reason, GLUT has a callback for a display event. The other display-related event is the idle display event, which occurs when no action is being performed in the program. Since our program does not animate of its own accord - only when a user is interacting with it or the window needs to be redrawn - we do not need to handle the idle event.

Event Handling

When the program runs in the default (GUI) mode, control of the program is passed to GLUT. Now we will discuss how we handle events, such as requests from the window manager to redraw the contents of the window, or user actions such as choosing an option from the menus we set up. In the last section, we registered callback functions with GLUT. GLUT handles the operating system's events and translates them into a common event system that works on any operating system. When a user performs an action such as pressing a key, GLUT calls the function we registered and passes parameters that help us handle the event, such as which key was pressed or how far the user dragged the model with the mouse cursor. When the program starts, usage information is displayed on the console. We will not discuss the details of these callback functions here; they are described

in the programmer's reference manual included in the source distribution and on the project homepage (see Deployment).

Decimation

The purpose of this project is to reduce the complexity of triangular meshes. We accomplish this by implementing a decimation algorithm described in the 1992 paper Decimation of Polygon Meshes by William Schroeder and his colleagues[27]. In this section, we describe how our implementation of the algorithm works and why we made the choices we did.

First, the program receives instructions to decimate the scene in memory, either from the command line or from a GLUT menu callback. The user chooses to decimate either an exact number of vertices or a percentage of the original number of vertices. If a percentage is chosen, we immediately calculate the exact number to remove, so that our decimation loops execute identically in either case. When the program receives the message to begin decimation, it calls the decimateEvent function in ViewportController, passing it a number and a type of decimation, either PERCENTAGE or NUMBER. decimateEvent in turn calls the decimate function of the appropriate viewport (currentViewport, the one the event came from, since there may be more than one viewport open in the same window, for visual comparison, for example).
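The percentage-to-count conversion might look like the following sketch. The function name `getMaxToRemove`, its clamping rule, and the reading of the percentage as a removal percentage are our assumptions for illustration, not the program's actual getMax code:

```cpp
#include <algorithm>

// The two decimation request types passed to decimateEvent.
enum DecimationType { PERCENTAGE, NUMBER };

// Convert a decimation request into an exact vertex count, clamped
// so that invalid candidates (corners, etc.) are never consumed.
// NOTE: treating 'amount' as the percent of vertices to REMOVE is
// an assumption of this sketch.
int getMaxToRemove(DecimationType type, int amount,
                   int totalVertices, int invalidCandidates) {
    int requested = (type == PERCENTAGE)
        ? (totalVertices * amount) / 100   // percentage of the original count
        : amount;                          // explicit vertex count
    int available = totalVertices - invalidCandidates;
    return std::min(requested, std::max(available, 0));
}
```

Resolving the request to a single count up front is what lets the removal loop run identically for both request types.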

The decimate function in Viewport creates an instance of the Decimate class and calculates the maximum number of vertices it should remove, using a helper function in Viewport called getMax. getMax takes the number and type passed to the decimate function and returns a number of vertices to remove, making sure it does not exceed the number of valid candidate vertices (corners are invalid, and the four vertices with the highest distance-to-average-plane values are invalid, for example); this ensures we are left with valid Models in the Scene when decimation is complete. The decimate function then calls the remove function of the new Decimate object, passing it the Scene and the number of vertices to remove. We chose to create instances of the Decimate class, rather than using static functions in the class, to take advantage of memory cleanup when the object goes out of scope. We calculate the maximum vertices here in order to keep the remove function in Decimate slim.

The remove function in Decimate contains a large loop that exits once it has removed, or failed to remove, the number of vertices it was told to remove. This loop starts by finding the vertex with the smallest d value. It does this by stepping through the Vertex set in Scene, looking for the smallest distance-to-average-plane. We must do this manually because none of the STL implementations we used kept our sets ordered the way we needed; if they had, we would simply remove from the front of the set. Next, the remove function calls the helper function removeVertex, passing it the Model that contains the Vertex and the Vertex itself. removeVertex

(which we will describe later) tries to remove the Vertex and notifies the remove function of success or failure. The last thing the remove function does is check the number of vertices in the Model it removed the Vertex from: if fewer than four remain, the Model, and all its Vertex members, are removed from the Scene.

The removeVertex function first checks the number of neighboring vertices the Vertex has. If it has only one to three, removeVertex removes them from the Model and Scene and removes the Triangles that are involved; no hole results from this removal, so no retriangulation needs to be done. If there are more than three neighbors, it does not remove the Vertex immediately. Instead, it first tries to retriangulate the hole that would result if the Vertex were removed. It does this using a replacement algorithm, replacing all occurrences of the removal candidate Vertex in the involved Triangles with one of its neighbors, called the head. It tries each neighbor as the new head, detects errors that would result (such as overlapping triangles), and chooses a head that yields a good result. If it finds a good head, it removes the Vertex, performs the retriangulation, and returns control to the remove function, indicating that the removal was successful. Otherwise, it immediately returns control indicating failure.

A Vertex is removed by a call to the Model class's removeVertex function, passing the Vertex to remove. This function actually deletes the instance of the Vertex class from memory. When Decimate's removeVertex function removes a Triangle, it deletes it from memory immediately. When the retriangulation is performed,

each neighbor is notified via a call to its removeNeighbor function, to which Decimate's remove function passes the removal candidate Vertex before the Vertex is removed. At the end of the removeNeighbor function, each neighbor recalculates its distance-to-average-plane value, ensuring that Decimate's remove function always chooses the Vertex with the smallest distance value, since that value may change when one of a Vertex's neighbors is no longer considered in the calculation.

Output and Wrap-Up

The decimation program has two possible forms of output. The first is a VRML 2.0 file of the Scene in memory (of the current Viewport, if the program is running in GUI mode). This is produced either on the command line in NoGUI mode with the -o <filename> argument, or by pressing the F3 key in GUI mode, which saves the file as out.wrl (we cannot gather a filename at runtime in GUI mode, since GLUT does not support text input boxes in its most recent version). The resulting VRML files can be compared using a geometric error measuring utility such as Metro[4]. The other form of output is the picture on screen of the Scenes in the Viewports themselves, when the program is running in GUI mode. A screen capture utility can yield results for subjective, visual comparison.

3.3 Deployment

The program that resulted from our project is released to the public under the GNU General Public License[32], Version 2.0 or, at the user's choice, any more recent version. We placed the source code and binary packages for Linux/i386 and Win32 on SourceForge[21] for public access. The design page, with documentation for programmers who wish to use our project's library, as well as an HTML version of this report, can be found at:

The SourceForge project page for this project, for programmers who wish to contribute to the development of the program and/or library, can be found at:

The latest source code for this project can be checked out with anonymous access through the Concurrent Versions System (CVS) at SourceForge.net. A binary distribution of the CVS program for most platforms can be obtained at:

Use the following commands to check out the meshalgorithms project source and build environment. When prompted for a password for anonymous, simply press the Enter key.

    cvs -d:pserver:anonymous@cvs.meshalgorithms.sourceforge.net:/cvsroot/meshalgorithms login

cvs -z3 -d:pserver:anonymous@cvs.meshalgorithms.sourceforge.net:/cvsroot/meshalgorithms co meshalgorithms

Finally, high quality models that can be used with our program can be found on the Georgia Tech web site at: models/

Chapter 4 Results

The results of the project are determined by the success of our program. In turn, the success of the program is directly influenced by the performance of the decimation algorithm. In this chapter we gather the results of our algorithm's performance. Our first set of results is in the form of a visual comparison; the performance of our algorithm is directly apparent in the visual analysis of these results. Our second set of data was taken from the Metro[4] program discussed in our literature review. These results measure our program's performance numerically. Our last set of results is a time comparison of our algorithm, in which we compare the time our algorithm takes when run under different operating systems and hardware configurations.

4.1 Visual Comparison

The results of the algorithm can be measured by graphical analysis of a model at different stages. This analysis is the procedure used by William J. Schroeder, Jonathan A. Zarge, and William E. Lorensen[27] in their initial study. We will look at nine different models, each having interesting shapes and contours that will test the flexibility of the algorithm. For each of the models we take a snapshot at several stages in the decimation process. The first stage is the original model, unaltered and freshly loaded into the program. The second stage shows the model after sixty percent of the vertices have been removed. The third stage is after eighty percent, and the last stage is after ninety percent. Along with the images is information that shows exactly how many vertices have been removed and how long the process took.

The next set of images depicts the first test model test001.wrl in four levels of detail. The top left viewport has the original model, the top right has the model decimated at sixty percent, the bottom left has the model decimated at eighty percent, and the bottom right has the model decimated at ninety percent. This model is a three sided pyramid with an extra vertex on the bottom, so it initially has five vertices and six triangle polygons. After removing the only extraneous vertex it has the minimum number of polygons, which is four. Therefore, at sixty, eighty, and ninety percent alike, the model is the same (fully decimated), which means our program has properly recognized the corners and not removed them.

Figure 4.1: The model test001 rendered solid.

The first image, Figure 4.1, is a rendering of the solid model with no options enabled. The solid rendering shows how the models at each level of detail differ. In this case the decimation does not change the geometry of the model, since the extraneous vertex had a distance of 0 (zero) to the average plane (see the discussion of our algorithm in 3.1 Design). Figure 4.2 is a rendering of the solid model with the wireframe option enabled. This image has an advantage over the previous image by showing the hidden lines (edges) that complete the model. With larger models it becomes more apparent how many vertices have been removed.
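The distance-to-average-plane value mentioned above (Schroeder et al.'s decimation criterion) can be sketched as follows. The Vec3 and Triangle types and the area weighting are illustrative assumptions, one common formulation rather than a transcription of our implementation.

```cpp
#include <array>
#include <cmath>
#include <vector>

using Vec3 = std::array<double, 3>;

static Vec3 sub(const Vec3& a, const Vec3& b) {
    return {a[0] - b[0], a[1] - b[1], a[2] - b[2]};
}
static Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]};
}
static double dot(const Vec3& a, const Vec3& b) {
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

struct Triangle { Vec3 a, b, c; };

// Distance from vertex v to the area-weighted average plane of the
// surrounding triangles. A flat neighborhood yields distance 0, which
// is why the extraneous vertex of test001 could be removed without
// changing the geometry.
double distanceToAveragePlane(const Vec3& v,
                              const std::vector<Triangle>& ring) {
    Vec3 normal = {0, 0, 0};   // sum of (2 * area)-scaled face normals
    Vec3 centroid = {0, 0, 0}; // area-weighted average point
    double totalArea = 0.0;
    for (const Triangle& t : ring) {
        Vec3 n = cross(sub(t.b, t.a), sub(t.c, t.a)); // |n| = 2 * area
        double area = 0.5 * std::sqrt(dot(n, n));
        for (int i = 0; i < 3; ++i) {
            normal[i] += n[i];
            centroid[i] += area * (t.a[i] + t.b[i] + t.c[i]) / 3.0;
        }
        totalArea += area;
    }
    double len = std::sqrt(dot(normal, normal));
    if (len == 0.0 || totalArea == 0.0) return 0.0;
    for (int i = 0; i < 3; ++i) {
        normal[i] /= len;
        centroid[i] /= totalArea;
    }
    return std::fabs(dot(normal, sub(v, centroid)));
}
```

A vertex whose surrounding triangles are coplanar gets distance zero and is the first candidate the remove loop selects.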

Figure 4.2: The model test001 rendered solid with wireframe lines to show edges.

Figure 4.3 is a rendering of the model with various options enabled. The top left viewport has the original model with only points rendered. The top right viewport has the fully decimated model with triangle normals visible (the red lines). The bottom left viewport has the fully decimated model with only edges rendered. The bottom right viewport has the fully decimated model with only points rendered. The top left and bottom right viewports can be compared directly to show our program did indeed remove the only extraneous vertex.

Figure 4.3: The model test001 rendered with various options enabled.

The next set of images depicts the second test model test002.wrl in four levels of detail. The viewports are arranged in the exact same manner as the last set of images (original, sixty percent, eighty percent, ninety percent). This model is a small two-by-two cube. It initially has twenty-six vertices and forty-eight triangle polygons. The fully decimated model has eight vertices and twelve triangle polygons, making a perfect and maximally simplified cube.

The first image, Figure 4.4, is a rendering of the solid model with no options enabled. The solid rendering shows how the models at each level of detail differ. Figure 4.5 is a rendering of the solid model with the wireframe option enabled. This image has an advantage over the previous image by showing the hidden lines (edges) that complete the model. With larger models it becomes more apparent how many vertices have been removed. Figure 4.6 is a rendering of the model with the show points option enabled. This image has an advantage over the previous image by showing the exact points that

Figure 4.4: The model test002 rendered solid.

Figure 4.5: The model test002 rendered solid with wireframe lines to show edges.

Figure 4.6: The model test002 rendered in points.

complete the model. With larger models it becomes more apparent how many vertices have been removed.

The next set of images depicts the third test model test003.wrl in four levels of detail. The viewports are arranged in the exact same manner as the last set of images (original, sixty percent, eighty percent, ninety percent). This model is a single sided ten-by-ten plane. It initially has 121 vertices and 200 triangle polygons. The fully decimated model has 40 vertices and 38 triangle polygons. The ideal result for a single sided plane is four vertices and two polygons, but our program detects boundary edges the same way as it detects corners, so they are not removed.

Figure 4.7: The model test003 rendered solid.

Figure 4.8: The model test003 rendered solid with wireframe lines to show edges.

Figure 4.9: The model test003 rendered in points.

Figure 4.10: The model test004 rendered solid.

Figure 4.11: The model test004 rendered solid with wireframe lines to show edges.

Figure 4.12: The model test004 rendered in points.

Figure 4.13: The model test005 rendered solid.

The next set of images depicts the fourth test model test004.wrl in four levels of detail. The viewports are arranged in the exact same manner as the last set of images (original, sixty percent, eighty percent, ninety percent). This model is a large ten-by-ten cube. It initially has 602 vertices and 1200 triangle polygons. The fully decimated model has eight vertices and twelve triangle polygons: a perfect, maximally simplified cube.

The next set of images depicts the fifth test model test005.wrl in four levels of detail. The viewports are arranged in the exact same manner as the last set of images (original, sixty percent, eighty percent, ninety percent). This model is a hollow cylinder (a tube). This model initially has 528 vertices and

Figure 4.14: The model test005 rendered solid with wireframe lines to show edges.

Figure 4.15: The model test005 rendered in points.

Figure 4.16: The model test006 rendered solid.

triangle polygons. The model has no corners, so it can be completely removed from the scene when fully decimated. Remember: we define a corner as either a boundary edge or a point that creates an angle greater than 60 degrees at all neighbors.

The next set of images depicts the sixth through eighth test models in four levels of detail. The viewports are arranged in the exact same manner as the last set of images (original, sixty percent, eighty percent, ninety percent). Model test006 is a squashed four sided triangle pyramid (a dome). This model initially has 2402 vertices and 4800 triangle polygons. The fully decimated model has 449 vertices and 894 triangle polygons. Model test007 is a washer that has been distorted with noise and wave commands in 3D Studio Max. This model initially

Figure 4.17: The model test006 rendered solid with wireframe lines to show edges.

Figure 4.18: The model test006 rendered in points.

Figure 4.19: The model test007 rendered solid.

Figure 4.20: The model test007 rendered solid with wireframe lines to show edges.

Figure 4.21: The model test007 rendered in points.

Figure 4.22: The model test008 rendered solid.

Figure 4.23: The model test008 rendered solid with wireframe lines to show edges.

Figure 4.24: The model test008 rendered in points.

Figure 4.25: The model test009 rendered solid.

has 9648 vertices and triangle polygons. The fully decimated model has 1072 vertices and 2048 triangle polygons. Model test008 is a box that has been stretched. This model initially has vertices and triangle polygons. The fully decimated model has twelve vertices and twenty triangle polygons.

The next set of images depicts the ninth test model test009.wrl in four levels of detail. The viewports are arranged in the exact same manner as the last set of images (original, sixty percent, eighty percent, ninety percent). This model is a 3D visualization created from photographs of a real scene, given to us by WPI professor Mark Stevens. This model initially has 57,570 vertices and 114,880 triangle polygons. The fully decimated model has 1723 vertices and 3180 triangle polygons.

Figure 4.26: The model test009 rendered solid with OpenGL point smoothing enabled.

Figure 4.26 is the same rendering as the previous image, except that the smoothing option has been enabled. Figure 4.28 is a rendering of the model with the show points option enabled. This image has an advantage over the previous image by showing the exact points that complete the model. With larger models it becomes more apparent how many vertices have been removed.

We will now examine the results for the test009 model in detail. The view is focused on the face of the person on the left, facing the model. Figure 4.29 is the rendering of the model at the same levels of detail as all the previous image series (original, 60, 80, 90 percent). When we zoom into this

Figure 4.27: The model test009 rendered solid with wireframe lines to show edges.

Figure 4.28: The model test009 rendered in points.

Figure 4.29: The model test009 rendered solid (detail).

specific location of the model, we see how the algorithm is modifying the model. Figure 4.30 shows the same models as Figure 4.29, but with the smoothing option enabled. Comparing this figure with Figure 4.29, we see that smoothing makes the models appear much closer to the original model, which is in the top left viewport in both figures. Most 3D graphics applications render with smoothing enabled, so this indicates that our program can yield results that are nearly indistinguishable from the original large models even after ninety percent of the vertices have been removed. Figure 4.31 shows the same image as Figure 4.29, except that the wireframe option is enabled. We clearly see how many polygons have been

Figure 4.30: The model test009 rendered solid with OpenGL point smoothing enabled (detail).

Figure 4.31: The model test009 rendered solid with wireframe lines to show edges (detail).

Table 4.1: The maximum and mean geometric error, as calculated by Metro, between the original Stanford bunny model and eight progressively simplified levels of detail generated with our program.

removed in the bottom right viewport (90 percent decimated).

4.2 Geometric Error

We would like to compare the results of our decimation program with those of other programs that simplify meshes. With the help of two important papers in mesh simplification[4, 20], we were able to do this. First, we generated eight levels of detail for the Stanford bunny model using our decimation program, removing 50% of the vertices from the previous level of detail in each run of the program. Next, in order to compute the geometric error between each level of detail and the original bunny model, we used the Metro tool[4], producing the results shown in Table 4.1. Using the methodology and results from the memoryless simplification authors' work[20] (see our Literature Review chapter), we were able to compare our results to those of six other programs that were used to create eight levels of detail of the same model using the same process (50% removal in each run). See our Analysis chapter for this comparison.
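For intuition, the quantities Metro reports can be approximated as follows. This is a deliberately crude sketch: Metro samples the surfaces densely and measures true point-to-surface distance, while this version measures only vertex-to-vertex distance, which overestimates the error. The function name and types are hypothetical.

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <limits>
#include <vector>

using Vec3 = std::array<double, 3>;

static double dist(const Vec3& a, const Vec3& b) {
    double dx = a[0] - b[0], dy = a[1] - b[1], dz = a[2] - b[2];
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

struct ErrorStats { double maxError; double meanError; };

// For each sample point on one mesh, find the distance to the closest
// point of the other mesh; report the maximum (a Hausdorff-style
// one-sided error) and the mean of those distances.
ErrorStats geometricError(const std::vector<Vec3>& samples,
                          const std::vector<Vec3>& reference) {
    double maxErr = 0.0, sumErr = 0.0;
    for (const Vec3& s : samples) {
        double best = std::numeric_limits<double>::infinity();
        for (const Vec3& r : reference)
            best = std::min(best, dist(s, r));
        maxErr = std::max(maxErr, best);
        sumErr += best;
    }
    return {maxErr, sumErr / samples.size()};
}
```

Running this in both directions (original against simplified, then simplified against original) and taking the larger values gives the symmetric error that tools like Metro report.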

The first set of images, shown in Figure 4.32, depicts the original model of a clay bunny in solid and point renderings. The model initially has 35,947 vertices and 69,451 triangle polygons. The next two sets of images show the model in eight levels of detail, removing approximately 50% of the vertices from the model at each stage, first rendered solid and then showing only points. The first level of detail and the original model look identical in these renderings, since in both cases there is more than one triangle per pixel at the resolution at which these images were rendered.

The second set of images, shown in Figure 4.33, is a rendering of the solid model with no options enabled in all eight stages of decimation, removing 50% of the vertices at each stage. The original model is immense and needs a significant amount of memory and CPU power to load and render. The progressively decimated images have far fewer polygons, and we feel they are good visual representations of the original model, yet they render much more quickly because of the much lower polygon count.

The last set of images, shown in Figure 4.34, is a rendering of the model showing only points in all eight stages of decimation, removing 50% of the vertices at each stage. These renderings emphasize how many fewer vertices remain after each iteration of the test.

Figure 4.32: The original bunny model rendered solid and with only points.

Figure 4.33: The bunny model rendered solid in eight levels of detail.

Figure 4.34: The bunny model rendered showing only points in eight levels of detail.

4.3 Time Taken to Decimate

Computers Used

We used three computers for testing. The first system (DUAL) has two Intel Celeron 450 MHz processors with 640 MB of RAM running Linux 2.4. The second system (BLACK) has an AMD K6-2 CPU with 378 MB of RAM running Linux 2.4. The third system (DELL) has a Pentium III 750 MHz CPU with 378 MB of RAM running Windows 2000 Professional.

Time Test Results

Each test computer ran our program on each of the test models undisturbed once, and we recorded the output to a file. These graphs give us solid evidence on which operating system and hardware configuration is optimal for our program's performance. The first test is the time comparison of each computer at three levels of detail for the bunny model; Figure 4.35 shows the results of this comparison. The second test is the time comparison of each computer at three levels of detail for the test009 model; Figure 4.36 shows the results. The third test is the time comparison of each computer at three levels of detail for the hand model; Figure 4.37 shows the results.

Figure 4.35: The results of each computer for three levels of detail (60, 80, 90 percent) for model bunny.

Figure 4.36: The results of each computer for three levels of detail (60, 80, 90 percent) for model test009.

Figure 4.37: The results of each computer for three levels of detail (60, 80, 90 percent) for model hand.

Figure 4.38: The time comparison for each computer across the six largest test models for the sixty percent level of detail.

Figure 4.39: The time comparison for each computer across the six largest test models for the eighty percent level of detail.

The next set of tests is a time comparison across the six largest test models, but for a single level of detail. Figure 4.38 shows the results of the time comparison at sixty percent of the vertices removed, Figure 4.39 at eighty percent, and Figure 4.40 at ninety percent. In the last graph we see results extremely similar to the previous two: the previous two graphs show the decimation process completing in less time, but the order in which the computers finish the decimation stays the same. This consistency leads us to believe that our testing methods were accurate, and our analysis can

Figure 4.40: The time comparison for each computer across the six largest test models for the ninety percent level of detail.

extrapolate precise conclusions.

Chapter 5 Analysis

Using our program and the metrics outlined in the previous chapter, we can now analyze how effectively the algorithm can be applied. In this chapter we analyze four sets of results. Our first analysis will be of the visual appearance of the test meshes compared to their corresponding simplified meshes. The second analysis will be of the numerical results obtained from the Metro[4] tool. The third analysis will compare the performance of our algorithm among different computing platforms. The final analysis will be of the computational complexity of our algorithm.

5.1 Visual Analysis

In our results chapter there were several images that portrayed our test models at different levels of detail. Here we explore a few of those test models in more detail. The visual analysis of these models is important in defining our success at designing and implementing our program. We break up this section into three sub-sections based on some of the features of our program.

Figure 5.1: The model test002 rendered solid and with wireframe.

Edge Detection

When decimating a model it is ideal to preserve some basic geometry of the original mesh. We implemented a feature that detects edges upon loading the model file. Detecting edges in a model guarantees that the more extreme vertices of a model will not be removed, therefore preserving the basic shape of the original mesh. In Figure 5.1 we have a cube with two edges per side. When we run our program with this model, it detects the eight corners of the cube and marks them as non-removable (see Figure 5.2 for a fully decimated rendering of test002). When we decimate the mesh fully (-p 100%), it removes all of the other vertices except the eight corners. The user can try repeatedly to remove more, but the program states that the model has been fully decimated, and no further decimation is possible. The program has successfully preserved its basic

Figure 5.2: The model test002 fully decimated, rendered solid and with wireframe.

shape, a cube. However, when we try to fully decimate another model, specifically test003 (see Figure 5.3), our method for edge detection gets greedy and marks edges that could be removed as non-removable (see Figure 5.4). In this figure, note that there are vertices along the edges. These are considered boundary vertices[27], and can be removed safely. The ideal solution for this mesh under any simplification algorithm would be to remove all vertices except for the four corners, creating a four vertex, two triangle rectangle. These figures demonstrate our edge detection scheme: it preserves vertices that are sharp corners or that lie along sharp edges. This is suitable behavior for most cases. However, when a mesh is not manifold it can detect an edge that is also

Figure 5.3: The model test003 rendered solid and with wireframe.

Figure 5.4: The model test003 fully decimated, rendered solid and with wireframe.

Figure 5.5: The model test004 rendered solid and with wireframe.

a boundary, which can and should be removed safely.

Simplifying Models

When we simplify a mesh we expect a visually accurate depiction of the mesh, within a suitable threshold. Our algorithm can create ideal representations from certain geometrically generic meshes. In Figure 5.5, we use a geometrically basic model: a cube. This cube has many similar vertices; more specifically, most vertices lie flat on one of the six sides of the model. With help from edge detection, decimating the model using our program produces the ideal result. In Figure 5.6, we see that the model has become minimal, since a cube represented by a triangle mesh must contain at least eight vertices and 12 triangles.

Figure 5.6: The model test004 fully decimated, rendered solid and with wireframe.

Our algorithm sometimes decimates with less than ideal results. In Figure 5.7, we see the original mesh of a hollow cylinder, or tube. Unlike the cube from the previous figures, when we decimate this model the mesh starts to collapse upon itself (see Figure 5.8). After completing only eighty percent of the vertex removal process, the model starts to produce jagged edges. This is the result of our corner detection algorithm, which only marks vertices as corners when all angles between faces incident upon the vertex are smaller than a threshold. Since the top and bottom of the cylinder are round, some angles between adjacent faces there do not qualify as corners. If we did not remove these vertices, we would also not remove the edges of a cube, thus limiting the ability to simplify many geometrically primitive cases.
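One plausible reading of the corner test discussed above can be sketched in a few lines. The function and its inputs are hypothetical, not the program's actual API: a vertex is kept when it lies on a boundary, or when some pair of adjacent faces around it meets at an angle beyond the 60 degree threshold.

```cpp
#include <array>
#include <cmath>
#include <cstddef>
#include <vector>

using Vec3 = std::array<double, 3>;

static double dot(const Vec3& a, const Vec3& b) {
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

// faceNormals: unit normals of the faces incident on the vertex, in
// order around it. onBoundary: true if any incident edge is used by
// only one face. Returns true if the vertex must be kept.
bool isCorner(const std::vector<Vec3>& faceNormals, bool onBoundary,
              double thresholdDegrees = 60.0) {
    if (onBoundary) return true;
    const double kPi = 3.14159265358979323846;
    double cosThreshold = std::cos(thresholdDegrees * kPi / 180.0);
    for (std::size_t i = 0; i < faceNormals.size(); ++i) {
        const Vec3& a = faceNormals[i];
        const Vec3& b = faceNormals[(i + 1) % faceNormals.size()];
        // angle(a, b) > threshold  <=>  cos(angle) < cos(threshold)
        if (dot(a, b) < cosThreshold) return true;
    }
    return false;
}
```

Under this test a cube corner (three mutually perpendicular face normals) is kept, a vertex in a flat region is removable, and a cylinder cap vertex whose adjacent faces differ by only a few degrees slips below the threshold, which matches the collapsing behavior described above.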

Figure 5.7: The model test005 rendered solid and with wireframe.

Figure 5.8: The model test005 decimated to eighty percent, rendered solid and with wireframe.

Figure 5.9: The model test009 rendered solid and with flat shading.

When our algorithm is given a complex mesh, it has a hard time determining the next best candidate to replace the vertex that we want to remove. In this case, our algorithm replaces the vertex with a less than ideal candidate and creates polygons that modify the topology greatly. If we were to incorporate a volume preservation or geometric error bounding algorithm, our program would be less likely to produce these less than ideal results. Our program seems to work perfectly for simple shapes, but when confronted with a slightly more complex mesh, we find the limitations of our decimation algorithm.

Smooth Shading

Here we explore how rendering a simplified mesh with smooth shading enabled gives a better visual result. Figure 5.9 shows the original mesh of the test009

Figure 5.10: The model test009 rendered solid and with smooth shading enabled.

model. This image has many hard edges that are a result of the representation of the model as a mesh of flat triangles. In the next image (Figure 5.10), the original mesh appears to have fewer sharp edges: look at the table-top in particular. In the second image, the OpenGL option GL SMOOTH has been enabled. This means that the colors computed at the vertices are interpolated across each triangle to make a smoother representation; hence, the mesh appears smoother. The GL SMOOTH option becomes even more valuable as we decimate the model. In the next image (Figure 5.11), the mesh has been decimated by sixty percent. Removing 34,542 vertices from the model leaves 23,028 vertices that are still rendered. With so few vertices being rendered, the model looks much blockier. In Figure 5.12, we enable smooth shading: the mesh appears less rough.

Figure 5.11: The decimated model rendered solid and with flat shading.

Figure 5.12: The decimated model rendered solid and with smooth shading enabled.

Figure 5.13: The model test009 rendered with flat shading on the top and smooth shading on the bottom. The two levels of detail are the original model on the left and the decimated model on the right.

Figure 5.13 shows a comparison of these models in greater detail. This figure is an image of the two levels of detail (original and sixty percent), each with flat shading and smooth shading enabled. The top left is the original model with flat shading. The top right is the sixty-percent decimated model with flat shading. Here we clearly see the difference in polygon count: the model in the top right has significantly fewer polygons and looks jagged. The bottom two viewports show two visually similar models. The bottom left viewport is the original 114,880 polygon mesh, and the bottom right viewport contains the

Figure 5.14: Comparison of the mean geometric error for eight levels of detail among seven mesh simplification programs[14, 13, 5, 10, 19, 3].

decimated model with only 45,792 polygons. Both were rendered with smooth shading; the only difference between the top right and bottom right viewports is that smooth shading has been enabled in the bottom viewport. We conclude that the GL SMOOTH feature in OpenGL helps to increase the visual quality of a simplified mesh.

5.2 Geometric Error

In this section, we compare the mean and maximum geometric error of the results of our program with those of six other mesh simplification programs to determine the accuracy of our chosen simplification method. See our Results chapter for the data we collected using the Metro tool[4] on eight levels of detail of

Figure 5.15: Comparison of the maximum geometric error for eight levels of detail among seven mesh simplification programs[14, 13, 5, 10, 19, 3].

the Stanford bunny model; see [19] and [20] for the results of the same process for six other mesh simplification programs on the same model. We compare our results with those of our peers in Figures 5.14 and 5.15. See our Literature Review chapter for a description of what these numbers measure.

5.3 Computing Platform

Here we examine the effects that the computing platform used can have on the performance of our decimation program. For our purposes, the computing platform consists of the hardware in a computer and the operating system it runs.

Figure 5.16: The time taken on each test computer for ninety percent removal on the largest six test models.

Figure 5.16 compares the time taken by our program to decimate to ninety percent of vertices removed on each test computer for the largest six test models. Using a logarithmic scale, the results can be easily compared even though some results are orders of magnitude apart. The hand model is the largest model; ninety-percent decimation generates a model with 32,733 vertices. This procedure takes a long time to complete: DELL takes 46, seconds, BLACK takes 31, seconds, and DUAL takes 52, seconds. Unlike most of the other results in this graph, DUAL does not take the least amount of time. BLACK wins by a large margin, even though it has the weakest CPU, a largely outdated 6th generation x86 AMD K6-2. Since both computers are using the same operating system, Linux 2.4, and DUAL has much more memory, this result is unexpected. DELL has similar hardware, but a CPU that is at least

Figure 5.17: The number of vertices removed per second for each of the largest six test models.

twice as powerful[30]. Looking at all the previous test model results in this graph, we see that a Windows based operating system has the best score only once, and it is with the smallest model compared. This implies that our program works better under Linux 2.4. We can also say that our program is capable of handling medical imaging sized models, since the largest original model in our testing has 327,323 vertices. We did notice quite a strain on DELL, however, for this large model. We believe this occurred when the model filled main memory and the Microsoft Windows operating system had to use virtual memory on the hard drive: the system began thrashing when that information was immediately needed.


More information

A Short Survey of Mesh Simplification Algorithms

A Short Survey of Mesh Simplification Algorithms A Short Survey of Mesh Simplification Algorithms Jerry O. Talton III University of Illinois at Urbana-Champaign 1. INTRODUCTION The problem of approximating a given input mesh with a less complex but geometrically

More information

Progressive Compression for Lossless Transmission of Triangle Meshes in Network Applications

Progressive Compression for Lossless Transmission of Triangle Meshes in Network Applications Progressive Compression for Lossless Transmission of Triangle Meshes in Network Applications Timotej Globačnik * Institute of Computer Graphics Laboratory for Geometric Modelling and Multimedia Algorithms

More information

Processing 3D Surface Data

Processing 3D Surface Data Processing 3D Surface Data Computer Animation and Visualisation Lecture 17 Institute for Perception, Action & Behaviour School of Informatics 3D Surfaces 1 3D surface data... where from? Iso-surfacing

More information

Appearance Preservation

Appearance Preservation CS 283 Advanced Computer Graphics Mesh Simplification James F. O Brien Professor U.C. Berkeley Based on slides by Ravi Ramamoorthi 1 Appearance Preservation Caltech & Stanford Graphics Labs and Jonathan

More information

CSE 163: Assignment 2 Geometric Modeling and Mesh Simplification

CSE 163: Assignment 2 Geometric Modeling and Mesh Simplification CSE 163: Assignment 2 Geometric Modeling and Mesh Simplification Ravi Ramamoorthi 1 Introduction This assignment is about triangle meshes as a tool for geometric modeling. As the complexity of models becomes

More information

Advanced Computer Graphics

Advanced Computer Graphics Advanced Computer Graphics Lecture 2: Modeling (1): Polygon Meshes Bernhard Jung TU-BAF, Summer 2007 Overview Computer Graphics Icon: Utah teapot Polygon Meshes Subdivision Polygon Mesh Optimization high-level:

More information

CLUSTERING A LARGE NUMBER OF FACES FOR 2-DIMENSIONAL MESH GENERATION

CLUSTERING A LARGE NUMBER OF FACES FOR 2-DIMENSIONAL MESH GENERATION CLUSTERING A LARGE NUMBER OF FACES FOR -DIMENSIONAL MESH GENERATION Keisuke Inoue, Takayuki Itoh, Atsushi Yamada 3, Tomotake Furuhata 4, Kenji Shimada 5,,3,4 IBM Japan Ltd., Yamato-shi, Kanagawa, Japan

More information

APPROACH FOR MESH OPTIMIZATION AND 3D WEB VISUALIZATION

APPROACH FOR MESH OPTIMIZATION AND 3D WEB VISUALIZATION APPROACH FOR MESH OPTIMIZATION AND 3D WEB VISUALIZATION Pavel I. Hristov 1, Emiliyan G. Petkov 2 1 Pavel I. Hristov Faculty of Mathematics and Informatics, St. Cyril and St. Methodius University, Veliko

More information

Simple Silhouettes for Complex Surfaces

Simple Silhouettes for Complex Surfaces Eurographics Symposium on Geometry Processing(2003) L. Kobbelt, P. Schröder, H. Hoppe (Editors) Simple Silhouettes for Complex Surfaces D. Kirsanov, P. V. Sander, and S. J. Gortler Harvard University Abstract

More information

Meshes: Catmull-Clark Subdivision and Simplification

Meshes: Catmull-Clark Subdivision and Simplification Meshes: Catmull-Clark Subdivision and Simplification Part 1: What I did CS 838, Project 1 Eric Aderhold My main goal with this project was to learn about and better understand three-dimensional mesh surfaces.

More information

Simplification. Stolen from various places

Simplification. Stolen from various places Simplification Stolen from various places The Problem of Detail Graphics systems are awash in model data: highly detailed CAD models high-precision surface scans surface reconstruction algorithms Available

More information

3/1/2010. Acceleration Techniques V1.2. Goals. Overview. Based on slides from Celine Loscos (v1.0)

3/1/2010. Acceleration Techniques V1.2. Goals. Overview. Based on slides from Celine Loscos (v1.0) Acceleration Techniques V1.2 Anthony Steed Based on slides from Celine Loscos (v1.0) Goals Although processor can now deal with many polygons (millions), the size of the models for application keeps on

More information

Manipulating the Boundary Mesh

Manipulating the Boundary Mesh Chapter 7. Manipulating the Boundary Mesh The first step in producing an unstructured grid is to define the shape of the domain boundaries. Using a preprocessor (GAMBIT or a third-party CAD package) you

More information

Processing 3D Surface Data

Processing 3D Surface Data Processing 3D Surface Data Computer Animation and Visualisation Lecture 15 Institute for Perception, Action & Behaviour School of Informatics 3D Surfaces 1 3D surface data... where from? Iso-surfacing

More information

Surface Simplification Using Quadric Error Metrics

Surface Simplification Using Quadric Error Metrics Surface Simplification Using Quadric Error Metrics Michael Garland Paul S. Heckbert Carnegie Mellon University Abstract Many applications in computer graphics require complex, highly detailed models. However,

More information

coding of various parts showing different features, the possibility of rotation or of hiding covering parts of the object's surface to gain an insight

coding of various parts showing different features, the possibility of rotation or of hiding covering parts of the object's surface to gain an insight Three-Dimensional Object Reconstruction from Layered Spatial Data Michael Dangl and Robert Sablatnig Vienna University of Technology, Institute of Computer Aided Automation, Pattern Recognition and Image

More information

Geometric Modeling. Mesh Decimation. Mesh Decimation. Applications. Copyright 2010 Gotsman, Pauly Page 1. Oversampled 3D scan data

Geometric Modeling. Mesh Decimation. Mesh Decimation. Applications. Copyright 2010 Gotsman, Pauly Page 1. Oversampled 3D scan data Applications Oversampled 3D scan data ~150k triangles ~80k triangles 2 Copyright 2010 Gotsman, Pauly Page 1 Applications Overtessellation: E.g. iso-surface extraction 3 Applications Multi-resolution hierarchies

More information

Progressive Mesh. Reddy Sambavaram Insomniac Games

Progressive Mesh. Reddy Sambavaram Insomniac Games Progressive Mesh Reddy Sambavaram Insomniac Games LOD Schemes Artist made LODs (time consuming, old but effective way) ViewDependentMesh (usually used for very large complicated meshes. CAD apps. Probably

More information

Geometric Modeling. Bing-Yu Chen National Taiwan University The University of Tokyo

Geometric Modeling. Bing-Yu Chen National Taiwan University The University of Tokyo Geometric Modeling Bing-Yu Chen National Taiwan University The University of Tokyo Surface Simplification Motivation Basic Idea of LOD Discrete LOD Continuous LOD Simplification Problem Characteristics

More information

Character Modeling COPYRIGHTED MATERIAL

Character Modeling COPYRIGHTED MATERIAL 38 Character Modeling p a r t _ 1 COPYRIGHTED MATERIAL 39 Character Modeling Character Modeling 40 1Subdivision & Polygon Modeling Many of Maya's features have seen great improvements in recent updates

More information

Mesh Decimation. Mark Pauly

Mesh Decimation. Mark Pauly Mesh Decimation Mark Pauly Applications Oversampled 3D scan data ~150k triangles ~80k triangles Mark Pauly - ETH Zurich 280 Applications Overtessellation: E.g. iso-surface extraction Mark Pauly - ETH Zurich

More information

Geometry Processing & Geometric Queries. Computer Graphics CMU /15-662

Geometry Processing & Geometric Queries. Computer Graphics CMU /15-662 Geometry Processing & Geometric Queries Computer Graphics CMU 15-462/15-662 Last time: Meshes & Manifolds Mathematical description of geometry - simplifying assumption: manifold - for polygon meshes: fans,

More information

Polygonal Simplification: An Overview

Polygonal Simplification: An Overview L Polygonal Simplification: An Overview TR96-016 1996 UN LL U M I I V E RS IG S I T AT LUX LIBERTAS S EP TEN T CA RO Carl Erikson Department of Computer Science CB #3175, Sitterson Hall UNC-Chapel Hill

More information

Half-edge Collapse Simplification Algorithm Based on Angle Feature

Half-edge Collapse Simplification Algorithm Based on Angle Feature International Conference on Automation, Mechanical Control and Computational Engineering (AMCCE 2015) Half-edge Collapse Simplification Algorithm Based on Angle Feature 1,2 JunFeng Li, 2 YongBo Chen, 3

More information

Evaluating the Quality of Triangle, Quadrilateral, and Hybrid Meshes Before and After Refinement

Evaluating the Quality of Triangle, Quadrilateral, and Hybrid Meshes Before and After Refinement Rensselaer Polytechnic Institute Advanced Computer Graphics, Spring 2014 Final Project Evaluating the Quality of Triangle, Quadrilateral, and Hybrid Meshes Before and After Refinement Author: Rebecca Nordhauser

More information

and Recent Extensions Progressive Meshes Progressive Meshes Multiresolution Surface Modeling Multiresolution Surface Modeling Hugues Hoppe

and Recent Extensions Progressive Meshes Progressive Meshes Multiresolution Surface Modeling Multiresolution Surface Modeling Hugues Hoppe Progressive Meshes Progressive Meshes and Recent Extensions Hugues Hoppe Microsoft Research SIGGRAPH 97 Course SIGGRAPH 97 Course Multiresolution Surface Modeling Multiresolution Surface Modeling Meshes

More information

Adjacency Data Structures

Adjacency Data Structures Last Time? Simple Transformations Adjacency Data Structures material from Justin Legakis Classes of Transformations Representation homogeneous coordinates Composition not commutative Orthographic & Perspective

More information

AUTOMATED 4 AXIS ADAYfIVE SCANNING WITH THE DIGIBOTICS LASER DIGITIZER

AUTOMATED 4 AXIS ADAYfIVE SCANNING WITH THE DIGIBOTICS LASER DIGITIZER AUTOMATED 4 AXIS ADAYfIVE SCANNING WITH THE DIGIBOTICS LASER DIGITIZER INTRODUCTION The DIGIBOT 3D Laser Digitizer is a high performance 3D input device which combines laser ranging technology, personal

More information

Surface Reconstruction. Gianpaolo Palma

Surface Reconstruction. Gianpaolo Palma Surface Reconstruction Gianpaolo Palma Surface reconstruction Input Point cloud With or without normals Examples: multi-view stereo, union of range scan vertices Range scans Each scan is a triangular mesh

More information

Abstract. Introduction. Kevin Todisco

Abstract. Introduction. Kevin Todisco - Kevin Todisco Figure 1: A large scale example of the simulation. The leftmost image shows the beginning of the test case, and shows how the fluid refracts the environment around it. The middle image

More information

The Pennsylvania State University The Graduate School Department of Computer Science and Engineering

The Pennsylvania State University The Graduate School Department of Computer Science and Engineering The Pennsylvania State University The Graduate School Department of Computer Science and Engineering CPU- AND GPU-BASED TRIANGULAR SURFACE MESH SIMPLIFICATION A Thesis in Computer Science and Engineering

More information

Solidifying Wireframes

Solidifying Wireframes Solidifying Wireframes Vinod Srinivasan, Esan Mandal and Ergun Akleman Visualization Laboratory Department of Architecture Texas A&M University College Station, TX 77843-3137, USA E-mail: vinod@viz.tamu.edu

More information

CS 563 Advanced Topics in Computer Graphics Polygonal Techniques. by Linna Ma

CS 563 Advanced Topics in Computer Graphics Polygonal Techniques. by Linna Ma CS 563 Advanced Topics in Computer Graphics Polygonal Techniques by Linna Ma What I ll Talk About Introduction Tessellation and Triangulation Consolidation Triangle Strips, Fans and Meshes Simplification

More information

Computer Graphics I Lecture 11

Computer Graphics I Lecture 11 15-462 Computer Graphics I Lecture 11 Midterm Review Assignment 3 Movie Midterm Review Midterm Preview February 26, 2002 Frank Pfenning Carnegie Mellon University http://www.cs.cmu.edu/~fp/courses/graphics/

More information

Point-based Simplification Algorithm

Point-based Simplification Algorithm Point-based Simplification Algorithm Pai-Feng Lee 1, Bin-Shyan Jong 2 Department of Information Management, Hsing Wu College 1 Dept. of Information and Computer Engineering Engineering, Chung Yuan Christian

More information

Deformation Sensitive Decimation

Deformation Sensitive Decimation Deformation Sensitive Decimation Alex Mohr Michael Gleicher University of Wisconsin, Madison Abstract In computer graphics, many automatic methods for simplifying polygonal meshes have been developed.

More information

3D Modeling: Surfaces

3D Modeling: Surfaces CS 430/536 Computer Graphics I 3D Modeling: Surfaces Week 8, Lecture 16 David Breen, William Regli and Maxim Peysakhov Geometric and Intelligent Computing Laboratory Department of Computer Science Drexel

More information

3 Polygonal Modeling. Getting Started with Maya 103

3 Polygonal Modeling. Getting Started with Maya 103 3 Polygonal Modeling In Maya, modeling refers to the process of creating virtual 3D surfaces for the characters and objects in the Maya scene. Surfaces play an important role in the overall Maya workflow

More information

Parameterization with Manifolds

Parameterization with Manifolds Parameterization with Manifolds Manifold What they are Why they re difficult to use When a mesh isn t good enough Problem areas besides surface models A simple manifold Sphere, torus, plane, etc. Using

More information

Physically-Based Modeling and Animation. University of Missouri at Columbia

Physically-Based Modeling and Animation. University of Missouri at Columbia Overview of Geometric Modeling Overview 3D Shape Primitives: Points Vertices. Curves Lines, polylines, curves. Surfaces Triangle meshes, splines, subdivision surfaces, implicit surfaces, particles. Solids

More information

Occluder Simplification using Planar Sections

Occluder Simplification using Planar Sections Occluder Simplification using Planar Sections Ari Silvennoinen Hannu Saransaari Samuli Laine Jaakko Lehtinen Remedy Entertainment Aalto University Umbra Software NVIDIA NVIDIA Aalto University Coping with

More information

5 Subdivision Surfaces

5 Subdivision Surfaces 5 Subdivision Surfaces In Maya, subdivision surfaces possess characteristics of both polygon and NURBS surface types. This hybrid surface type offers some features not offered by the other surface types.

More information

Complexity Reduction of Catmull-Clark/Loop Subdivision Surfaces

Complexity Reduction of Catmull-Clark/Loop Subdivision Surfaces EUROGRAPHICS 2001 / Jonathan C. Roberts Short Presentations Complexity Reduction of Catmull-Clark/Loop Subdivision Surfaces Eskil Steenberg The Interactive Institute, P.O. Box 24081, SE 104 50 Stockholm,

More information

CS 465 Program 4: Modeller

CS 465 Program 4: Modeller CS 465 Program 4: Modeller out: 30 October 2004 due: 16 November 2004 1 Introduction In this assignment you will work on a simple 3D modelling system that uses simple primitives and curved surfaces organized

More information

COMPUTING CONSTRAINED DELAUNAY

COMPUTING CONSTRAINED DELAUNAY COMPUTING CONSTRAINED DELAUNAY TRIANGULATIONS IN THE PLANE By Samuel Peterson, University of Minnesota Undergraduate The Goal The Problem The Algorithms The Implementation Applications Acknowledgments

More information

Subdivision Of Triangular Terrain Mesh Breckon, Chenney, Hobbs, Hoppe, Watts

Subdivision Of Triangular Terrain Mesh Breckon, Chenney, Hobbs, Hoppe, Watts Subdivision Of Triangular Terrain Mesh Breckon, Chenney, Hobbs, Hoppe, Watts MSc Computer Games and Entertainment Maths & Graphics II 2013 Lecturer(s): FFL (with Gareth Edwards) Fractal Terrain Based on

More information

10.1 Overview. Section 10.1: Overview. Section 10.2: Procedure for Generating Prisms. Section 10.3: Prism Meshing Options

10.1 Overview. Section 10.1: Overview. Section 10.2: Procedure for Generating Prisms. Section 10.3: Prism Meshing Options Chapter 10. Generating Prisms This chapter describes the automatic and manual procedure for creating prisms in TGrid. It also discusses the solution to some common problems that you may face while creating

More information

Adaptive Point Cloud Rendering

Adaptive Point Cloud Rendering 1 Adaptive Point Cloud Rendering Project Plan Final Group: May13-11 Christopher Jeffers Eric Jensen Joel Rausch Client: Siemens PLM Software Client Contact: Michael Carter Adviser: Simanta Mitra 4/29/13

More information

Design by Subdivision

Design by Subdivision Bridges 2010: Mathematics, Music, Art, Architecture, Culture Design by Subdivision Michael Hansmeyer Department for CAAD - Institute for Technology in Architecture Swiss Federal Institute of Technology

More information

Hardware Displacement Mapping

Hardware Displacement Mapping Matrox's revolutionary new surface generation technology, (HDM), equates a giant leap in the pursuit of 3D realism. Matrox is the first to develop a hardware implementation of displacement mapping and

More information

In Proceedings of ACM Symposium on Virtual Reality Software and Technology, pp , July 1996.

In Proceedings of ACM Symposium on Virtual Reality Software and Technology, pp , July 1996. In Proceedings of ACM Symposium on Virtual Reality Software and Technology, pp. 11-20, July 1996. Real-time Multi-resolution Modeling for Complex Virtual Environments Veysi _Isler Rynson W.H. Lau Mark

More information

Illumination and Geometry Techniques. Karljohan Lundin Palmerius

Illumination and Geometry Techniques. Karljohan Lundin Palmerius Illumination and Geometry Techniques Karljohan Lundin Palmerius Objectives Complex geometries Translucency Huge areas Really nice graphics! Shadows Graceful degradation Acceleration Optimization Straightforward

More information

Using Semi-Regular 4 8 Meshes for Subdivision Surfaces

Using Semi-Regular 4 8 Meshes for Subdivision Surfaces Using Semi-Regular 8 Meshes for Subdivision Surfaces Luiz Velho IMPA Instituto de Matemática Pura e Aplicada Abstract. Semi-regular 8 meshes are refinable triangulated quadrangulations. They provide a

More information

Defense Technical Information Center Compilation Part Notice

Defense Technical Information Center Compilation Part Notice UNCLASSIFIED Defense Technical Information Center Compilation Part Notice ADPO1 1334 TITLE: Progressive Representation, Transmission, and Visualization of 3D Objects DISTRIBUTION: Approved for public release,

More information

Face Morphing. Introduction. Related Work. Alex (Yu) Li CS284: Professor Séquin December 11, 2009

Face Morphing. Introduction. Related Work. Alex (Yu) Li CS284: Professor Séquin December 11, 2009 Alex (Yu) Li CS284: Professor Séquin December 11, 2009 Face Morphing Introduction Face morphing, a specific case of geometry morphing, is a powerful tool for animation and graphics. It consists of the

More information

Offset Triangular Mesh Using the Multiple Normal Vectors of a Vertex

Offset Triangular Mesh Using the Multiple Normal Vectors of a Vertex 285 Offset Triangular Mesh Using the Multiple Normal Vectors of a Vertex Su-Jin Kim 1, Dong-Yoon Lee 2 and Min-Yang Yang 3 1 Korea Advanced Institute of Science and Technology, sujinkim@kaist.ac.kr 2 Korea

More information

03 - Reconstruction. Acknowledgements: Olga Sorkine-Hornung. CSCI-GA Geometric Modeling - Spring 17 - Daniele Panozzo

03 - Reconstruction. Acknowledgements: Olga Sorkine-Hornung. CSCI-GA Geometric Modeling - Spring 17 - Daniele Panozzo 3 - Reconstruction Acknowledgements: Olga Sorkine-Hornung Geometry Acquisition Pipeline Scanning: results in range images Registration: bring all range images to one coordinate system Stitching/ reconstruction:

More information

Joe Warren, Scott Schaefer Rice University

Joe Warren, Scott Schaefer Rice University Joe Warren, Scott Schaefer Rice University Polygons are a ubiquitous modeling primitive in computer graphics. Their popularity is such that special purpose graphics hardware designed to render polygons

More information

CHAPTER 1 Graphics Systems and Models 3

CHAPTER 1 Graphics Systems and Models 3 ?????? 1 CHAPTER 1 Graphics Systems and Models 3 1.1 Applications of Computer Graphics 4 1.1.1 Display of Information............. 4 1.1.2 Design.................... 5 1.1.3 Simulation and Animation...........

More information

Image Base Rendering: An Introduction

Image Base Rendering: An Introduction Image Base Rendering: An Introduction Cliff Lindsay CS563 Spring 03, WPI 1. Introduction Up to this point, we have focused on showing 3D objects in the form of polygons. This is not the only approach to

More information

MA 323 Geometric Modelling Course Notes: Day 36 Subdivision Surfaces

MA 323 Geometric Modelling Course Notes: Day 36 Subdivision Surfaces MA 323 Geometric Modelling Course Notes: Day 36 Subdivision Surfaces David L. Finn Today, we continue our discussion of subdivision surfaces, by first looking in more detail at the midpoint method and

More information

As a consequence of the operation, there are new incidences between edges and triangles that did not exist in K; see Figure II.9.

As a consequence of the operation, there are new incidences between edges and triangles that did not exist in K; see Figure II.9. II.4 Surface Simplification 37 II.4 Surface Simplification In applications it is often necessary to simplify the data or its representation. One reason is measurement noise, which we would like to eliminate,

More information

1. Introduction. 2. Parametrization of General CCSSs. 3. One-Piece through Interpolation. 4. One-Piece through Boolean Operations

1. Introduction. 2. Parametrization of General CCSSs. 3. One-Piece through Interpolation. 4. One-Piece through Boolean Operations Subdivision Surface based One-Piece Representation Shuhua Lai Department of Computer Science, University of Kentucky Outline. Introduction. Parametrization of General CCSSs 3. One-Piece through Interpolation

More information

Mesh and Mesh Simplification

Mesh and Mesh Simplification Slide Credit: Mirela Ben-Chen Mesh and Mesh Simplification Qixing Huang Mar. 21 st 2018 Mesh DataStructures Data Structures What should bestored? Geometry: 3D coordinates Attributes e.g. normal, color,

More information

Single Triangle Strip and Loop on Manifolds with Boundaries

Single Triangle Strip and Loop on Manifolds with Boundaries Single Triangle Strip and Loop on Manifolds with Boundaries Pablo Diaz-Gutierrez David Eppstein M. Gopi Department of Computer Science, University of California, Irvine. Abstract The single triangle-strip

More information

Smooth Patching of Refined Triangulations

Smooth Patching of Refined Triangulations Smooth Patching of Refined Triangulations Jörg Peters July, 200 Abstract This paper presents a simple algorithm for associating a smooth, low degree polynomial surface with triangulations whose extraordinary

More information

Chapter 1. Introduction

Chapter 1. Introduction Introduction 1 Chapter 1. Introduction We live in a three-dimensional world. Inevitably, any application that analyzes or visualizes this world relies on three-dimensional data. Inherent characteristics

More information

No.5 An Algorithm for LOD by Merging Near Coplanar Faces 451 The approaches mentioned above mainly make passes by calculating repeatedly the geometric

No.5 An Algorithm for LOD by Merging Near Coplanar Faces 451 The approaches mentioned above mainly make passes by calculating repeatedly the geometric Vol.16 No.5 J. Comput. Sci. & Technol. Sept. 2001 An Algorithm for LOD by Merging Near Coplanar Faces Based on Gauss Sphere CAO Weiqun ( Π), BAO Hujun ( Λ) and PENG Qunsheng (ΞΠ±) State Key Laboratory

More information

To Do. Resources. Algorithm Outline. Simplifications. Advanced Computer Graphics (Spring 2013) Surface Simplification: Goals (Garland)

To Do. Resources. Algorithm Outline. Simplifications. Advanced Computer Graphics (Spring 2013) Surface Simplification: Goals (Garland) Advanced omputer Graphics (Spring 213) S 283, Lecture 6: Quadric Error Metrics Ravi Ramamoorthi To Do Assignment 1, Due Feb 22. Should have made some serious progress by end of week This lecture reviews

More information

Blender Notes. Introduction to Digital Modelling and Animation in Design Blender Tutorial - week 1 The Blender Interface and Basic Shapes

Blender Notes. Introduction to Digital Modelling and Animation in Design Blender Tutorial - week 1 The Blender Interface and Basic Shapes Blender Notes Introduction to Digital Modelling and Animation in Design Blender Tutorial - week 1 The Blender Interface and Basic Shapes Introduction Blender is a powerful modeling, animation and rendering

More information

Near-Optimum Adaptive Tessellation of General Catmull-Clark Subdivision Surfaces

Near-Optimum Adaptive Tessellation of General Catmull-Clark Subdivision Surfaces Near-Optimum Adaptive Tessellation of General Catmull-Clark Subdivision Surfaces Shuhua Lai and Fuhua (Frank) Cheng (University of Kentucky) Graphics & Geometric Modeling Lab, Department of Computer Science,

More information

Semiautomatic Simplification

Semiautomatic Simplification Semiautomatic Simplification ABSTRACT Gong Li gongli@cs.ualberta.ca Dept. Computing Science Univ. Alberta Edmonton, Alberta CANADA T6G 2H1 We present semisimp, a tool for semiautomatic simplification of

More information

Lofting 3D Shapes. Abstract

Lofting 3D Shapes. Abstract Lofting 3D Shapes Robby Prescott Department of Computer Science University of Wisconsin Eau Claire Eau Claire, Wisconsin 54701 robprescott715@gmail.com Chris Johnson Department of Computer Science University

More information

Circular Arcs as Primitives for Vector Textures

Circular Arcs as Primitives for Vector Textures Circular Arcs as Primitives for Vector Textures Zheng Qin, Craig Kaplan, and Michael McCool University of Waterloo Abstract. Because of the resolution independent nature of vector graphics images, it is

More information

Medial Scaffolds for 3D data modelling: status and challenges. Frederic Fol Leymarie

Medial Scaffolds for 3D data modelling: status and challenges. Frederic Fol Leymarie Medial Scaffolds for 3D data modelling: status and challenges Frederic Fol Leymarie Outline Background Method and some algorithmic details Applications Shape representation: From the Medial Axis to the

More information

COMP 175: Computer Graphics April 11, 2018

COMP 175: Computer Graphics April 11, 2018 Lecture n+1: Recursive Ray Tracer2: Advanced Techniques and Data Structures COMP 175: Computer Graphics April 11, 2018 1/49 Review } Ray Intersect (Assignment 4): questions / comments? } Review of Recursive

More information

Spatial Data Structures

Spatial Data Structures 15-462 Computer Graphics I Lecture 17 Spatial Data Structures Hierarchical Bounding Volumes Regular Grids Octrees BSP Trees Constructive Solid Geometry (CSG) March 28, 2002 [Angel 8.9] Frank Pfenning Carnegie

More information

Outline. Reconstruction of 3D Meshes from Point Clouds. Motivation. Problem Statement. Applications. Challenges

Outline. Reconstruction of 3D Meshes from Point Clouds. Motivation. Problem Statement. Applications. Challenges Reconstruction of 3D Meshes from Point Clouds Ming Zhang Patrick Min cs598b, Geometric Modeling for Computer Graphics Feb. 17, 2000 Outline - problem statement - motivation - applications - challenges

More information

Physically-Based Laser Simulation

Physically-Based Laser Simulation Physically-Based Laser Simulation Greg Reshko Carnegie Mellon University reshko@cs.cmu.edu Dave Mowatt Carnegie Mellon University dmowatt@andrew.cmu.edu Abstract In this paper, we describe our work on

More information

EECS 487: Interactive Computer Graphics

EECS 487: Interactive Computer Graphics EECS 487: Interactive Computer Graphics Lecture 36: Polygonal mesh simplification The Modeling-Rendering Paradigm Modeler: Modeling complex shapes no equation for a chair, face, etc. instead, achieve complexity

More information

An Algorithm of 3D Mesh Reconstructing Based on the Rendering Pipeline

An Algorithm of 3D Mesh Reconstructing Based on the Rendering Pipeline 3rd International Conference on Mechatronics and Information Technology (ICMIT 2016) An Algorithm of 3D Mesh Reconstructing Based on the Rendering Pipeline Zhengjie Deng1, a, Shuqian He1,b, Chun Shi1,c,

More information

Curve Corner Cutting

Curve Corner Cutting Subdivision ision Techniqueses Spring 2010 1 Curve Corner Cutting Take two points on different edges of a polygon and join them with a line segment. Then, use this line segment to replace all vertices

More information