Multiresolution model generation of texture-geometry for real-time rendering


Contents

Contents
Figures
1. Introduction
    Real-time rendering for complex object
    Background
        Rendering pipeline
        Methods for enhancing the performance of the rendering pipeline
    Multiresolution model
        The multiresolution model for real-time rendering
        Complexity and fidelity
    Overview of thesis
2. The multiresolution geometry
    Introduction
    Previous work
        Geometry only simplification
        Property preserving simplification
        Texture generating simplification
        Using camera-centered features and light-centered features
        Summary
    Generation method: edge-collapsing with region hierarchy
        Overview
        Edge collapsing operator
        Multiresolution structures using the edge collapsing operator
        Results
    Generation method: AGSphere
        Overview
        AGSphere (Aligned Gaussian Sphere)

        M-AGSphere: multiresolution extension of the AGSphere
        Real-time control of the M-AGSphere
        Results
        Summary
    Conclusion
3. Texture map for the multiresolution geometry
    Overview
        Texture mapping function
        Texture boundary problem
        Texture distortion problem
        Overall approach
    Modified multiresolution geometry generation method
    Texture parameterization method minimizing multiresolution mapping distortion
        Parameterization process
        Multiresolution mapping error
        Texture parameterization for the multiresolution model
    Experimental results and discussion
    Conclusion
4. Multiresolution texture generation using region merging
    Motivation
        Requirements
    Multiresolution texture model with multiresolution geometry
        Overview
        Geometry level selection for texture refinement
        Texture map refinement using region splitting and merging
    Experimental results
5. Conclusion

    5.1. Summary
    Application of the multiresolution model
    Future Work
        Texture map generation
        Real-time rendering using multiresolution texture and geometry
References

Figures

Figure 1 Examples of the virtual object with complex shape
Figure 2 Rendering pipeline [Wohn97]
Figure 3 Rendering architecture in the generic high-end PC
Figure 4 Models for rendering time measurements
Figure 5 Example of the multiresolution geometry
Figure 6 Vertex deviation
Figure 7 Typical simplification using edge collapse [Hoppe97]
Figure 8 Simplified results of property preserving method [Garland98]
Figure 9 Example of the imposter [Maciel95]
Figure 10 Using geometry information with texture images [Aliaga96]
Figure 11 Half edge collapsing and full edge collapsing
Figure 12 Updating vertex mapping table for half edge collapsing
Figure 13 Case of the face flipping
Figure 14 Region merging by the edge collapsing
Figure 15 Partitioned regions using edge collapsing operations
Figure 16 Vertex tree [Hoppe97]
Figure 17 Simplification results
Figure 18 Geometric simplification for an object with color information
Figure 19 Model generation time and vertex deviation
Figure 20 Mapping polygons onto the AGSphere
Figure 21 Merging polygons in a cell
Figure 22 Merging two regions using half-edge structure
Figure 23 Successive subdivision of a cell
Figure 24 Quadtree representation of M-AGSphere
Figure 25 Merging and splitting
Figure 26 Multiresolution cell selection
Figure 28 Simplifying shared boundaries
Figure 29 Cases of retriangulation with virtual edges

Figure 30 Generating new triangles from the candidate boundary
Figure 31 Proper and improper triangles
Figure 32 Retriangulation process
Figure 33 Silhouette preserving multiresolution model
Figure 34 Rendering results for different viewer position
Figure 35 Detail level selection example
Figure 36 Simplified bunny
Figure 37 Silhouette preserving simplification for the B747 model
Figure 38 Frame rates and number of regions for different levels
Figure 39 Difference in silhouette for an ordinary method
Figure 40 Difference in silhouette using M-AGSphere
Figure 41 Preserving light silhouette
Figure 42 Required time cost for simplification operation
Figure 43 Simplification result with broken texture information
Figure 44 An example of the boundary problem
Figure 45 Example of the harmonic mapping [Eck95]
Figure 46 Texture pelting result [Piponi2000]
Figure 47 Merged regions using proposed region merging mechanism
Figure 48 Vertex deviation using the proposed region merging mechanism
Figure 49 Point pair for the simplification
Figure 50 Generated texture map for a hemisphere model
Figure 51 Hemisphere with 9 different texture images
Figure 52 Simplification result using M-AGSphere without considering textures
Figure 53 Generated texture map without considering multiresolution mapping error
Figure 54 Rendering result of the simplified model
Figure 55 Texture map for an object with multiple color material
Figure 56 Simplification example for an object with multiple color material
Figure 57 Rendering result for different detail levels
Figure 58 Example for geometry of multiple texture images with 5,000 polygons
Figure 59 Rendering result of texture map with minimizing multiresolution mapping error
Figure 60 Multiresolution mapping error for simplification
Figure 61 Blurred region caused by texture distortion

Figure 62 Rendering result using low resolution texture
Figure 63 Histogram on the polygon area ratio for corresponding polygons
Figure 64 Texture mapping function for multiresolution geometry and texture
Figure 65 Rank-i geometry
Figure 66 Multiresolution texture map
Figure 67 Rendered image using mipmap style multiresolution texture
Figure 68 Rendered image using mipmap style multiresolution texture (2)
Figure 69 Refined texture maps from the global texture map
Figure 70 Rendering results using refined texture map
Figure 71 Detail part comparison of two different maps
Figure 72 Ratio of polygon area between texture image and geometry mesh

1. Introduction

1.1. Real-time rendering for complex object

Real-time rendering is the process of reproducing images from 3D geometric data in real time. It has been one of the most important technologies for real-time computer graphics applications, including virtual reality applications, which demand a high level of realism, especially visual realism with highly complex shapes.

The virtual world is constructed from three-dimensional objects. These objects are represented visually by rendering their shapes to the physical screen. The shape of an object is built using various modeling techniques. In this paper, the object model refers to the model of the shape to be represented for real-time rendering, and the complexity of the object model is determined by its rendering cost. Real-time rendering techniques deal with the problem of processing highly complex shape models. With the development of rendering hardware and software, recent rendering systems can render in real time relatively complex models that were impossible to handle only a few years ago. As the technology advances, the requirements on visual realism are also elevated. In particular, the rapid expansion of virtual reality applications requires real-time rendering of complex models on various low-cost platforms, including low-end home personal computers and PDAs.

In this paper, a method to render virtual objects with complex shape models in real time is investigated. The paper focuses on rendering the geometric model that represents the shape of the virtual object. Geometric models used in virtual reality applications usually carry highly complex shape information. In a general rendering system, the shape information is expressed in several properties such as positional properties, color properties, surface normal properties, and texture images [Foley90]. A more detailed description of these properties is given in section 1.3.

Figure 1 shows examples of virtual objects with complex shapes. The leftmost is an ancient pagoda for virtual navigation of an ancient city [Heo2000]; it is represented using about 100,000 polygons with 30 different texture images. The middle one is an ordinary building model with 5,000 polygons and 10 different colors. The rightmost is a shape scanned for the Digital Michelangelo Project [DMP]; it consists of 7,600,000 polygons with color and other information on 3,800,000 vertices. Even on the most advanced rendering hardware these models cannot be rendered in real time. It is even more difficult when the virtual world consists of many such complex objects, which is common in the real world.

These situations arise in virtual reality applications such as virtual city applications, virtual museums, and various training applications, including military and flight training.

Figure 1 Examples of the virtual object with complex shape

Worse, even these complex models often do not give the detail we want. When a highly detailed object is viewed up close, it shows many unnatural artifacts such as blurring and flattened surfaces. This shows that we still need even more complex models to give the virtual world enough realism. To render such models in real time, we need to adopt special rendering methods for processing complex objects. In this paper, we deal with real-time rendering of complex objects on a range of hardware platforms, from high-end graphics systems to low-cost personal computers.

To render a complex model in real time, we need to convert it into a real-time renderable form. Traditionally, various methods are used, such as caching, compression, and culling. These methods convert model information into renderable forms that require less processing time for computation, transportation, and so on. They are basically lossless methods in that they preserve all the detail of the original model. In contrast, the multiresolution method simplifies the model to be rendered in real time, sacrificing visual detail for real-time performance. The lossless methods are often unable to achieve real-time rendering performance for a complex object, but the multiresolution method can. It keeps models of different complexity for the same object. When an object is far from the camera, high-detail rendering is not required; in this case the method renders a low-complexity model instead of the original high-complexity model with its high rendering cost. This renders the object in real time while preserving as much detail as possible [Clark76][Airey90][Astheimer94]. These methods are described in more detail in section 1.3.

In this paper, we devise a new method for real-time rendering based on the multiresolution approach.

Previous research on multiresolution approaches has focused on the geometry information, which is represented as polygons. In this paper, we investigate texture and color information along with the geometry. Texture and color are important information for representing shape, and we develop a method to manage them together with the geometric information. Multiresolution modeling techniques, also known as levels-of-detail techniques, involve two processes. The first is the generation of the multiresolution model from the original complex model; the generated multiresolution model is then used in the real-time rendering process. In this paper, we focus on the multiresolution generation process. Most current real-time rendering techniques can easily adopt the proposed multiresolution model. The requirements and the modifications to current rendering methods are discussed in the last chapter.

1.2. Background

1.2.1. Rendering pipeline

Since the early days of rendering hardware, the rendering process has been organized as a pipeline: the individual processes are divided into pipeline stages.

Figure 2 Rendering pipeline [Wohn97]

Figure 2 shows the generic rendering pipeline of a conventional rendering system. The virtual world is represented as a scene graph with behavioral relationships among the virtual objects. The application-specific process computes scene-level behaviors, interfaces, and so on. After the application-specific process, the scene graph is traversed to generate a polygon display list; this traversal step performs culling and levels-of-detail computations. The processed scene graph is stored as a polygon display list to be handled in the later stages. The third step is the polygon processing step.

In the polygon processing step, the polygon display list is transformed into polygons in screen space by geometric transformation, clipping and lighting computations. The transformed polygons are processed in the last stage of the pipeline, pixel processing, where the polygons are finally drawn to the frame buffer by hidden pixel removal, texture mapping, color interpolation and anti-aliasing computations. There have been several approaches that use a different rendering pipeline [Eyles97], but most rendering hardware and algorithms use a pipeline like the one in figure 3.

Figure 3 Rendering architecture in the generic high-end PC (application processor, geometry processor, rasterizer, connected through the graphics cache)

The previous figure (Figure 2) shows the software pipeline architecture of real-time rendering; this software architecture is implemented in hardware using a similar structure. Figure 3 shows the hardware rendering architecture of a conventional real-time rendering system [Moller99]. The first component of the hardware system is a module that runs the application-specific process and the scene process. These are computed by the application processor, which is usually the main processor. The result of the computation is transferred to the next module over the bus between the two processors, so the processing power of this component is determined by the processing power of the main processor and the bandwidth of the bus between the main processor and the processor of the next module. Current hardware has several hundred MIPS of processing power in the main processor, and the bus bandwidth is over 1 GB/sec in a high-end PC architecture with a bus such as AGP4x [Intel]. The processed data is transferred to the next module in the form of rendering primitives such as polygons and textures. The second component performs the polygon processing step. This module is usually implemented in the graphics subsystem, which has a specialized geometry processor that computes the graphics transformation and lighting equations. Its performance is determined by the transformation throughput of the processor. A current high-end PC graphics subsystem is equipped with a processor that performs more than 100M transforms/sec [NVIDIA].

The last component is the module that draws polygons to the frame buffer. This module runs the pixel processing step: color interpolation, texture mapping, and so on. The main performance factors in this step are the number of pixels to be drawn in the frame buffer and the number of pixels to be read from texture images. Current high-end hardware can process more than a gigapixel per second.

The rendering performance bound is determined by these components. Because the components are combined in a pipeline architecture, any slow component becomes the bottleneck of the rendering performance. Roughly speaking, the application processor is sufficient to process more than ten thousand objects when real-time rendering requires 60 frames/sec. The transport bus in the current high-end PC architecture is sufficient to send about 100,000 polygons per frame when each polygon requires about 100 bytes, including position, color, normal and texture coordinates. The geometry processor is sufficient to process about 150,000 polygons per frame on current hardware, assuming about 12 computations per vertex transformation and 3 light calculations. Finally, the rasterization stage can render 1,000,000 polygons if each polygon covers 100 pixels on average. Even with a very conservative measurement, this is not sufficient to render a virtual scene with a thousand objects of a hundred thousand polygons each.

(a) Girl: model with no texture (b) Campus: model with lots of textures
Figure 4 Models for rendering time measurements
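These per-stage figures can be combined into a rough bottleneck estimate. The sketch below is a back-of-the-envelope calculation that uses the numbers quoted above as assumed constants (the names and exact values are illustrative, not measured); it computes the per-frame polygon budget of each stage and reports the smallest one as the pipeline bound.

```python
# Assumed stage capacities, taken loosely from the figures quoted above.
FPS = 60                     # target frame rate
BUS_BYTES_PER_SEC = 1e9      # AGP4x-class bus bandwidth
BYTES_PER_POLYGON = 100      # position, color, normal, texture coordinates
TRANSFORMS_PER_SEC = 100e6   # geometry processor throughput
OPS_PER_POLYGON = 12 + 3     # transform work plus three light calculations
PIXELS_PER_SEC = 1e9         # rasterizer fill rate
PIXELS_PER_POLYGON = 100     # average screen coverage of one polygon

def per_frame_polygon_budgets():
    """Polygon budget of each pipeline stage for a single frame."""
    frame_time = 1.0 / FPS
    return {
        "bus":      BUS_BYTES_PER_SEC * frame_time / BYTES_PER_POLYGON,
        "geometry": TRANSFORMS_PER_SEC * frame_time / OPS_PER_POLYGON,
        "raster":   PIXELS_PER_SEC * frame_time / PIXELS_PER_POLYGON,
    }

budgets = per_frame_polygon_budgets()
stage, limit = min(budgets.items(), key=lambda kv: kv[1])
# The slowest stage bounds the whole pipeline.
print(f"bottleneck: {stage}, about {int(limit):,} polygons per frame")
```

Under these assumed numbers the geometry stage is the limiting one, which matches the qualitative conclusion above: a scene containing many objects of a hundred thousand polygons each does not fit the per-frame budget.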

Table 1. Rendering time (ms)

Model    Rendering resolution    Time cost (ms)
Girl     640x...                 ...
Girl     ...                     ...
Girl     ...                     ...
Campus   ...                     ...
Campus   ...                     ...
Campus   ...                     ...

These observations are easily verified with some tests on each rendering pipeline stage [Moller99]. Table 1 shows the time required to render the objects shown in figure 4. The Girl model has no textures and the Campus scene has more than 100 textures; the numbers of polygons are similar (24,000 polygons for the Girl model and 19,000 for the Campus). The rendering time is measured for the different models at different screen resolutions. For the Girl model, pixel processing is not a bottleneck, but for the Campus scene with a similar number of polygons, the pixel processing stage is the bottleneck of the rendering pipeline: the rendering time of the Campus model drops as the image resolution is reduced. From this observation, we define the polygon and the texture image as the primitives of the rendering process.

Along with the polygons and texture images, the number of lights is another important factor in rendering time, since more lights require more geometric computation. The first two primitives describe the object itself, whereas the lights are primitives of the environment; the primitives of the environment are the number of lights, special effects, and so on. Other factors that affect rendering performance are device parameters such as screen resolution. In this paper, we focus on the primitives of the object itself and devise a new method to render complex objects in real time. The purpose of this research is to devise a method to simplify a complex object so that it can be rendered in real time while preserving the quality of the rendered image. The complexity of the object is determined by the time cost of rendering it. In the next section, previous methods to improve rendering performance are described.

1.2.2. Methods for enhancing the performance of the rendering pipeline

There have been many efforts to improve rendering performance across the rendering pipeline. These methods fall into caching methods, compression methods, culling methods and simplification methods.

Caching methods

The caching method is a very effective and popular one. Basically, caching methods try to cache information in order to relieve the burden on the bottleneck stage; by reducing the time cost of the bottleneck stage, the performance of the entire pipeline is increased. In modern rendering systems, high-speed cache memory in the rendering subsystem is used to cache information to be transferred over the bus, or to cache intermediate computation results of the geometry processor. The caching mechanism is also effective for texture images: image caching methods reduce the cost of transferring large amounts of image data by keeping images in high-speed cache memory rather than transferring them every time [Igehy98][Cox98]. For geometry, most current rendering APIs provide control over the caching mechanism; geometry data can be kept in cache memory, and intermediate computation results can be stored in the special form of display lists [OpenGL]. Caching is thus adopted to improve bus transfer performance or the computations of the geometry processing stage.

Compression methods

The caching methods exploit temporal coherence in the computation and in the information needed. Other approaches reduce the data explicitly. The compression method reduces processing time by removing redundant data. There have been approaches that reduce the number of bytes transferred over the bus by encoding information as offsets or within its significant range of values [Deering95][Gueziec99b][Gumhold98][Isenburg2000][Karni2000][Khodakovsky2000][Taubin98a][Taubin98b][Wiley97]. The compressed data is decompressed in a later stage such as the geometry processor or the pixel rasterizer. Along with geometry compression, texture images are also compressed to reduce transfer cost [NVIDIA][Haindl98]. Some of these methods are implemented in consumer hardware.

The compression method reduces the required transfer bandwidth, so it is good for a rendering system whose bottleneck is the bus.

Culling methods

The culling method reduces costs across the entire rendering pipeline, whereas the previous two methods target a specific stage. The culling method eliminates the portion of the data that is not visible in the final rendered images; by eliminating this data before pipeline processing, every pipeline stage benefits. This method is especially good for a system with surplus processing power in the application processor, which is the case for most current rendering systems, as stated in the previous section. The culling method removes unseen polygons from the rendering pipeline. These methods are classified as frustum culling, back-face culling and occlusion culling, according to the features used [Heo2000]. Culling improves the performance of the rendering pipeline regardless of which stage is the bottleneck, but it is not effective for a scene in which many complex objects are visible from the current camera position. For this case, we need to reduce the complexity of the visible objects.

Mesh simplification

The mesh simplification approach reduces rendering time by reducing the amount of data to be processed, which is similar in spirit to culling. It simplifies the shape information to be processed, while the culling method removes information that does not need to be processed at a specific rendering time. Mesh simplification not only reduces complexity but also lowers the quality of the rendering. It loses image quality, but it is the only method that can cope with very complex objects that cannot be processed in real time by the other methods. Mesh simplification is applied to very complex meshes obtained from laser scanners [DeHaemer91][Gueziec95][Gueziec99a][Kalvin96][Klein96]. These methods simplify the complex mesh to be processed. Based on them, the simplification approach is applied to the real-time rendering process: the complex object is converted into simpler forms, and a simplified object substitutes for the complex object as necessary.

The simplification method produces different models for the same object. The model consisting of a set of different complexities for the same object is defined as the multiresolution model. This research concerns the generation method of the multiresolution model for a given complex object.

1.3. Multiresolution model

1.3.1. The multiresolution model for real-time rendering

The multiresolution approach is applied to real-time rendering by generating models with simplified geometry and by selecting the detail model to be rendered. This method trades off image quality against rendering time. Clark et al. proposed a multiresolution model for the visibility culling process [Clark76][Airey90]. Their method first computes visibility information using the simplified model; the result is then refined with a more detailed model when enough computation time is available. This idea was combined with mesh simplification by using the simplified model in rendering [Heckbert94].

The multiresolution approach has two stages, the modeling stage and the rendering stage. In the modeling stage, the original complex model is simplified into a series of models with different details to form a multiresolution model; this stage is called the multiresolution model generation process. The generated multiresolution model is used to select a model of proper detail during real-time rendering; the latter stage is the multiresolution model selection process.

There has been much research on multiresolution model generation methods. These methods use simplification operations to build the multiresolution model; they are described in chapter 2. The simplified results are expressed as a single multiresolution model. The multiresolution structure is a hierarchy over the complexity of models. A discrete multiresolution structure is a sequence of models from the most simplified one to the most detailed one; it is simple to use in real-time rendering and has been the most popular form in real-time rendering systems so far. In addition to the discrete structure, there have been other structures such as hierarchical structures based on wavelets and B-splines [Lounsbery94][Eck95], tree-like structures on generic polygons [Hoppe97][Garland97], and many others [Falby93][Koh94].

The multiresolution model selection method selects the appropriate detail model so that the degradation of rendering quality is small. The degradation is measured by selection metrics such as distance from the camera, speed of the object, size of the object, etc. [Black87][Funkhouser93][Ohshima96].

Using these metrics, a small object or an object located far from the camera is rendered with a low-detail model. The multiresolution model has been applied to various applications including terrain rendering [DeBerg98][Lindstrom96][Cignoni95][CohenOr96][Duchaineau97][Floriani95][Garland95][Rabinovich97], multiresolution analysis and editing of shape [Kobbelt98], and morphing [Lee99]. The multiresolution model approach can also be combined with other performance enhancing methods; El-Sana et al. proposed applying compression techniques such as triangle strips to the simplified object [ElSana99].

This research deals with the generation of the multiresolution model. The multiresolution model can be used for various types of information; the multiresolution model in this paper concerns the object shape rather than other information such as visibility. The shape of an object is represented using the rendering primitives, which are categorized as object primitives, environment primitives and device primitives. As stated in section 1.2, the object primitives are mainly the polygon geometry with color information and the texture images. In this research, we propose a new multiresolution structure for the polygon geometry and the texture image.

The multiresolution model should provide enough variety of complexity, since the resolution of the model is selected to achieve real-time performance. Generally, a high-complexity model gives high image quality and a low-complexity model gives low image quality. To allow a proper selection, the multiresolution model should be able to provide the required complexity for the object, and to preserve image quality, it should preserve as much visual fidelity as possible at a given complexity. From this point of view, the multiresolution model is defined as a representation of the shape as a set of models with monotone complexity and monotone fidelity. Complexity and fidelity are measured as time cost and image quality; the measuring criteria are discussed in the next section.

The monotone complexity relation requires that the models in a multiresolution representation have monotonically increasing complexity from the simplest model to the most complex model. Under this relation, the models in the multiresolution model form an ordered graph where each node is a simplified form of the model and each edge denotes the relation. In the rendering process, a node that satisfies the given rendering time bound is selected; among nodes with the same complexity, the node with the highest fidelity is selected. Furthermore, all nodes should be generated to preserve as much fidelity as possible.
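As an illustration of such a selection metric, the sketch below chooses a detail level from the projected screen size of the object, which combines object size and camera distance. The thresholds, function names and parameters are illustrative assumptions, not a selection rule from the thesis.

```python
import math

def projected_size(object_radius, distance, fov_y, screen_height):
    """Approximate height in pixels covered by the object's bounding sphere."""
    angular = 2.0 * math.atan2(object_radius, distance)
    return screen_height * angular / fov_y

def select_level(levels, object_radius, distance,
                 fov_y=math.radians(60), screen_height=1024):
    """levels: list of (min_pixels, model) ordered from coarse to fine.
    Returns the most detailed model whose pixel threshold is met."""
    pixels = projected_size(object_radius, distance, fov_y, screen_height)
    chosen = levels[0][1]
    for min_pixels, model in levels:
        if pixels >= min_pixels:
            chosen = model
    return chosen

# Example: three detail levels of the same object.
levels = [(0, "coarse"), (100, "medium"), (400, "fine")]
print(select_level(levels, object_radius=1.0, distance=50.0))   # far away -> coarse
print(select_level(levels, object_radius=1.0, distance=2.0))    # close up -> fine
```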

The monotone fidelity requirement is the condition that a model with high complexity should preserve more fidelity than models with low complexity. More strictly speaking, the model selected by the selection criteria at given parameters should have higher fidelity than the model of lower complexity selected by the same criteria. To satisfy this condition, we impose the monotone fidelity requirement on the multiresolution model. Let's say a multiresolution model is a set of models M_i where i = 0..n, and there is a monotone complexity relation between M_i and M_j such that the complexity of M_i is greater than the complexity of M_j. In this case, the fidelity of M_i should be greater than or equal to the fidelity of M_j. If M_j had higher fidelity than M_i, we would never need to use M_i, because we would already have a model with less complexity and higher fidelity; M_i would be unnecessary, redundant information in the multiresolution model. So the multiresolution model should satisfy the monotone fidelity condition. The goal of the multiresolution model generation method is to generate a set of simplified models that has the monotone complexity relation together with the monotone fidelity condition.

1.3.2. Complexity and fidelity

The complexity of a model in the multiresolution model is the rendering time cost required to render it: a simple model can be rendered within a small time budget, while a complex model requires more time. The complexity is measured by the factors that determine the rendering speed. The three kinds of rendering primitives are object primitives, environment primitives, and device primitives. Object primitives are the elements that define the shape of the object. Environment primitives are the elements that define environmental behavior such as lights, atmospheric effects, etc. Device primitives are hardware-related factors such as screen resolution. For multiresolution model generation, we need to consider the object primitives; the other primitives either are not directly related to the object model (special effects, screen resolution) or their performance impact is determined by the complexity of the object model (lights). This research deals with the object primitives; effects caused by other primitives would require a multiresolution model for those primitives.

Object primitives are basically defined as vertices on the object surface and the connectivity among them. In this research, we assume that the shape of an object is expressed as a polygon mesh. There are other representations besides the polygon mesh, such as image-based models, point-based models, volumetric models, etc. The polygon mesh is a piecewise linear approximation of the primitives in space, while the other methods are based on sampling primitives in space. The polygon mesh has been used in most rendering systems, and it has an advantage over the other methods because it is not based on sampling. In this paper, we deal with the polygon mesh.

The problem of using a polygon mesh as a shape descriptor is that other information, including color, is not approximated by the piecewise linear approximation of the position information. The texture image and texture map are the means to add a color distribution to the linearly approximated surface. The polygon mesh and the texture image are thus the basic primitives of the object shape.

The primitives are processed through the pipeline stages. The throughput of each pipeline stage and the bandwidth for transferring data between successive stages determine the rendering performance. The rendering time is determined by the cost of processing and the cost of transferring. The cost of processing depends on the amount of computation: for the geometry processor it is determined by the number of polygons, and for the pixel processor it is determined by the size of the texture images to be processed. The total processing cost is determined by the maximum of the two, because the stages form a pipeline. The cost of transferring is directly related to the data size, which is again determined by the number of polygons and the size of the texture images. The total cost of rendering is a combination of the time required for transferring and processing, and both are determined by the number of polygons and the size of the texture images. So it is natural to use the number of polygons and the size of the texture images as measurements of the rendering cost.

The visual fidelity is a measurement of the image quality of the simplified model with respect to the original model. The image is determined by the rendering primitives; if two models have similar object primitives, we can expect the rendered image quality to be similar. Most research on mesh simplification relies on this observation, and the most common measurement of fidelity is the maximum point distance between the two models. Although the object primitives are important features, the effect on the rendered image may differ due to changes in the other primitives; to handle this effect, object primitives should be filtered and manipulated in real time. It is also possible to use the image itself as a measurement, and there are approaches that study the perceptual difference between two models to measure the fidelity of the multiresolution model [Luebke2002]. Fidelity measurement is discussed further in chapter 2.

In summary, the complexity of the shape model is defined as the rendering time cost; it is expressed by the amount of shape data, namely the number of polygons and the size of the texture images. The fidelity of the shape model is usually expressed using the object primitives. The multiresolution generation method should reduce the complexity of the rendering while preserving as much of the visual fidelity of the model as possible.
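To make the cost and fidelity measures concrete, the following sketch assigns each detail level a complexity derived from its polygon count and texture size (taking the larger of the geometry and pixel stage costs, plus a transfer term) and checks the monotone complexity and monotone fidelity conditions over an ordered set of levels. The per-unit costs and the use of negated maximum vertex deviation as the fidelity score are assumptions made only for illustration.

```python
from dataclasses import dataclass

@dataclass
class Level:
    polygons: int          # number of polygons in this detail level
    texture_bytes: int     # total size of texture images used by this level
    max_deviation: float   # geometric error with respect to the original model

# Assumed per-unit costs (arbitrary units, for illustration only).
COST_PER_POLYGON = 1.0
COST_PER_TEXEL_BYTE = 0.001
TRANSFER_PER_POLYGON = 0.5

def complexity(level: Level) -> float:
    """Pipeline cost: max of geometry and pixel stage costs, plus transfer."""
    geometry = COST_PER_POLYGON * level.polygons
    pixel = COST_PER_TEXEL_BYTE * level.texture_bytes
    transfer = TRANSFER_PER_POLYGON * level.polygons
    return max(geometry, pixel) + transfer

def fidelity(level: Level) -> float:
    """Higher is better; here simply the negated maximum vertex deviation."""
    return -level.max_deviation

def is_monotone(levels):
    """levels ordered from simplest to most detailed."""
    ordered = all(complexity(a) <= complexity(b) for a, b in zip(levels, levels[1:]))
    faithful = all(fidelity(a) <= fidelity(b) for a, b in zip(levels, levels[1:]))
    return ordered and faithful

# Polygon counts borrowed from the Figure 5 example; texture sizes and
# deviations are made-up numbers.
levels = [Level(512, 1 << 16, 2.0), Level(5_017, 1 << 18, 0.6), Level(20_694, 1 << 20, 0.0)]
print(is_monotone(levels))   # True for this example
```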

1.4. Overview of thesis

In this research, we propose a new method to generate a multiresolution model for a complex object. The proposed multiresolution model is a multiresolution structure over the polygon geometry and the texture images. In chapter 2, the issues of the multiresolution model of the polygon geometry are discussed; M-AGSphere, the proposed multiresolution structure for the polygon geometry, is described along with other popular multiresolution model generation methods. The generated multiresolution model has problems with texture mapping, identified in chapter 3 as the texture boundary problem and the texture distortion problem. To solve these problems, we build a new texture map for the multiresolution model; the proposed generation method is also described in chapter 3. The multiresolution structure for the texture is described in chapter 4. Together with the multiresolution model for the polygon geometry, the multiresolution texture makes a complete set of primitives for rendering the object. Finally, applications and future work on the multiresolution model are stated in chapter 5.

2. The multiresolution geometry

2.1. Introduction

The multiresolution method is one of the research efforts to generate rendered images of very complex virtual worlds while maintaining real-time display rates. To deal with this problem, the multiresolution method uses a multiresolution model to represent the 3D shapes of objects at multiple levels of detail. The multiresolution method is the process of generating and rendering multiresolution models that reduce rendering costs while preserving the visual fidelity of the rendered images. Polygons and textures are among the most important rendering primitives. In this chapter, we focus on the polygonal geometry; several methods to build a multiresolution model for a polygonal geometry are described. We call the multiresolution model for a polygonal geometry the multiresolution geometry model. Since the rendering cost is usually proportional to the number of polygons of the object, the multiresolution geometry model is a set of successive coarse-to-fine polygonal representations of the original polygon mesh.

Figure 5 Example of the multiresolution geometry. The number of polygons is reduced from 20,694 (top-left) to 10,002 (top-right), 5,017 (bottom-left) and 512 (bottom-right)

The perceptual quality of rendered images with different detail models is usually measured by the rendered shapes in the images. From a geometric point of view, the rendered shape of a 3D object is determined by three major components of the scene: the object itself, the camera parameters, and the lights. Lights determine the colors of image pixels through their relations with the camera parameters and the innate color materials of the object. The camera parameters for rendering are the position and the orientation of the camera; they determine the visible parts of the object and its silhouette. In other words, features like silhouettes are formed from object shape features by applying the camera parameters, and the same holds for lights. We can therefore see the visual properties of an object from an object-centered view, a camera-centered view and a light-centered view, and we call the corresponding features object-centered features, camera-centered features and light-centered features. Object-centered features are features expressed from an object-centered point of view, such as the curvature of a surface part, the area of a surface part, or the volume of an object part. Camera-centered features are features determined by the camera parameters, such as view silhouettes and visible parts. Light-centered features are determined by the light parameters, such as light silhouettes. The multiresolution model generation process can then be expressed as successively reducing the number of polygons while preserving these visual features.

Usually, we deal with rigid objects instead of deformable objects. Object-centered features are static for a rigid object, whereas camera-centered features change through navigation in the virtual world, and light-centered features change through light movements or light on-off behaviors. Because of that, most multiresolution model generation methods use object-centered features; light-centered and camera-centered features are incorporated at rendering time, combined with the pre-generated object-centered features.

If we model the object as having an energy for these features, the simplification process selects the polygons that minimize the energy difference. The entire energy is the sum of the energies from the three kinds of features. In the following equation, the camera-centered energy e_v is computed from the camera-centered feature f_v, which is a filtering function that expresses camera-centered features in terms of the object-centered features f_o. The light-centered energy e_l is computed from the light-centered feature f_l, which is likewise a filtering function of the object-centered feature. e_o is the object-centered energy, computed directly from the object-centered feature f_o.

E_G = e_v + e_l + e_o

If a geometry G is simplified to a geometry G', G' has energy E_G'. The goal of the simplification process is to select the vertices and polygons that minimize E_G - E_G'.
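The decomposition E_G = e_v + e_l + e_o can be sketched directly. In the snippet below each vertex carries precomputed feature values; the particular features (vertex deviation, closeness to the view silhouette, closeness to a light silhouette) and the weights are illustrative stand-ins, since the text fixes only the general form of the sum.

```python
def total_energy(vertices, w_camera=1.0, w_light=1.0, w_object=1.0):
    """E_G = e_v + e_l + e_o for one geometry G.

    Each vertex is a dict with precomputed feature values (all assumed):
      'deviation' - object-centered feature f_o (e.g. vertex deviation)
      'view_sil'  - camera-centered feature f_v (closeness to the view silhouette)
      'light_sil' - light-centered feature f_l (closeness to a light silhouette)
    """
    e_o = w_object * sum(v["deviation"] for v in vertices)
    e_v = w_camera * sum(v["view_sil"] for v in vertices)
    e_l = w_light * sum(v["light_sil"] for v in vertices)
    return e_v + e_l + e_o

# Energy difference E_G - E_G' that the simplification tries to keep small.
original   = [{"deviation": 0.0, "view_sil": 0.4, "light_sil": 0.1} for _ in range(4)]
simplified = original[:3]   # pretend one vertex was removed
print(total_energy(original) - total_energy(simplified))
```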

In this chapter, we describe the multiresolution geometry generation method. In section 2.2, we review previous methods. In section 2.3, we look more closely at the most popular method, based on edge collapsing, and add some modifications so the method can be used in a later chapter. In section 2.4, a new generation method incorporating all three kinds of features is proposed.

2.2. Previous work

The multiresolution geometry model generation is a sequence of object simplifications. For a polygonal model, object simplification is one of the popular topics in computer graphics research, and there have been many different simplification methods. Traditionally, the object-centered features have been the only features used in simplification. Among the different object-centered features, geometric distance has been the popular measure for energy differences. We call this kind of simplification geometry only simplification, since it uses only geometric information such as vertex positions. While vertex positions and polygon connectivity are important features of the polygonal shape, color is another important feature that determines the rendered image of the object. There have been several approaches that consider color and other properties along with the geometric information; we call this property preserving simplification. Property preserving simplification assumes that polygons or vertices incur a high energy difference in simplification when their properties differ from those of their neighbors. It usually gives promising results, but it is not well suited to surfaces with many different properties. Furthermore, when we deal with texture maps, conventional property preserving simplification methods do not measure energy differences for simplifications across texture boundaries. To deal with this issue, there have been approaches that re-generate texture maps over the simplified regions; we call these methods texture generation methods. In this section, we summarize previous work in these three categories.

Along with the object-centered features, there have been approaches that incorporate the camera-centered features. Camera-centered features like silhouettes are usually expressed as transformed properties of the object-centered features. Several methods to preserve camera-centered features exist; we discuss them in the last part of this section.

2.2.1. Geometry only simplification

Geometry only simplification is a simplification process on the polygon mesh that removes vertices, edges and faces. In the early days of geometry simplification, methods such as vertex sub-sampling or vertex clustering were used [Rossignac93]. These methods select representative vertices from the complex model and regenerate a simplified object using the selected vertices. Although they easily simplify the original complex geometry, they have no energy difference measurement, so it is hard to tell how close the result is to the original shape and how well it preserves important object features.

The vertex decimation method is a sequence of vertex removals that preserves the object shape [Schroeder92]. This method uses a decimation criterion to determine which vertex is to be removed: the vertex with the smallest criterion value is removed, and the remaining area is re-triangulated to make a proper mesh. Usually, vertex deviation is used as the decimation criterion. Vertex deviation is the positional difference of the simplified vertex after the simplification; we can assume that removing the vertex with the smallest deviation gives a small shape change, which results in a small difference in the rendered image. In the decimation method, the energy is expressed as the sum or the maximum of the decimation criterion values.

Figure 6 Vertex deviation

After the vertex decimation method was proposed, it became one of the most popular simplification methods, and there have been many enhancements of the decimation criterion. Soucy et al. proposed a re-triangulation error metric as a decimation metric [Soucy96]; it is based on differences of surface normal vectors and vertex deviations, making use of curvature information on the surface. Klein used Hausdorff distances and normal deviations as deviation criteria [Klein96]. Along with the vertex decimation methods, other decimation methods such as edge decimation and face decimation are also popular in commercial products. In the opposite direction from decimation, there have been refinement approaches that construct an object with the desired number of polygons by successively adding vertices to the simplest polygon. These methods use criteria similar to the decimation method for adding vertices and usually utilize a parametric hierarchy such as wavelets [Lounsbery94][Eck95], subdivision surfaces [Floriani95], and B-splines [Forsey95].
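As a small aside on the decimation criterion described above, the sketch below measures vertex deviation as the distance from the candidate vertex to the best-fit plane of its one-ring neighbors; this particular formula is our assumption, chosen only to make the idea concrete.

```python
import numpy as np

def vertex_deviation(vertex, neighbors):
    """Distance from `vertex` to the best-fit plane of its one-ring neighbors:
    an illustrative stand-in for the decimation criterion."""
    pts = np.asarray(neighbors, dtype=float)
    centroid = pts.mean(axis=0)
    # Plane normal from the smallest singular vector of the centered neighbors.
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]
    return abs(np.dot(np.asarray(vertex, dtype=float) - centroid, normal))

# The candidate with the smallest deviation is removed first.
ring = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
print(vertex_deviation((0.5, 0.5, 0.05), ring))   # small: nearly flat, cheap to remove
print(vertex_deviation((0.5, 0.5, 0.8), ring))    # large: removing it changes the shape
```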

There have also been other approaches, such as sampling based on the surface shape [Turk92], selecting polygons within lower and upper geometric bounds [Cohen96], building a new geometry from an octree hierarchy [Gran99][He95][Shekhar96], and using simplicial elements for shape representation [Popovic97].

Currently, the most popular methods are based on an edge collapsing operator [Hoppe96][Garland97][Gueziec97][Luebke97][Xia96]. An edge collapse is an operation that merges two vertices into one vertex. Figure 7 shows an example of an edge collapsing operation: the two vertices v_t and v_s are merged into v_s. Other methods, such as the decimation method, require re-triangulation over the simplified area, which is relatively expensive; the edge collapsing method has a small re-triangulation cost. It also has the property that the transition from a detailed mesh to a simplified one is continuously defined [Hoppe97]. These properties enable smooth transitions between two different detail levels at rendering time [Hoppe96] and real-time selection of the features to be preserved.

Figure 7 Typical simplification using edge collapse [Hoppe97]

Furthermore, the edge collapsing operator is easily invertible into the vertex split operator, which enhances its real-time controllability.

All the methods described in this section deal with object-centered features. They try to remove the vertex, edge, or polygon that gives the smallest energy difference from the original complex model. As a measurement of the energy contribution of each element, vertex deviation is widely used; along with it, several other measurements are used, such as curvature information and polygon area [Astheimer94].
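A minimal sketch of a half edge collapse on an indexed triangle list is shown below (v_t is merged into v_s). This is a simplified illustration only; it ignores the vertex mapping table, face flipping tests and region bookkeeping that the generation method discussed later in the thesis maintains (cf. Figures 12 to 14).

```python
def half_edge_collapse(vertices, faces, v_t, v_s):
    """Collapse the edge (v_t, v_s) by merging v_t into v_s.

    vertices: list of (x, y, z); faces: list of vertex-index triples.
    Returns a new face list; triangles that contained both endpoints of the
    collapsed edge degenerate into lines and are dropped.
    """
    new_faces = []
    for a, b, c in faces:
        # Redirect every reference of v_t to v_s.
        tri = tuple(v_s if v == v_t else v for v in (a, b, c))
        if len(set(tri)) == 3:
            new_faces.append(tri)
    return new_faces

# Two triangles share the edge (1, 2); collapsing it removes both of them.
verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
faces = [(0, 1, 2), (0, 2, 3), (1, 3, 2)]
print(half_edge_collapse(verts, faces, v_t=1, v_s=2))   # -> [(0, 2, 3)]
```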

2.2.2. Property preserving simplification

Geometric properties are important object-centered features, but other features also contribute to the rendered image quality: color, which directly contributes to the rendered color information; normal vectors, which modify colors in combination with lights; and texture coordinates. There have been several approaches to include these features in simplification. These approaches extend the energy space from the 3-dimensional geometric space to an extended space that includes color, normal and texture coordinates.

The normal vector of a vertex or face was one of the first properties used in simplification. Using the decimation operator, Soucy et al. used normal difference as one of the decimation criteria [Soucy96]. Kim included normal difference along with topological difference in the vertex deviation error metric [Kim97]. Garland et al. literally extended the energy space into a combined position, color and normal space [Garland98]: the energy is defined in the extended space and the vertex with the smallest energy difference is removed. Instead of extending the dimension of the energy space, Cohen et al. used the mapped deviation of the normal and color in the 3-dimensional geometric space [Cohen98]. To accomplish this, Cohen defined a normal map and a texture map which map normal and color into the geometric space; the deviation is expressed as the positional difference between points that are mapped to the same point in a normal map or a texture map. Recently, Lindstrom proposed deviation criteria based on image differences [Lindstrom2000a]. This method calculates the energy of the simplified mesh from image differences with pre-generated rendered images of the original detailed model and removes the vertex that gives the smallest image difference. It gives very good results when the object is rendered from a camera position near the pre-generated images, but the energy difference in other directions is not well defined.
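A minimal sketch of the extended-space idea follows: each vertex is embedded in a combined position/color/normal space, and the cost of merging two vertices is their distance in that space. The weights are illustrative assumptions, and this is not the quadric formulation of [Garland98].

```python
import numpy as np

def extended_vertex(position, color, normal, w_color=1.0, w_normal=1.0):
    """Embed a vertex into the combined position + color + normal space."""
    p = np.asarray(position, dtype=float)
    c = w_color * np.asarray(color, dtype=float)
    n = w_normal * np.asarray(normal, dtype=float)
    return np.concatenate([p, c, n])          # a 9-dimensional point

def collapse_cost(v_from, v_to):
    """Energy difference of merging v_from into v_to in the extended space."""
    return float(np.linalg.norm(extended_vertex(*v_from) - extended_vertex(*v_to)))

# Same geometry, different colors: the property-preserving cost is non-zero
# even though the purely geometric distance is small.
red  = ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 0.0, 1.0))
blue = ((0.1, 0.0, 0.0), (0.0, 0.0, 1.0), (0.0, 0.0, 1.0))
print(collapse_cost(red, blue))
```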

Figure 8 Simplified results of property preserving method [Garland98]. The left mesh is simplified using the geometry only method (middle) and the property preserving method (right)

Figure 8 shows the image difference between the two methods: property preserving methods give more plausible results than geometry only methods. These methods work well if the properties are well defined and continuous, as in a texture map or a normal map. If a simplification occurs across the boundary of the texture map, the continuity of the properties is not defined and conventional methods fail to measure any energy difference for that case. This causes an unmeasured energy loss, which produces a very different rendered image. To prevent this anomaly, these methods forbid simplification across different properties such as texture boundaries or color boundaries. By preventing simplification over boundaries, the energy is preserved through simplification, but the simplification result is restricted by the shape of the texture map or the normal map. As a result, they do not give sufficient simplification for complex mapped surfaces.

2.2.3. Texture generating simplification

Texture generating simplification approaches have been proposed for the purpose of substituting a texture for complex geometry. A texture image mapped on a planar polygon acts as a rendered image of the geometry from specific camera parameters. Using texture generating simplification, we can simplify geometries without losing property details: the method generates a representative texture image or normal image for the simplified polygon.

The imposter is an image that represents a part of a complex object. The imposter is usually generated by capturing a rendered image from a predefined camera position; it is mapped on the polygon best suited for that camera position [Maciel95]. Using the imposter, we can express the complex object with a very small number of polygons.

Figure 9 Example of the imposter [Maciel95]

Shade et al. proposed a hierarchical structure of imposters for an entire scene with many objects [Shade96]. While the imposter is good at expressing detail with a small number of polygons, it is very hard to define imposters for general polygon meshes. The imposter is also not suitable for a multiresolution model with continuous detail levels. Furthermore, the rendering result is erroneous for viewing directions for which the imposter is not defined, which prevents improving the rendered image quality by using more polygons. Sillion proposed re-meshing the imposter area to enhance the rendered results [Sillion97]. Schaufler suggested a re-generation method for the imposter using image warping techniques [Schaufler98], and more general image-based rendering techniques have been used as well [Oliveira2000]. But these methods still have difficulties controlling the visual fidelity by using different numbers of polygons.

There have been approaches that combine geometric representations with image representations in a more controllable manner. Aliaga et al. proposed a method of combining geometry with texture images [Aliaga96]. They used geometry representations (polygon meshes) for the near scene and texture images for the far scene (Figure 10).

Figure 10 Using geometry information with texture images [Aliaga96]

These approaches are well suited to indoor scenes where the world is partitioned into discrete cells and portals; the geometry beyond a portal is substituted by a single polygon with an image [Aliaga97].

For outdoor scenes, Ebbesmeyer used virtual walls with mapped texture images for objects in the far scene [Ebbesmeyer98]. Aliaga et al. also proposed a method to control the number of polygons in real-time rendering [Aliaga99b]. There have been many variations on these methods, including methods using image warping techniques [Rafferty98][Aliaga99b] and meshed texture maps [Aliaga98b].

These methods define energy differences as differences in the rendered image, which represents object-centered features along with static camera-centered features. They use pre-generated images produced before rendering and select the proper images in real time; with these images, camera-centered features are captured for the pre-defined camera parameters. While substituting an image for complex geometry gives high-quality rendered results, it does not represent the dynamic properties of camera-centered features well. To capture dynamic camera-centered features in images, there have been methods that replace the captured image with a new snapshot taken at run time from the new camera parameters [Schaufler96][Chen99]; the new image is captured when the energy difference exceeds a pre-defined threshold. These methods give relatively good results, but they require additional rendering cost to capture new images. This cost is not included in the conventional cost model, which makes it difficult to control the rendering time using the multiresolution model.

The methods above generate textures to substitute for complex geometry, but such an image represents only a single detail level rather than a multiresolution model. There have been other approaches that generate texture images for use with the multiresolution model. Certain et al. mentioned the possibility of generating a texture map for a simplified geometry [Certain96]. Their approach makes texture images for the most simplified polygons; the texture images can then be used with the more detailed geometry. This makes the geometry resolution controllable, but it has no energy measurement for the texture map [Certain96][Cignoni98][Cignoni99]. The simplified or refined geometry has errors with respect to the newly generated texture images, so the simplification process or the texture generation process should account for the energy difference caused by the generated texture map. Furthermore, the texture regions of other methods, such as edge collapsing based approaches, are not applicable to their texture generation method. Cohen proposed an energy metric for mapping texture images to different geometries [Cohen98] and used that measure for a texture generating simplification method. Their method pre-partitions the geometry and captures texture images for each partition; it defines successive energy differences during simplification that represent the texture mapping difference for the different geometric details. This method successfully builds a multiresolution model for pre-generated texture images, but it still has problems at texture boundaries, where the energy differences of simplification operations are not defined.

So this method prevents simplification over boundaries [Cohen98], a restriction that limits the simplification by the pre-generated texture images. With a poorly generated set of images, the process does not give enough simplification for real-time rendering.

The approaches so far have problems in measuring the energy difference caused by texture and geometry, and they also have problems with simplification at texture boundaries. To solve this, we generate a new texture map that imposes no restriction on boundary simplification. The new texture map is generated by minimizing the energy difference caused by geometric simplification and texture mapping. In this chapter, we discuss geometry simplification issues; texture-related issues are discussed in chapter 3.

2.2.4. Using camera-centered features and light-centered features

The preceding sections described previous approaches focused on object-centered features. In this section, we discuss methods that deal with camera-centered features and light-centered features. Camera-centered features are related to camera movements, object movements and object deformation; the relative direction and position of the object with respect to the camera are their major components. The same holds for light-centered features. An example of a camera-centered feature is the silhouette; light-centered features form the shadowed parts of objects along with light silhouettes.

In most traditional LOD algorithms, these features are taken into account by selecting one of the pre-defined detail levels at run time. If we select a coarse detail level for an object, it means we choose not to preserve these features of the object. This approach does not fit well for large objects. A large object may occupy most of the virtual space, so that it should reveal different details in different parts to allow real-time rendering. For example, if you stand in front of the Great Wall of China, some parts of the wall are close enough to be rendered in full detail, but other parts are not. Furthermore, even for a small object, if we put more detail on its silhouette than on other parts, the resulting image gets closer to the original image at lower rendering complexity.

There have been several approaches that apply different criteria to a single object in order to preserve its important visual features [Eck95][Cohen96]. These approaches can give arbitrary detail to a part of the object: we can maintain some parts at high detail levels to preserve some features, while putting other parts at lower detail levels.

These approaches give good results by assigning different details to different parts of the object, but they are not easily extended to camera- and light-centered features. Because they do not provide real-time simplification methods, in most cases we must pre-calculate all details of the object. To apply these approaches to camera-centered features, we would have to subdivide an object into a large set of small parts, apply the LOD algorithm to each part independently, and finally join the parts, which are presented at different detail levels, seamlessly; this requires a high real-time cost.

To accommodate camera-centered features in LOD generation, we need a method of real-time multiresolution model generation. A real-time generation method simplifies a set of specific parts on the fly, so that the appropriate detail representation can be generated for time-varying camera parameters. The edge collapsing method is one of the most successful ways to implement such a real-time generation process, and the vertex tree is an example of this approach [Xia96][Hoppe97][Luebke97][Garland97]. Using the vertex tree, camera-centered features are captured by selecting appropriate vertices in the tree.

Along with the features described in this section, we can consider additional features related to perceptual effects [Ramasubramanian99]. Perceptual effects are an extension of camera-centered features which consider not only the position of the camera but also the perception ability of the viewer. Although few results have been reported so far, perceptual features are likely to become one of the most important features for defining the image quality of the rendered result.

Summary

For geometric multiresolution model generation, we use geometry defined by polygons. In this paper, we define the geometry M as follows.

Geometry M
A geometry is defined as a mesh M(V,E), where V is a set of vertices and E is a set of edges, each connecting two of the vertices in V. The geometry is usually given as a 2-manifold surface.

The mesh M is a piecewise linear function of the position on the surface. The number of piecewise linear functions equals the number of polygons

in the mesh M. On the geometry M, an energy measurement is defined as a function of M. The geometry M is simplified through the simplification operator to build up the multiresolution model.

Simplification operator O : (V,E) → (V',E')
Remove a portion of V and E to make a new set (V',E') such that the sizes of V' and E' are smaller than those of V and E. The removed portions are selected so as to minimize the energy difference between the two meshes.

Examples of such operators are vertex decimation and edge collapsing. As a result of the simplification operator, the number of polygons is reduced, so the rendering cost of the geometry is reduced. In this paper, we use edge collapsing and region merging as simplification operators. The goal of the simplification is to choose simplification operands with small energy change. The energy differences can be measured in various manners; in this paper, we use the vertex deviation as the energy element. The energy element is modified by the object, camera and light parameters to express object-centered, camera-centered and light-centered features.

In this chapter, we describe the conventional edge collapsing method as an example of a multiresolution geometry generation method. This method is slightly extended for the new texture generation method described in chapter 3. We describe our new approach to camera-centered and light-centered features in section 2.4. In this research, we consider these two methods as reference geometric multiresolution model generation methods.

The result of the simplification is a multiresolution model. The multiresolution model is a set of geometries {M0, M1, ..., Mn}. The multiresolution geometry also has a monotone complexity relation Rc such that Rc(Mi, Mj) means Mi has less complexity than Mj. It also has a monotone fidelity constraint: if two meshes have the relation Rc(Mi, Mj), then Mj has more fidelity, that is, less energy difference with respect to the original model, than Mi.
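To make these definitions concrete, the following is a minimal C++ sketch of the mesh M(V,E) and of a simplification operator interface. All type and function names here are illustrative assumptions and are not taken from the thesis implementation.

    #include <array>
    #include <vector>

    // Minimal sketch of the mesh M(V,E); edges are implied by faces that share
    // vertex indices, which is sufficient for the operators discussed here.
    struct Vertex { float x, y, z; };
    struct Face   { std::array<int, 3> v; };   // indices into the vertex array

    struct Mesh {
        std::vector<Vertex> vertices;  // V
        std::vector<Face>   faces;     // polygons; E is implied
    };

    // A simplification operator turns (V,E) into a smaller (V',E') while trying
    // to keep the energy difference between the two meshes small.
    class SimplificationOperator {
    public:
        virtual ~SimplificationOperator() = default;
        // Applies one elementary operation (e.g. one edge collapse) in place and
        // returns the energy difference it introduced.
        virtual double apply(Mesh& mesh) = 0;
    };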

2.3. Generation method : edge-collapsing with region hierarchy

Overview

A multiresolution generation method for a geometry model uses simplification operators such as the decimation or edge collapsing operator. Using these operators, the method generates a succession of representations of different detail for a complex geometry model. The generated multiresolution geometry models are then used in the real-time rendering process to select the proper detail level.

A traditional multiresolution model is constructed in the form of discrete levels of detail. If an object with 100,000 polygons is simplified to a 50,000 polygon model, a 20,000 polygon model and a 10,000 polygon model, these four models constitute a single multiresolution model with four different levels of detail. With discrete levels of detail, the real-time rendering process selects the proper detail model among the discrete set, so the selection process adds only a small cost. A shortcoming of this model is that it is impossible to select a model with the desired cost when that cost is not covered by the discrete set. In the previous example, it is not possible to produce a representation with 30,000 polygons; we have to choose the model with 20,000 polygons, which does not utilize all of the polygon-rendering power we have. A more serious problem is the large energy difference between two successive detail levels, so the rendered image changes abruptly when levels are switched; this produces the well-known popping artifact [Hoppe96].

The continuous multiresolution model is a method to reduce these popping effects and to provide a sufficient variety of rendering costs. In a continuous multiresolution model, successive detail levels are defined by removing only a small number of polygons. Furthermore, some continuous multiresolution mechanisms define a morphing structure between two successive detail models, which greatly reduces the popping effect [Hoppe96]. The edge collapsing operator is one of the most popular ways to generate a continuous multiresolution model, and in this section we describe a method based on it.

Edge collapsing operator

An edge collapsing operator is a method of deleting a vertex: a single edge is collapsed into a single vertex. As a result, one edge collapse reduces the number of polygons by one or two for a triangle mesh. Figure 7 shows an example of a typical edge collapse. An edge collapsing operation is paired with a vertex split operation. The vertex split operation is

the inverse process of the edge collapsing operation: it reconstructs the collapsed edge by splitting a single vertex into two connected vertices.

To adopt an edge collapsing operator, the polygon mesh keeps an additional vertex mapping table. By the edge collapsing operation, a vertex is removed and substituted with a new vertex. There are two different approaches to edge collapsing: full edge collapsing and half edge collapsing. Full edge collapsing computes a new vertex position for the two vertices of the collapsing edge. Instead of computing a new position, half edge collapsing uses one of the two existing vertices of the collapsing edge as the new vertex. Because of its simplicity and data efficiency, half edge collapsing is the more popular method, and it is the one we use in this paper.

Figure 11 Half edge collapsing (the vertex position is preserved) and full edge collapsing (a new vertex is generated). Dotted edges are changed to solid edges (right figure)

The object shape is represented by faces, edges and vertices together with the vertex mapping table. With the half edge collapsing method, a single edge collapsing operation replaces the removed vertex with the remaining vertex. Figure 12 shows a typical result of the edge collapse: the vertex Vj is replaced by Vi in the vertex mapping table.

Figure 12 Updating vertex mapping table for half edge collapsing
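As an illustration of how the vertex mapping table can be kept up to date, the following C++ sketch performs a half edge collapse and its inverse vertex split purely as table updates. The names are illustrative and this is not the thesis implementation.

    #include <vector>

    // One half edge collapse: the removed vertex v_j is redirected to the kept
    // vertex v_i (Figure 12). The inverse vertex split restores the identity.
    struct HalfEdgeCollapse {
        int removed;   // v_j
        int kept;      // v_i
    };

    void collapse(std::vector<int>& vertexMap, const HalfEdgeCollapse& op) {
        vertexMap[op.removed] = op.kept;
    }

    void split(std::vector<int>& vertexMap, const HalfEdgeCollapse& op) {
        vertexMap[op.removed] = op.removed;
    }

    // Faces are drawn with resolved indices; a face whose three resolved indices
    // are not distinct has degenerated and is skipped.
    int resolve(const std::vector<int>& vertexMap, int v) {
        while (vertexMap[v] != v) v = vertexMap[v];   // follow chained collapses
        return v;
    }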

Using the edge collapsing operation, the multiresolution model generation sequence is a successive process of selecting collapsing edges. Edges are selected that give a small energy difference with respect to the previous detail model. The expected energy difference is stored on each vertex and represents the change in energy caused by removing that vertex. In this section, we use the vertex deviation as the energy difference measurement. The vertex deviation is defined as the distance between the vertex and its projected position on the polygons modified by the edge collapse. The collapsing operation removes the vertex with the minimum deviation, and the same criterion is applied to edges on the boundary of the mesh. Each edge collapsing operation is stored so that it can be reproduced in real time; the representation of the edge collapsing sequence is described in the next section.

Figure 13 Case of the face flipping

Before collapsing the edge with the smallest energy difference, a validity test takes place. In Figure 13, the edge collapsing operation generates a flipped face: the grid-patterned triangle is flipped by collapsing the thick solid edge in the left figure. We do not invoke such an edge collapse, to prevent this anomaly. A flipped face is easily detected by comparing the normal vector of the face with that of the original face.

As a result of the edge collapsing operation, the removed vertex leaves behind a region bounded by its neighboring vertices. Initially, each polygon is a distinct region. A single edge collapse merges several regions into one: all polygons surrounding the removed vertex are merged into a single region. As edges are collapsed successively, regions are merged into larger regions. The region boundary has the property that its vertices have not yet been involved in any edge collapsing operation; this property is used in chapter 3. In the following figure, polygons belonging to four different regions are merged into one single region with two polygons. If the removed vertex is close to the newly modified edge, the region is partitioned into two separate regions instead of one, which makes regions grow more slowly. The details of this modified method are described in chapter 3. The resulting property is that the region boundary is either not modified by the edge collapse or is close to the removed vertex.

Figure 14 Region merging by the edge collapsing (general region merging and modified region merging)

To store the region structure, we construct a face relation table in which every face points to the representative face of its region. After an edge collapse, we partition all polygons that were connected to the removed vertex into one or two regions. If a region already has a representative polygon that is not removed during the edge collapse, all polygons are mapped to that representative polygon. If there is no representative polygon, or the representative polygon is removed by the edge collapse, the remaining polygons of the region elect a new representative polygon.

Figure 15 Partitioned regions using edge collapsing operations

Figure 15 shows an intermediate result of the region merging; areas with different colors are occupied by different regions. We have modified the original edge collapsing operation to include this region concept. Each edge collapsing operation is stored with a set of region modification information which holds the new and old representative polygons together with the polygons of the affected regions. At run time, the only extra work is updating the polygon mapping for the polygons in the stored information. Furthermore, we do not need to update the mapping of polygons that already point to the old representative polygon, because they follow it through the relation table. This greatly reduces the amount of region information stored per edge collapsing operation.
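A minimal sketch of such a face relation table is given below: merging a region is a matter of re-pointing its representative, and resolving a face walks the chain of representatives. The class and member names are illustrative, not the thesis code.

    #include <vector>

    struct RegionTable {
        std::vector<int> rep;   // rep[f] = face that f currently points to

        explicit RegionTable(int faceCount) : rep(faceCount) {
            for (int f = 0; f < faceCount; ++f) rep[f] = f;   // one region per face at start
        }

        // Walk the chain to the representative face of f's region.
        int representative(int f) const {
            while (rep[f] != f) f = rep[f];
            return f;
        }

        // Merge the region containing 'from' into the region represented by 'to'.
        void merge(int from, int to) {
            rep[representative(from)] = representative(to);
        }
    };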

Multiresolution structures using the edge collapsing operator

A multiresolution structure based on the edge collapsing operation is defined simply by a sequence of edge collapses. Hoppe proposed the progressive mesh structure, which represents such a sequence of edge collapsing operations [Hoppe96]. Starting from the most detailed mesh, each edge collapsing operation defines a mesh that is simplified by one step. By successively applying the stored edge collapsing operations, we can easily select a polygon model with the desired number of polygons.

Figure 16 Vertex tree [Hoppe97]

The vertex tree is an extension of the progressive mesh. A single edge collapsing operation reduces two vertices to a single new vertex. In the vertex tree, this collapsing operation is expressed as a parent-child relationship between the new vertex and the collapsed vertices [Xia96][Hoppe97][Luebke97][Garland97]: the new vertex becomes the parent of the collapsed vertices. In the half edge collapsing method, the new vertex is a copy of the remaining vertex. In the vertex tree, each edge represents a single edge collapse combined with the edges to the sibling vertices. Vertices with no hierarchical relationship can be collapsed in parallel without affecting each other, and a simplified mesh is represented by a cut of the tree. Using the vertex tree representation, we can easily adopt camera-centered and light-centered features, and several works have used the vertex tree for this purpose [Xia96][Hoppe97][Luebke97].

In this paper, we experiment with the progressive mesh structure together with the modified edge collapsing operations. Although our experiments in the following chapters use the progressive mesh structure, the vertex tree structure could be used instead. To use the vertex tree in the following chapters, we would need to store regions for each level of the tree or for each hierarchically disjoint cut of the tree; we will return to this point in a later chapter.
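To illustrate how a stored collapse sequence yields a model with a desired polygon count, the sketch below applies or undoes collapses until the target face count is reached. It reuses the HalfEdgeCollapse, collapse and split helpers from the earlier vertex-map sketch; all names and the record layout are assumptions made for illustration only.

    #include <vector>

    struct CollapseRecord {
        HalfEdgeCollapse op;   // which edge was collapsed
        int facesRemoved;      // 1 or 2 for a triangle mesh
    };

    struct ProgressiveMeshCursor {
        std::vector<int> vertexMap;            // resolved as in the earlier sketch
        std::vector<CollapseRecord> sequence;  // ordered from most detailed to coarsest
        int applied = 0;                       // collapses currently applied
        int faceCount = 0;                     // faces currently alive

        void setDetail(int desiredFaces) {
            // Simplify: apply further collapses while there are too many faces.
            while (faceCount > desiredFaces && applied < (int)sequence.size()) {
                const CollapseRecord& r = sequence[applied++];
                collapse(vertexMap, r.op);
                faceCount -= r.facesRemoved;
            }
            // Refine: undo collapses (vertex splits) while there are too few faces.
            while (faceCount < desiredFaces && applied > 0) {
                const CollapseRecord& r = sequence[--applied];
                split(vertexMap, r.op);
                faceCount += r.facesRemoved;
            }
        }
    };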

Results

Figure 17 Simplification results. From left to right, model with 20,693 polygons, 10,517 polygons, and 550 polygons

Figure 17 shows simplification results obtained with the edge collapse operator. For an object with different colors, the simplification produces large differences in the rendered image. In Figure 18, the left image is the model with 4,984 polygons and the right image is the rendered result of the model simplified by edge collapsing (1,503 polygons).

Figure 18 Geometric simplification for an object with color information

Figure 19 Model generation time and vertex deviation

The graph in Figure 19 shows the real-time edge collapsing cost and the vertex deviation of the simplification for the model in Figure 18. The horizontal axis denotes the number of polygons and the vertical axis denotes milliseconds; the cost was measured on a Pentium IV 1.5 GHz PC. Although edge collapsing is a simple operation, it is not practical to reconstruct the desired mesh from scratch. Usually, the mesh is reconstructed from the mesh of the previous frame, which requires only a small number of edge collapsing or vertex splitting operations. This is a restriction of edge collapsing compared with discrete multiresolution models: to spend only a small amount of time on real-time reconstruction, we must restrict the number of edge collapses per frame by exploiting frame coherence [Hoppe97]. This restriction does not give enough freedom in controlling the number of polygons.

Generation method : AGSphere

Overview

Camera-centered features, especially the silhouette, are determined by the positions of the camera (the view) and the object; if the camera or the object moves, the silhouette changes. We need efficient structures for managing this type of feature, which varies in real time. In this section, we propose the multiresolution AGSphere for efficient management of camera-centered features. The multiresolution AGSphere constructs and manages the directional relationships that matter for silhouettes, for real-time processing. The sphere can also be used for other directional features such as light silhouettes. We show several experimental results for the sphere on view silhouettes and light silhouettes.

In general, simplifying a part of an object is too expensive an operation to execute in real time. To overcome this restriction, structures that maintain simplification sequences have been proposed. These approaches select detail in a polygon-wise or vertex-wise manner instead of for the entire object. The vertex tree is the most successful of these for camera-centered mesh reconstruction on the fly. Although these approaches give a solution to the problem, they still require much processing time in real time unless coherence-based approximations are used.

Because camera- and light-centered features are essentially based on the directional relationship with object parts, it is natural to use these relationships to preserve the features. Vertex tree related methods have drawbacks with regard to the size of the tree: the tree must have as many leaf nodes as the total number of vertices, and a single edge in the tree usually reduces the number of vertices by one, so for a complex object the vertex tree becomes very deep. To handle camera- or light-centered features, we must traverse all visible nodes of the vertex tree, which are at least as many as the visible vertices. Because the directional relationship between the polygons and the camera or the lights changes with every movement, the tree must be re-traversed every frame. For silhouette-preserving simplification, for example, whenever the object or camera moves we basically have to reconsider every part of the object as a silhouette candidate. In this section, we describe a new data structure, the multiresolution AGSphere, which handles this difficulty by structuring the directional relationships between the polygons of an object.

AGSphere (Aligned Gaussian Sphere)

An AGSphere is a structure over the directional relationships of polygons. The AGSphere maps the directional distance between two surface parts to the distance between two points on a sphere, by placing parts with similar directions as neighbors on the AGSphere.

Figure 20 Mapping polygons onto the AGSphere

We assume that an object is represented as a polygon mesh. In terms of polygons, we can define the AGSphere as a Gaussian sphere that maps a polygon to a point on the sphere along its surface normal

vector. A point (x, y, z) on the sphere represents the unit direction vector (x, y, z); polygons with normal vector (x, y, z) are mapped to the point (x, y, z) on the sphere. Each point on the sphere therefore represents the set of polygons with that direction. On the AGSphere, polygons with directions similar to a direction vector v are located in the neighborhood of the point v on the surface of the sphere.

Simplification using a directional feature is a process of merging polygons that are neighbors on the AGSphere, which requires an efficient way to access the neighbors of a given point on the sphere. To provide this, the sphere is partitioned into discrete cells. Each cell occupies a part of the AGSphere and contains a set of regions. This representation partitions the polygons of an object into a discrete number of neighboring clusters, and the polygons in a single cell are considered to have similar direction vectors.

Figure 21 Merging polygons in a cell: original polygon mesh, a view cell, and the merging result (the filled polygons in the right figure are two regions in the cell)

Using this property, we simplify the polygons within a single cell: neighboring polygons in object space are merged whenever they fall into the same cell. Each set of merged polygons forms a region. After merging, each cell contains a set of regions that represent the simplified polygons of that part of the sphere. In Figure 21, neighboring polygons in the same cell are simplified to a few regions.

M-AGSphere : multiresolution extension of the AGSphere

We can make a simpler representation of the AGSphere by using larger cells. A larger cell has regions that are simplified forms of the polygons of a cluster with a wider range of directions. The cell structure on the AGSphere makes it easy to identify the polygons to be merged, and with regions it is natural to generate new simplified regions from previously merged regions.

Region merging is the process of removing the shared boundaries of regions. For efficient access to

the boundaries, we use a half-edge structure for the boundary representation of each region. Using this structure, we merge regions by traversing the region boundaries; Figure 22 illustrates the merging process.

Figure 22 Merging two regions using the half-edge structure: (a) example configuration, (b) traversing the edges shared with a neighbor region in a different cell, (c) traversing the edges shared with a neighbor region in the same cell, (d) result of the traverse with the shared edges removed

The result of this merging is a smaller number of regions whose boundaries keep the vertices of the original boundaries. With a larger cell, the regions produced by this merging get larger; conversely, small cells represent a refined form of the given object through a large number of small regions. This process of merging and refining gives a multiresolution representation of the sphere.

An M-AGSphere is defined by successive refinement of the cells. We generate a refined cell through successive subdivisions starting from the coarsest representation of a cell, which we call the level 0 cell. A level i-1 cell is subdivided into 4 level i cells (Figure 23), and the level i cells construct the level i AGSphere. We define the level 0 sphere by subdividing the sphere into 8 cells. For easy access to the multiresolution sphere, we construct a quad tree (Figure 24): each node in the quad tree is a cell, and the children of a node are its subdivided cells, so the nodes at level i of the quad tree represent the AGSphere at level i-1.

Figure 23 Successive subdivision of a cell

Figure 24 Quadtree representation of M-AGSphere
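A minimal C++ sketch of this cell structure is shown below: the coarsest lookup assigns a polygon to one of the 8 base cells by the signs of its unit normal, and each cell node carries its children, neighbors and regions for the quad tree traversal. The layout and names are illustrative assumptions, not the thesis data structures.

    #include <array>
    #include <cstdint>
    #include <vector>

    struct Region;   // merged polygons of one cell, bounded by half-edges

    // Level 0 lookup: the sphere is split into 8 octant cells and a polygon is
    // assigned by the signs of its unit normal's components. Finer levels
    // subdivide each cell into 4 children, mirroring the quad tree of Figure 24.
    uint32_t level0Cell(float nx, float ny, float nz) {
        uint32_t cell = 0;
        if (nx < 0.0f) cell |= 1;
        if (ny < 0.0f) cell |= 2;
        if (nz < 0.0f) cell |= 4;
        return cell;
    }

    struct Cell {
        int level = 0;                      // 0 for the 8 base cells
        Cell* parent = nullptr;
        std::array<Cell*, 4> children {};   // null at the finest level
        std::vector<Cell*> neighbors;       // kept explicitly for real-time access
        std::vector<Region*> regions;       // merged regions currently in this cell
    };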

Using the quad tree structure, we select arbitrary detail for the object by traversing the tree and selecting nodes. Along with the quad tree, we also keep the neighbor cells of every cell for efficient access; this neighbor relation is used for the real-time control of the AGSphere. We also construct a hierarchical relationship among regions: following the region merging structure, each region is merged into a larger region, and a parent region has the set of child regions that were merged into it. This region structure is used by the methods in chapters 3 and 4.

Real-time control of the M-AGSphere

Generally, the key processes in the real-time control of a multiresolution structure are selecting the parts to be simplified (or enhanced) and simplifying (or enhancing) those parts. In the M-AGSphere, these processes are called the identifying process, the merging process and the splitting process. Merging and splitting simplify and enhance regions; identifying selects the set of regions to be merged or split. In this section, we describe each of these three basic processes.

In the M-AGSphere structure, merging combines two or more neighbor cells. Merging cells induces merging of the regions that belong to these cells. Using the half-edge structure, merging several regions only requires the time to traverse their boundaries: a set of polygons with n edges requires O(n) time for merging through the process described in the previous section.

Splitting is the reverse operation of merging. This process splits a cell into a number of smaller cells by backtracking the previous merging sequence, which subdivides a region of the original cell into a set of more detailed regions.

Figure 25 Merging and splitting of regions and cells in the AGSphere

Identifying is the process of selecting the sets of polygons to be merged or split. Simplification using directional features requires a fast identification method for parts that have similar directions. In this section, we describe our identification method using the silhouette as an example; other directional features that require clustering parts with similar directions can use the same technique.

A silhouette is a part that is orthogonal to the reference direction, such as a viewing direction or the direction of a light. With respect to the silhouette, we can subdivide the object parts into front, back and silhouette parts. If the reference point is the camera, the front parts face the camera and the back parts are not visible from the camera. We call a cell that covers the reference direction a reference cell and cells with orthogonal directions orthogonal cells. Our algorithm chooses a simplified form for the reference cell and uses a refined form for the orthogonal cells. We assume that object simplification occurs when the object is far enough from the reference point that defines the reference direction, so it is safe to consider that the orthogonal cells contain the polygons on the silhouette of the object. Furthermore, our polygon representation keeps the actual boundary details, so silhouette boundaries are exactly preserved in any case.

There are two methods for selecting the detail levels of cells. The static method selects a base detail for the reference cell for the given viewing direction and refines neighbor cells by one more level; if the neighbor cells are orthogonal cells, which define the silhouette of the shape, they are refined by two more levels. The refinement process is repeated until the desired number of polygons is reached. Usually, the static method

gives a good approximation of the shape. To have more control over the shape, we also propose a dynamic method, which uses dynamic reconstruction of the quad tree and frame coherence. The following is the pseudo code of the static level selection.

    Draw(nDesiredPoly)
    begin
        select default cell levels for each cell
        while (nNumOfPoly < nDesiredPoly)
        begin
            TraverseCells(orthogonalCells, nDesiredPoly)
            TraverseCells(undeterminedCells, nDesiredPoly)
            TraverseCells(referenceCells, nDesiredPoly)
        end
        DrawCells(orthogonalCells)
        DrawCells(undeterminedCells)
        DrawCells(referenceCells)
    end

    TraverseCells(cellList, nDesiredPoly)
    begin
        for each cell in cellList
        begin
            if (nNumOfPoly >= nDesiredPoly) break
            expand the cell into 4 children
            put the child cells into the appropriate cell lists
            update nNumOfPoly
        end
    end

The dynamic process starts from the cell levels selected in the previous frame. Initially, we assume all cells have the same level. To give a simplified representation for a reference cell, we merge it with its neighbor cells. In the multiresolution AGSphere structure, most cells satisfy subdivision connectivity with their neighbors, so merging with the three neighbors produces a larger triangular cell, which forms a coarser representation of the cell. We subdivide orthogonal cells into more detailed ones. If the result still has too many polygons, cells are merged with their neighbor cells, starting from the non-reference front cells and then the orthogonal cells. This process is repeated until the polygon count is below the desired number, with the constraint that the reference cell is at most two levels and at least one level coarser than the orthogonal cells. As a result, the quad tree is reconstructed in a form aligned with respect to the reference cell. Figure 26 shows an example of this cell selection. For efficient access, we keep the current partition into reference cells, orthogonal cells and other undetermined cells. The splitting process subdivides the reference cell first and the orthogonal cells last; the merging process takes place in the reverse order.

Figure 26 Multiresolution cell selection
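The identifying step above can be pictured as a simple classification of each cell's direction against the reference direction. The sketch below is an illustrative C++ fragment, not the thesis code; in particular, the silhouette band threshold is an assumed parameter rather than a value from the thesis.

    #include <cmath>

    struct Dir { float x, y, z; };   // unit direction

    enum class CellClass { Front, Silhouette, Back };

    // 'cellDir' is the direction covered by the cell (its centre on the sphere),
    // 'refDir' the unit reference direction, e.g. from the object to the camera.
    CellClass classifyCell(const Dir& cellDir, const Dir& refDir,
                           float silhouetteBand = 0.2f) {
        float d = cellDir.x * refDir.x + cellDir.y * refDir.y + cellDir.z * refDir.z;
        if (std::fabs(d) < silhouetteBand) return CellClass::Silhouette;  // orthogonal cell
        return d > 0.0f ? CellClass::Front : CellClass::Back;
    }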

The only exception to this process concerns cells with singular points, which do not satisfy subdivision connectivity. A singular point is one of the vertices on the base cell boundaries that does not have degree 6. Cells with a singular point do not have three neighbors to be merged. To overcome this deficiency, we choose a neighbor cell with subdivision connectivity as the new center of the merging cells. If the cell is not a cell on the base sphere, there exists a neighbor cell with no singular point; for a cell on the base sphere, there is no need to merge further to simplify the object. The first neighbor with no singular point is chosen as the new cell. By following the sequence of identifying, merging and splitting processes, the simplified representation with the desired number of regions is generated.

The final step of the real-time process is the actual drawing step. Our basic assumption is that there is hardware that can render concave polygons in real time. Given such hardware, rendering with the multiresolution AGSphere is as simple as rendering each region in the current cells. Unfortunately, most currently available hardware cannot render concave polygons properly, so we need to retriangulate the polygon regions in the cells. We propose a simple retriangulation algorithm to render the cells of the AGSphere in real time.

The polygons in a cell are merged into a few simplified regions which, in general, are concave non-planar polygons. First, we simplify these regions: connected boundary parts of a region are simplified into a single edge if the part borders the same neighbor region and the boundary has a small vertex deviation. The vertex deviation threshold is determined by the maximum projected vertex deviation for the cell that contains the region being simplified; each cell stores its maximum vertex deviation as an error threshold, and whenever regions in a cell are merged, the deviation of the removed vertices is calculated. This process produces a more simplified region representation, which gives a smaller number of triangles after retriangulation. We use the resulting regions for rendering.

Figure 27 Simplifying shared boundaries: boundary edges with the same neighbor region (left) and the simplified result (right)

For the static level selection, the boundary simplification can be calculated at pre-processing time. Every region stores its simplified result against a neighbor region of the same detail level, of one level less detail, and of two levels less detail. Since the selection process guarantees that selected cells differ by at most two levels, it is sufficient to store these three cases.

Given a polygon region, the retriangulation method constructs a triangle mesh that allows real-time rendering and remains a close representation of the original mesh. To fulfill these requirements, the retriangulation process must be fast enough for real-time rendering and should preserve the sharp edges of the original mesh.

Figure 28 Cases of retriangulation with virtual edges: (a) the case of removing an edge, (b) the case of removing a vertex (edges of the candidate boundary)

There are two cases in the merging process where retriangulation is required. The first is merging two regions by removing a single boundary edge. If we assume that all polygons are properly triangulated, we simply put a new virtual edge in the place of the removed boundary edge. A virtual edge is an edge that is used to construct the triangle mesh but does not form a boundary of the region.

The second case is the removal of a vertex. Using the boundary and virtual edges, we select the vertices neighboring the removed one; we call the boundary constructed by these neighboring vertices the candidate boundary. All vertices on the candidate boundary have a virtual or boundary edge to the removed vertex. Our algorithm is based on the strategy of constructing a triangle from two edges of the candidate boundary (Figure 29). All vertices on the candidate boundary are ordered in counter-clockwise direction, and the basic operation cuts off selected boundary vertices by creating a set of triangles along the candidate boundary.

Figure 29 Generating new triangles from the candidate boundary (solid line)

Figure 30 Proper and improper triangles for the original mesh

For each vertex on the candidate boundary, we check whether it forms a proper triangle. A proper triangle is a triangle that is contained in the region defined by the boundary and does not intersect other triangles. Let the current vertex be v[i] and the previous and next vertices v[i-1] and v[i+1]. If v[i-1], v[i], v[i+1] forms a proper triangle, we create a virtual edge between v[i-1] and v[i+1], which forms the new triangle v[i-1], v[i], v[i+1]; we then remove v[i] from the candidate boundary, so that v[i+1] becomes the next vertex of v[i-1] and vice versa.

Verifying a proper triangle is simple. We first choose a triangle with a convex point of the candidate boundary and test the new virtual edge for intersections with the other edges. The convexity of a point can be checked using the surface normal vector and the cross product of the edge vectors. We use the normal vector of the cell that contains the boundary as an approximation of the normal vector of the

plane. We first project all points onto the plane that has the same normal vector as the cell. Because all these points have edges to the removed vertex, if the triangle does not contain the removed vertex on the projected plane, we know that it does not intersect any other triangle. Following the above process, we can remove all convex points in a single scan. The worst case for the proposed algorithm arises when the number of convex points is smaller than the number of concave points; to handle these cases we use a two-pass algorithm. Once a new triangle is formed, we make v[i-1] the current vertex and repeat the process until no new triangles appear. For a concave vertex, we reverse the test direction, starting with v[i+1]. Repeating this process removes all traversed concave vertices in time on the order of 2m, where m is the number of visited vertices. Thus the whole retriangulation takes O(n) time to remove a single vertex, where n is the number of vertices in the candidate boundary.

Figure 31 Retriangulation process: (a) first step, (b) second step, (c) third step, (d) fourth step (virtual edges are shown as dotted lines; the first and second scans mark the first and second visits to the vertices)

In the next section, we describe experimental results on preserving directional features of an object.

Results

Using the M-AGSphere, silhouette identification is a process of aligning the sphere. By aligning the sphere to face the camera, we can easily identify the silhouette, front and back cells with respect to the camera: front cells face the camera, back cells are not seen from the camera, and silhouette cells lie on the silhouette of the object. In this section, we describe experimental results on silhouette selection.

Figure 32 Silhouette preserving multiresolution model: (a), (b), (c)

Figure 32 shows three different details of the apple model. (a) is the full detail model with 1,704 triangles. (b) is a low detail model that imposes the same detail level on the whole object, with 856 triangles. (c) is the silhouette preserving model produced by the proposed algorithm, with 897 triangles. Visually, (c) differs less from (a) than (b) does, which means that preserving the object silhouette is more effective than an ordinary low detail level.

Figure 33 Rendering results for different viewer positions: (a) side view of a teapot, (b) teapot from a different viewing direction

Figure 33 illustrates the effect of silhouette preserving simplification for different viewers. (a) and (b) have 1,357 and 1,478 polygons, respectively, against the 4,000 polygons of the original. The part of the body that is simplified in (a) gets more detail when the camera moves to the position of (b).

Figure 34 Detail level selection example (reference direction and AGSphere)

Figure 34 shows an actual detail level selection on an AGSphere: blue cells are front cells and purple cells are silhouette cells.

Figure 35 Simplified bunny: (a) 17,141 regions, (b) 11,382 regions, (c) 6,816 regions, (d) 5,036 regions, (e) 3,392 regions; (f) 18,484 regions, (g) 12,563 regions, (h) 5,720 regions, (i) 4,078 regions, (j) 2,820 regions

Figure 35 shows experimental results for the bunny model with 69,451 polygons. Images (a)-(e) are rendering results at different detail levels, and images (f)-(j) are rendering results from a different camera position. Each image shows the result of silhouette preserving simplification at the specified level. As the model gets simplified, the surfaces facing the camera are flattened by the successive merging process while the silhouette parts are preserved. The average rendering rates are 3.5 frames/sec for (a) with 17,141 regions and 12.6 frames/sec for (e) with 3,392 regions, compared with 2.5 frames/sec for the original model. The results show that the simplification cost is small enough for real-time rendering.

Figure 36 Silhouette preserving simplification for the B747 model. Left : original model (20,694 polygons), right : simplified model (4,223 polygons)

Figure 37 Frame rates and number of regions for different levels

Figure 37 shows the relation between rendering speed and the number of regions for different models. Bunny is the bunny model with 69,451 polygons, and the B747 airplane model has 20,694 polygons. It shows that the AGSphere has some overhead for management and real-time reconstruction, but it is simple enough to give improved rendering frame rates.

Figure 38 Difference in silhouette for an ordinary method. Left : ordinary simplification result with 8,541 polygons, right : pixel differences with the original model

Figure 39 Difference in silhouette using M-AGSphere. Left : silhouette preserving simplification result with 4,823 polygons, right : pixel differences with the original model

Figure 38 shows the image differences of the outer silhouettes (object boundaries) between the original model and an ordinary simplified model; for the ordinary method, we used a vertex removal mechanism with a curvature and volume error metric, similar to the traditional decimation method. Figure 39 shows the same differences for the M-AGSphere. Even with a smaller number of polygons, it is clear that the silhouettes are well preserved by the M-AGSphere.

We can use the AGSphere not only for the view silhouette but also for other directional features such as back faces or light silhouettes. To identify back faces, we traverse the AGSphere and select only the back cells; this requires time on the order of the number of cells, which is constant with respect to the number of polygons. Along with the silhouette generated by the camera, the silhouette from a light determines the overall visual shape of the object. Faces that face a light shine with high intensity, while faces lying on the back side of a light are in shadow. If a part of a region is completely on the back side of every light, it is rendered in complete black; in that case, we can simplify that part of the surface into a single polygon, provided that the actual boundary of that surface region in the projected image plane is preserved. Using the AGSphere, we can easily identify such regions. First, we build the AGSphere for the camera and use it to identify the faces facing the camera (visible from the viewer). For each light, we attach additional information to each cell recording whether it is a silhouette, front or back cell for that light. If a cell is a silhouette cell of one or more lights, we render it at a more detailed level. If a cell is a front cell of a light and does not lie on the silhouette of any other light, we render it at a less detailed level. Cells that lie on the back side of every light appear as black regions in the final rendered image, so we simplify them using the least detailed representation.

Figure 40 Preserving light silhouette. Left-most : original model with 4,366 polygons, middle : silhouette preserving with 2,048 polygons, right-most : silhouette preserving with 1,350 polygons

Figure 40 shows the rendering result for a single directional light. The differences in intensity caused by the directional light are well preserved while the other surface parts are simplified.
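To make the per-cell decision rules above concrete, the following C++ fragment sketches one way to combine the camera classification and the per-light classifications of a cell into a detail level. The enum mirrors the classification sketch shown earlier; the level offsets and the exact combination rule are illustrative assumptions, not values prescribed by the thesis.

    #include <vector>

    enum class CellClass { Front, Silhouette, Back };

    // Returns a detail level for one cell; larger means more detail.
    int chooseDetailLevel(CellClass cameraClass,
                          const std::vector<CellClass>& lightClasses,
                          int baseLevel) {
        if (cameraClass == CellClass::Back) return 0;            // not visible at all
        bool backToAllLights = true;
        bool onAnyLightSilhouette = false;
        for (CellClass c : lightClasses) {
            if (c != CellClass::Back) backToAllLights = false;
            if (c == CellClass::Silhouette) onAnyLightSilhouette = true;
        }
        if (backToAllLights) return 0;            // renders black: keep only the coarsest form
        if (cameraClass == CellClass::Silhouette || onAnyLightSilhouette)
            return baseLevel + 1;                 // refine view and light silhouettes
        return baseLevel > 0 ? baseLevel - 1 : 0; // front cell, fully lit: simplify
    }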

Figure 41 Required time cost for simplification operation

The graph in Figure 41 shows the real-time construction time for a given number of polygons. The time cost is relatively small compared with that of edge collapsing based structures, which shows that the M-AGSphere gives more real-time control over multiresolution model rendering than edge collapsing based methods.

Summary

Directional features are important in the object simplification process for real-time rendering. We propose the M-AGSphere as a simple structure for managing directional features. It is a representation that effectively incorporates the directional relationship between a reference direction and the surface parts of an object. We also develop a multiresolution management algorithm that operates on the multiresolution AGSphere and is simple enough to run in real time. Our algorithm uses a discrete representation of the sphere for efficient access.

The proposed algorithm assumes that the target surface is piecewise smooth. When the surface is not piecewise smooth, like the edges of a saw, simplifying polygons using the directional relationship does not give sufficient simplification. Furthermore, when we simplify surfaces relatively close to the viewer using the view silhouette preserving mechanism, the regions on the silhouette of the AGSphere do not exactly form the silhouette of the object. To solve this problem, we can modify the algorithm to reassign regions to the appropriate cells when the reference point moves and other criteria on positional relationships are satisfied. Because our algorithm is fast enough to perform region splitting and merging in

real time, we can move regions to the proper cells in real time. As future work, an efficient mechanism for relocating regions as the reference point moves, using frame coherence, and the incorporation of positional relationship criteria between regions into the structure are required.

Conclusion

In this chapter, we described two different multiresolution generation methods for geometry. One is the edge collapsing method, one of the most popular methods in the field; we modified it to provide the region structures used in the following chapters, and this modification makes little difference to the simplification sequence. We also described our new generation method, the M-AGSphere. The M-AGSphere is a multiresolution structure specialized for capturing directional information efficiently. The structure is good at preserving camera-centered and light-centered features, and its real-time processing cost is small.

Figure 42 Simplification result with broken texture information

Like the progressive mesh, which shows the problem in Figure 18, the M-AGSphere also has a problem when simplifying a surface with varying color information. Figure 42 shows the broken texture information on the simplified geometry: the left image is the original model with 5,000 polygons and the right image shows the simplified model with broken texture information. In the next chapter, we propose a method for generating a new texture image to compensate for this broken color information.

3. Texture map for the multiresolution geometry

3.1. Overview

Multiresolution geometry generation methods have a problem on the boundaries of color materials or textures. In Figure 42, the colors or textures of the simplified polygons are substituted with single dominant values rather than values close to the originals. There have been approaches that consider color or texture values in the simplification, which we call property preserving simplification methods [Garland98][Cohen98]. These approaches measure the simplification energy difference as a combination of color material or texture coordinate differences and position differences in the simplified mesh. Although they preserve some of the properties, they still have a problem on the texture boundary, and there exist few measures that deal with simplification across a texture boundary; most previous approaches simply prohibit it [Cohen98]. In this chapter, we propose a new method of texture map generation which places no such limitation on the simplification.

There is a similar but independent work that generates a texture map for multiresolution geometry: Sander et al. proposed a texture generation method for the progressive mesh [Sander2001]. They deal with the same problem, and their method is quite similar to ours [Kim2001]. The main difference is that texture boundaries in the proposed method are determined by the actual geometry simplification process, which places little limitation on the geometric simplification, while the other method selects texture regions before the simplification, which inevitably limits the simplification. They have selection measurements for texture regions to reduce this limitation, but the proposed region selection method gives less limitation on the geometry simplification. In our method, all limitations on the simplification are included in the simplification process itself. The generation process is also simpler with the proposed method. Sander's method consists of four steps: 1) select regions from the geometry, 2) make an initial texture map for each region, 3) simplify the geometry, and 4) adjust the texture map. The proposed method integrates the first three steps into one, so the overall process is reduced to two steps: 1) simplify the geometry and 2) generate the texture map for the simplified geometry. Our method also enables the multiresolution approach described in chapter 4.

In the first section, we discuss the problem of the texture map for the multiresolution geometry and summarize the overall approach of texture map generation. We use the progressive mesh and the M-AGSphere as the multiresolution geometry generation methods, but other methods such as vertex decimation,

the vertex tree, and so on are also applicable to our method. In the second section, we describe the requirements for texture generation and the modification of the original geometry simplification method. The texture generation method itself is described in the third section, and experimental results are given in the last section.

Texture mapping function

In chapter 2, we defined the geometry as a piecewise linear function of position on the surface. From the point of view of the rendering process, the whole scene can be defined as a set of infinitely many points in space. Each point interacts with the light sources and the camera to contribute color information to the rendered image. The interaction involves calculating the color information of the point and the reflection and distribution of this color to other points. In the conventional local shading model, which is popular in real-time rendering, the calculated color is a combination of the color at the point and the color of the incoming lights [Foley90]. The distributed color in this model is calculated using the reflection vector, which is usually defined by the surface normal vector stored at the point. The conventional local shading model only computes information for points on the surface.

Under this local shading model, the geometry can be modeled as an approximation of the points on the surface. If we use the geometry M for the rendered object, all parameters of a point are expressed by piecewise linear functions, which gives a computable amount of information for generating the rendered image instead of using infinitely many points. The parameters of a point are its position, surface normal vector and color. From this point of view, the geometry M is not only a piecewise linear function of position on the surface but also a piecewise linear function of the normal vector and the color. Among these parameters, the surface normal is usually related to the position, so a piecewise linear approximation of the surface position also serves as an approximation of the surface normal. The color distribution, on the other hand, is usually independent of the position distribution. For a surface with complex color, the position-based piecewise linear approximation does not correctly approximate the color information, so it is often necessary to subdivide a linear function into a set of piecewise linear functions with smaller domains. This subdivision increases the number of polygons and therefore the rendering cost.

Instead of using more polygons for the color approximation, the texture mapping method was developed. Texture mapping uses a texture image as a color approximation over a set of polygons; the texture image is the sampled color information of that set of polygons, and points of the texture image are mapped onto points of the polygons. With texture mapping, we do not need to

increase the number of polygons to approximate the color information properly. With texture mapping, the position-based piecewise linear function serves as a piecewise linear mapping function between the texture image and the surface polygons. Each vertex of a polygon has a mapping value to a point in the texture image, and the mapping values of interior points are calculated by the linear function of the polygon. The texture mapping function is thus a piecewise linear function from points on the polygons to points in the texture image, and the boundaries of the piecewise linear functions are the same as those of the geometry M, namely the edges of the polygons.

Texture mapping function Φ : M → T, where M is a geometry and the texture image T is a 2D image
The texture mapping function maps points of the geometry mesh to points in the texture image; every point in M is mapped to a corresponding point in T.

Because it is a piecewise linear mapping function, Φ has a strong dependency on the geometry M: the texture mapping function changes when M is changed by simplification or other operations. In a multiresolution geometry, the geometry is modified through simplification operations, and the modified geometry imposes a texture mapping function different from that of the original geometry. This causes unexpected texture image distortions such as sliding or blurring effects.

The problem of texture mapping over a multiresolution geometry is to measure the energy difference caused by the simplification operator. There have been several methods to measure the energy difference caused by changes in the texture mapping function [Garland97][Hoppe97][Cohen98][Sander2001]. These methods measure the energy difference as the difference in texture coordinates on vertices, edges or polygons. Using such measurements, we can measure the energy difference caused by simplification of a geometry with a texture map. These methods approximate the original texture mapping function with fewer piecewise linear functions; they recalculate or reuse the texture coordinate of each vertex from the original texture mapping function, so the texture mapping value at each vertex is preserved to give a good approximation of the original mapping.

A correct mapping is a texture mapping in which the distance between two points on the geometry equals the distance between the two mapped points in the texture image [Maillot93]. A texture mapping function is called a consistent mapping for a multiresolution geometry if it is a correct mapping for all the different geometries in the multiresolution model. It is clear that, in the general case, a consistent mapping is

not possible for a set of different geometries using a single texture image; we would have to generate different texture images for different geometries to obtain a consistent mapping. The mapping used by previous approaches is the point-consistent mapping: a texture mapping function in which the mapping value of each vertex is the same over the different geometries. Previous approaches try to make the point-consistent mapping approximate the consistent mapping. In this chapter, we take a similar approach: we build a point-consistent mapping function for the multiresolution geometry that is close to the consistent mapping function. The problem of previous methods is that they still do not properly handle simplification across a texture boundary; we discuss this problem in the next section.

In summary, using a single texture mapping for the entire geometric hierarchy leaves two problems to solve: 1) the texture boundary problem and 2) texture distortion. In this chapter, we describe a method to solve these problems.

Texture boundary problem

In the point-consistent texture mapping mechanism over a multiresolution geometry, a single texture image is mapped onto different geometries. The single texture image can be partitioned into a set of sub-images for efficient texture mapping. Using texture images with different geometries causes a mapping discontinuity problem, which also exists for a single image: the discontinuity arises on the boundary of the texture image and on the boundaries of the sub-images. If two neighboring points are mapped onto distant boundaries of the texture image, the texture mapping function between these two points is not defined by the mapping given by the vertex mapping values.

Figure 43 An example of the boundary problem

Figure 43 illustrates the problem. The left image is the detailed mesh with different texture sub-images; the mesh is partitioned into two different parts of the texture image, one shown with dotted grids and the other with squared checks. When this mesh is changed into a different form, some parts of the geometry have no proper texture image to be mapped. The right figure shows the mesh changed by simplification from the left mesh: a vertex on the texture boundary is removed by the simplification operator, and the texture map of the newly generated polygon (the colored area in the right image) is not defined. This phenomenon causes an even more serious problem when a set of polygons is merged into a single polygon. We call this kind of simplification a boundary crossing simplification.

To solve this problem, we should generate a mapping function that is continuous over the simplified boundaries. In other words, if we have a single continuous texture image that is linearly mapped onto the newly generated polygons, we can make a proper texture mapping for those polygons. In the general case, we would have to generate a new texture mapping function for each detail level, so the number of texture maps equals the number of detail levels. When a continuous multiresolution geometry model is used, a new texture would have to be generated for each simplification step. For a progressive mesh, the number of new textures equals the number of edge collapses: each edge collapse generates new polygons, and some of them lie over the edges of previous polygons. Roughly speaking, the number of edge collapses is half the number of polygons, so for a mesh with 10,000 polygons we would have to generate 5,000 new texture images. The situation gets worse for the vertex tree, which has no pre-defined simplification sequence; there we would need a new texture for every combination of edge collapses. The same holds for the M-AGSphere: a region merge operation of the M-AGSphere removes boundary edges of a region, which raises the boundary texture problem, and the maximum number of textures is proportional to the number of possible region merging sequences. Generating so many new textures requires a large amount of additional storage and rendering cost, so generating a new texture for every geometry simplification is not feasible.

In this chapter, we propose a new texture generation method for the multiresolution geometry. Instead of generating a texture for each simplification, the new method generates a single continuous texture that has no boundary problem. To solve the boundary problem, we select the boundaries of the texture images to be the edges of the simplest polygons. The selected boundaries have the property that no cross-edge simplification occurs in the simplification sequence. In other words, we construct a texture map such that the boundary of the texture map is mapped onto the edges of the most simplified geometry.
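One way to picture this construction: every polygon of the most simplified geometry defines one texture region (chart), and each polygon of the finer levels belongs to exactly one such chart, so no simplification inside the multiresolution model can cross a chart boundary. The fragment below is a minimal illustration built on the RegionTable sketch from chapter 2; the function names are hypothetical and this is not the thesis implementation.

    #include <vector>

    // The chart of a fine-level face is simply its representative face in the
    // region structure taken at the coarsest level of the hierarchy.
    int chartOfFace(const RegionTable& regionsAtCoarsestLevel, int fineFace) {
        return regionsAtCoarsestLevel.representative(fineFace);
    }

    // An elementary simplification is boundary-safe when all faces it touches
    // belong to the same chart.
    bool staysInsideOneChart(const RegionTable& charts, const std::vector<int>& touchedFaces) {
        if (touchedFaces.empty()) return true;
        int chart = charts.representative(touchedFaces.front());
        for (int f : touchedFaces)
            if (charts.representative(f) != chart) return false;
        return true;
    }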

Texture distortion problem

Texture distortion is a typical issue of texture mapping. Texture distortion measures the quality of the texture image over a polygon on the geometry surface. A polygon on the geometry surface is mapped onto a polygon in the texture image; that is, the polygon on the geometry surface is sampled into pixels of the texture image, called texels. If the shape of a polygon in the texture image is equal to the shape of the corresponding polygon on the geometry surface, the amount of surface area mapped onto each texel is uniform. If the corresponding polygon shapes differ, some texels cover a large area of the polygon on the geometry surface while others occupy a very small area, which results in undesirable blurring and aliasing. In other words, a shape difference between two corresponding polygons causes the sub-sampling or super-sampling problems that are well known in the texture mapping domain. The texture should therefore have minimal difference in polygonal shape; this difference is called texture distortion, and if the shape difference is large, the texture mapping function has large distortion.

In traditional texture mapping, texture distortion appears as parameterization distortion. The texture image and the texture mapping function form a 2-dimensional parameterization of the corresponding surface area, and the parameterization process defines the texture mapping function for each point on the surface. In texture map generation research, the parameterization distortion is measured as the shape distortion of each polygon. This distortion is caused by a difference in pixel-level resolution for the same area of the rendered image; we call this kind of distortion pixel resolution distortion. The goal of texture map generation is to make a texture map with small pixel resolution distortion.

The other factor that causes texture distortion is point-consistent mapping over a multiresolution geometry. If a single texture image is mapped onto different geometries, the mapping inconsistency adds distortion to the texture map; we call this distortion polygon resolution distortion. Our new texture generation method reduces texture distortion by minimizing both the pixel resolution distortion and the polygon resolution distortion, and we define the multiresolution mapping distortion as a combination of the two. In this section, we review previous research on minimizing the pixel resolution distortion; the polygon resolution distortion is discussed in section 3.3.2.

There have been many parameterization methods that minimize distortion. These parameterization methods are used to generate texture maps for geometric surfaces. Convex parameterization methods that locally minimize distortion are popular in graphics packages; examples are harmonic mapping [Eck95] and several convex parameterization methods [Floater97].

Maillot et al. proposed a parameterization distortion measure for their texture generation method [Maillot93]. Given a texture map, each edge and each polygon on the geometry surface has a corresponding edge and polygon in the texture image. They measure the parameterization distortion as the sum of length differences between corresponding edges, plus the sum of signed area differences between corresponding polygons. Their result keeps the polygonal shapes in the texture image close to those on the geometry surface. Piponi et al. expressed the geometry as a mass-spring model [Piponi2000]. They pelt the surface by stretching it open from a selected surface cut. Although they did not give an explicit measure of distortion, their method produces quite good parameterizations with a properly selected cut.

Figure 44 Example of the harmonic mapping [Eck95]: (a) geometry model, (b) texture parameterization

Figure 45 Texture pelting result [Piponi2000]

Figure 44 and Figure 45 are examples of texture parameterization. In Figure 44, the distortion is easily noticed between the polygons at the center and the polygons at the boundary of the texture image. In Figure 45, similar distortions are observed in the polygons of the eye region. Even though we

tried to minimize distortions, some distortion is inevitable when we keep every polygon in a single, fully connected parameterization. Piponi et al. listed methods to reduce the distortions caused by polygon connectivity [Piponi2000].

- Subdivide the surface into parts, each mapped to its own texture image. If we subdivide the mesh of Figure 44 into a front side and a back side, we can obtain a more plausible parameterization than the one currently illustrated. Extending this idea, we can make a set of texture images for a given surface. Most previous work takes approaches based on this idea [Maillot93][Piponi2000]. In this paper, we also exploit this property when generating the texture map; it is discussed in chapter 4. The remaining issue with this method is the distortion at the texture boundary: if two boundaries of the texture meet on the geometry surface as edges of a polygon, the number of texels along them should be the same to avoid distortion.

- Use a non-planar mapping for the texture. Other types of parameterization, such as spherical or cylindrical mapping, give parameterizations with less distortion and can also preserve the polygon connectivity in the texture image. The drawback is that the current rendering mechanism and hardware do not support this kind of mapping, and we would need specialized mapping methods for different topological shapes.

In this paper, we adopt Maillot's texture atlas method. It gives relatively fast optimization and does not require a manual cutting process, provided we have a measure for making charts from the geometry.

The final factor that causes texture distortion is the resolution of the texture image. If we use a higher resolution image, the rendered result gets clearer; conversely, a lower resolution image produces blurred and distorted results. In this chapter, we do not consider the texture resolution issue explicitly, but the results can be further improved by considering the distortion caused by texture resolution. Resolution selection is briefly described in section 3.3 and discussed in chapter 4.

Overall approach

In this chapter, a new method is proposed to solve the anomalies of geometry simplification seen in Figure 42. The new method generates a new texture map for the simplified geometry. Generating the texture map raises two issues: the boundary problem and the texture distortion problem. At first glance, the texture boundary problem could be solved by a globally continuous parameterization that minimizes texture distortion, but a globally continuous parameterization onto a 2-dimensional texture image is not possible. To remove the boundary problem, the boundary of the texture should be a boundary across which no boundary crossing simplification occurs. If we know from the simplification result which edges are never crossed, we can choose edges with no boundary crossing as texture boundaries. Instead of generating a globally continuous texture map, we make a texture map whose boundary has no crossing simplification. We call this kind of boundary a p-boundary, denoting a candidate boundary of the parameterization.

The geometric simplification method is modified to ensure that the simplified result has at least one p-boundary. The modified geometry simplification method is described in section 3.2. After selecting p-boundaries, we make a target parameterization for the multiresolution geometry. The parameterization should minimize the distortions, defined as the pixel resolution distortion and the polygon resolution distortion; the parameterization method is described in section 3.3. In section 3.4, the experimental results are shown and the limitations of the proposed method are discussed.

3.2. Modified multiresolution geometry generation method

To apply our texture map generation method, the simplified result should have at least one p-boundary. A boundary crossing simplification is a simplification in which newly generated edges cross the original boundary.

The M-AGSphere naturally has p-boundaries, namely the boundaries of its regions. The M-AGSphere is based on the region merge operator, and a region merge operator preserves the boundaries of the regions. The boundary of a region has no simplification history over it: if there had been any simplification on the boundary, the two neighboring regions would have been merged, which would have removed the boundary itself. So the

boundary of a region is not affected by boundary crossing simplification throughout the simplifications that create that region. The problem arises when the boundary itself is simplified in the boundary simplification process. In the M-AGSphere, the simplification process consists of a region merging process and a boundary simplification process; we treat the resulting distortion as polygon resolution distortion, which is discussed in the next section. The p-boundaries used for texture map generation are the simplified boundaries of the regions in the base cell of the M-AGSphere.

The situation is more delicate for edge collapsing operators, as in the progressive mesh or the vertex tree. As stated in chapter 2, each edge collapse produces a single region whose interior edges are modified by the edge collapsing operator. The interior edges therefore cannot serve as p-boundaries; only the boundary edges of the region can. If a vertex on the boundary is simplified, part of the previous p-boundary becomes invalid and a new p-boundary is created. Through successive edge collapses, the regions grow until a single region covers the whole surface. We define a boundary crossing simplification as a simplification in which a vertex on a p-boundary is collapsed onto a vertex that is not on the same p-boundary.

Boundary crossing edge collapse: a collapsing edge (v_i, v_j) is a boundary crossing operation iff v_i ∈ B_p and v_j ∉ B_p, where v_i is the removed vertex, v_j is the remaining vertex in the half edge collapse, and B_p is a p-boundary.

We need at least one p-boundary for the parameterization. To satisfy this requirement, we impose a constraint on the edge collapsing: if a simplification would merge all remaining regions into a single region with a hole, we prevent that simplification. In general, the edge collapsing operator enlarges regions faster than the M-AGSphere, and it tends to produce an unbalanced set of regions compared to the M-AGSphere (a balanced set of regions is a set of regions with similar surface area). To slow down the region growing speed, among the edge collapsing operators with similar vertex deviation we select the one that merges smaller regions and produces well-shaped regions. A well-shaped region is a region that is good for texture map generation, defined by its planarity and compactness; the measure used in our experiments is similar to Sander's [Sander2001].
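The boundary crossing test above is straightforward to express in code. Below is a minimal sketch, assuming vertices are integer ids and each p-boundary is stored as a set of vertex ids; function and variable names are illustrative, not the thesis implementation.

```python
def is_boundary_crossing(v_i, v_j, p_boundaries):
    """A half edge collapse removing v_i and keeping v_j is boundary crossing
    iff v_i lies on some p-boundary that does not also contain v_j."""
    for boundary in p_boundaries:          # each boundary: a set of vertex ids
        if v_i in boundary and v_j not in boundary:
            return True
    return False

def select_collapse(candidates, p_boundaries):
    """Among candidate collapses (error, v_i, v_j), keep only the ones that do
    not cross a p-boundary and pick the one with the smallest error."""
    legal = [(err, vi, vj) for (err, vi, vj) in candidates
             if not is_boundary_crossing(vi, vj, p_boundaries)]
    return min(legal) if legal else None

# Vertex 4 lies on a p-boundary that does not contain vertex 9, so the
# collapse (4 -> 9) is boundary crossing; (4 -> 3) stays on the boundary.
p_boundaries = [{0, 1, 2, 3, 4}]
print(is_boundary_crossing(4, 9, p_boundaries))   # True
print(is_boundary_crossing(4, 3, p_boundaries))   # False
```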

The selection process selects the edge collapsing operator with the minimal region energy. The region energy is defined as

    E_region = area(R_i) / area(M) + ( max_{x ∈ B_i} ||c - x|| - min_{y ∈ B_i} ||c - y|| )

where R_i is a region of the mesh M, P_i is the approximating plane of R_i, c is the center of R_i, and B_i is the p-boundary of R_i.

Instead of merging all the polygons affected by an edge collapse into a single region, if the removed vertex is close to the newly generated edge after the collapse, we keep that edge as a p-boundary whenever the collapsed edge was a p-boundary. The closeness criterion is

    distance(v_i, e_k) ≤ ε for a collapsing edge (v_i, v_j), where e_k is defined as (v_j, v_k) and (v_k, v_i), (v_i, v_j) lie on the same p-boundary.

In the experiments, the threshold ε is the maximum of 10% of the vertex deviation of the collapsing edge and 1% of the maximum length of the bounding volume. This slows down the region growing speed without sacrificing much simplification power. We could also add a balancing measure to the simplification, but for typical smooth surfaces this selection method already balances the regions well. The last constraint we impose on the simplification is that we use half edge collapsing rather than full edge collapsing: the half edge collapse grows regions more slowly than the full edge collapse.
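Under the definitions above (and assuming the second term is the spread of Euclidean distances from the region center to its p-boundary), evaluating this region energy for a candidate merged region could look like the following sketch; the data layout and names are illustrative.

```python
import numpy as np

def triangle_area(a, b, c):
    """Area of triangle (a, b, c) in 3D."""
    return 0.5 * np.linalg.norm(np.cross(b - a, c - a))

def region_energy(region_tris, boundary_pts, mesh_area):
    """E_region = area(R_i)/area(M) + (max - min) distance from the region
    center to its p-boundary; small values favor small, compact regions."""
    verts = np.unique(np.concatenate(region_tris).reshape(-1, 3), axis=0)
    center = verts.mean(axis=0)                        # region center c
    area = sum(triangle_area(*t) for t in region_tris)
    d = np.linalg.norm(boundary_pts - center, axis=1)  # distances c -> boundary
    return area / mesh_area + (d.max() - d.min())

# Toy example: a single-triangle region whose corners form its boundary.
tri = [np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0]])]
boundary = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0]])
print(region_energy(tri, boundary, mesh_area=10.0))
```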

Figure 46 Merged regions using proposed region merging mechanism

Figure 47 Vertex deviation using the proposed region merging mechanism

Figure 46 illustrates the regions merged using the progressive mesh structure. The left image shows conventional region merging and the right image the modified region merging method; both models have about 2,000 polygons. The graph in Figure 47 shows the difference in edge collapsing error: the difference is small and the overall shapes of the two curves are very similar. Although these constraints limit the available simplification operators, our method postpones the limitation until no other choice is available, rather than imposing it a priori as other methods do [Sander2001].

The region merging process constructs a hierarchy on the regions: child regions are merged into parent regions. This hierarchy is used in chapter 4; in this chapter, only the most simplified regions are used for the parameterization. Along with the region hierarchy, we also keep a simplification pair for each simplification operation. The simplification pair is the pair of vertex positions with the maximum vertex deviation. In the M-AGSphere, the maximum vertex deviation is approximated as the distance between a simplified vertex and the simplified region: for each simplified vertex there is a pair consisting of the vertex and the closest position on the region, and the maximum vertex deviation is the maximum of these distances. The position on the region is expressed as a linear combination of the vertices of the simplified region. For the boundary merging process, the removed boundary vertex is paired with the closest point on the simplified edge. We store a simplification pair for each simplified vertex.

Figure 48 Point pair for the simplification

Figure 48 shows the point pair for a simplified vertex v_i. The vertex v_i is paired with the point P_i, where

    P_i = Σ_{k=1,2,3} α_k v_k.

The same holds for the progressive mesh and the vertex tree. Most edge collapses produce pair information between the removed vertex and the simplified polygon; an edge collapse on a p-boundary produces pair information between the removed vertex and the p-boundary. In the progressive mesh approach, we add additional pairs for each edge collapse, because the true maximum deviation can occur not only at the removed vertex but also along the changed edges. For each edge collapse, the shortest distances between the changed edges and the original edges are computed. The exception is a changed edge that is identical to a remaining edge, i.e. an edge on the boundary of the removed polygons. We also do not compute distances for the edges that connect the removed vertex pair to the vertices of the containing polygon, where the containing polygon is the polygon that contains the closest point paired with the removed vertex. Instead of computing possible pairs for every edge pair, we restrict the possible pairs between a modified edge and the original edges from the near boundary points to the remaining vertex:

    (v_l, v_i) is a possible point pair for a modified edge (v_k, v_j) with
        l ∈ {1, ..., k}  for k ≤ g,
        l ∈ {k, ..., n}  for k > g,
    where (v_g, v_i) is the nearest edge to the modified edge, N = (v_0, ..., v_n) is the ordered neighbor set of v_i, the vertices in N are ordered in cw or ccw direction, and 0 ≤ j ≤ n.

This approximation is valid for nearly planar regions, which is guaranteed by good simplification operations. The stored pairs are used in the computation of the multiresolution mapping distortion.
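The linear combination above is just the barycentric expression of the paired point in its containing triangle. A small sketch of recovering the coefficients α_k for a point known to lie in the triangle's plane (an illustrative helper, not the thesis code):

```python
import numpy as np

def barycentric_coeffs(p, v1, v2, v3):
    """Coefficients (a1, a2, a3) with p = a1*v1 + a2*v2 + a3*v3 and a1+a2+a3 = 1,
    for a point p in the plane of the triangle (v1, v2, v3)."""
    e1, e2, ep = v2 - v1, v3 - v1, p - v1
    d11, d12, d22 = e1 @ e1, e1 @ e2, e2 @ e2
    dp1, dp2 = ep @ e1, ep @ e2
    denom = d11 * d22 - d12 * d12
    a2 = (d22 * dp1 - d12 * dp2) / denom
    a3 = (d11 * dp2 - d12 * dp1) / denom
    return np.array([1.0 - a2 - a3, a2, a3])

# Storing a simplification pair: the removed vertex position plus the
# barycentric coefficients of its closest point on the simplified region.
v1, v2, v3 = np.array([0.0, 0, 0]), np.array([1.0, 0, 0]), np.array([0.0, 1, 0])
removed = np.array([0.3, 0.3, 0.2])
closest = np.array([0.3, 0.3, 0.0])        # closest point on the triangle here
alphas = barycentric_coeffs(closest, v1, v2, v3)
pair = (removed, alphas)                    # reused later for the E_poly term
print(alphas, alphas @ np.vstack([v1, v2, v3]))
```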

3.3. Texture parameterization method minimizing multiresolution mapping distortion

3.3.1. Parameterization process

In this section, we describe our new parameterization method for the multiresolution geometry. The purpose of the texture map generation is that the generated texture map should not suffer from the boundary problem. To accomplish this, we select p-boundaries as the texture boundaries and generate a texture map for the regions enclosed by them.

After the geometric simplification process described in section 3.2, the set of most simplified regions is gathered. The boundaries of these regions are p-boundaries, so each region is suitable for a new texture map. We call the most simplified regions base regions. For the M-AGSphere, the base regions are the regions in the base cell; for the progressive mesh, they are the regions of the most simplified mesh; for the vertex tree, they are the regions constructed by the level-0 vertices of the tree.

The texture map should have small parameterization distortion and should preserve texel-level connectivity across boundaries. To accomplish this, the collected regions are merged into a small number of regions; for the base parameterization we keep at least two regions for a surface with spherical topology. Neighboring regions are merged into larger regions. The selection criterion for the region merge process is based on the maximum vertex deviation of the merge and the shape of the region: if the region boundary lies at a uniform distance from the region center, we call it a well-shaped region. We use the same region energy measure as in the edge collapsing. Before merging, we test whether the merged region would contain holes; if so, the two regions are not merged at that time. A hole is detected when the two regions to be merged share more than one connecting boundary that has both regions as neighbors (a small sketch of this test is given below).

After the region merging, each merged region is mapped onto a part of the texture. In Maillot's terms this is an atlas: an atlas is a set of charts, and each chart is a part of the texture that represents a single region. We generate the texture map by reducing the multiresolution mapping distortion, which is defined as the combination of the pixel resolution distortion and the polygon resolution distortion.
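The hole test mentioned above can be realized by counting the distinct boundary curves shared by the two candidate regions; if there is more than one, merging them would enclose a hole. A minimal sketch, assuming each connected boundary curve stores the ids of its two neighboring regions (names are illustrative):

```python
def shares_multiple_boundaries(region_a, region_b, boundaries):
    """Return True if merging region_a and region_b would create a hole,
    i.e. the two regions are joined by more than one boundary curve.

    `boundaries` is a list of (region_id_1, region_id_2) neighbor pairs,
    one entry per connected boundary curve."""
    shared = sum(1 for (r1, r2) in boundaries
                 if {r1, r2} == {region_a, region_b})
    return shared > 1

# Two regions meeting along two separate curves: merging encloses a hole.
boundaries = [(0, 1), (1, 2), (0, 1)]
print(shares_multiple_boundaries(0, 1, boundaries))  # True
print(shares_multiple_boundaries(1, 2, boundaries))  # False
```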

In summary, the texture map generation process consists of four steps:

(1) Generate the multiresolution geometry
(2) Gather the base regions
(3) Select initial boundaries by merging the base regions into a small number of regions
(4) Generate the texture mapping function by minimizing the multiresolution mapping distortion

The multiresolution mapping distortion is discussed in the next section; the following sections describe steps (3) and (4).

3.3.2. Multiresolution mapping error

The texture generation process is a parameterization process that minimizes the texture distortion error. The texture distortion error consists of polygon resolution errors and pixel resolution errors: the pixel resolution error measures the pixel resolution distortion, and the polygon resolution error measures the polygon resolution distortion. Furthermore, the generated texture mapping function should not introduce additional error on the geometry. If it did, the geometry simplification would be invalidated by the texture: with a different texture map, changing the simplification sequence might yield a smaller simplification energy difference. In this section we describe our measurements of the texture distortion errors; with the proposed measurements, the generated texture adds minimal error to the geometry simplification process.

A texture image is mapped onto the corresponding polygons, i.e. a polygon in the texture image is mapped onto a polygon in geometric space. If the two polygon shapes are the same, we get a textured polygon of the same quality in geometric space. If a polygon in the texture image is highly distorted with respect to the polygon in geometric space, part of it will be heavily blurred, which distorts the rendered result. This is a well-known problem in texture mapping, and we call this distortion the pixel resolution distortion. Many methods have been proposed to generate a texture map with less distortion [Floater97][Maillot93][Piponi2000]. We adopt Maillot's texture atlas approach to minimize the pixel resolution distortion. Maillot's error measures the difference in edge length between corresponding edges in the texture image and on the geometry surface, and the difference in polygon area between corresponding polygons. Equation 2 is the error function we use. In equation 2, V_i is a vertex of the most detailed geometry and v_i is the corresponding vertex (texture coordinate) in the texture image. All edges and triangles in the equation belong to the most detailed geometry.

    E_l = Σ_{(V_i, V_j) ∈ Edges} ( (||v_i - v_j||^2 - ||V_i - V_j||^2) / ||V_i - V_j||^2 )^2

    E_s = Σ_{V_i V_j V_k is a triangle} ( (det(v_j - v_i, v_k - v_i) - ||(V_j - V_i) × (V_k - V_i)||) / ||(V_j - V_i) × (V_k - V_i)|| )^2

    E_param = α E_l + (1 - α) E_s

Although we use Maillot's error metric, other error metrics can be used in our scheme: changing the parameterization error metric does not affect the other error metrics, including the geometry simplification metric. In this chapter we show results with Maillot's error metric.

To measure the pixel resolution distortion correctly, we should consider the pixel-level resolution of the texture image. The calculated error value is therefore multiplied by a resolution modifier, which expresses that a high resolution image yields less distortion than a low resolution one. We use an empirical constant for the resolution modifier: if the resolution is quadrupled, the area that each texel occupies shrinks to a quarter of that of the low resolution texel, so the resolution modifier for the quadrupled resolution is 1/4 of the modifier of the low resolution. In the experiments, we use a resolution modifier of 1 for a 512x512 texel texture image. The final pixel resolution distortion is defined as

    E_pixel = r E_param

where r is the resolution modifier.
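As a concrete illustration of the pixel resolution term, a small numpy sketch of E_param for a single triangle, following the edge and area terms above (a simplified reading of the metric, not Maillot's original code):

```python
import numpy as np

def edge_energy(V, v, edges):
    """Sum over edges of ((|v_i - v_j|^2 - |V_i - V_j|^2) / |V_i - V_j|^2)^2."""
    e = 0.0
    for i, j in edges:
        l2_tex = np.sum((v[i] - v[j]) ** 2)
        l2_geo = np.sum((V[i] - V[j]) ** 2)
        e += ((l2_tex - l2_geo) / l2_geo) ** 2
    return e

def area_energy(V, v, tris):
    """Sum over triangles comparing the signed area in texture space with
    the triangle area in geometry space."""
    e = 0.0
    for i, j, k in tris:
        a, b = v[j] - v[i], v[k] - v[i]
        a_tex = a[0] * b[1] - a[1] * b[0]                           # 2 * signed 2D area
        a_geo = np.linalg.norm(np.cross(V[j] - V[i], V[k] - V[i]))  # 2 * 3D area
        e += ((a_tex - a_geo) / a_geo) ** 2
    return e

def param_energy(V, v, edges, tris, alpha=0.8):
    """E_param = alpha * E_l + (1 - alpha) * E_s."""
    return alpha * edge_energy(V, v, edges) + (1 - alpha) * area_energy(V, v, tris)

# One right triangle whose texture image is slightly stretched in u.
V = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0]])   # geometry vertices V_i
v = np.array([[0.0, 0], [1.2, 0], [0, 1]])           # texture coordinates v_i
print(param_energy(V, v, edges=[(0, 1), (1, 2), (2, 0)], tris=[(0, 1, 2)]))
```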

The polygon resolution distortion is caused by inconsistent mapping for the simplified geometry. Texture mapping is a process that maps a texel to a volume in geometry space; all points in that volume receive the color value of that texel. If a single texel is mapped onto a relatively large volume, the distortion is large, so to reduce the distortion we should minimize the maximum mapping volume of each texel. Observe that the set of simplification pairs captures the maximum deviation of the simplified polygon regions: each simplification pair is the closest point pair for a modified vertex or edge, and it is also the maximum distance between the simplified polygons and the detailed polygons. If a single texel is mapped onto each pair of points, we obtain the minimum of the maximum mapping volumes per texel. Viewed differently, if a point is simplified with a given vertex deviation, the point moves by that deviation to the paired point in the simplified geometry; the proper mapping is the texture mapping that maps these two points into a single texel. The polygon resolution error is also scaled by the resolution modifier r:

    E_poly = r Σ_{(V_i, P_i) corresponding vertex pairs} ||v_i - p_i||^2 / (||V_i - P_i||^2 + 1)

    where P_i = Σ_k α_k V_k and p_i = Σ_k α_k v_k.

The equation shows the proposed polygon resolution error metric for a given multiresolution geometry. The vertex V_i in the level i+1 geometry is paired with the point P_i in the level i geometry by the simplification operation. The point P_i is expressed as a linear combination of the vertices in the level i geometry, specifically the vertices of its containing triangle. Because the texture mapping function is also piecewise linear, the texture coordinate of the point P_i is represented by the same linear combination of the surrounding vertices' texture coordinates, with the same coefficients as its geometric counterpart. The error is minimized when the texture coordinates of the two points coincide. The terms are normalized by the deviation in geometry space so as to put more weight on simplification pairs with small deviation. If a simplification pair consists of two points on corresponding edges, then not only P_i and p_i but also V_i and v_i are expressed as linear combinations of the edge vertices. For a simplification on a p-boundary, the linear combination is built from the two endpoints of the corresponding edge; all other cases use the vertices of the containing triangle.

Another metric for the polygon resolution mapping error exists: Cohen et al. proposed the texture deviation error [Cohen98], defined as the distance between two points that are mapped onto a single texel. Although they did not use the term polygon resolution mapping error, their metric is another form of it, and it is easy to see that our proposed metric is equivalent to the texture deviation metric. The deviation stored in a simplification pair is the minimum deviation value for the simplified point and also a maximum deviation value for the simplified geometry. If we construct all the possible simplification pairs as stated in section 3.3.1, the maximum deviation value is contained in the set of simplification pairs, so mapping every simplification pair onto the same texel minimizes the texture deviation metric. By the same argument, if the multiresolution error is zero, the texture map introduces no additional deviation in the rendered image; the newly generated texture map therefore adds minimal error to the geometry simplification results.

The multiresolution mapping error is the weighted sum of the pixel resolution error and the polygon resolution error:

    E = β E_pixel + (1 - β) E_poly

We generate the parameterization by minimizing this error.
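Continuing in the same spirit, the polygon resolution term and the combined multiresolution mapping error could be evaluated from the stored simplification pairs roughly as follows; the weights, data layout, and names are illustrative assumptions, not the thesis code.

```python
import numpy as np

def poly_energy(pairs, r=1.0):
    """E_poly: texture-space deviation of each simplification pair, weighted
    so that pairs with small geometric deviation count more.

    pairs: list of (V_i, v_i, P_i, p_i) with geometry points V_i, P_i and
    their texture coordinates v_i, p_i (p_i comes from the stored
    barycentric coefficients of P_i)."""
    e = 0.0
    for V_i, v_i, P_i, p_i in pairs:
        tex_dev = np.sum((np.asarray(v_i, float) - np.asarray(p_i, float)) ** 2)
        geo_dev = np.sum((np.asarray(V_i, float) - np.asarray(P_i, float)) ** 2)
        e += tex_dev / (geo_dev + 1.0)
    return r * e

def multires_mapping_error(e_pixel, e_poly, beta=0.5):
    """E = beta * E_pixel + (1 - beta) * E_poly (beta here is a placeholder)."""
    return beta * e_pixel + (1 - beta) * e_poly

# A removed vertex whose texture coordinate drifts from its paired point.
pairs = [((0.3, 0.3, 0.2), (0.30, 0.28),     # V_i and its texture coordinate v_i
          (0.3, 0.3, 0.0), (0.31, 0.30))]    # P_i and its texture coordinate p_i
print(multires_mapping_error(e_pixel=0.04, e_poly=poly_energy(pairs)))
```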

3.3.3. Texture parameterization for the multiresolution model

After merging the regions as described in section 3.3.1, the actual parameterization takes place; this is the process that produces the actual texture map. The texture mapping function is defined by a texture image and a texture coordinate for each vertex, and it is generated for each base region. Each base region holds the set of polygons contained in that region; these polygons and vertices belong to the most detailed geometry. The region also carries a set of simplification pairs used to compute the polygon resolution mapping error. The texture mapping function is generated by determining the texture coordinate of each vertex in the region, which is done by minimizing the proposed multiresolution mapping error. We use the conjugate gradient method for the minimization [Press97].

After the texture coordinates are determined, we determine the size of the texture map. If the mapped area is large, the texture map is made larger as well. The value of the multiresolution mapping error also influences the texture map size: with a high error, a higher resolution is desirable so that there are enough texels to represent the color detail of the geometry surface. The texture coordinates resulting from the optimization are then rescaled into [0,1]. The texture image is captured by rendering the polygons at their texture coordinates into a [0,1]x[0,1] screen using conventional rendering hardware. In the experiments of this chapter, a uniform 512x512 texel resolution is used for each texture image. For efficient texture storage management, it would be preferable to capture the texture at a resolution proportional to the parameterized area; to support this, the resolution modifier r is not used in the multiresolution mapping error, and instead the amount of unmodified error and the size of the parameter space determine the resolution of the texture images.

If the number of base regions is large, it is better to pack the per-region texture images into a small number of texture images. Sander et al. showed that many small texture images can be packed into a single large texture map [Sander2001]. In this chapter, we use a very small number of base regions, so packing is not necessary.
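A minimal sketch of this optimization step, using SciPy's conjugate gradient minimizer as a stand-in for the Numerical Recipes routine cited above; the tiny two-triangle region, the initial guess, and the reduced (edge-only) energy are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Geometry of one small base region: a quad split into two triangles.
V = np.array([[0.0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]])
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]

def energy(uv_flat):
    """Edge-length part of the mapping error only, for brevity: texture edge
    lengths should match geometric edge lengths."""
    uv = uv_flat.reshape(-1, 2)
    e = 0.0
    for i, j in edges:
        l2_tex = np.sum((uv[i] - uv[j]) ** 2)
        l2_geo = np.sum((V[i] - V[j]) ** 2)
        e += ((l2_tex - l2_geo) / l2_geo) ** 2
    return e

uv0 = np.array([[0.0, 0], [0.5, 0], [0.5, 0.5], [0, 0.5]]).ravel()  # initial guess
res = minimize(energy, uv0, method="CG")            # conjugate gradient minimization
uv = res.x.reshape(-1, 2)
uv = (uv - uv.min(axis=0)) / (uv.max(axis=0) - uv.min(axis=0))      # rescale to [0,1]
print(res.fun, uv)
```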

Figure 49 Generated texture map for a hemisphere model

The figure shows the parameterization result for the hemisphere. All regions are merged into a single base region; a single base region is valid for a surface that is topologically identical to a 2-dimensional disk. The newly generated texture map is used for the multiresolution geometry model.

The final step of the texture image generation is a gap filling stage. Because the optimized parameterization is not square, the captured texture image contains texels with the background color; in the figure, the area outside the disk consists of such background texels. These background-colored texels would produce undesirable effects in the final rendering: round-off error in the texture coordinates can map to nearby background texels instead of the correct ones, and the background texels yield undesired blurred colors when mipmapping is used for rendering.

The gap filling stage uses a mipmap hierarchy. In the first stage, a mipmap hierarchy of n levels is built from the generated texture map; the level n image is the generated texture map. The level i image is generated from the level i+1 image by averaging the 4 corresponding texel values, so the level i image has half the width and half the height of the level i+1 image, and each texel of the level i image corresponds to a 2x2 texel window of the level i+1 image, as in conventional mipmap construction. If a corresponding texel of the level i+1 image is a background texel, its value does not participate in the average; if all corresponding texels are background texels, the level i texel is marked as background. The second stage re-propagates texel values from the level 0 image back to the level n image: a background texel of the level i+1 image is filled with the value of the corresponding level i texel. The resulting texture image is a gap-filled image that can be used for rendering.
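This pull/push gap filling can be sketched compactly with numpy; a grayscale image and power-of-two size are assumed for brevity, and the background mask marks the empty texels.

```python
import numpy as np

def fill_gaps(tex, background):
    """Fill background texels of `tex` using a mipmap-style pull/push pass.
    tex: (H, W) float image, H and W powers of two; background: (H, W) bool."""
    img, mask = tex.copy(), ~background          # mask: True where a texel is valid
    levels = [(img, mask)]
    while img.shape[0] > 1:                      # pull: average valid texels only
        h, w = img.shape[0] // 2, img.shape[1] // 2
        blocks = img.reshape(h, 2, w, 2)
        valid = mask.reshape(h, 2, w, 2)
        counts = valid.sum(axis=(1, 3))
        sums = (blocks * valid).sum(axis=(1, 3))
        img = np.where(counts > 0, sums / np.maximum(counts, 1), 0.0)
        mask = counts > 0
        levels.append((img, mask))
    # push: propagate coarse values back down into background texels
    for (fine, fine_mask), (coarse, _) in zip(levels[-2::-1], levels[::-1]):
        up = np.repeat(np.repeat(coarse, 2, axis=0), 2, axis=1)
        fine[~fine_mask] = up[~fine_mask]
        fine_mask[:] = True
    return levels[0][0]

# 4x4 texture where only the top-left 2x2 block holds real colors.
tex = np.zeros((4, 4)); tex[:2, :2] = 1.0
bg = np.ones((4, 4), dtype=bool); bg[:2, :2] = False
print(fill_gaps(tex, bg))
```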

3.4. Experimental results and discussion

The experiments were performed on several geometric models with various color and texture mappings. The generated texture images have a resolution of 512x512, and all experiments were performed on a conventional personal computer with a Pentium IV processor and an NVIDIA GeForce graphics card. Unless stated otherwise, the weight values used for the distortion error are α = 0.8 and β = .

(a) Hemisphere (b) Mapping images
Figure 50 Hemisphere with 9 different texture images (471 polygons)

Figure 50 is an example of a geometry with a discontinuous texture mapping function. Figure 51 shows the result of geometry-only simplification using the M-AGSphere; the left model has 87 polygons and the right one has 45. The texture boundary is broken while the geometric shape is well preserved.

Figure 51 Simplification result using M-AGSphere without considering textures (87 and 45 polygons)

Figure 52 Generated texture map without considering multiresolution mapping error

Figure 52 is a texture image generated with the parameterization method but without considering the multiresolution mapping error. Figure 53 shows the rendered results using this generated texture map. In (a), the high detail renderings of the original model and of the model with the generated texture are compared and look quite similar. (b) shows the rendered results of the simplified geometry with the original textures and with the generated ones. Although the generated texture shows distortions, it is still better than the geometry-only simplification.

(a) High detail rendering. The left image is rendered with the original texture and the right image with the generated texture.
(b) Low detail rendering (87 and 45 polygons)
Figure 53 Rendering result of the simplified model

Figure 55 is an example of texture generating simplification. In the figure, (a) shows the result of geometry-only simplification, obtained with the M-AGSphere; the simplification operations clearly break the color information over the surface. (b) shows the rendered image using the proposed texture generation method: with the generated texture, the rendered result is quite similar to the original one. Figure 54 shows the generated texture image for the model. For a fair comparison, all lighting effects were removed for the renderings in Figure 55.

(a) Original model with 4,984 polygons (b) Generated texture map
Figure 54 Texture map for an object with multiple color material

(a) Geometry simplification without texture map (b) Geometry simplification with generated map
From left: 2,960 polygons, 1,476 polygons, and 798 polygons
Figure 55 Simplification example for an object with multiple color material

(2,280, 1,476, 798, and 2,960 polygons)
Figure 56 Rendering result for different detail levels. The upper row is rendered using the generated texture map, the lower row with geometry-only simplification.

Figure 56 is another example using the M-AGSphere for an object with multiple color material. As for textured geometry, the proposed method also works well for colored geometry. Figure 57 shows a result for a model of a building wall. In a typical modeling process, each object is modeled with many discontinuous texture mappings and colors. Using the proposed method, the rendering of the simplified geometry is very close to that of the detailed geometry.

(a) Original model (5,000 polygons with 5 different texture images)
(b) Texture images used in the original model
(c) Geometry-only simplification
(d) Generated texture map
(e) Rendered result with the generated texture map (11 polygons)
Figure 57 Example for a geometry with multiple texture images and 5,000 polygons

Figure 58 shows the rendering results using two different texture maps. The texture map for the left image was generated using the pixel resolution error only; the right image uses the texture map that minimizes the full multiresolution mapping error. Both models have about 1,500 polygons. The

result is greatly improved by using the texture map that minimizes the multiresolution mapping error.

Figure 58 Rendering result of the texture map with minimized multiresolution mapping error. The left image is the result without the multiresolution mapping error; the right image is the result with the multiresolution mapping error minimized.

Figure 59 Multiresolution mapping error for simplification

Figure 59 shows the relationship between the multiresolution mapping error and the number of polygons. As the model is simplified, the multiresolution mapping error increases, which is the proper behavior for satisfying the monotone fidelity constraints. The lower graph shows the vertex deviation of the simplification operator. The peak in the vertex deviation graph is due to the penalty we add for over-simplified vertices: such vertices often have an excessive number of connected edges, which gives unnatural rendering results. To prevent this, we propagate the deviation of the removed vertex plus an extra penalty value to all its neighbor vertices. The plotted value is the pure deviation without the extra penalty.

3.5. Conclusion

Our proposed method generates a continuous texture mapping for the multiresolution geometry. The method minimizes the multiresolution mapping error, which is defined by the texture distortion error including the errors caused by geometry simplification. The generated texture map is valid for the whole multiresolution geometry. Using the generated texture map, the original appearance is well preserved with a relatively small number of polygons. The boundaries of the generated texture map are determined by p-boundaries, which have no boundary crossing simplification.

Although we minimize the texture distortions during generation, a small amount of distortion remains in the texture map. Figure 60 shows a rendering with blurring caused by texture distortion. This distortion is magnified when a low resolution texture image is used: Figure 61 shows the result using a texture image with 128x128 resolution. With the same parameterization, the result is not satisfactory for the low resolution texture image. In chapter 4, we propose a multiresolution texture model to enhance the rendered result of the simplified geometry with a texture map.

Figure 60 Blurred region caused by texture distortion

(a) Texture (b) Rendered result
Figure 61 Rendering result using a low resolution texture

4. Multiresolution texture generation using region merging

4.1. Motivation

Multiresolution model rendering is an approach that uses models with different complexities and fidelities to control the speed and quality of real-time rendering. So far, the geometry has been the main focus of multiresolution research; as stated in chapter 2, there have been many research efforts on the multiresolution geometry model. In this chapter, a multiresolution model for the texture map is investigated.

Texture mapping adds rendering cost to polygonal geometry rendering, and the texture rendering cost is one of the important factors that determine the rendering time (section 1.3.1). The rendering cost of texture mapping is determined by the resolution of the texture images. In the conventional rendering pipeline described in Figure 2, texture mapping is performed in the pixel processing stage, whose time cost depends on the pixel resolution of the polygon to be rendered and on the resolution of the texture image mapped onto it. Using a higher resolution texture generally produces a higher quality scene at a higher rendering cost. Although high resolution textures can introduce pixel sampling problems, these defects are reduced by various anti-aliasing techniques, and in this chapter we do not consider these sampling effects further.

The texture map generated in the previous chapter uses the p-boundaries of the most simplified geometry as the boundaries of the texture image; furthermore, these boundaries are merged into boundaries of larger regions to give a more continuous mapping. This gives the generated texture map unavoidable distortions. We therefore devise a refinement method for the generated texture map. The refined texture maps form a multiresolution structure of texture maps with higher resolution and less distortion.

The factors that cause multiresolution mapping distortion are the pixel resolution error and the polygon resolution mapping error. The pixel resolution error can be reduced by subdividing the texture map into a set of sub-maps [Maillot93][Piponi2000]. The polygon resolution mapping error is measured by the deviations in the texture image for vertices of different geometry levels. Vertices on p-boundaries have no vertex-wise deviations; all possible deviation is caused by simplified vertices. To reduce the polygon resolution mapping error, we use a higher detail geometry, which has fewer simplified vertices. In the proposed multiresolution texture mapping scheme, the texture map generated in the previous chapter serves as the lowest detail texture. Through successive refinements of this texture map, we build a set of higher detail texture maps, which together form what we define as a multiresolution texture map.

Using the multiresolution texture map, we can control the rendering speed by selecting texture images of the proper resolution; the multiresolution texture and geometry together give thorough control over the rendering speed. In this chapter, a method for generating the multiresolution texture model is described.

4.2. Requirements

A multiresolution texture model is a hierarchy of texture maps that are mapped onto a multiresolution geometry model. The hierarchy is based on rendering cost and quality: to be a good multiresolution model, every model should satisfy the monotone complexity constraints and the monotone fidelity constraints. In the proposed texture map generation scheme, the fidelity of a texture map is measured by the texture distortion error metric, called the multiresolution mapping error, and by the resolution of the texture image: a texture with a high distortion value has low fidelity, and vice versa. The texture complexity is measured by the resolution of the texture image. The generated multiresolution texture model should therefore have high resolution for the high detail level textures and small texture distortion for the low detail level textures.

The texture distortion value is calculated for a given geometry detail level and texture map. If a higher detail geometry is mapped to the same texture image, the rendered result has less texture distortion because the multiresolution mapping error is reduced. Furthermore, if a texture map is generated using the p-boundaries of a higher detail geometry, its multiresolution mapping error is reduced. In the previous chapter, the texture map was generated using the simplest detail geometry; the idea behind enhancing the texture map is to use a more detailed geometry whose p-boundaries give less distortion.

The last condition is that the enhanced texture should preserve connectivity for simplified polygons. Using a more detailed geometry introduces new p-boundaries that are not boundaries of regions in the simpler geometry, which limits the validity of the texture. A valid mapping is defined as follows.

A texture mapping function Φ is valid iff Φ(p) lies in the polygon (Φ(v_1), Φ(v_2), ..., Φ(v_n)) on a single texture image, for every point p in a polygon (v_1, v_2, ..., v_n) on the geometry surface.

Roughly speaking, the generated texture cannot be mapped onto simpler geometries. We therefore propose a multiresolution texture model that also has a hierarchical relationship on validity: the lowest texture map is valid for the whole multiresolution geometry, and higher level texture maps have a smaller range of valid geometries in the multiresolution geometry hierarchy. This constructs a hierarchical relationship of texture maps and geometry levels with valid mapping functions.

A simple way to satisfy these conditions is to use a MIPMAP-related method for the multiresolution hierarchy. Originally, MIPMAP is a method for solving the sampling problem of texture maps by using a pre-generated set of textures of different resolutions [Ebert98], and it can be applied to multiresolution texture model generation. To adopt a MIPMAP-style approach, we take the texture generated for the base mesh as the base level texture map with the lowest resolution. The base level texture map is then refined to a higher resolution: the refined texture map is a re-sampled texture image with the same mapping function as the base texture map, and the re-sampled image has quadruple the size of the previous low level texture image, which constructs a MIPMAP hierarchy in reverse order. The generated hierarchy satisfies the previous conditions of the multiresolution texture model: it has monotone fidelity, monotone cost, and continuity. The refined texture map has low pixel resolution distortion because of its small resolution modifier; however, although the refinement reduces the distortion by reducing the resolution modifier, it does not reduce E_param, which also contributes to the pixel resolution distortion.
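A minimal numpy sketch of this reverse MIPMAP construction, assuming the highest-resolution capture already exists and each coarser level is produced by 2x2 averaging (a sketch only, not the thesis implementation):

```python
import numpy as np

def reverse_mipmap_levels(finest, n_levels):
    """Return [coarsest, ..., finest]: each level has quadruple the texels of
    the previous one and shares the same texture mapping function."""
    levels = [finest]
    for _ in range(n_levels - 1):
        img = levels[0]
        h, w = img.shape[0] // 2, img.shape[1] // 2
        coarser = img.reshape(h, 2, w, 2).mean(axis=(1, 3))  # 2x2 box filter
        levels.insert(0, coarser)
    return levels

# A 512x512 capture reduced to a 4-level hierarchy: 64, 128, 256, 512.
finest = np.random.rand(512, 512)
hierarchy = reverse_mipmap_levels(finest, n_levels=4)
print([lvl.shape for lvl in hierarchy])   # [(64, 64), (128, 128), (256, 256), (512, 512)]
```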

Figure 62 Histogram of the polygon area ratio for corresponding polygons

Figure 62 shows the histogram of the ratio of polygon area between the texture image and the geometry mesh. If the ratio is near 1, the texture image is well mapped onto the geometry. Strictly speaking, equal areas of two corresponding polygons do not guarantee minimal texture distortion, but the ratio is sufficient to see the overall behavior of the mapping. The figure shows that the area ratio of the triangles has a widely spread distribution, meaning we would have to enlarge the texture map considerably to provide enough pixel resolution to minimize the texture distortions. Using an extremely high resolution texture is not possible on conventional hardware, and it requires a lot of texture data to be processed, which slows down rendering. Furthermore, the MIPMAP hierarchy does not reduce the polygon resolution error.

In this chapter, we propose a method to build the multiresolution texture model. The multiresolution mapping error is reduced when the texture map is subdivided into several sub-maps; based on this observation, our method builds refined texture maps by subdividing the original texture maps.

4.3. Multiresolution texture model with multiresolution geometry

4.3.1. Overview

The multiresolution geometry is a set of geometries that form a hierarchy from the most simplified mesh to the most detailed mesh. If we have sequential levels of detail, the multiresolution model is expressed by a sequence from the mesh M_n with the most polygons down to the coarse mesh M_0 with the fewest polygons; M_{n-1} is the mesh obtained from M_n by applying a unit amount of simplification. This sequence is well defined for the progressive mesh, but it is not easy to define a sequential level hierarchy for complex multiresolution models such as the M-AGSphere or the vertex tree. In this section we use a progressive-mesh-like structure for description purposes; the M-AGSphere and vertex tree cases are treated in section 4.3.2.

The multiresolution texture is defined analogously to the multiresolution geometry. A texture map is defined by a texture image T and a mapping function Φ. A texture image is a 2D image, and the texture detail is defined as the resolution of the texture image. The coarsest texture is denoted T_0 and the most detailed texture T_m. The texture mapping function Φ maps from the geometry to the texture image. For the multiresolution geometry and texture, a mapping function is defined for each pair of geometry level and texture level: the texture mapping function from the mesh M_i to the texture T_j is denoted Φ_{i,j}. This relation is illustrated in Figure 63.

Figure 63 Texture mapping function for multiresolution geometry and multiresolution texture

The goal of the multiresolution texture map generation is to generate the series of texture images T_0 to T_m and the set of texture mapping functions Φ_{i,j} that minimize the multiresolution mapping error. By generating a texture image that minimizes the multiresolution mapping error, a set of texture mapping functions onto that texture image is defined. The texture image T_0 generated in chapter 3 has a set of texture mapping functions Φ_{i,0} for i = 0, ..., n. Each of these is defined by point-consistent mapping with respect to the mapping function Φ_{n,0}; Φ_{n,0} is generated by minimizing the multiresolution error, which keeps the texture distortion of the Φ_{i,0} small. Starting from the base texture map generated from the base geometry M_0, the refined texture maps are generated using more detailed geometry with more p-boundaries. The overall process consists of four steps:

1) Select a set of p-boundaries from a refined geometry
2) Select a set of regions to be mapped onto a single texture image
3) Generate the texture image and the texture mapping function
4) Extend the generated texture mapping function to a set of geometry levels using point-consistent mapping

In section 4.3.2, we describe the first step of the process and discuss the validity of the texture mapping function. The main generation step is described in section 4.3.3.

4.3.2. Geometry level selection for texture refinement

The multiresolution texture is generated using more p-boundaries from a detailed geometry model. A higher detail geometry contains more p-boundaries; in other words, it has a larger number of regions. In the M-AGSphere, the boundaries of the level-i regions are not merged in the higher detail level cells. This means that the boundaries of the level-i regions undergo no boundary crossing simplification while the original detailed mesh is simplified through the M-AGSphere; a boundary is simplified only when level-i regions that share it become neighbors or when level-i regions are merged into level-(i-1) regions. From this observation, we can see that the boundaries of the level-i regions are p-boundaries for all regions of level i and higher.

We define the rank-i geometry as the set of multiresolution geometries whose regions all have level greater than or equal to i. A rank-i p-boundary is a boundary that has no boundary simplification in any rank-i geometry. The rank-0 geometry covers all geometry levels of the M-AGSphere, and the rank-0 p-boundary is the p-boundary defined in chapter 3. The rank-n geometry set consists of the single geometry of highest level, where n is the highest cell level of the M-AGSphere.

Figure 64 Rank-i geometry

The texture generated from the rank-0 p-boundaries is refined using the rank-1 p-boundaries, and the texture map is refined successively using higher rank p-boundaries: with the M-AGSphere, the level-j texture map is generated from the rank-j p-boundaries. The vertex tree case is similar. For the edge collapsing operator, the modified simplification builds a region hierarchy on the edge collapsing operations (section 3.2). Level-0 regions are the regions of the most simplified geometry. Clearly, the boundaries of the level-i regions are not simplified by the set of edge collapses that generate those regions, so for a geometry whose regions all have level greater than or equal to i, its level-i region boundaries are p-boundaries of this geometry.

The rank-i p-boundary is defined by the boundaries of the level-i regions. The progressive mesh does not have an explicit rank concept. Instead of defining a rank-i geometry for the progressive mesh, we use the region hierarchy constructed in chapter 2: every region is merged through edge collapsing, which builds a region hierarchy. The level-0 regions are the two most simplified regions, and the rank-i p-boundaries are the boundaries of the level-i regions; these boundaries have the properties of a p-boundary.

The rank-j regions are defined by the rank-j p-boundaries, and the texture image T_j is generated from the rank-j regions. Using the approach described in chapter 3, T_j has a texture mapping function Φ_{n,j} to the most detailed geometry. From T_j and Φ_{n,j}, a set of mapping functions is defined by point-consistent mapping with respect to Φ_{n,j}. Because the texture map T_j is defined by the rank-j regions, its texture mapping functions are defined for the rank-j geometries. The texture image T_0 is valid for all the multiresolution geometry models, T_n is valid only for the most detailed geometry, and the set of geometries valid for T_i includes the geometries valid for T_{i+1}.

4.3.3. Texture map refinement using region splitting and merging

Using the level-j regions, a texture map T_j is generated by minimizing the multiresolution mapping error, measured over the rank-j geometry: every vertex removed in producing the rank-j geometry is gathered and processed to measure the multiresolution mapping error. With a set of level-j regions, a new refined texture image is generated. Basically, the regions are merged into a small number of regions to give a partially continuous mapping for the geometry. The number of merged regions should be larger than the original number of texture images: with many regions, we can reduce the multiresolution mapping error, at the cost of continuity and of the rendering overhead of using many textures. Several methods can be used to build the regions, including merging with an error threshold or subdividing the original regions into a set of small regions.

Figure 65 Multiresolution texture map

We use a subdivision-based method. Given a texture map T_j from the rank-j geometry, the texture map T_{j+1} is generated by refining the level-j regions. Each level-j region consists of a number of level-(j+1) regions that subdivide it into a disjoint set. The level-(j+1) regions are merged to generate a small number of regions using the region merging process described in section 3.3.1. Since a level-j region is constructed with the spherical closeness measure in mind, it is natural to subdivide the region into 4; the process follows the merge sequence of section 3.3.3 until the total number of regions is reduced to 4. The level-(j+1) textures are generated from the merged regions, which we call level-(j+1) m-regions.

The multiresolution texture is generated by a successive sequence of texture refinements. Starting from the level-0 m-regions, the level-i m-regions are subdivided into 4 (or more) level-(i+1) m-regions, and a texture map is generated for each level-(i+1) m-region. The final shape of the multiresolution texture is thus a quadtree-like hierarchy of texture images and corresponding texture mapping functions. Figure 65 illustrates the multiresolution texture map together with the multiresolution geometry model.
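The resulting structure can be sketched as a small quadtree-like node type that records, for each chart, the texture level and the lowest geometry rank for which its mapping is still valid; field and function names here are hypothetical, not the thesis data structures.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TextureNode:
    """One m-region chart in the quadtree-like multiresolution texture."""
    level: int                     # texture level j (0 = coarsest chart)
    min_valid_rank: int            # mapping is valid for geometry rank >= this
    image_size: int                # e.g. 512 for a 512x512 texel image
    children: List["TextureNode"] = field(default_factory=list)

    def refine(self, n_children: int = 4) -> None:
        """Subdivide this m-region into (at least) 4 finer m-region charts."""
        self.children = [
            TextureNode(self.level + 1, self.min_valid_rank + 1, self.image_size)
            for _ in range(n_children)
        ]

    def charts_for(self, geometry_rank: int) -> List["TextureNode"]:
        """Finest set of charts whose mapping is still valid for this rank."""
        if self.children and self.children[0].min_valid_rank <= geometry_rank:
            return [c for child in self.children for c in child.charts_for(geometry_rank)]
        return [self]

root = TextureNode(level=0, min_valid_rank=0, image_size=512)
root.refine()                       # level-1 charts, valid for rank >= 1 geometry
print(len(root.charts_for(0)))      # 1: a rank-0 geometry can only use the base chart
print(len(root.charts_for(1)))      # 4: a rank-1 geometry can use the refined charts
```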

4.4. Experimental results

To build a multiresolution texture model, MIPMAP is one of the easiest methods to use. Figure 66 shows rendered images of a multiresolution hierarchy built with the MIPMAP-based method; the hierarchy is generated using the method described in section 4.2. In the experiments, the lowest level uses a 64x64 texture and the highest level a 512x512 texture. The left image is rendered using the original high detail model with 5,000 polygons and 7 different texture images; the other images are rendered using textures of different detail on the simplified geometry, using, from left, texture images of 512x512, 256x256, 128x128, and 64x64 resolution. The simplified model has 11 polygons.

Figure 66 Rendered image using MIPMAP style multiresolution texture

The texture distortion visible in the right-most image is reduced when the high resolution texture is used. Figure 67 is another example using the MIPMAP-style multiresolution texture; the resolution of the texture image grows from 64x64 to 512x512. The right-most image is rendered using the 64x64 resolution, and higher resolution textures are used toward the left. The bottom row shows the images rendered from a distance, to show the effect of using low resolution models for a distant object: the rendering from the distant camera shows little difference when the simple texture is used.

Figure 67 Rendered image using MIPMAP style multiresolution texture (2)

Although the MIPMAP-style multiresolution approach reduces the pixel resolution error, the polygon resolution error still remains in the result. Figure 68 shows the refined texture maps for the same model as Figure 67: the original 512x512 texture image is partitioned into four 512x512 sub-images.

Figure 68 Refined texture maps from the global texture map

Figure 69 compares the rendering results of the original map and the refined map. To give a fair comparison, we sample the refined map at a smaller resolution that matches the total resolution of the original map: the original map is a single 512x512 image and the refined map is four 256x256 images. From Figure 69 and Figure 70, we can see that the refined map produces fewer artifacts in the rendered image. The original map shows strong distortion around the eyebrows and distorted, blurred results along the necklace of the model; the right part of Figure 70 shows that the refined map gives less distorted results in these areas.

Figure 69 Rendering results using the refined texture map. The left image was generated using the single original texture map and the right one used the refined map.

Figure 70 Detail comparison of the two maps.

Figure 71 Ratio of polygon area between texture image and geometry mesh

Figure 71 shows the histogram of the ratio of polygon area between the texture image and the geometry mesh. A ratio near 1 means that the texture image is well mapped onto the geometry. The histogram shows that the refined map gives a less distorted mapping. Furthermore, the standard deviation of the histogram is smaller for the refined map than for the global map, which shows that the refined map has fewer polygons with distorted areas; the refined map is therefore a better representation of the mapped geometry.

The multiresolution mapping error for the enhanced texture is also reduced. In the previous example, the total multiresolution mapping error is 87.9 for the base texture map and is lower for the enhanced texture map. Using the enhanced texture map, the rendering result is further improved. The enhanced texture maps build up the multiresolution texture structure; using the multiresolution texture map together with the multiresolution geometry, the rendering quality and the rendering cost can be controlled more precisely.
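The distortion measure behind Figure 71 can be computed directly from the mesh: for each polygon, take the ratio of its normalized area in the texture image to its normalized area on the geometry, then examine the histogram and standard deviation of those ratios. The sketch below does this for a triangle mesh with per-vertex texture coordinates; the function and type names are assumptions, not the thesis implementation.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };
struct UV   { float u, v; };
struct Tri  { int a, b, c; };        // indices into positions / texture coordinates

static float area3D(const Vec3& p, const Vec3& q, const Vec3& r) {
    Vec3 e1{q.x-p.x, q.y-p.y, q.z-p.z}, e2{r.x-p.x, r.y-p.y, r.z-p.z};
    Vec3 n{e1.y*e2.z - e1.z*e2.y, e1.z*e2.x - e1.x*e2.z, e1.x*e2.y - e1.y*e2.x};
    return 0.5f * std::sqrt(n.x*n.x + n.y*n.y + n.z*n.z);
}
static float area2D(const UV& p, const UV& q, const UV& r) {
    return 0.5f * std::fabs((q.u-p.u)*(r.v-p.v) - (r.u-p.u)*(q.v-p.v));
}

// Per-triangle ratio of normalized texture-space area to normalized surface
// area; a value near 1 means the texture is mapped with little area distortion.
std::vector<float> areaRatios(const std::vector<Vec3>& pos, const std::vector<UV>& tex,
                              const std::vector<Tri>& tris)
{
    std::vector<float> a2(tris.size()), a3(tris.size());
    float sum2 = 0.0f, sum3 = 0.0f;
    for (std::size_t i = 0; i < tris.size(); ++i) {
        a2[i] = area2D(tex[tris[i].a], tex[tris[i].b], tex[tris[i].c]);
        a3[i] = area3D(pos[tris[i].a], pos[tris[i].b], pos[tris[i].c]);
        sum2 += a2[i];  sum3 += a3[i];
    }
    std::vector<float> ratio(tris.size());
    for (std::size_t i = 0; i < tris.size(); ++i)
        ratio[i] = (a2[i] / sum2) / (a3[i] / sum3 + 1e-12f);
    return ratio;   // build the histogram or standard deviation from these values
}
```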
