An Algorithm of 3D Mesh Reconstructing Based on the Rendering Pipeline


3rd International Conference on Mechatronics and Information Technology (ICMIT 2016)

Zhengjie Deng1,a, Shuqian He1,b, Chun Shi1,c, Cuihua Ma1 and Xiaojian Wu2

1 School of Information Science and Technology, Hainan Normal University, Haikou, Hainan, China
2 Imofang Technology Co., Ltd., Haikou, Hainan, China

a) hsdengzj@163.com, b) 76005796@qq.com, c) 605515770@qq.com

Keywords: 3D mesh, mesh reconstruction, rendering pipeline, texture

Abstract. Applications of 3D meshes often need the same model at several levels of detail (LOD). This paper presents an algorithm that reconstructs 3D meshes using the rendering pipeline. The algorithm performs the coordinate transform in the vertex shader while recording each vertex's original coordinates, interpolates those 3D coordinates for every pixel during rasterization, extracts the visible pixels via depth testing in the pixel shader, and outputs the corresponding 3D coordinates. Using the pixels' relative positions, it connects the recovered 3D vertices into a triangle mesh for the current view; the final LOD mesh is obtained by merging the triangle meshes from several views. Experiments show that the algorithm is effective, fast, and convenient for adjusting the LOD.

Introduction

Applications of virtual reality are becoming more common, so rendering the meshes of virtual objects is a routine need. The higher the fidelity users demand of a virtual scene, the more detail must be rendered. In a scene with many objects, changing the view angle or distance causes the same object to be rendered in regions of different sizes, occupying different numbers of pixels. For efficiency, it is better to use a low-LOD mesh when the object occupies few pixels: fewer vertices are processed, so rendering is faster.
Conversely, it is better to use a high-LOD mesh when the object occupies many pixels, so that more detail is shown. Different LOD meshes of the same object are therefore needed. To obtain them, users typically start from a mesh of normal granularity and derive the other LODs with dedicated algorithms. Simplification methods easily create lower-LOD meshes: [1] presented a greedy approach that simplifies regions dense with triangles, and [2] surveyed the most notable algorithms, including an overview of iterative edge-contraction methods. Subdivision methods easily create higher-LOD meshes: [3] presented a stationary subdivision scheme that performs slower topological refinement than the usual dyadic split operation, and [4,5] implement subdivision through spline surfaces and piecewise smooth surfaces. But a user who wants both lower and higher LODs must be familiar with both kinds of methods and switch between them frequently.

As GPU computing grows stronger, more and more computations, especially parallelizable ones, are moved to the GPU for better performance. [6] demonstrated a mesh decimation method that adapts vertex clustering to the GPU by taking advantage of the geometry shader stage, and presented a novel general-purpose data structure designed for streaming architectures.

In this paper, we present a 3D mesh reconstruction algorithm based on the rendering pipeline. Focused on adjusting LOD, it decides the LOD through the view distance, which can be done visually. It applies the GPU's rendering pipeline to create the target mesh quickly with parallel computing. The main contributions are as follows.
(1) Record the coordinates of the mesh's vertices in an A32B32G32R32-format texture.
(2) According to the view angle, generate the 3D vertices and arrange them in the order of a texture's pixels, which makes it easy to organize them into a mesh.

© 2016 The authors. Published by Atlantis Press.

The rest of the paper is arranged as follows: Section 2 reviews the traditional methods for creating meshes at different LODs; Section 3 illustrates the main idea of our algorithm; Section 4 introduces the normal flow of the rendering pipeline; Section 5 presents in detail how to reconstruct the mesh based on the rendering pipeline and how to implement the algorithm, including the vertex shader and the pixel shader; Section 6 shows experiments using the algorithm. Finally, Section 7 concludes and discusses future work.

Traditional LOD Mesh Reconstructions

When the required LOD is lower than the original, the new mesh is a simplified version of the original one. To keep the new mesh as similar as possible to the original, many simplification methods choose each new vertex by Eq. 1:

L(u,v) = argmin Dist(S, S(L(u,v))),  (1)

where S is the original mesh; u and v are parametric coordinates on S; L(u,v) is a new vertex for the area around (u,v), used to replace a group of vertices, edges, or patches on S; S(L(u,v)) is the intermediate mesh obtained by using L(u,v); and the distance function Dist measures the difference between S and S(L(u,v)). Usually the function is evaluated over a local area, and different simplification methods provide different distance functions.

When the required LOD is higher than the original, the mesh needs subdividing. To keep the refined result similar to the original, many subdivision methods generate each new vertex by Eq. 2:

H(u,v) = Interpolate(S, u, v),  (2)

where the subdivision function Interpolate interpolates over the area around (u,v) to generate the new vertex H(u,v). The computations in Eq. 1 and Eq. 2 usually use the local vertex data of the original mesh directly to produce a new vertex for the corresponding area.
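Eq. 1 can be illustrated with a minimal sketch. The distance function and candidate set below are toy stand-ins invented for this example; real simplifiers such as those surveyed in [1,2] use more sophisticated error metrics:

```python
import math

def dist(p, q):
    """Euclidean distance, standing in for the paper's Dist function."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def collapse_vertex(group, candidates):
    """Eq. 1 in miniature: pick L(u,v) as the candidate whose worst-case
    distance to the replaced group of vertices is smallest."""
    return min(candidates, key=lambda c: max(dist(c, v) for v in group))

# Collapse three vertices of a local patch; the candidates are the
# vertices themselves plus their centroid.
group = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (0.0, 2.0, 0.0)]
centroid = tuple(sum(c) / len(group) for c in zip(*group))
best = collapse_vertex(group, group + [centroid])   # -> the centroid
```

The centroid wins here because it minimizes the maximum distance to the patch, which is the general shape of the local optimization that Eq. 1 describes.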
The final result of both kinds of methods is a new mesh T that is similar to S while satisfying the specified LOD requirement.

The Main Idea of the Algorithm

If a mesh T satisfying the requirement already existed, the remaining work would only be to sample T at (u,v). Fortunately, whenever the model is shown to the user, the GPU has effectively already created a view mesh T_i: the mesh T as seen from the current view angle. So T can be generated by merging the T_i's, each of which is created quickly by GPU rendering. The main idea of our algorithm follows Eq. 3:

T_i = View(S, i),  (i = 1, ..., n),
T = Merge({T_i}),  (3)

where the function View creates the view mesh T_i, i.e., the original mesh S seen from the i-th view angle; n is the number of view angles, chosen according to the actual model; and the function Merge generates the new mesh T by merging all view meshes.

Our algorithm resembles parameterizing a mesh and then sampling it at a fixed interval. Unlike traditional parameterization methods, however, it parameterizes directly through the projection during rendering, so the result matches the original mesh as seen from the view angle. The advantages of our algorithm are as follows:

(1) Simple operation: only the viewport needs adjusting to specify the LOD.
(2) Speed: based on the rendering pipeline, the target mesh is generated at interactive rates.

(3) High similarity: the new result looks the same as a magnified view of the original model.

Rendering Pipeline

Following the basic procedure of the rendering pipeline, let the object's world transform matrix be W, the view transform matrix V, the projection transform matrix P, and the matrix magnifying to the screen E. Organizing the vertex coordinates of S into a matrix, also named S, the mesh SE in screen space transformed from the original S is given by Eq. 4:

SE = S * W * V * P,  (4)

At this point, the LOD of SE is still the same as that of S. To generate a mesh at a specific LOD, the user must specify one for the application at hand. In real life, the farther the viewer is from an object, the lower the required LOD, and vice versa; accordingly, our method lets users decide the LOD by adjusting the distance to the object. In Fig. 1, the user adjusts the distance to the car and thereby decides how many pixels the car occupies: fewer pixels, as in Fig. 1(a), means a lower LOD; more pixels, as in Fig. 1(c), means a higher LOD.

Figure 1: Deciding the LOD by adjusting the view distance ((a) far, (b) medium, (c) near).

Next, according to the viewport size and the distance, LOD sampling is performed. GPU rasterization is applied through Eq. 5 to obtain the sample result SR:

SR = Rasterize(SE),  (5)

where the function Rasterize rasterizes SE against the current viewport to obtain the pixel set SR. Internally, Rasterize acts like the function Interpolate in Eq. 2: it computes each new vertex position by interpolation between vertices, which ensures the similarity of SR and SE. Unlike Interpolate, however, Rasterize generates a vertex set at a different LOD, lying on the surface formed by the parts of the object visible in the current view. Vertices occluded by others are filtered out during depth testing.
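Eq. 4 and the screen mapping E can be sketched in pure Python. This is a hand-checkable toy, assuming Direct3D's row-vector convention (which the paper's shader code also uses); the matrices and the viewport size are illustrative, and perspective-correct details are omitted:

```python
# Sketch of Eq. 4: a row-vector vertex is multiplied by W, V, and P in
# turn; the perspective divide and the viewport mapping (the paper's E)
# then take it to screen space.
def vec_mat(v, m):
    """Row vector (len 4) times a 4x4 matrix (nested lists)."""
    return tuple(sum(v[k] * m[k][j] for k in range(4)) for j in range(4))

def identity():
    return [[float(i == j) for j in range(4)] for i in range(4)]

def to_screen(obj_pos, W, V, P, width, height):
    x, y, z, w = vec_mat(vec_mat(vec_mat(obj_pos, W), V), P)
    x, y = x / w, y / w                   # perspective divide to NDC
    sx = (x * 0.5 + 0.5) * width          # viewport mapping, i.e. E
    sy = (1.0 - (y * 0.5 + 0.5)) * height # screen y grows downward
    return sx, sy

# Toy projection that copies z into w so the divide has an effect.
P = identity()
P[2][3], P[3][3] = 1.0, 0.0
sx, sy = to_screen((0.0, 0.0, 2.0, 1.0), identity(), identity(), P, 800, 600)
# A vertex on the view axis lands at the viewport center: (400.0, 300.0).
```

The point of the sketch is the ordering: the object-space position is consumed by W, V, P before rasterization, which is exactly why Section 5 must carry the original coordinates along separately.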
Mesh Reconstructing Based on the Rendering Pipeline

Mesh Reconstructing. The GPU's rendering pipeline can quickly finish the computations of Eq. 4 and Eq. 5, generating SR. But the coordinates of the target mesh should be in the coordinate space of the original mesh S, so SR must be restored to a mesh S' in the original space, computed by Eq. 6:

S' = SR * P^-1 * V^-1 * W^-1.  (6)

If this restoration could be moved before Rasterize, the original S could be passed into Rasterize directly, saving much computation. Unfortunately, Rasterize cannot operate on vertices in object space. Fortunately, the GPU supports attaching textures to a mesh, so every vertex can carry one or more texture pixel values. During vertex transformation, the texture values are not transformed, yet Rasterize interpolates them according to the coordinates. Moreover, almost all graphics cards now support the A32B32G32R32 texture format, whose pixels can hold the full 3D coordinates of a vertex. So, to skip the restoration, the algorithm attaches an A32B32G32R32 texture to SE, with each pixel's value set to the coordinates of the corresponding vertex of S. The transforming-and-restoring computation is thereby replaced by Eq. 7:

(SR, SN) = Rasterize((SE, SE.T)),  SE.T = S,  (7)

where (SE, SE.T) is SE with the attached texture SE.T, whose values come from the coordinates of S. After rasterization, SN is the vertex set in object space: SR is the result of sampling SE, and SN is the corresponding result of sampling S.

Viewed from the current angle, SN is a vertex set arranged like the pixels of a texture. Connecting neighboring vertices constructs a triangle mesh. If the construction also uses a background plane (a plane farther away than the object, or the object's bounding box), a mesh T_i for the current view angle is obtained. Users can repeat the construction for different angles to obtain all the T_i's; usually one construction per face normal of the bounding box (six in total) suffices, while non-convex objects may require more T_i's. Merging these meshes then yields the new LOD mesh for the object.

Implementing the Algorithm. The application illustrating the results in this paper is developed with Microsoft's Direct3D library. Eq. 4 and Eq. 7 are implemented in the vertex shader; the detailed procedure is the function VertDepth.

void VertDepth(float4 S : POSITION,
               out float4 SE : POSITION,
               out float4 SE_T : TEXCOORD1)
{
    SE = mul(S, W);
    SE = mul(SE, V);
    SE = mul(SE, P);
    SE_T = S;
}

Here SE_T, which is SE.T in Eq. 7, is the texture value equal to the original coordinates of the corresponding vertex. The rasterization and sampling in Eq. 7 are carried out by the GPU's default rasterization stage and the pixel shader. The default rasterization is not discussed here; the procedure of the pixel shader is the function PixDepth.
void PixDepth(float4 SE_T : TEXCOORD1, out float4 SN : COLOR)
{
    SN = SE_T;
}

Note that the semantic of the output SE_T in VertDepth is TEXCOORD1, and the semantic in PixDepth must be the same; this is the rule for passing the value correctly. PixDepth outputs only SN, not SR, because a pixel shader outputs only one color, and the method's goal is to generate coordinates for constructing the new mesh, not for display. If the user wants to display the object, it can be rendered again in the normal way.

Experiments

The application has been used to construct meshes for several objects. Fig. 2 shows the results for a sphere: (a) is the original model; (b) is a lower-LOD T_i, showing only the surface constructed from the vertices of SN, without the background or the bounding box, for clarity (likewise below); (c) is a higher-LOD T_i; (d) is (c) seen from another angle; and (e) is the result of merging two T_i's. The sphere result shows that our method can efficiently create meshes at different LODs that stay similar to the original. Note that the boundary of the obtained mesh is not regular, because the object occludes itself during viewing.

Figure 2: Reconstructing the sphere's mesh ((a)-(e) as described above).
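The step from the pixel grid SN to a view mesh T_i, described in Section 5, can be sketched in pure Python. The names are illustrative; the paper's implementation operates on the GPU output buffer, and background/bounding-box handling is reduced here to a `None` marker per pixel:

```python
# Build a view mesh T_i from the pixel grid SN: each pixel holds a
# recovered object-space position, or None for background, and every
# 2x2 block of valid pixels contributes two triangles. A single pass
# over the grid, hence the linear-time construction noted in Section 6.
def grid_to_mesh(grid):
    h, w = len(grid), len(grid[0])
    idx, verts = {}, []
    for y in range(h):                       # index the valid pixels
        for x in range(w):
            if grid[y][x] is not None:
                idx[(x, y)] = len(verts)
                verts.append(grid[y][x])
    tris = []
    for y in range(h - 1):                   # stitch neighboring pixels
        for x in range(w - 1):
            cell = [(x, y), (x + 1, y), (x, y + 1), (x + 1, y + 1)]
            if all(c in idx for c in cell):
                a, b, c, d = (idx[k] for k in cell)
                tris += [(a, b, c), (b, d, c)]
    return verts, tris

# A 2x2 grid of valid pixels yields one quad split into two triangles.
grid = [[(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)],
        [(0.0, 1.0, 0.0), (1.0, 1.0, 0.0)]]
verts, tris = grid_to_mesh(grid)
```

Cells touching a background pixel are simply skipped, which is one way the irregular mesh boundary mentioned above can arise.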

The method has also been used on a more complex model, with good results, as shown in Fig. 3: (a) and (c) are two views of the original model; (b) and (d) are the meshes constructed from the car's front and back, shown rotated by an angle so they can be seen clearly. The results are similar to the original, and the vertex density is controllable.

Figure 3: Reconstructing the car's mesh.

Regarding performance, VertDepth and PixDepth above show that the algorithm is simple: VertDepth contains only the ordinary world, view, and projection transforms, and PixDepth only passes the coordinate data through. When rendering finishes, the vertex set for the corresponding view angle has been generated, and because the vertices are arranged regularly like an image's pixels, the mesh can be constructed in linear time.

Conclusions

This paper presents a 3D mesh reconstruction algorithm based on the rendering pipeline. It uses a novel vertex shader and pixel shader to make the rendering result a vertex set corresponding to the view angle. The vertex set is arranged like an image, so a view mesh is easy to construct; merging the view meshes from several angles generates the new mesh. The LOD of the new mesh is decided by the view distance, so the LOD can be specified easily. The experiments show that the algorithm is efficient and fast. Some shortcomings remain: for example, the side faces of a view mesh sometimes include long, thin triangles that do not reflect the original model. Our future work will focus on these issues.

Acknowledgements

This work was financially supported by the National Natural Science Foundation (61502127, 61362016), the Hainan Natural Science Foundation (20156225), the Educational Reform Project of the Hainan Provincial Department of Education (HNJG2014-33), and Scientific Research Projects in Universities of Hainan Province (Hnky2015-24).
References

[1] S. Hussain, H. Grahn and J. A. Persson: Feature-preserving Mesh Simplification: A Vertex Cover Approach. International Conference on Computer Graphics and Visualization 2008 (CGV 2008), pp. 270-275.
[2] M. Garland: Multiresolution Modeling: Survey & Future Opportunities. EUROGRAPHICS '99, State of the Art Reports.
[3] L. Kobbelt: √3-Subdivision. Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH 2000), ACM Press/Addison-Wesley, pp. 103-112.
[4] E. Catmull, J. Clark: Recursively Generated B-spline Surfaces on Arbitrary Topological Meshes. Computer-Aided Design 10 (1978), pp. 350-355.
[5] H. Hoppe, T. DeRose, T. Duchamp, M. Halstead, H. Jin, J. McDonald, J. Schweitzer, W. Stuetzle: Piecewise Smooth Surface Reconstruction. SIGGRAPH 1994 Proceedings (1994), pp. 295-302.
[6] C. DeCoro, N. Tatarchuk: Real-time Mesh Simplification Using the GPU. Symposium on Interactive 3D Graphics (I3D) 2007, p. 6.