I. DARIBO, R. FURUKAWA, R. SAGAWA, H. KAWASAKI, S. HIURA, and N. ASADA. IS4-15 : 1385


(MIRU 2011), July 2011

Curve-Based Representation of Point Cloud for Efficient Compression

I. DARIBO, R. FURUKAWA, R. SAGAWA, H. KAWASAKI, S. HIURA, and N. ASADA

Faculty of Information Sciences, Hiroshima City University, Hiroshima, Japan
National Institute of Advanced Industrial Science and Technology, Tsukuba, Japan
Faculty of Engineering, Kagoshima University, Kagoshima, Japan
E-mail: {daribo,ryo-f,asada}@hiroshima-cu.ac.jp, ryusuke.sagawa@aist.go.jp, jiro@jouhou.co.jp, kawasaki@ibe.kagoshima-u.ac.jp

Abstract: Emerging three-dimensional scanning systems raise the problem of efficiently encoding 3D point clouds, in addition to that of finding a suitable representation for them. This paper investigates point cloud compression via a curve-based representation of the point cloud. We aim in particular at active scanning systems capable of acquiring dense 3D shapes by projecting light patterns (e.g. bars, lines, grids). Because of the projected structured light, the object surface is naturally sampled in scan lines. This motivates our choice to first design a curve-based representation, and then to exploit the spatial correlation of the sampled points along the curves through a competition-based predictive encoder that includes different prediction modes. Experimental results demonstrate the effectiveness of the proposed method.

Key words: Point cloud, compression, curve-based representation, rate-distortion

1. Introduction

Active scanning systems can now produce 3D geometric models with millions of points. This increase in 3D model complexity raises the need to store, transmit, process and render the resulting huge amount of data efficiently. In addition to the representation problem, efficient compression of 3D geometry data is becoming more and more important. Traditional 3D geometry representations usually fall into two categories: polygon meshes and point-sampled geometry.
Typically, a mesh-based representation exploits the connectivity between vertices, ordering them in a manner that encodes the topology of the mesh. Such a representation consists of polygons coded as sequences of numbers (vertex coordinates) and tuples of vertex pointers (the edges joining the vertices), and is popular mostly because of its native support in modern graphics cards. Such a model, however, requires time-consuming and difficult processing under an explicit connectivity constraint. Point-sampled geometry has received attention as an attractive alternative to polygon meshes, with several advantages: no connectivity information needs to be stored, and the triangulation overhead is saved, leading to a simpler and more intuitive way to process and render objects of complex topology through a cloud of points. Active 3D scanners are now widely used for acquiring 3D models [1]. In particular, scanning systems based on structured light have been intensively studied [2], [3]. Structured-light-based scanning samples the surface of an object with a known pattern (e.g. grid, horizontal bars, lines) (see Fig. 1). Studying the deformation of the pattern allows a 3D model to be built in the form of a point cloud. It is worth noting that structured-light-based scanning systems output the points along the measuring direction, which naturally orders groups of points along the same direction: scan lines. This motivates our choice to take advantage of the spatially sequential order of the sampled points along these scan lines: first, we pre-process the data by partitioning each scan line into curves of points; then we exploit the curve-based representation through a competition-based predictive encoder specially designed to benefit from the scanning directions. Our encoder can then reach an efficient compression ratio on the point cloud thanks to the curve-based representation (see Figure 2).
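To make this scan-line organization concrete, here is a minimal sketch of how an ordered point cloud might be partitioned into curves by thresholding the gap between consecutive points (the rule later formalized in Section 3.2). The function name and the NumPy formulation are illustrative, not the authors' implementation.

```python
import numpy as np

def partition_into_curves(points):
    """Split an ordered point cloud (N x 3 array) into curves.

    A new curve starts whenever the gap between two consecutive
    points exceeds the average consecutive-point distance, which is
    the outlier rule described in Section 3.2.
    """
    # Euclidean distances between consecutive points in scan order.
    gaps = np.linalg.norm(np.diff(points, axis=0), axis=1)
    eps = gaps.mean()  # threshold: average consecutive distance

    curves, current = [], [points[0]]
    for point, gap in zip(points[1:], gaps):
        if gap > eps:                  # outlier: start a new curve
            curves.append(np.array(current))
            current = []
        current.append(point)
    curves.append(np.array(current))
    return curves
```

For instance, five collinear points with one large jump in the middle would be split into two curves at the jump.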
In this work, by proposing a curve-driven point cloud compression, our framework can straightforwardly support random access, error recovery, and the limitation of error propagation, whereas previous work mainly focuses on compression efficiency only. These points are discussed further in Section 4.

The rest of the paper is organized as follows. We introduce related work in Section 2. Section 3 describes the point cloud pre-processing towards a curve-based representation, and Section 4 addresses the problem of efficiently compressing a point cloud acquired by a structured-light 3D scanning system. Section 5 presents experimental results, and our conclusions are drawn in Section 6.

Fig. 1: (left) Grid-pattern-based scanning system: a grid pattern is projected from the projector and captured by the camera. (right) Example of projected grid pattern.

2. Related work

The problem of 3D geometry compression has been extensively studied for more than a decade, and many compression schemes have been designed. Existing 3D geometry coders mainly follow two general lines of research: single-rate and progressive compression. In contrast to single-rate coders, progressive ones allow the transmission and reconstruction of the geometry in multiple levels of detail (LODs), which is suitable for streaming applications. Since many important concepts have been introduced in the context of mesh compression, several point cloud compression schemes first apply triangulation and mesh generation, and then use algorithms originally developed for mesh compression [4], but at the cost of also encoding the mesh connectivity. Instead of generating meshes from the point cloud, other approaches propose partitioning the point cloud into smooth manifold surfaces close to the original surface, which are approximated by the method of moving least squares (MLS) [5]. On the other hand, an augmentation of the point cloud by a data structure has been proposed to facilitate prediction and entropy coding. The object space is then partitioned based on the data structure, e.g. an octree [6]-[9] or a spanning tree [10], [11]. Although not strictly a compression algorithm, the QSplat rendering system offers a compact representation of the hierarchy structure [12]: high-quality rendering is obtained despite a strong quantization. A compression algorithm for the QSplat representation has been proposed through an optimal bitrate allocation [13].
Still in an RD sense, an RD-optimized version of the D3DMC encoder has been developed for dynamic mesh compression [14]. To the best of our knowledge, previous point-based coders mainly rely on at least one of the following: a surface approximation (MLS, etc.), increased complexity (point re-ordering, triangulation, mesh generation, etc.), or a data structure (spanning tree, octree, etc.); this leads to either smoothing out sharp features, an increase in complexity, or the extra transmission of a data structure.

3. Curve-based representation

As discussed before, the points are ordered along scan lines, naturally forming lines as illustrated in Figure 2. Under the scan-line representation assumption, the curve-set generation process partitions the point cloud, wherein each partition is a curve. In some cases, the partitions can be obtained directly from the acquisition process, e.g. by a line detection algorithm [3].

Fig. 2: The Stanford Bunny model partitioned into a set of curves with regard to the scanning directions. Curves are distinguished by different colors.

3.1 Curve-based point cloud definition

Let us consider the point cloud S = {p_1, p_2, ..., p_N} as a collection of N 3D points p_k, 1 <= k <= N. As mentioned earlier, structured-light-based 3D scanning systems fit the sampled points into curves. The point cloud S can then be represented as a set of M curves C_l, 1 <= l <= M, as

    S = {C_1, C_2, ..., C_M}    (1)

where the l-th curve C_l is expressed as

    C_l = {p_r, p_{r+1}, ..., p_s}  with  1 <= r < s <= N    (2)

3.2 Curve-based partitioning

Each curve C is defined to contain points that share similar properties, e.g. curvature, direction, or Euclidean

distance to its neighbors. The point cloud S is partitioned into a set of curves as defined in Equation (1). The division is controlled by deciding whether the current point p_k to be processed is an outlier with respect to the current curve C. In this study, we define p_k as an outlier if

    ||p_k - p_{k-1}|| > eps,    (3)

with

    eps = (1 / (N - 1)) * sum_{i=2}^{N} ||p_i - p_{i-1}||.

The current point p_k is thus considered an outlier, and is added to a new curve, if the Euclidean distance ||p_k - p_{k-1}|| is larger than the threshold eps: here, the average distance between two consecutive points throughout the point cloud. Other outlier definitions could be considered, for example checking whether adding the current point p_k would disturb the normal distribution of the current curve, or computing the inter-quartile range (IQR) of the current curve and using a multiple of it as a threshold. In this study we only consider the Euclidean distance.

4. Point cloud encoding

The proposed framework compresses the points as ordered according to the scanning direction. The object surface is thus sampled in curves of points, as shown in Figure 2. Unlike previous work in geometry compression, we do not need any connectivity information, or any augmentation of the data by a data structure (e.g. an octree), to reach a satisfactory compression efficiency.

4.1 Prediction

Let C be the current curve to encode. Intra-curve prediction attempts to determine, for each point p_k in C, the best predicted point p̂_k with respect to the previously coded points p_i, i < k, in C. Note that the previously coded points p_i, i < k, have been quantized and inverse-quantized. For concision, let us define the sub-curve containing the previously coded points as

    C_{i<k} = {p_i in C | i < k},    (4)

and the intra-curve prediction as

    p̂_k = P(C_{i<k}).    (5)

It is important to note that no information from other curves is used, which enables, for instance, random access and the limitation of error propagation.
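To make Eqs. (4)-(5) concrete, here is a minimal sketch of the causal intra-curve prediction loop. The function names are illustrative and quantization is deliberately omitted, so this is a simplification rather than the authors' implementation; `p_const` corresponds to the "Const" mode of Section 4.1.2.

```python
import numpy as np

def encode_curve(curve, predict):
    """Encode one curve by intra-curve prediction (Eqs. 4-5).

    Each point is predicted only from previously *decoded* points of
    the same curve, so every curve can be decoded independently
    (random access). With quantization omitted, decoded == original.
    """
    decoded, residuals = [], []
    for p in curve:
        p_hat = predict(decoded)   # Eq. (5): p̂_k = P(C_{i<k})
        r = p - p_hat              # corrective vector (residual)
        residuals.append(r)
        decoded.append(p_hat + r)  # decoder-side reconstruction
    return residuals

def decode_curve(residuals, predict):
    """Mirror of encode_curve: rebuild the curve from the residuals."""
    decoded = []
    for r in residuals:
        decoded.append(predict(decoded) + r)
    return np.array(decoded)

def p_const(prev_points):
    """'Const' mode: reuse the previously coded point (zero if none)."""
    return prev_points[-1] if prev_points else np.zeros(3)
```

Because the predictor sees only already-decoded points, encoder and decoder stay in lockstep, which is the property the random-access and error-containment claims rest on.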
The prediction outputs the corrective vector r_k = p_k - p̂_k, also called the residual, and transmits it to the entropy coder. Coding efficiency depends on the accuracy of the prediction, which is improved by choosing the most suitable prediction method for each point. Instead of using a single prediction method for all points [10], we propose to let all defined prediction modes, known to both the encoder and the decoder, compete: the prediction that minimizes the Euclidean distance ||p_k - p̂_k|| is selected as the best one. Since the choice of prediction mode depends on the scene content, a prediction flag is placed in the bitstream for each point. In the following, we present the different prediction modes.

4.1.1 No-prediction P_Intra

No prediction is applied, which makes the current point a key point that can be used, for example, for random access and the limitation of error propagation.

    P_Intra(C_{i<k}) = (0, 0, 0).    (6)

4.1.2 Const P_Const

The previously coded point in the curve is used as the prediction.

    P_Const(C_{i<k}) = p_{k-1}.    (7)

4.1.3 Linear P_Linear

The prediction is based on the two previously coded points in the curve.

    P_Linear(C_{i<k}) = 2 p_{k-1} - p_{k-2}    (8)

4.1.4 Fit-a-line P_FitLine

The predicted point is an extension of a segment L(C_{i<k}) defined by all the previously coded points. The segment L(C_{i<k}) is given by a line-fitting algorithm based on the M-estimator technique, which iteratively fits the segment using a weighted least-squares algorithm.

    P_FitLine(C_{i<k}) = 2 <L(C_{i<k}) | p_{k-1}> - <L(C_{i<k}) | p_{k-2}>    (9)

where <L | p_i> denotes the orthogonal projection of the point p_i onto the line supporting the segment L.

4.1.5 Fit-a-sub-line P_FitSubLine

As before, a line-fitting algorithm is used to perform the prediction, but a sub-curve C_{i0<=i<k} is used instead of all the previously coded points.
The starting point p_{i0}, however, needs to be signaled to the decoder, and thus an additional flag is put in the bitstream.

    P_FitSubLine(C_{i<k}) = 2 <L(C_{i0<=i<k}) | p_{k-1}> - <L(C_{i0<=i<k}) | p_{k-2}>    (10)

4.2 Quantization

After prediction, the point cloud is represented by a set of corrective vectors, each coordinate of which is a floating-point real number. Quantization maps this continuous set of values to a relatively small, discrete and finite set. To this end, we apply a scalar quantization as follows:

    r̂_k = sign(r_k) * round(|r_k| * 2^{bp-1})    (11)

where bp is the desired bit precision used to represent the absolute floating-point value of the residual.

4.3 Coding

The last stage of the encoding process removes the statistical redundancy in the quantized absolute components of the residual r̂_k by Huffman entropy coding. Huffman coding assigns a variable-length code to each absolute value of the quantized residual based on its probability of occurrence. The bitstream consists of a header containing the canonical Huffman codeword lengths, the quantization parameter bp, the total number of points, and the residual data for every point. The coded residual of each point is composed of: 3 bits signaling the prediction mode used, 1 bit for the sign, and a variable-length code for each absolute component value of the corrective vector, as produced by the entropy coder.

4.4 Decoding

The decoding process is straightforward: the prediction mode and the residual are read, and the point is reconstructed.

5. Experimental results

The performance of the proposed framework is evaluated on the three models shown in Fig. 3. The objective compression performance of the proposed method is investigated through the rate-distortion (RD) curves plotted in Figure 4, which relate the average number of bits per point (bpp) to the loss of quality, measured by the peak signal-to-noise ratio (PSNR). The PSNR is evaluated using the Euclidean distance between points; the peak signal is given by the length of the diagonal of the bounding box of the original model.
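As a quick check of Eq. (11), the following sketch quantizes and reconstructs one residual component. The `dequantize` mapping is our assumption about the matching inverse used on the decoder side (the paper does not spell it out), and the scaling presumes residual magnitudes on the order of one unit.

```python
import numpy as np

def quantize(r, bp):
    """Scalar quantization of a residual component (Eq. 11):
    the sign is coded apart, and |r| is mapped to an integer
    with bp bits of precision."""
    return np.sign(r) * np.round(np.abs(r) * 2 ** (bp - 1))

def dequantize(q, bp):
    """Assumed inverse mapping applied identically at both ends."""
    return q / 2 ** (bp - 1)
```

With bp = 12, the round-trip error of a component is at most 2^{-12}, which is what drives the PSNR saturation at each bp value in Figure 4.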
The RD results correspond to seven values of the quantization parameter bp: 8, 9, 10, 11, 12, 14 and 16. We compare our competition-optimized strategy with a simpler decision made only on the prediction error. It can be observed that the proposed method provides better RD performance. Table 1 shows the resulting compression rates in bits per point (bpp) after 12-bit quantization as defined in Section 4.2. Around 87% bit savings is achieved compared to the uncompressed rate of 96 bits per point.

Table 1: Compression rates in bits per point (bpp) after 12-bit quantization per coordinate.

    model    #points  #curves  rate       bit savings
    Dragon   43k      1221     11.98 bpp  -87.51 %
    Buddha   79k      2535     11.52 bpp  -87.99 %
    Bunny    35k      924      12.17 bpp  -87.31 %

6. Conclusion

We designed and implemented a competition-based predictive single-rate compression scheme for the point positions output by a structured-light 3D scanning system. Novel prediction methods have been designed to exploit the inherent spatial organization of the points, which are ordered according to the scanning direction. First, we pre-process the point cloud so as to take full advantage of the scan lines in the prediction stage. In addition, our method has the advantage of requiring neither a surface approximation nor the transmission of a data structure (e.g. octree, spanning tree). Several issues remain that warrant further research. In future studies, we will integrate other point attributes (e.g. color, normals) and extend our encoder to arbitrary point clouds.

Acknowledgment

This work is partially supported by the Strategic Information and Communications R&D Promotion Programme (SCOPE) No. 101710002, by Grant-in-Aid for Scientific Research No. 21200002 in Japan, and by the Next Program LR03 of the Japan Cabinet Office.

References

[1] J. Batlle, E. M. Mouaddib, and J.
Salvi, "Recent progress in coded structured light as a technique to solve the correspondence problem: a survey," Pattern Recognition, vol. 31, pp. 963-982, 1998.
[2] H. Kawasaki, R. Furukawa, R. Sagawa, and Y. Yagi,

Fig. 3: Test models from the Stanford Computer Graphics Laboratory's 3D scanning repository: (a) Dragon, (b) Happy Buddha, (c) Bunny. Only one view of each model is used, to preserve the scan lines from the acquisition process.

Fig. 4: Rate-distortion performance of the proposed encoder (PSNR in dB versus bitrate in bpp) on the (a) Dragon, (b) Buddha and (c) Bunny models, comparing the competitive mode-optimized encoder against a single-prediction-based one.

"Dynamic scene shape reconstruction using a single structured light pattern," in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Anchorage, Alaska, Jun. 2008, pp. 1-8.
[3] R. Sagawa, Y. Ota, Y. Yagi, R. Furukawa, N. Asada, and H. Kawasaki, "Dense 3D reconstruction method using a single pattern for fast moving object," in Proc. IEEE International Conference on Computer Vision (ICCV), Kyoto, Japan, Sep. 2009, pp. 1779-1786.
[4] P. Alliez and C. Gotsman, "Recent advances in compression of 3D meshes," in Proc. Advances in Multiresolution for Geometric Modelling, 2003, pp. 3-26.
[5] M. Alexa, J. Behr, D. Cohen-Or, S. Fleishman, D. Levin, and C. T. Silva, "Computing and rendering point set surfaces," IEEE Transactions on Visualization and Computer Graphics, vol. 9, no. 1, pp. 3-15, 2003.
[6] M. Botsch, A. Wiratanaya, and L. Kobbelt, "Efficient high quality rendering of point sampled geometry," in Proc. Eurographics Workshop on Rendering (EGRW), Pisa, Italy, 2002, pp. 53-64.
[7] J. Peng and C.-C. J. Kuo, "Geometry-guided progressive lossless 3D mesh coding with octree (OT) decomposition," ACM Transactions on Graphics, vol. 24, pp. 609-616, Jul. 2005. [Online]. Available: http://doi.acm.org/10.1145/1073204.1073237
[8] R. Schnabel and R. Klein, "Octree-based point-cloud compression," in Proc. Eurographics Symposium on Point-Based Graphics, M. Botsch and B. Chen, Eds., Jul. 2006.
[9] Y. Huang, J. Peng, C.-C. J. Kuo, and M. Gopi, "A generic scheme for progressive point cloud coding," IEEE Transactions on Visualization and Computer Graphics, vol. 14, no. 2, pp. 440-453, 2008.
[10] S. Gumhold, Z. Karni, M. Isenburg, and H.-P. Seidel, "Predictive point-cloud compression," in Proc. ACM SIGGRAPH, Los Angeles, USA, Aug. 2005.
[11] B. Merry, P. Marais, and J. Gain, "Compression of dense and regular point clouds," in Proc. Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa (AFRIGRAPH),
Cape Town, South Africa, 2006, pp. 15-20.
[12] S. Rusinkiewicz and M. Levoy, "QSplat: A multiresolution point rendering system for large meshes," in Proc. ACM SIGGRAPH, Jul. 2000, pp. 343-352.
[13] J.-Y. Sim and S.-U. Lee, "Compression of 3-D point visual data using vector quantization and rate-distortion optimization," IEEE Transactions on Multimedia, vol. 10, pp. 305-315, 2008.
[14] K. Müller, A. Smolic, M. Kautzner, and T. Wiegand, "Rate-distortion optimization in dynamic mesh compression," in Proc. IEEE International Conference on Image Processing (ICIP), 2006.
[15] H. Everett III, "Generalized Lagrange multiplier method for solving problems of optimum allocation of resources," Operations Research, vol. 11, no. 3, pp. 399-417, May-Jun. 1963.
[16] T. Wiegand and B. Girod, "Lagrange multiplier selection in hybrid video coder control," in Proc. IEEE International Conference on Image Processing (ICIP), vol. 3, Thessaloniki, Greece, Oct. 2001, pp. 542-545.