MODELING AND ANIMATING FOR THE DENSE LASER-SCANNED FACE IN THE LOW RESOLUTION LEVEL

Lijia Zhu and Won-Sook Lee
School of Information Technology and Engineering, University of Ottawa
800 King Edward Ave., Ottawa, Ontario, Canada, K1N 6N5
{lzhu058, wslee}@uottawa.ca

ABSTRACT

Modeling the human face and producing realistic facial animation are challenging tasks for computer animators. At the same time, advanced laser-scanning services can now capture a face as a mesh of millions of triangles. In situations where real-time animation is expected, the size of such dense laser-scanned face data must be reduced before the data can be animated. In this paper, we first present an approach that produces a low-polygon approximation model of a dense laser-scanned face while accurately conveying the distinguishing features of the original data. We modify a predefined generic model, guided by feature points, to produce the approximation model. The modification involves three steps: Radial Basis Function (RBF) morphing, Loop subdivision, and mesh refinement. Secondly, instead of creating new facial animation from scratch, we take advantage of existing source animation data and use a face motion retargeting method to resample the source motion vectors onto our approximation model. The resulting facial animation is fast and efficient.

KEY WORDS

Animation, feature point, subdivision, motion vector

1. Introduction

Modeling the human face and producing realistic face animation are challenging tasks for computer animators. Originally, animators built a physical sculpture of the face and then digitized the physical model. For each new subject, they had to repeat this process from scratch. Even for a highly skilled animator, a sheer amount of mechanical work is required, so a more automated approach is desirable.

In previous computer animation work, many different methods have been used to model and animate the face. Kalra et al. [5] utilized a pseudo-muscle model and used a deformation mechanism to animate the face. [7] and [4] built physics-based face models and simulated the behaviour of the muscle structures to produce facial animation. [1] showed a simple muscle-based face model capable of producing realistic facial expressions in real time. Guenter et al. [2] described a performance-driven system for motion-capturing human facial expression and replaying it as a highly realistic 3D talking head.

One common limitation of the aforementioned facial animation techniques is that they make little use of existing animation data: for each new facial model created, animation has to be produced from scratch. Instead, recent motion retargeting methods ([11], [13], [15] and [9]) utilize existing animation data in the form of motion vectors and transfer those motion vectors from a source model to a target model. This allows animation created by other methods to be retargeted to new models easily. [11] presented an expression cloning approach that reuses the motion vectors of the source model on the target model. [13] shared the same underlying idea as [11] while preserving MPEG-4 compatible motions. [15] proposed an example-based approach for cloning facial expressions between models while preserving the characteristic features of the target model.
The limitation of [15] is that it requires the animator to prepare many example key models for both the source and the target. The approach introduced in [9] consists of two steps: base mesh retargeting and detail mesh retargeting. An example-based approach is adopted at the base level; in detail mesh retargeting, a normal mesh is used to hierarchically transfer the normal offsets from the source to the target. This approach requires fewer examples than [15].

On the other hand, with the advent of advanced laser-scanning technologies, it is now possible to digitally capture real-world objects as extremely detailed 3D data. For example, XYZ RGB Inc. (http://www.xyz.com) offers a 3D scanning service built on technology developed at the NRC (National Research Council) of Canada [16]. This service captures human face detail on the order of 100 microns, and the resulting laser-scanned 3D face consists of millions of triangles.

Figure 1: System overview diagram.

However, it is difficult to ask a subject to hold a specific facial expression during laser scanning, so in practice only neutral 3D scanned face data are available. The question is how we can produce facial animation for the dense laser-scanned face. Since no animation structure is embedded in the original dense face data, it is inefficient to work on millions of triangles directly: steep costs, such as extremely long computation times and the need for additional computing resources, are incurred. In particular, in the many situations (for example, telecommunication and computer games) where real-time animation is expected, the size of the dense laser-scanned data must be reduced.

Our motivation in this paper is to model and animate dense laser-scanned face data at a low resolution. We first propose an approach to efficiently construct a low-resolution approximation model of the extremely detailed laser-scanned 3D face. This approximation greatly reduces the computational burden of animating the dense face data while preserving the distinguishing features with very high accuracy. Secondly, inspired by face motion retargeting methods, we present a fast and efficient approach to cloning animation motion vectors from source face animation data onto the low-resolution approximation model. Figure 1 illustrates the overall view of our system.

This paper consists of five sections. Section 2 presents how we produce the low-resolution approximation model of the dense laser-scanned face. In Section 3, we introduce our approach to animating the approximation model produced in Section 2. Sample results are shown in Section 4. Finally, we conclude the paper in Section 5.

2. Modeling the Dense Laser-scanned Face at the Low-resolution Level

This section discusses how to reduce the size of the dense laser-scanned 3D face data. Our goal is to model a human face that approximates the dense data; the resulting low-resolution face model should accurately convey the distinctive features of the original dense model. Recently, several papers (such as [3] and [17]) have discussed approaches to modeling and animating laser-scanned dense data. In this section, we use a feature-point-based idea similar to that of [6] and improve it to accept laser-scanned face data as input. In the following sub-sections, we first prepare a generic face model; then, after a feature detection step, we deform the generic model using Radial Basis Function (RBF) networks; finally, we perform Loop subdivision followed by mesh refinement.

Figure 2: Generic model (1,485 triangles).

2.1 Preparation of the Generic Model

It is tedious for animators to build a physical sculpture of the face and digitize the model from scratch. Instead, preparing a generic face model in advance is a popular face-modeling technique [12]. A polygonal model is used since it can be deformed easily. In our paper, we refine the generic model from [6] in the eye, nose and mouth regions to make it more suitable for our laser-scanned face input. The generic face model, shown in Figure 2, contains predefined feature point information. The feature points represent the most characteristic points of the human face. In total, we define 172 feature points for our generic model: besides the 163 feature points defined in the original generic model of [6], we add feature points in the lip region for better modeling of the approximation model and better control of our facial animation.

2.2 Feature Detection

In this sub-section, our goal is to capture the distinguishing features of the dense laser-scanned face data so that we can establish 3D feature point correspondence between the generic model and the scanned data. This is achieved by detecting on the laser-scanned data the same 172 feature points that were predefined on the generic face model. We use a semi-automatic feature point detection method.

Figure 3: Feature detection for the laser-scanned face.

The feature points are semi-automatically marked on front, left and right 2D images of the laser-scanned data. Whenever a feature point is detected on the front view, its depth value z is calculated automatically and visualized in the left and right side views in real time. Likewise, when a feature point is detected on the left or right side view, its x value is calculated automatically and visualized in the front view.

Primarily, we detect the feature points on the front view. Given a feature point P(x, y) in the front-view image, the question is how to calculate the depth value z of P(x, y) automatically. First we project the point P onto the dense laser-scanned face data. Since the dense data consists of millions of triangles, we need to determine which triangle Tj the feature point P lies in. To do this, we compute the barycentric coordinates of P with respect to the triangles: if the barycentric coordinate values of P all lie in (0, 1) for a specific triangle Tj, then P lies in Tj. We then calculate the z value of P by linearly interpolating the three z values of the vertices of Tj. A similar algorithm automatically calculates the x value when a feature point is detected on the left or right side view. Some feature points on the 2D images are defined interactively since they are not available in the original laser-scanned data. Once all 172 feature points are detected on the dense data, we calculate the 3D feature point positions from the 2D feature points using predefined relations among the points on the front, left and right view images.

Figure 4: (a) Dense laser-scanned face data from XYZ RGB Inc. (roughly 1,000,000 triangles); (b) RBF morphing of the generic model (1,485 triangles); (c) Loop subdivision applied once (5,940 triangles); (d) after mesh refinement: approximation model (5,940 triangles).
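The projection test above is a 2D barycentric point-in-triangle query followed by linear interpolation of the vertex depths. The sketch below (Python with NumPy) illustrates the idea under our own naming; it is not the paper's implementation, and the brute-force scan over all triangles would in practice need a spatial index, given the millions of triangles involved.

import numpy as np

def barycentric_xy(p, a, b, c):
    # Barycentric coordinates of the 2D point p with respect to the
    # triangle (a, b, c), using only the x/y components of the vertices.
    v0, v1, v2 = b[:2] - a[:2], c[:2] - a[:2], p - a[:2]
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    if abs(denom) < 1e-12:            # degenerate (edge-on) triangle
        return None
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return np.array([1.0 - v - w, v, w])

def depth_at(p, vertices, triangles):
    # Project the front-view feature point p = (x, y) onto the scan and
    # linearly interpolate its z value from the enclosing triangle Tj.
    for tri in triangles:                        # tri: 3 vertex indices
        bary = barycentric_xy(p, *vertices[tri])
        if bary is not None and np.all(bary > 0.0) and np.all(bary < 1.0):
            return bary @ vertices[tri][:, 2]    # blend the three z values
    return None                                  # p misses the scanned surface

The same routine, run on the side-view images with the roles of x and z exchanged, covers the side-view case described above.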
2.3 RBF Morphing

Once the feature points are detected on the dense data, we have a 3D feature point correspondence between the generic model and the scanned dense face data, and we are ready to deform the predefined generic model. Previously, [6] used the Dirichlet Free-Form Deformation (DFFD) method to deform the generic model. In this paper, we instead use Radial Basis Function (RBF) networks (described in [1] and [10]), which produce smoother displacements of the affected vertices than DFFD does. We use the 172 3D feature points obtained in the feature detection step as the RBF centers. Deformation of the generic model is achieved at the global level by performing RBF morphing; Figure 4b shows the result. Note that the deformed generic model retains the same vertex and polygon structure as the generic model.
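The morphing step solves a small linear system so that the 172 centers land exactly on their detected 3D targets, and it interpolates the displacement smoothly at every other vertex. A minimal sketch follows; since [1] and [10] leave the basis function open, the Gaussian kernel and its width sigma here are assumptions of ours.

import numpy as np

def rbf_morph(vertices, centers, targets, sigma=0.05):
    # Deform the generic model so that its feature points (`centers`)
    # move exactly onto the detected 3D feature points (`targets`),
    # with displacements interpolated smoothly over all other vertices.
    def phi(r):                                   # Gaussian basis (assumed)
        return np.exp(-(r / sigma) ** 2)

    # Interpolation matrix over the 172 feature-point centers.
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    weights = np.linalg.solve(phi(d), targets - centers)   # (172, 3)

    # Evaluate the interpolated displacement at every mesh vertex.
    dv = np.linalg.norm(vertices[:, None, :] - centers[None, :, :], axis=-1)
    return vertices + phi(dv) @ weights

Because phi(0) = 1 and the system is solved exactly, the centers interpolate their targets, while sigma controls how far each feature point's influence spreads across the face.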

2.4 Loop Subdivision

With only the 1,485 triangles inherited from the generic model, the model produced by the RBF morphing step still cannot represent a precise low-resolution approximation of the dense laser-scanned face; we need to increase the number of triangles. To do so, we apply Loop's subdivision scheme [8] once to the deformed generic model. The advantage of subdivision is that it increases local complexity without adding global complexity. The result of the subdivision step is shown in Figure 4c: the subdivided model consists of 5,940 triangles and, compared with Figure 4b, has a smoother surface.

2.5 Mesh Refinement

As Figure 4b shows, RBF morphing with only the feature points does not produce a perfect match between the dense face data and the generic model, and the Loop subdivision step (Figure 4c) only updates the local geometry without changing the global geometry much: the non-feature points still do not lie on the dense data. Consequently, we need one more refinement step to bring the low-resolution model onto the dense laser-scanned data. First, the model in Figure 4c is scaled and translated into the same space as the dense laser-scanned face (Figure 4a). After aligning the deformed (subdivided) generic model with the dense face, we project each vertex of the model in Figure 4c onto the dense face data using the cylindrical projection approach described in [11]: we cast a cylindrical projection ray for each vertex, and if an intersection point exists between the projection ray and the dense data, we move the vertex to that intersection point to increase the accuracy of the model. The resulting model after the mesh refinement step is termed the approximation model in our system (Figure 4d). As can be observed, our three-step modification of the generic model makes the generic face converge to the dense laser-scanned face. Compared with the original dense data, the resulting approximation model has only about 6K triangles while still conveying the distinguishing features of the original dense data with high accuracy.
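Below is a minimal sketch of the per-vertex refinement step, under two assumptions the paper does not fix: the trimesh library performs the ray/mesh intersection (the paper names no implementation), and the cylinder axis is a vertical line through a caller-supplied head-center point.

import numpy as np
import trimesh   # assumed here for ray casting; any ray/mesh library would do

def refine_to_scan(verts, scan, axis_point, axis=np.array([0.0, 1.0, 0.0])):
    # Cylindrical projection in the spirit of [11]: each ray starts on a
    # vertical axis through the head, at the vertex's height, and points
    # radially outward through the vertex.  `scan` is a trimesh.Trimesh
    # built from the dense laser-scanned data.
    heights = (verts - axis_point) @ axis
    origins = axis_point + np.outer(heights, axis)        # feet on the axis
    radial = verts - origins
    radial /= np.linalg.norm(radial, axis=1, keepdims=True)

    refined = verts.copy()
    best = np.full(len(verts), np.inf)
    hits, ray_idx, _ = scan.ray.intersects_location(origins, radial)
    for hit, i in zip(hits, ray_idx):
        d = np.linalg.norm(hit - verts[i])       # among multiple hits, keep
        if d < best[i]:                          # the one nearest the vertex
            best[i], refined[i] = d, hit
    return refined

Vertices whose rays miss the scan keep their RBF-morphed positions, matching the rule above that a vertex is updated only when an intersection exists.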
3. Animating the Approximation Model of the Dense Laser-scanned Face

The goal of this section is to animate the low-resolution approximation model (for example, the model shown in Figure 4d). Here we adopt the underlying idea of the face motion retargeting methods described in [11] and [13].

3.1 Obtaining Dense Motion Vectors from the Source Animation Data

[18] presented a system that goes from video sequences to high-resolution animated face models; the models created by their approach accurately reflect the shape and time-varying behaviour of the human face. In our research, we use their realistic facial animation data as the source animation data. Each of their models has about 46K triangles. We select the model from their data that is in the neutral state (Figure 5a) as our source base model. Since an animated source model (Figure 5b) and the source base model (Figure 5a) have the same vertices and structure, the dense facial animation motion vectors can be obtained simply as the difference of vertex positions between the animated source model and the neutral one.

Figure 5: (a) Source base model from the Graphics and Imaging Laboratory of the University of Washington; (b) animated source base model from the same laboratory; (c) source base working model; (d) animated source base working model; (e) approximation model; (f) animated approximation model.

3.2 Creating and Animating the Source Base Working Model

Once we have the dense animation motion vectors from the source animation data, the next question is how to resample them onto our low-polygon approximation model. First we construct the low-resolution model for the source base model: given the source base model (Figure 5a) of about 46K triangles, we walk through the steps described in Sections 2.2 to 2.5 and obtain its low-resolution model, termed the source base working model in our system (Figure 5c).

The next step is to animate the source base working model. For each point in the source base working model (Figure 5c), there is a corresponding point on the high-resolution source base model (Figure 5a); how do we find it? First we align the source base working model with the source base model via scaling and translation. Once they are in the same space, for a given point Pj in the source base working model, we project Pj onto the source base model. Using the barycentric coordinate algorithm of Section 2.2, we find the triangle Ti corresponding to Pj; the motion vector for Pj is then calculated by linearly interpolating the motion vectors of the three vertices of Ti. In this way we obtain the resampled motion vectors for the source base working model. As can be observed from Figures 5a to 5d, the resampled motion vectors at the low resolution still convey the rich expressions contained in the original dense source face animation data.
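The two steps above reduce to a vertex difference followed by barycentric blending, sketched below with illustrative names. The helper enclosing_triangle, which returns the index of the enclosing source triangle and the barycentric weights of the projected point (as in Section 2.2), is an assumption of this sketch, not part of the paper.

import numpy as np

def dense_motion_vectors(animated_verts, neutral_verts):
    # Section 3.1: the animated source frames share vertices and topology
    # with the neutral source base model, so motion is a plain difference.
    return animated_verts - neutral_verts                 # (V_src, 3)

def resample_motion(working_verts, src_verts, src_tris, src_motion):
    # Section 3.2: after alignment, blend the motion vectors of the three
    # corners of the enclosing source triangle with the same barycentric
    # weights as in Section 2.2.  `enclosing_triangle` is an assumed
    # helper returning (triangle index, barycentric weights).
    resampled = np.zeros_like(working_verts)
    for j, p in enumerate(working_verts):
        t, bary = enclosing_triangle(p, src_verts, src_tris)
        resampled[j] = bary @ src_motion[src_tris[t]]     # (3,) @ (3, 3)
    return resampled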

3.3 Retargeting the Resampled Source Motion Vectors to the Approximation Model

The approach introduced in Section 2 produces the low-resolution approximation model; our final goal is to animate it. Because the approximation model and the source base working model are both derivatives of the generic face model in our system, they lie in the same space as the generic model, so we do not need to scale the magnitudes of the motion vectors during the face motion retargeting process. Moreover, the approximation model and the source base working model inherit both the vertices and the polygon structure of the generic face model. Therefore, we can apply the motion vector of each vertex of the source base working model to the approximation model directly, by simple addition. As can be observed from Figures 5a to 5f, our face motion retargeting approach at the low resolution gives satisfactory facial animation results while still conveying the rich expressions contained in the original dense source animation data.
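Since both low-resolution models share the generic model's vertex order and space, the final retargeting step is literally vertex-wise addition; the snippet below, with our illustrative names, only restates that observation.

import numpy as np

def retarget(approx_verts, resampled_motion):
    # Section 3.3: the approximation model and the source base working
    # model both inherit the generic model's vertex order and space, so
    # no correspondence search or motion-vector rescaling is needed.
    return approx_verts + resampled_motion                # (V, 3) per frame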

4. Result

Our methodology is implemented on a 2.80 GHz Intel Xeon PC with 512 MB of RAM. The dense laser-scanned faces were provided by XYZ RGB Inc. (http://www.xyz.com). In our tests, we used 4 different laser-scanned dense faces, each with about 1,000K triangles (see Figure 6). Each of the resulting approximation models consists of about 6K triangles. Figure 7 presents our face animation results: the top row shows the models in the neutral state, and the left-most column shows the sample source animation data from the Graphics and Imaging Laboratory of the University of Washington, from which the animation for our target approximation models is cloned.

Figure 6: (a) The dense laser-scanned faces from XYZ RGB Inc. (each about 1,000,000 triangles); (b) the approximation models produced by our approach (each 5,940 triangles).

Figure 7: The top row shows the models in the neutral state. (a) Sample source animation data from the Graphics and Imaging Laboratory of the University of Washington; (b) and (c) the retargeted animation for the approximation models produced by our approach.

5. Conclusion

In this paper, we presented an approach to approximating an extremely detailed laser-scanned face at a low resolution. The resulting approximation model accurately captures the distinguishing features of the original dense laser-scanned face while reducing the data size from millions of triangles to fewer than 6K triangles. We then proposed a fast and efficient approach to producing facial animation for the approximation model. Our interest in this paper is real-time animation at the low resolution; as the experimental results show, our facial animation retargeting system produces satisfactory animation while still conveying the rich expressions contained in the original source animation data. The results presented here show that our methodology is sufficiently robust and flexible to handle laser-scanned face data consisting of millions of triangles, and that it produces low-polygon models which retain the original high-resolution features with high accuracy. It is a suitable solution for applications where real-time rendering and animation are expected.

The limitation of our facial animation approach is that some animation information is lost because the resolution of the original animation source is greatly decreased by our facial motion retargeting approach. We could utilize the facial region division idea presented in [14] and [1] to extend our system; region division could help in controlling the facial animation more precisely. Our research could also be extended to MPEG-4 compatible animation. Moreover, we wish to produce sophisticated static facial expressions for the original extremely detailed 3D faces, and future research could explore approaches to recovering the original dense 3D skin detail.

Acknowledgements

We wish to acknowledge Materials and Manufacturing Ontario for funding this research, as well as XYZ RGB Inc. for scanning the faces of volunteers and preparing the dense laser-scanned data. We also thank Li Zhang and Steven M. Seitz of the Graphics and Imaging Laboratory of the University of Washington for allowing us to use their face animation data. The contribution of our group member Andrew Soon is also recognized.

References

[1] T.D. Bui, M. Poel, D. Heylen and A. Nijholt, Automatic face morphing for transferring facial animation, Proc. 6th IASTED International Conference on Computers, Graphics and Imaging, Honolulu, Hawaii, USA, August 2003, pp. 19-23.
[2] B. Guenter, C. Grimm, D. Wood, H. Malvar and F. Pighin, Making faces, Proc. 25th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH), July 1998, pp. 55-66.
[3] W.K. Jeong, K. Kähler, J. Haber and H.P. Seidel, Automatic generation of subdivision surface head models from point cloud data, Graphics Interface, 2002, pp. 181-188.
[4] K. Kähler, J. Haber, H. Yamauchi and H.P. Seidel, Head shop: generating animated head models with anatomical structure, Proc. 2002 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, San Antonio, Texas, July 2002, pp. 55-63.
[5] P. Kalra, A. Mangili, N. Magnenat-Thalmann and D. Thalmann, Simulation of facial muscle actions based on rational free form deformations, Proc. Eurographics '92, Computer Graphics Forum, Vol. 2, No. 3, Cambridge, U.K., 1992, pp. 59-69.
[6] W. Lee and N. Magnenat-Thalmann, Fast head modeling for animation, Image and Vision Computing, Vol. 18, No. 4, Elsevier, Mar. 2000, pp. 355-364.
[7] Y. Lee, D. Terzopoulos and K. Waters, Realistic modeling for facial animation, Proc. 22nd Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH), September 1995, pp. 55-62.
[8] C. Loop, Smooth subdivision surfaces based on triangles, Master's thesis, University of Utah, Department of Mathematics, 1987.
[9] K. Na and M. Jung, Hierarchical retargetting of fine facial motions, Proc. Eurographics, Vol. 23, 2004, pp. 687-695.
[10] J.Y. Noh, D. Fidaleo and U. Neumann, Animated deformations with radial basis functions, Proc. ACM Symposium on Virtual Reality Software and Technology, Seoul, Korea, 2000, pp. 166-174.
[11] J.Y. Noh and U. Neumann, Expression cloning, Proc. 28th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH), Aug. 2001, pp. 277-288.
[12] R. Parent, Computer animation: algorithms and techniques (San Francisco, CA: Morgan Kaufmann, 2002).
[13] I.S. Pandzic, Facial motion cloning, Graphical Models, Vol. 65, No. 6, Nov. 2003, pp. 385-404.
[14] S. Pasquariello and C. Pelachaud, Greta: a simple facial animation engine, 6th Online World Conference on Soft Computing in Industrial Applications, Session on Soft Computing for Intelligent 3D Agents, September 2001.
[15] H. Pyun, Y. Kim, W. Chae, H.W. Kang and S.Y. Shin, An example-based approach for facial expression cloning, Proc. 2003 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, San Diego, California, July 2003, pp. 167-176.
[16] J. Taylor, J.-A. Beraldin, G. Godin, L. Cournoyer, M. Rioux and J. Domey, NRC 3D imaging technology for museums & heritage, Proc. First International Workshop on 3D Virtual Heritage, Geneva, Switzerland, 2002, pp. 70-75.
[17] Y. Zhang, T. Sim and C.L. Tan, Rapid modeling of 3D faces for animation using an efficient adaptation algorithm, GRAPHITE 2004, Singapore, June 2004, pp. 173-181.
[18] L. Zhang, N. Snavely, B. Curless and S.M. Seitz, Spacetime faces: high-resolution capture for modeling and animation, ACM SIGGRAPH Proceedings, Los Angeles, CA, Aug. 2004.