MODELING AND ANIMATING FOR THE DENSE LASER-SCANNED FACE IN THE LOW RESOLUTION LEVEL
Lijia Zhu and Won-Sook Lee
School of Information Technology and Engineering, University of Ottawa
800 King Edward Ave., Ottawa, Ontario, Canada, K1N 6N5
{lzhu058,

ABSTRACT

Modeling the human face and producing realistic facial animation are challenging tasks for computer animators. On the other hand, with the development of advanced laser-scanning services, it is now possible to capture a face as millions of triangles. In situations where real-time animation is expected, the size of such dense laser-scanned face data must be reduced for animation purposes. In this paper, we first present an approach that produces a low-polygon approximation model for the dense laser-scanned face while accurately conveying the distinctive features of the original data. We modify a predefined generic model based on feature points to produce the approximation model. The modification of the generic model involves three steps: Radial Basis Function (RBF) morphing, Loop subdivision, and mesh refinement. Second, instead of creating new facial animation from scratch, we take advantage of existing source animation data and use a face motion retargeting method to resample the source motion vectors onto our approximation model. The resulting facial animation is fast and efficient.

KEY WORDS
Animation, feature point, subdivision, motion vector

1. Introduction

Modeling the human face and producing realistic facial animation are challenging tasks for computer animators. Originally, animators built a physical sculpture of the face and then digitized the physical model. For each new subject, they had to repeat this process from scratch. Even for a highly skilled animator, a sheer amount of mechanical work is required, so a more automated approach is desirable.
In previous computer animation work, many different methods have been used to model and animate the face. Kalra et al. [5] utilized a pseudo-muscle model and used a deformation mechanism to animate the face. [7] and [4] built physics-based models of the face and simulated the behaviour of the muscle structures to produce facial animation. [1] showed a simple muscle-based face model capable of producing realistic facial expressions in real time. Guenter et al. [2] described a performance-driven system for motion-capturing human facial expressions and replaying them as a highly realistic 3D talking head. One common limitation of the aforementioned facial animation techniques is that they make little use of existing animation data: for each new facial model created, animation has to be produced from scratch. Instead, recent motion retargeting methods ([11], [13], [15] and [9]) utilize existing animation data in the form of motion vectors and transfer the motion vectors from a source to a target. This allows animation created by other methods to be retargeted to new models easily. [11] presented the expression cloning approach to reuse the motion vectors of the source model on the target model. [13] shares the same underlying idea as [11] while preserving MPEG-4 compatible motions. [15] proposed an example-based approach for cloning facial expressions between models while preserving the characteristic features of the target model. The limitation of [15] is that it requires the animator to prepare many example key models for both the source and target models. The approach introduced in [9] consists of two steps: base mesh retargeting and detail mesh retargeting. The example-based approach is adopted at the base level. In detail mesh retargeting, they used the normal mesh to hierarchically transfer the normal offsets in the source to the target. Their approach requires fewer examples than [15].
On the other hand, with the advent of advanced laser-scanning technologies, it has become possible to capture real-world objects digitally as extremely detailed 3D data. For example, the 3D scanning service available from XYZ RGB Inc. utilizes advanced technology developed at the NRC (National Research Council) of Canada [16]. That scanning service is capable of capturing human face detail on the order of 100 microns. The resulting laser-scanned 3D face consists of millions of triangles.
Figure 1: System Overview Diagram

However, it is difficult to require the subject to hold a specific facial expression during laser scanning. As a result, only neutral 3D scanned face data are available. The question is how we can produce facial animation for the dense laser-scanned face. Since there is no inherent animation structure embedded in the original dense face data, it is not efficient to work on millions of triangles directly: steep costs, such as extremely long computation time and the need for more computing resources, are encountered. In particular, in many situations (for example, telecommunication and computer games) where real-time animation is expected, the size of the dense laser-scanned data must be reduced. For dense laser-scanned face data, our motivation in this paper is to model and animate it at the low-resolution level. In this paper, we first propose an approach to efficiently construct a low-resolution approximation model for the extremely detailed laser-scanned 3D face. This approximation process greatly reduces the computational burden of the dense face data for animation while keeping the distinctive features with very high accuracy. Second, inspired by the face motion retargeting method, we present a fast and efficient approach to clone animation motion vectors from the source face animation data to the low-resolution approximation model. Figure 1 illustrates the overall view of our system. This paper consists of five sections. Section 2 presents how we produce the low-resolution approximation model for the dense laser-scanned face. In section 3, we introduce our approach to animate the approximation model produced in section 2.
Sample results are shown in section 4. Finally, we conclude our paper in section 5.

2. Modeling the Dense Laser-scanned Face in the Low-resolution Level

This section concentrates on how to reduce the size of the dense laser-scanned 3D face data. Our goal is to model a human face that is an approximate version of the dense data. The resulting low-resolution face model should accurately convey the distinctive features of the original dense model. Recently, several papers (such as [3] and [17]) have discussed approaches to model and animate laser-scanned dense data. In this section, we use a feature-point-based idea similar to [6] and improve it in order to adopt laser-scanned face data as our input. In the following sub-sections, we first prepare a generic face model; then, after the feature detection step, we deform the generic model using Radial Basis Function (RBF) networks; finally, we perform the Loop subdivision scheme followed by mesh refinement.

Figure 2: Generic Model (1,485 triangles)

2.1 Preparation of the Generic Model

It is tedious for animators to build a physical sculpture of the face and make the model digitally from scratch. Instead, preparing a generic face model in advance is a popular technique for modeling the face [12]. A polygonal model is used for modeling the face since it can be deformed easily. In our paper, we refine the generic
model from [6] in the eye, nose and mouth regions in order to make it more suitable for our laser-scanned face input. The generic face model, shown in Figure 2, contains predefined feature point information. The feature points represent the most characteristic points of the human face. In total, we define 172 feature points for our generic model. Besides the 163 feature points defined in the original generic model in [6], we add more feature points in the lip region in order to better model the approximation model and control our facial animation.

2.2 Feature Detection

In this sub-section, our goal is to capture the distinctive features of the dense laser-scanned face data so that we can establish 3D feature point correspondence between the generic model and the scanned data. This goal is achieved by detecting on the laser-scanned data the same 172 feature points as those predefined in the generic face model. Here we use a semi-automatic feature point detection method.

Figure 3: Feature Detection for the Laser-Scanned Face

The feature points are semi-automatically marked on front, left and right 2D images of the laser-scanned data. Whenever a feature point on the front view is detected, its depth value z is calculated automatically and can be visualized in the left and right side views in real time. The same happens when a feature point on the left or right side view is detected: the x value is calculated automatically and can be visualized in the front view. Primarily, we detect the feature points on the front view. Given a feature point P(x, y) in the front view image, the question is how we calculate the depth value z for P(x, y) automatically. First we project point P onto the dense laser-scanned face data. Since the dense data consists of millions of triangles, we need to find which triangle Tj the feature point P lies in.
To do this, we compute the barycentric coordinates of P with respect to the triangles. If the barycentric coordinate values of P all lie within [0, 1] for a specific triangle Tj, then we know that feature point P lies in that triangle. We can then calculate the z value for P by linearly interpolating the three z values of the vertices of triangle Tj. A similar algorithm is used for automatically calculating the x value when a feature point on the left or right side view is detected. Some feature points on the 2D images are defined interactively since they are not available in the original laser-scanned data. Once all 172 feature points are detected on the dense data, we are ready to calculate 3D feature point positions from those 2D feature points using predefined relations among points on the front, left and right view images.

Figure 4: (a) Dense laser-scanned face data from XYZ RGB Inc. (roughly 1,000,000 triangles); (b) RBF morphing of the generic model (1,485 triangles); (c) Loop subdivided once (5,940 triangles); (d) After mesh refinement: approximation model (5,940 triangles).

2.3 RBF Morphing

After feature points are detected on the dense data, we get a 3D feature point correspondence between the generic model and the scanned dense face data. Now we are ready to deform the predefined generic model. Previously, in [6], the Dirichlet Free-Form Deformation (DFFD) method was used to deform the generic model. In this paper, we use Radial Basis Function (RBF) networks (described in [1] and [10]) to deform the generic model. RBF morphing produces smoother displacements of affected vertices in the model than the DFFD method does. Here we use the 172 3D feature points obtained in the feature detection step as the centers of the RBF. Generic model deformation is achieved at the global level by performing RBF morphing. Figure 4b shows the result after performing RBF morphing on the generic model.
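The barycentric test used in the feature detection step above can be sketched in a few lines. This is a minimal, pure-Python illustration under our own assumptions (the front view is treated as the x-y plane, and the function and variable names are hypothetical), not the authors' implementation:

```python
def barycentric_2d(p, a, b, c):
    """Barycentric weights of 2D point p in triangle (a, b, c); None if degenerate."""
    det = (b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])
    if det == 0:
        return None  # degenerate triangle
    u = ((p[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (p[1] - a[1])) / det
    v = ((b[0] - a[0]) * (p[1] - a[1]) - (p[0] - a[0]) * (b[1] - a[1])) / det
    return u, v, 1.0 - u - v  # p = (1-u-v)*a + u*b + v*c

def depth_at(p, verts, tris):
    """Find the triangle (front view, x-y) containing 2D point p and
    linearly interpolate the z values of its three vertices."""
    for i, j, k in tris:
        a, b, c = verts[i], verts[j], verts[k]
        bc = barycentric_2d((p[0], p[1]), a, b, c)
        if bc and all(0.0 <= w <= 1.0 for w in bc):
            u, v, w = bc
            return w * a[2] + u * b[2] + v * c[2]
    return None  # p is not covered by the mesh
```

A brute-force scan over millions of triangles would of course need spatial acceleration in practice; the sketch only shows the per-triangle test and interpolation.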
Obviously, the resulting deformed generic model contains the same point and polygon structures as the generic model.

2.4 Loop Subdivision

However, with only the 1,485 triangles inherited from the generic model after the RBF morphing step, we are still not capable of generating a precise low-resolution approximation model for the dense laser-scanned face. We need to increase the number of triangles. To do so, we apply Loop's subdivision scheme [8] once to the deformed generic model. The advantage of the subdivision technique is that it is capable of increasing local complexity without adding global complexity. The result after the subdivision step is shown in Figure 4c.
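One round of Loop subdivision can be sketched as follows. This is a hedged illustration, not the authors' code: interior edges use the standard 3/8-3/8-1/8-1/8 stencil, original vertices Loop's beta rule, and boundary handling is reduced to simple midpoints; every vertex is assumed to be used by some face.

```python
import math
from collections import defaultdict

def loop_subdivide(verts, faces):
    """One step of Loop subdivision: each triangle becomes four."""
    edge_opp = defaultdict(list)   # edge -> vertices opposite that edge
    nbrs = defaultdict(set)        # vertex -> neighbouring vertices
    for f in faces:
        for a, b, c in ((f[0], f[1], f[2]), (f[1], f[2], f[0]), (f[2], f[0], f[1])):
            edge_opp[frozenset((a, b))].append(c)
            nbrs[a].add(b)
            nbrs[b].add(a)
    # "Even" rule: reposition original vertices with Loop's beta weights.
    new_verts = []
    for v in range(len(verts)):
        n = len(nbrs[v])
        beta = (5 / 8 - (3 / 8 + math.cos(2 * math.pi / n) / 4) ** 2) / n
        new_verts.append(tuple(
            (1 - n * beta) * verts[v][i] + beta * sum(verts[u][i] for u in nbrs[v])
            for i in range(3)))
    # "Odd" rule: one new vertex per edge.
    edge_idx = {}
    for e, opp in edge_opp.items():
        a, b = tuple(e)
        if len(opp) == 2:   # interior edge: 3/8, 3/8, 1/8, 1/8
            p = tuple(3 * (verts[a][i] + verts[b][i]) / 8
                      + (verts[opp[0]][i] + verts[opp[1]][i]) / 8 for i in range(3))
        else:               # boundary edge: plain midpoint (simplification)
            p = tuple((verts[a][i] + verts[b][i]) / 2 for i in range(3))
        edge_idx[e] = len(new_verts)
        new_verts.append(p)
    # Split every face into four.
    new_faces = []
    for a, b, c in faces:
        ab, bc, ca = (edge_idx[frozenset(e)] for e in ((a, b), (b, c), (c, a)))
        new_faces += [(a, ab, ca), (b, bc, ab), (c, ca, bc), (ab, bc, ca)]
    return new_verts, new_faces
```

One round quadruples the triangle count, which matches the numbers in the paper: 1,485 × 4 = 5,940.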
The resulting subdivided model consists of 5,940 triangles. Compared with Figure 4b, it gives a smoother surface after performing the Loop subdivision scheme once.

2.5 Mesh Refinement

As we may observe in Figure 4b, RBF morphing with only feature points does not produce a perfect match between the dense face data and the generic model. The Loop subdivision step (Figure 4c) only updates the local geometry without changing the global geometry much. The non-feature points still do not lie on the dense data. Consequently, we need one more refinement step to update the low-resolution model to match the dense laser-scanned data. First, the model in Figure 4c has to be scaled and translated into the same space as the dense laser-scanned face (Figure 4a). After aligning the deformed generic model (subdivided version) with the dense face, we project each vertex of the model in Figure 4c onto the dense face data using the cylindrical projection approach described in [11]. We cast a cylindrical projection ray for each vertex of the model in Figure 4c. If an intersection point exists between the projection ray and the dense data for a specific vertex, we use that intersection point to update the vertex and increase the accuracy of the model. The resulting model after the mesh refinement step is termed the approximation model in our system (Figure 4d). As we can observe, our three-step modification of the generic model guarantees the convergence of the generic face to the dense laser-scanned face. Compared with the original dense data, the resulting approximation model has only about 6K triangles while still conveying the distinctive features of the original dense data with high accuracy.

3. Animating the Approximation Model of the Dense Laser-scanned Face

The goal of this section is to animate the low-resolution approximation model (for example, the model shown in Figure 4d).
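As a side note, the cylindrical-projection refinement described in section 2.5 might be sketched as below. This is a simplified illustration under our own assumptions (the head is roughly aligned with the y-axis, a standard Möller-Trumbore intersection is used, and the "pick the hit nearest the vertex's current radius" rule is our heuristic, not the paper's):

```python
import math

def _sub(a, b):   return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def _cross(a, b): return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])
def _dot(a, b):   return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def ray_triangle(orig, d, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore ray/triangle intersection; returns distance t or None."""
    e1, e2 = _sub(v1, v0), _sub(v2, v0)
    p = _cross(d, e2)
    det = _dot(e1, p)
    if abs(det) < eps:
        return None
    inv = 1.0 / det
    t0 = _sub(orig, v0)
    u = _dot(t0, p) * inv
    if not 0.0 <= u <= 1.0:
        return None
    q = _cross(t0, e1)
    v = _dot(d, q) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = _dot(e2, q) * inv
    return t if t > eps else None

def refine(model_verts, dense_verts, dense_tris):
    """Snap each vertex onto the dense mesh along a cylindrical ray
    cast outward from the (assumed) y-axis through the vertex."""
    out = []
    for p in model_verts:
        r = math.hypot(p[0], p[2])
        if r == 0.0:
            out.append(p)   # vertex on the axis: direction undefined, keep it
            continue
        orig, d = (0.0, p[1], 0.0), (p[0] / r, 0.0, p[2] / r)
        hits = [t for i, j, k in dense_tris
                if (t := ray_triangle(orig, d, dense_verts[i],
                                      dense_verts[j], dense_verts[k])) is not None]
        if hits:
            t = min(hits, key=lambda t: abs(t - r))  # heuristic: nearest to current radius
            out.append((orig[0] + t * d[0], p[1], orig[2] + t * d[2]))
        else:
            out.append(p)   # no intersection: leave the vertex unchanged
    return out
```

As with the feature detection sketch, a real implementation over millions of triangles would use a spatial index rather than testing every triangle.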
Here we adopt the underlying idea of the face motion retargeting methods described in [11] and [13].

3.1 Obtaining Dense Motion Vectors from the Source Animation Data

[18] presented a system that goes from video sequences to high-resolution animated face models. The models created by their approach accurately reflect the shape and time-varying behaviour of the human face. In our research, we utilize their realistic facial animation data as our source animation data. Each of their resulting models has about 46K triangles. We select the model from their data that is in the neutral state (Figure 5a) as our source base model. Since the animated source model (Figure 5b) and the source base model (Figure 5a) have the same vertices and structure, the dense facial animation motion vectors can be obtained simply by calculating the difference of the vertex positions between the animated source base model and the neutral one.

Figure 5: (a) Source base model from the Graphics and Imaging Laboratory of the University of Washington; (b) Animated source base model from the Graphics and Imaging Laboratory of the University of Washington; (c) Source base working model; (d) Animated source base working model; (e) Approximation model; (f) Animated approximation model.

3.2 Creating and Animating the Source Base Working Model

Once we get the dense animation motion vectors from the source animation data, our next question is how to resample them onto our low-polygon approximation model. First we need to construct the low-resolution model for the source base model. Given the source base model (Figure 5a) of about 46K triangles, we walk through the steps described in sections 2.2 to 2.5 to obtain the low-resolution model for the source base model, which is termed the source base working model in our system (Figure 5c). In the next step, our goal is to animate the source base working model.
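For meshes that share vertices and structure, the motion-vector extraction of section 3.1 (and the retargeting-by-addition of section 3.3) reduce to a few lines of per-vertex arithmetic. A minimal sketch, with hypothetical function names:

```python
def motion_vectors(neutral, animated):
    """Per-vertex displacement between two meshes with identical topology."""
    return [tuple(a[i] - n[i] for i in range(3))
            for n, a in zip(neutral, animated)]

def apply_motion(target_neutral, mvs):
    """Retarget by simple addition; assumes the target shares the same
    vertex order and topology (here, the common generic-model structure)."""
    return [tuple(v[i] + m[i] for i in range(3))
            for v, m in zip(target_neutral, mvs)]
```

The interesting work in the paper is producing that shared structure in the first place; once the source base working model and the approximation model both derive from the generic model, the application step really is this simple.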
For each point in the source base working model (Figure 5c), there is a corresponding point on the high-resolution source base model (Figure 5a). How can we find that corresponding point? First we align the source base working model with the source base model via scaling and translation. Now that they are in the same space, for each point Pj in the source base working model, we project Pj onto the source base model. Using the same barycentric coordinate algorithm presented in section 2.2, we find the corresponding triangle Ti for Pj. The motion vector for Pj can then be calculated by linearly interpolating the motion vectors of the three vertices of triangle Ti. By doing so, we obtain the re-sampled motion vectors for the source base working model. As we can observe from Figures 5a to 5d, the re-sampled animation motion vectors at the low-resolution level still convey the rich expressions contained in the original dense source face animation data.

3.3 Retargeting Re-sampled Source Motion Vectors to the Approximation Model

The approach introduced in section 2 is capable of producing the low-resolution approximation model. In this sub-section, our final goal is to animate the approximation model. Because the approximation model and the source base working model are both derivatives of the generic face model in our system, they lie in the same space as the generic model. As a result, we do not need to scale the magnitude of the motion vectors during the face motion retargeting process. Moreover, the approximation model and the source base working model inherit both the vertices and polygon structures of the generic face model. Therefore, we can
apply the motion vectors for each vertex of the source base working model to the approximation model directly by simple addition. As we can observe from Figure 5a to Figure 5f, our face motion retargeting approach at the low-resolution level gives satisfactory facial animation results while still conveying the rich expressions contained in the original dense source animation data.

4. Result

Our methodology is implemented on a 2.80 GHz Intel Xeon PC with 512M RAM. The dense laser-scanned faces are provided by XYZ RGB Inc. In the testing, we used 4 different laser-scanned dense faces. Each dense model has about 1,000K triangles (see Figure 6). Each of the resulting approximation models consists of about 6K triangles. Figure 7 presents our face animation results. The top row shows the models in the neutral state. The left-most column shows the sample source animation data from the Graphics and Imaging Laboratory of the University of Washington. The animation for our target approximation models is cloned from it.

Figure 7: The top row shows the models in the neutral state; (a) Sample source animation data from the Graphics and Imaging Laboratory of the University of Washington; (b) and (c) The retargeted animation for the approximation models produced by our approach.

Figure 6: (a) The dense laser-scanned faces from XYZ RGB Inc. (each consists of about 1,000,000 triangles); (b) The approximation models produced by our approach (each has 5,940 triangles)

5. Conclusion

In this paper, we present our approach to approximate an extremely detailed laser-scanned face at the low-resolution level. Our resulting approximation model accurately captures the distinctive features of the original dense laser-scanned face while greatly reducing the data size from millions of triangles to fewer than 6K triangles. We then propose a fast and efficient approach to produce facial animation for the approximation model.
In this paper, we are interested in real-time animation at the low-resolution level. As shown in the experimental results, our facial animation retargeting system produces satisfactory facial animation results while still conveying the rich expressions contained in the original source animation data. The results presented in this paper show that our methodology is sufficiently robust and flexible to handle laser-scanned face data consisting of millions of triangles. Our methodology is capable of producing low-polygon models which retain the original high-resolution features
with high accuracy. It is a suitable solution in applications where real-time rendering and animation are expected. The limitation of our facial animation approach is that we lose some animation information because the resolution of the original animation source is greatly reduced in our facial motion retargeting approach. We could utilize the facial region division idea presented in [14] and [1] to extend our system; the idea of region division could be helpful for better controlling the facial animation. Our further research could also be extended to MPEG-4 compatible animation. Moreover, we wish to produce sophisticated facial expressions statically for the original extremely detailed 3D faces. Future research could explore approaches to recover the original dense 3D skin detail.

Acknowledgements

We wish to acknowledge Materials and Manufacturing Ontario for funding the research, as well as XYZ RGB Inc. for scanning the faces of volunteers and preparing the dense laser-scanned data. We would also like to thank Li Zhang and Steven M. Seitz of the Graphics and Imaging Laboratory of the University of Washington for allowing us to use their face animation data. The contribution of our group member Andrew Soon is also recognized.

References

[1] T.D. Bui, M. Poel, D. Heylen and A. Nijholt, Automatic face morphing for transferring facial animation, Proc. 6th IASTED International Conference on Computers, Graphics and Imaging, Honolulu, Hawaii, USA, August 2003.
[2] B. Guenter, C. Grimm, D. Wood, H. Malvar and F. Pighin, Making faces, Proceedings of the 25th annual conference on Computer graphics and interactive techniques, July 1998.
[3] W.K. Jeong, K. Kähler, J. Haber and H.P. Seidel, Automatic generation of subdivision surface head models from point cloud data, Graphics Interface, 2002.
[4] K. Kähler, J. Haber, H. Yamauchi and H.
Seidel, Head shop: generating animated head models with anatomical structure, Proceedings of the 2002 ACM SIGGRAPH/Eurographics symposium on Computer animation, San Antonio, Texas, July 2002.
[5] P. Kalra, A. Mangili, N.M. Thalmann and D. Thalmann, Simulation of facial muscle actions based on rational free form deformations, Proc. Eurographics '92, Computer Graphics Forum, Vol. 2, No. 3, Cambridge, U.K., 1992.
[6] W. Lee and N. Magnenat-Thalmann, Fast head modeling for animation, Image and Vision Computing, Volume 18, Number 4, Elsevier, Mar. 2000.
[7] Y. Lee, D. Terzopoulos and K. Waters, Realistic modeling for facial animation, Proceedings of the 22nd annual conference on Computer graphics and interactive techniques, September 1995.
[8] C. Loop, Smooth subdivision surfaces based on triangles, Master's thesis, University of Utah, Department of Mathematics.
[9] K. Na and M. Jung, Hierarchical retargetting of fine facial motions, In Proc. of Eurographics, vol. 23, 2004.
[10] J.Y. Noh, D. Fidaleo and U. Neumann, Animated deformations with radial basis functions, Proc. ACM symposium on virtual reality software and technology, Seoul, Korea, 2000.
[11] J.Y. Noh and U. Neumann, Expression cloning, Proceedings of the 28th annual conference on Computer graphics and interactive techniques, Aug. 2001.
[12] Rick Parent, Computer animation: algorithms and techniques (San Francisco, CA: Morgan Kaufmann, 2002).
[13] Igor S. Pandzic, Facial motion cloning, Graphical Models, v.65 n.6, Nov. 2003.
[14] S. Pasquariello and C. Pelachaud, Greta: a simple facial animation engine, 6th Online World Conference on Soft Computing in Industrial Applications, Session on Soft Computing for Intelligent 3D Agents, September.
[15] H. Pyun, Y. Kim, W. Chae, H.W. Kang and S.Y. Shin, An example-based approach for facial expression cloning, Proceedings of the 2003 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, San Diego, California, July 2003.
[16] J.
Taylor, J.-A. Beraldin, G. Godin, L. Cournoyer, M. Rioux and J. Domey, NRC 3D imaging technology for museums & heritage, Proceedings of The First International Workshop on 3D Virtual Heritage, Geneva, Switzerland, 2002.
[17] Y. Zhang, T. Sim and C.L. Tan, Rapid modeling of 3D faces for animation using an efficient adaptation algorithm, GRAPHITE 2004, Singapore, June 2004.
[18] L. Zhang, N. Snavely, B. Curless and S.M. Seitz, Spacetime faces: high-resolution capture for modeling and animation, In ACM SIGGRAPH Proceedings, Los Angeles, CA, Aug.
Image-Based Deformation of Objects in Real Scenes Han-Vit Chung and In-Kwon Lee Dept. of Computer Science, Yonsei University sharpguy@cs.yonsei.ac.kr, iklee@yonsei.ac.kr Abstract. We present a new method
More informationAcquisition and Visualization of Colored 3D Objects
Acquisition and Visualization of Colored 3D Objects Kari Pulli Stanford University Stanford, CA, U.S.A kapu@cs.stanford.edu Habib Abi-Rached, Tom Duchamp, Linda G. Shapiro and Werner Stuetzle University
More informationM I RA Lab. Speech Animation. Where do we stand today? Speech Animation : Hierarchy. What are the technologies?
MIRALab Where Research means Creativity Where do we stand today? M I RA Lab Nadia Magnenat-Thalmann MIRALab, University of Geneva thalmann@miralab.unige.ch Video Input (face) Audio Input (speech) FAP Extraction
More informationSURFACE CONSTRUCTION USING TRICOLOR MARCHING CUBES
SURFACE CONSTRUCTION USING TRICOLOR MARCHING CUBES Shaojun Liu, Jia Li Oakland University Rochester, MI 4839, USA Email: sliu2@oakland.edu, li4@oakland.edu Xiaojun Jing Beijing University of Posts and
More informationDeformation Transfer for Triangle Meshes
Deformation Transfer for Triangle Meshes a Paper (SIGGRAPH 2004) by Robert W. Sumner & Jovan Popovic presented by Roni Oeschger Deformation Transfer Source deformed Target deformed 1 Outline of my presentation
More informationHuman hand adaptation using sweeps: generating animatable hand models ...
COMPUTER ANIMATION AND VIRTUAL WORLDS Comp. Anim. Virtual Worlds 2007; 18: 505 516 Published online 16 July 2007 in Wiley InterScience (www.interscience.wiley.com).193 Human hand adaptation using sweeps:
More informationHuman Body Shape Deformation from. Front and Side Images
Human Body Shape Deformation from Front and Side Images Yueh-Ling Lin 1 and Mao-Jiun J. Wang 2 Department of Industrial Engineering and Engineering Management, National Tsing Hua University, Hsinchu, Taiwan
More informationAnimation of 3D surfaces
Animation of 3D surfaces 2013-14 Motivations When character animation is controlled by skeleton set of hierarchical joints joints oriented by rotations the character shape still needs to be visible: visible
More informationDynamic Refinement of Deformable Triangle Meshes for Rendering
Dynamic Refinement of Deformable Triangle Meshes for Rendering Kolja Kähler Jörg Haber Hans-Peter Seidel Computer Graphics Group Max-Planck-Institut für Infomatik Stuhlsatzenhausweg 5, 66123 Saarbrücken,
More informationInteractive Deformation with Triangles
Interactive Deformation with Triangles James Dean Palmer and Ergun Akleman Visualization Sciences Program Texas A&M University Jianer Chen Department of Computer Science Texas A&M University Abstract In
More informationUsing Semi-Regular 4 8 Meshes for Subdivision Surfaces
Using Semi-Regular 8 Meshes for Subdivision Surfaces Luiz Velho IMPA Instituto de Matemática Pura e Aplicada Abstract. Semi-regular 8 meshes are refinable triangulated quadrangulations. They provide a
More informationStereo pairs from linear morphing
Proc. of SPIE Vol. 3295, Stereoscopic Displays and Virtual Reality Systems V, ed. M T Bolas, S S Fisher, J O Merritt (Apr 1998) Copyright SPIE Stereo pairs from linear morphing David F. McAllister Multimedia
More informationMulti-resolution Modeling for Extremely High Resolution 3D Scanned Faces
Multi-resolution Modeling for Extremely High Resolution 3D Scanned Faces By: Andrew Thoe Yee Soon, B. Eng. A thesis submitted to The Faculty of Graduate Studies and Research In partial fulfillment of The
More informationHuman body animation. Computer Animation. Human Body Animation. Skeletal Animation
Computer Animation Aitor Rovira March 2010 Human body animation Based on slides by Marco Gillies Human Body Animation Skeletal Animation Skeletal Animation (FK, IK) Motion Capture Motion Editing (retargeting,
More informationIEEE TRANSACTIONS ON MULTIMEDIA 1. A Generic Framework for Efficient 2D and 3D Facial Expression Analogy
IEEE TRANSACTIONS ON MULTIMEDIA 1 A Generic Framework for Efficient 2D and 3D Facial Expression Analogy Mingli Song, Member, IEEE, Zhao Dong*, Student Member, IEEE, Christian Theobalt, Member, IEEE, Huiqiong
More informationRegistration of Dynamic Range Images
Registration of Dynamic Range Images Tan-Chi Ho 1,2 Jung-Hong Chuang 1 Wen-Wei Lin 2 Song-Sun Lin 2 1 Department of Computer Science National Chiao-Tung University 2 Department of Applied Mathematics National
More informationAutomatic Generation of Subdivision Surface Head Models from Point Cloud Data
Automatic Generation of Subdivision Surface Head Models from Point Cloud Data Won-Ki Jeong Kolja Kähler Jörg Haber Hans-Peter Seidel Max-Planck-Institut für Informatik, Stuhlsatzenhausweg 85, 66123 Saarbrücken,
More informationSpeech Driven Synthesis of Talking Head Sequences
3D Image Analysis and Synthesis, pp. 5-56, Erlangen, November 997. Speech Driven Synthesis of Talking Head Sequences Peter Eisert, Subhasis Chaudhuri,andBerndGirod Telecommunications Laboratory, University
More informationCaricaturing Buildings for Effective Visualization
Caricaturing Buildings for Effective Visualization Grant G. Rice III, Ergun Akleman, Ozan Önder Özener and Asma Naz Visualization Sciences Program, Department of Architecture, Texas A&M University, USA
More informationModeling High Genus Sculptures Using Multi-Connected Handles and Holes
Modeling High Genus Sculptures Using Multi-Connected Handles and Holes Vinod Srinivasan, Hernan Molina and Ergun Akleman Department of Architecture Texas A&M University College Station, Texas, USA vinod@viz.tamu.edu
More information3D Face Deformation Using Control Points and Vector Muscles
IJCSNS International Journal of Computer Science and Network Security, VOL.7 No.4, April 2007 149 3D Face Deformation Using Control Points and Vector Muscles Hyun-Cheol Lee and Gi-Taek Hur, University
More informationIFACE: A 3D SYNTHETIC TALKING FACE
IFACE: A 3D SYNTHETIC TALKING FACE PENGYU HONG *, ZHEN WEN, THOMAS S. HUANG Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign Urbana, IL 61801, USA We present
More informationAccurate Reconstruction by Interpolation
Accurate Reconstruction by Interpolation Leow Wee Kheng Department of Computer Science School of Computing National University of Singapore International Conference on Inverse Problems and Related Topics
More informationResearch Article A Facial Expression Parameterization by Elastic Surface Model
International Journal of Computer Games Technology Volume 2009, Article ID 397938, 11 pages doi:10.1155/2009/397938 Research Article A Facial Expression Parameterization by Elastic Surface Model Ken Yano
More informationVISEME SPACE FOR REALISTIC SPEECH ANIMATION
VISEME SPACE FOR REALISTIC SPEECH ANIMATION Sumedha Kshirsagar, Nadia Magnenat-Thalmann MIRALab CUI, University of Geneva {sumedha, thalmann}@miralab.unige.ch http://www.miralab.unige.ch ABSTRACT For realistic
More informationAnimation of 3D surfaces.
Animation of 3D surfaces Motivations When character animation is controlled by skeleton set of hierarchical joints joints oriented by rotations the character shape still needs to be visible: visible =
More informationTHE development of stable, robust and fast methods that
44 SBC Journal on Interactive Systems, volume 5, number 1, 2014 Fast Simulation of Cloth Tearing Marco Santos Souza, Aldo von Wangenheim, Eros Comunello 4Vision Lab - Univali INCoD - Federal University
More informationTechnical Report. Removing polar rendering artifacts in subdivision surfaces. Ursula H. Augsdörfer, Neil A. Dodgson, Malcolm A. Sabin.
Technical Report UCAM-CL-TR-689 ISSN 1476-2986 Number 689 Computer Laboratory Removing polar rendering artifacts in subdivision surfaces Ursula H. Augsdörfer, Neil A. Dodgson, Malcolm A. Sabin June 2007
More informationGeometric Modeling. Bing-Yu Chen National Taiwan University The University of Tokyo
Geometric Modeling Bing-Yu Chen National Taiwan University The University of Tokyo What are 3D Objects? 3D Object Representations What are 3D objects? The Graphics Process 3D Object Representations Raw
More informationLearning-Based Facial Rearticulation Using Streams of 3D Scans
Learning-Based Facial Rearticulation Using Streams of 3D Scans Robert Bargmann MPI Informatik Saarbrücken, Germany Bargmann@mpi-inf.mpg.de Volker Blanz Universität Siegen Germany Blanz@informatik.uni-siegen.de
More informationHIGH-RESOLUTION ANIMATION OF FACIAL DYNAMICS
HIGH-RESOLUTION ANIMATION OF FACIAL DYNAMICS N. Nadtoka, J.R. Tena, A. Hilton, J. Edge Centre for Vision, Speech and Signal Processing, University of Surrey {N.Nadtoka, J.Tena, A.Hilton}@surrey.ac.uk Keywords:
More informationPractical Shadow Mapping
Practical Shadow Mapping Stefan Brabec Thomas Annen Hans-Peter Seidel Max-Planck-Institut für Informatik Saarbrücken, Germany Abstract In this paper we propose several methods that can greatly improve
More informationREAL-TIME FACE SWAPPING IN VIDEO SEQUENCES: MAGIC MIRROR
REAL-TIME FACE SWAPPING IN VIDEO SEQUENCES: MAGIC MIRROR Nuri Murat Arar1, Fatma Gu ney1, Nasuh Kaan Bekmezci1, Hua Gao2 and Hazım Kemal Ekenel1,2,3 1 Department of Computer Engineering, Bogazici University,
More informationA model to blend renderings
A model to blend renderings Vincent Boyer and Dominique Sobczyk L.I.A.S.D.-Universit Paris 8 September 15, 2006 Abstract. We propose a model to blend renderings. It consists in mixing different kind of
More informationFacial Animation System Design based on Image Processing DU Xueyan1, a
4th International Conference on Machinery, Materials and Computing Technology (ICMMCT 206) Facial Animation System Design based on Image Processing DU Xueyan, a Foreign Language School, Wuhan Polytechnic,
More informationMulti-view stereo. Many slides adapted from S. Seitz
Multi-view stereo Many slides adapted from S. Seitz Beyond two-view stereo The third eye can be used for verification Multiple-baseline stereo Pick a reference image, and slide the corresponding window
More informationSculpting 3D Models. Glossary
A Array An array clones copies of an object in a pattern, such as in rows and columns, or in a circle. Each object in an array can be transformed individually. Array Flyout Array flyout is available in
More informationAll the Polygons You Can Eat. Doug Rogers Developer Relations
All the Polygons You Can Eat Doug Rogers Developer Relations doug@nvidia.com Future of Games Very high resolution models 20,000 triangles per model Lots of them Complex Lighting Equations Floating point
More informationSolidifying Wireframes
Solidifying Wireframes Vinod Srinivasan, Esan Mandal and Ergun Akleman Visualization Laboratory Department of Architecture Texas A&M University College Station, TX 77843-3137, USA E-mail: vinod@viz.tamu.edu
More informationGenerating Tool Paths for Free-Form Pocket Machining Using z-buffer-based Voronoi Diagrams
Int J Adv Manuf Technol (1999) 15:182 187 1999 Springer-Verlag London Limited Generating Tool Paths for Free-Form Pocket Machining Using z-buffer-based Voronoi Diagrams Jaehun Jeong and Kwangsoo Kim Department
More informationInteractive Collision Detection for Engineering Plants based on Large-Scale Point-Clouds
1 Interactive Collision Detection for Engineering Plants based on Large-Scale Point-Clouds Takeru Niwa 1 and Hiroshi Masuda 2 1 The University of Electro-Communications, takeru.niwa@uec.ac.jp 2 The University
More informationModifying Soft Tissue Models: Progressive Cutting with Minimal New Element Creation
Modifying Soft Tissue Models: Progressive Cutting with Minimal New Element Creation Andrew B. Mor and Takeo Kanade Center for Medical Robotics and Computer Assisted Surgery Carnegie Mellon University,
More informationAn Automatic 3D Face Model Segmentation for Acquiring Weight Motion Area
An Automatic 3D Face Model Segmentation for Acquiring Weight Motion Area Rio Caesar Suyoto Samuel Gandang Gunanto Magister Informatics Engineering Atma Jaya Yogyakarta University Sleman, Indonesia Magister
More informationAccurate 3D Face and Body Modeling from a Single Fixed Kinect
Accurate 3D Face and Body Modeling from a Single Fixed Kinect Ruizhe Wang*, Matthias Hernandez*, Jongmoo Choi, Gérard Medioni Computer Vision Lab, IRIS University of Southern California Abstract In this
More informationShape and Appearance from Images and Range Data
SIGGRAPH 2000 Course on 3D Photography Shape and Appearance from Images and Range Data Brian Curless University of Washington Overview Range images vs. point clouds Registration Reconstruction from point
More informationAn Efficient Approach for Emphasizing Regions of Interest in Ray-Casting based Volume Rendering
An Efficient Approach for Emphasizing Regions of Interest in Ray-Casting based Volume Rendering T. Ropinski, F. Steinicke, K. Hinrichs Institut für Informatik, Westfälische Wilhelms-Universität Münster
More informationFacial Image Synthesis 1 Barry-John Theobald and Jeffrey F. Cohn
Facial Image Synthesis Page 1 of 5 Facial Image Synthesis 1 Barry-John Theobald and Jeffrey F. Cohn 1 Introduction Facial expression has been central to the
More informationFaces Everywhere: Towards Ubiquitous Production and Delivery of Face Animation
Faces Everywhere: Towards Ubiquitous Production and Delivery of Face Animation Igor S. Pandzic 1, Jörgen Ahlberg 2, Mariusz Wzorek 2, Piotr Rudol 2, Miran Mosmondor 1 1 Department of Telecommunications
More informationMulti-View Image Coding in 3-D Space Based on 3-D Reconstruction
Multi-View Image Coding in 3-D Space Based on 3-D Reconstruction Yongying Gao and Hayder Radha Department of Electrical and Computer Engineering, Michigan State University, East Lansing, MI 48823 email:
More informationRay tracing. Computer Graphics COMP 770 (236) Spring Instructor: Brandon Lloyd 3/19/07 1
Ray tracing Computer Graphics COMP 770 (236) Spring 2007 Instructor: Brandon Lloyd 3/19/07 1 From last time Hidden surface removal Painter s algorithm Clipping algorithms Area subdivision BSP trees Z-Buffer
More informationTutorial Model the perfect 3D face
Model the perfect D face Want to get your head around D modelling? We use Maya to show you how to build an animatable face feature by feature T here are many ways in which to model a head in D. In this
More informationAn Efficient Data Structure for Representing Trilateral/Quadrilateral Subdivision Surfaces
BULGARIAN ACADEMY OF SCIENCES CYBERNETICS AND INFORMATION TECHNOLOGIES Volume 3, No 3 Sofia 203 Print ISSN: 3-9702; Online ISSN: 34-408 DOI: 0.2478/cait-203-0023 An Efficient Data Structure for Representing
More informationModeling the Virtual World
Modeling the Virtual World Joaquim Madeira November, 2013 RVA - 2013/2014 1 A VR system architecture Modeling the Virtual World Geometry Physics Haptics VR Toolkits RVA - 2013/2014 2 VR object modeling
More informationFacial Animation System Based on Image Warping Algorithm
Facial Animation System Based on Image Warping Algorithm Lanfang Dong 1, Yatao Wang 2, Kui Ni 3, Kuikui Lu 4 Vision Computing and Visualization Laboratory, School of Computer Science and Technology, University
More informationCloth Simulation. Tanja Munz. Master of Science Computer Animation and Visual Effects. CGI Techniques Report
Cloth Simulation CGI Techniques Report Tanja Munz Master of Science Computer Animation and Visual Effects 21st November, 2014 Abstract Cloth simulation is a wide and popular area of research. First papers
More informationAnalysis and Synthesis of Facial Expressions with Hand-Generated Muscle Actuation Basis
Proceedings of Computer Animation 2001, pages 12 19, November 2001 Analysis and Synthesis of Facial Expressions with Hand-Generated Muscle Actuation Basis Byoungwon Choe Hyeong-Seok Ko School of Electrical
More informationResearch On 3D Emotional Face Animation Based on Dirichlet Free Deformation Algorithm
2017 3rd International Conference on Electronic Information Technology and Intellectualization (ICEITI 2017) ISBN: 978-1-60595-512-4 Research On 3D Emotional Face Animation Based on Dirichlet Free Deformation
More informationTEXTURE OVERLAY ONTO NON-RIGID SURFACE USING COMMODITY DEPTH CAMERA
TEXTURE OVERLAY ONTO NON-RIGID SURFACE USING COMMODITY DEPTH CAMERA Tomoki Hayashi 1, Francois de Sorbier 1 and Hideo Saito 1 1 Graduate School of Science and Technology, Keio University, 3-14-1 Hiyoshi,
More informationVehicle Dimensions Estimation Scheme Using AAM on Stereoscopic Video
Workshop on Vehicle Retrieval in Surveillance (VRS) in conjunction with 2013 10th IEEE International Conference on Advanced Video and Signal Based Surveillance Vehicle Dimensions Estimation Scheme Using
More informationPoint-based Simplification Algorithm
Point-based Simplification Algorithm Pai-Feng Lee 1, Bin-Shyan Jong 2 Department of Information Management, Hsing Wu College 1 Dept. of Information and Computer Engineering Engineering, Chung Yuan Christian
More informationStatistical Learning of Human Body through Feature Wireframe
Statistical Learning of Human Body through Feature Wireframe Jida HUANG 1, Tsz-Ho KWOK 2*, Chi ZHOU 1 1 Industrial and Systems Engineering, University at Buffalo, SUNY, Buffalo NY, USA; 2 Mechanical, Industrial
More informationMODEL BASED FACE RECONSTRUCTION FOR ANIMATION WON-SOOK LEE, PREM KALRA, NADIA MAGNENAT THALMANN
MODEL BASED FACE RECONSTRUCTION FOR ANIMATION WON-SOOK LEE, PREM KALRA, NADIA MAGNENAT THALMANN MIRALab, CUI, University of Geneva, Geneva, Switzerland E-mail : {wslee, kalra, thalmann}@cui.unige.ch In
More informationSurface Reconstruction. Gianpaolo Palma
Surface Reconstruction Gianpaolo Palma Surface reconstruction Input Point cloud With or without normals Examples: multi-view stereo, union of range scan vertices Range scans Each scan is a triangular mesh
More informationAn Efficient Visual Hull Computation Algorithm
An Efficient Visual Hull Computation Algorithm Wojciech Matusik Chris Buehler Leonard McMillan Laboratory for Computer Science Massachusetts institute of Technology (wojciech, cbuehler, mcmillan)@graphics.lcs.mit.edu
More informationWe present a method to accelerate global illumination computation in pre-rendered animations
Attention for Computer Graphics Rendering Hector Yee PDI / DreamWorks Sumanta Pattanaik University of Central Florida Corresponding Author: Hector Yee Research and Development PDI / DreamWorks 1800 Seaport
More informationSmart point landmark distribution for thin-plate splines
Smart point landmark distribution for thin-plate splines John Lewis a, Hea-Juen Hwang a, Ulrich Neumann a, and Reyes Enciso b a Integrated Media Systems Center, University of Southern California, 3740
More informationAUTOMATED 3D MODELING OF URBAN ENVIRONMENTS
AUTOMATED 3D MODELING OF URBAN ENVIRONMENTS Ioannis Stamos Department of Computer Science Hunter College, City University of New York 695 Park Avenue, New York NY 10065 istamos@hunter.cuny.edu http://www.cs.hunter.cuny.edu/
More information