Master's Thesis: Cloning Facial Expressions with User-defined Example Models
Master's Thesis: Cloning Facial Expressions with User-defined Example Models

Kim, Yejin
Department of Electrical Engineering and Computer Science, Division of Computer Science
Korea Advanced Institute of Science and Technology
2003
Cloning Facial Expressions with User-defined Example Models

Advisor: Professor Shin, Sung Yong

by Kim, Yejin

Department of Electrical Engineering and Computer Science, Division of Computer Science
Korea Advanced Institute of Science and Technology

A thesis submitted to the faculty of the Korea Advanced Institute of Science and Technology in partial fulfillment of the requirements for the degree of Master of Engineering in the Department of Electrical Engineering and Computer Science, Division of Computer Science.

Daejeon, Korea
Approved by Professor Shin, Sung Yong, Advisor
Kim, Yejin. Cloning Facial Expressions with User-defined Example Models. Department of Electrical Engineering and Computer Science, Division of Computer Science. Advisor: Prof. Shin, Sung Yong. Text in English.

Abstract

A main trend in recent computer animation is reusing existing motion data to produce new animations. However, most of the proposed retargeting approaches focus on articulated body motions. In this thesis, we present a novel example-based approach for cloning the facial expressions of a source model onto a new model while reflecting the characteristic features of the new model in the resulting animation. Given source example models and their corresponding target example models created by an animator, we parameterize the target example models using the source example models and predefine weight functions for the parameterized target example models based on radial basis functions. At runtime, given a source model, we compute its parameter vector, evaluate the weight functions for the target example models at that vector, and obtain the new model by blending the target example models with respect to the resulting weight values. The resulting animation preserves the facial expressions of the source model as well as the characteristic features of the new model. Our approach provides real-time performance, making it applicable to applications such as video games and internet broadcasting.
Contents

Abstract i
Contents iii
List of Tables iv
List of Figures v
1 Introduction 1
2 Related Work 3
3 Example-Based Cloning Approach
   3.1 Overview
   3.2 Parameterization
   3.3 Principal Component Analysis
   3.4 Weight Extraction
       3.4.1 Linear Approximation
       3.4.2 Radial Basis Interpolation
   3.5 Example Blending
4 Experimental Results 15
5 Conclusion 21
Summary (in Korean) 22
References 23
List of Tables

4.1 Models used for the experiments
4.2 Average errors of cloned animations from source to source
4.3 Average errors of cloned animations from source to target to source
4.4 Computation time
List of Figures

3.1 Overview of example-based cloning approach
3.2 Simplified emotion space diagram [22]
3.3 The source base model and its example models with 20 manually selected feature points on the surface of the base model
3.4 (a) Parameterizing a source example model $S_i$ for the corresponding target example model $T_i$. (b) Target example models placed in the parameter space
3.5 Defining a weight function for each of the target example models: (a) linear approximation, (b) radial basis interpolation, and (c) cardinal basis function
3.6 Generating a new face model by blending target example models
4.1 Source models with 20 manually selected feature points
4.2 Target models
4.3 Cloning expressions from the source model to the target model
4.4 Cloning expressions with the same source and target models
1. Introduction

Human facial expressions play a significant role in communication. A person usually expresses his or her emotions and knowledge through movements of facial parts such as pouting lips, raising eyebrows, and bulging cheeks. Furthermore, facial expressions can complement verbal communication and are closely related to speech production. However, animating a human face with a 3D face model is a challenging task. Humans observe faces closely and are very sensitive to the slightest glitch in facial movements. Moreover, unlike other parts of the human body, the face is an extremely complicated geometric form. It consists of a number of different types of muscles moving in various directions at different velocities. In addition, the mechanical properties of the skin and underlying layers have a great influence on facial expressions, but they are difficult to control in harmony when producing realistic and expressive facial animation.

Facial animation has been a recurring theme in computer animation due to its wide spectrum of applications, covering films, computer games, video teleconferencing, and avatar control, to name a few. Since Parke's pioneering work [14] in the early 70s, many attempts have been made to produce realistic and expressive animation of a 3D face model. These approaches include muscle-based [10, 18, 24, 25], feature point-driven [5, 7, 17], performance-driven [26], and direct parameterized animations [15]. However, these traditional approaches are designed to produce a high-quality facial animation for a single face model. Moreover, they make little use of existing animation data when animating a new model, since animation parameters are not simply transferable between models. Making an animation for one model requires expensive computation and human effort, and the same is true for creating similar animations for other models.
Many recent approaches to example-based motion synthesis [2, 4, 19] have addressed the issue of reusing existing motion data for new models. However, these approaches mainly focus on articulated body motions. To our knowledge, expression cloning [12] is the first work to address this issue for facial expressions. That approach transfers the vertex motion vectors from a source model to a target model after locally adjusting them to reflect the shapes of the source and target models. However, transferring only the adjusted motion vectors of the source model may not reflect the characteristic expressions of the target model. For example, when cloning a smile on an adult face to a baby face, we have to take into account the characteristic features of a smile on a baby face, such as excessive
raisings of the corners of the lips, which are not present in a smile on an adult face. With mechanical transfer of the motion vectors alone, it is difficult to produce such characteristic features.

In this thesis, we present a new method for cloning facial expressions from a source model to a target model while reflecting the correspondence between the characteristic features of the models. Our basic idea is to generate an expression by blending a set of target example models representing key-expressions such as happiness, sadness, surprise, fear, and anger. Given the target example model corresponding to the source example model for each key-expression, we formulate expression cloning as a scattered data interpolation problem using radial basis functions [23]. Instead of mechanically transferring the motion vectors, we incorporate the artist's insight into expression cloning by allowing the artist to specify the target key-expressions. Unlike the expression cloning scheme [12], our approach does not require any morphing process and thus avoids the time-consuming step of establishing feature correspondence between the source and target models. This greatly simplifies our cloning process and guarantees real-time performance.

To produce a facial animation by blending a set of user-defined example models, we address two main issues: parameterization and blending. To parameterize the target key-expressions, we select a set of vertices, called feature points, from the source model. Taking the source model with the neutral expression as the base model, we compute the displacement of each feature point on the base model to the corresponding feature point on a source example model. We concatenate these displacements over all feature point pairs to form a displacement vector, which is used as the parameter for the counterpart target example model. The dimensionality of the parameter space can be reduced greatly by adopting principal component analysis (PCA) [6].
This parameterization scheme is simple and effective for positioning the target expressions in the parameter space. With the target key-expressions parameterized, our cloning problem is transformed into a scattered data interpolation problem. Inspired by the work of Sloan et al. [23], we obtain a new expression corresponding to a given source expression by blending the target key-expressions using radial basis functions, after deriving the parameter from the feature displacements of the source expression relative to its base model.

The remainder of this thesis is organized as follows. In Chapter 2, we review related work. Chapter 3 describes the cloning method; its sections give an overview of our approach and detail the parameterization of key-expressions, PCA, the computation of weight values for the target example models, and the blending of target example models. Implementation specifics and results are presented in Chapter 4. Finally, Chapter 5 concludes this thesis and discusses possible extensions.
2. Related Work

Since Parke's pioneering work [14], there has been extensive work on 3D facial animation. An excellent survey of these efforts can be found in [16]. We classify the traditional approaches into four categories: muscle-based, feature point-driven, performance-driven, and direct parameterized approaches. We begin with these approaches and then move on to more recent work directly related to our scheme.

Muscle-based approaches [10, 18, 24, 25] generate facial expressions primarily by simulating the physical properties of facial muscles and tissue. Platt and Badler [18] used a mass-and-spring model to simulate facial muscles. Waters [25] developed a dynamic face model based on muscle kinematics. Terzopoulos et al. [10, 24] applied physical modeling techniques to control facial expressions. Although these approaches use only a subset of muscles or an approximated skin structure for the simulation, the computational cost is still too expensive for real-time animation even on a high-end computer.

In feature point-driven approaches [5, 7, 17], facial movements on scanned or photographed images are measured by the changes of feature point positions, and smooth surface deformation techniques are used to apply those changes. Kalra et al. [7] described interactive techniques for simulating abstract muscle actions using rational free-form deformations (RFFD). Guenter et al. [5] placed a large number of markers on an actor's face and reconstructed photo-realistic 3D animations by capturing both the 3D geometry and the color and shading information. Pighin et al. [17] synthesized photo-realistic textured 3D face models with different expressions from a set of photographs of a human face and then created new expression models by blending those 3D models using the multiway morphing technique in [9].
However, the weights for the facial expression models are specified interactively, unlike our scheme, which computes the weights automatically using radial basis functions.

The other two conventional approaches have also been widely used in facial animation. Williams [26] introduced the performance-driven method, which generates an animation from face motion data captured from a live actor's face in front of a camera. In the direct parameterized approach, Parke [15] used a parameter vector to represent the motion of a group of vertices and generated a wide range of facial expressions.

Recently, there have been rich research results on reusing existing animation data.
Gleicher [4] described a method for retargeting motions onto new characters with different proportions. Lee and Shin [8] enhanced this approach with a hierarchical displacement mapping technique based on multilevel B-spline approximation. Noh and Neumann [12] adopted the underlying idea to provide a novel approach, called expression cloning, for reusing facial animation data. This work can be regarded as facial motion retargeting. Based on geometry morphing, their approach transfers the facial motion vectors from a source model to a target model.

Example-based motion synthesis [2, 13, 21, 23] is another stream of research directly related to our approach. Rose et al. [21] and Sloan et al. [23] proposed example-based motion blending based on scattered data interpolation with radial basis functions. Park et al. [13] applied a similar idea to generate on-line locomotion. Bregler et al. [2] proposed an example-based approach for retargeting motions extracted from traditionally animated cartoons onto various types of models based on affine transformations. They extracted weight values of key-shapes from the input cartoon and generated a target shape by blending the output key-shapes with the extracted weight values. Lewis et al. [11] introduced pose space deformation, which adopts the example-based approach for both facial skin deformation and skeleton-driven body deformation. Allen et al. [1] applied a similar technique to range-scan data for creating new pose models. To compute the weights for each example model, Lewis et al. used radial basis interpolation, while Allen et al. used k-nearest-neighbor interpolation.
3. Example-Based Cloning Approach

3.1 Overview

In this section, we summarize the entire process of our example-based cloning approach. Given a facial animation created by any available method, a similar animation for a different face model is synthesized by blending the target example models parameterized by the corresponding source example models. As shown in Figure 3.1, our approach breaks into two main parts: parameterization and blending.

As preprocessing, we first parameterize the target example models to place them in the parameter space. Provided with the source and target example models, we interactively select a number of feature points on the source base model and extract their displacements to the corresponding points on each of the source example models. Concatenating these displacements, we form a displacement vector that parameterizes the corresponding target example model. Although the parameter space is high-dimensional, the individual parameters tend to be correlated with each other. Relying on PCA (principal component analysis) [6], we reduce the dimensionality of the parameter space by removing the less significant basis vectors. The PCA yields a matrix of eigenvectors, called the feature matrix, which is used to represent a parameter vector in terms of the significant basis vectors.

Using the parameterized target example models, we predefine the weight function of each target example model for later blending. We use cardinal basis functions [23], consisting of linear and radial basis functions, to define a smooth, continuous weight function for each of the target example models. Once the weight function is predefined for every target example model, we take each frame of the source animation in sequence and obtain the displacement vector of the face model in that frame from the source base model to determine the location of the blended target model in the parameter space.
We multiply the displacement vector by the feature matrix computed in the preprocessing step to obtain the reduced parameter vector. Finally, we generate a new facial expression by blending the target example models with respect to the weight values extracted from the predefined weight functions using this vector.
Figure 3.1: Overview of example-based cloning approach.

Figure 3.2: Simplified emotion space diagram [22].
Figure 3.3: The source base model and its example models, with 20 manually selected feature points on the surface of the base model.

3.2 Parameterization

To express a new expression in terms of key-expressions, we need to define the key-expressions and their corresponding example models. Each key-expression gives rise to an individual example model specified by an animator. In cartoon retargeting [2], Bregler et al. selected a set of input key-shapes from the source animation and defined the output key-shapes corresponding to their input counterparts. We also define source example models and their corresponding target example models. However, we do not want our example models to depend on the source animation. Instead of choosing the key-expressions from the source animation, we refer to the emotion space diagram [22], which describes human facial expressions along two emotional axes representing happiness and activity, as shown in Figure 3.2. We choose emotional expressions such as neutrality, happiness, sadness, surprise, fear, and anger as key-expressions. We also choose as key-expressions verbal expressions such as vowel and consonant visemes, that is, the basic visual mouth shapes observed in speech. Provided with the key-expressions, the animator sculpts a pair of 3D example models for each key-expression: one for the source model and the other for the target model.

We use the displacement vector from the source base model to each of the source example models. The source base model is determined from a neutral key-expression of the source model. To measure the difference of a source example model from the base model, we specify
Figure 3.4: (a) Parameterizing a source example model $S_i$ for the corresponding target example model $T_i$. (b) Target example models placed in the parameter space.

approximately 20 feature points on the surface of the source base model. The number of feature points depends on the shape and complexity of the base model. However, our experiments show that two to four feature points around each facial part, such as the mouth, eyes, eyebrows, forehead, chin, and cheeks, are sufficient to obtain the displacement vectors. Figure 3.3 shows the manually selected feature points on the source base and example models representing the key-expressions. For each source example model, we compute a displacement vector of its feature points from those of the source base model as follows:

$$V_i = S_i - S_B, \quad 1 \le i \le M, \qquad (3.1)$$

where $S_B$ and $S_i$ are vectors obtained by concatenating, in a fixed order, the 3D coordinates of the feature points on the source base model and those on the $i$th source example model, respectively, and $M$ is the number of source example models. For each source example model $S_i$, we compute the parameter vector $V_i$ from the source base model $S_B$. As shown in Figure 3.4, $V_i$ places the corresponding target example model $T_i$ in the parameter space.
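As a minimal sketch of Equation (3.1), the displacement vector is just the concatenated per-feature-point offsets; the array shapes and names below are illustrative, not from the thesis:

```python
import numpy as np

def displacement_vector(example_points, base_points):
    """Eq. (3.1): V_i = S_i - S_B, where S_i and S_B concatenate, in a
    fixed order, the 3D coordinates of the feature points."""
    S_i = np.asarray(example_points, dtype=float).ravel()
    S_B = np.asarray(base_points, dtype=float).ravel()
    return S_i - S_B

# Hypothetical feature points: 20 points on the base model and on one example.
rng = np.random.default_rng(0)
base = rng.random((20, 3))
example = base + 0.05 * rng.random((20, 3))  # slightly displaced features
V = displacement_vector(example, base)       # parameter vector for T_i
assert V.shape == (60,)                      # 3 coordinates x 20 feature points
```

With 20 feature points the parameter vector has 60 components, which motivates the dimensionality reduction of the next section.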
3.3 Principal Component Analysis

The parameter vectors obtained from Equation (3.1) constitute a very high-dimensional space compared to the number of example models. The dimensionality of the parameter space is three (for the x, y, and z coordinates) times the number of feature points on the source base model. However, most parameters are correlated with each other in general. Thus, we reduce the dimensionality of the parameter vectors by employing PCA, a statistical technique for identifying patterns in data and analyzing their similarities and differences.

Let a displacement vector $V_j$, $1 \le j \le M$, be represented as

$$V_j = [v_{1j}, v_{2j}, \ldots, v_{Nj}], \qquad (3.2)$$

where $N$ is the number of components in $V_j$. Collecting the $i$th components $v_{ij}$ of $V_j$ for all $j$, we construct a vector $P_i$ as follows:

$$P_i = [v_{i1}, v_{i2}, \ldots, v_{iM}], \quad 1 \le i \le N. \qquad (3.3)$$

We subtract the mean from each component of $P_i$:

$$\bar{P}_i = [v_{i1} - \bar{v}_i, v_{i2} - \bar{v}_i, \ldots, v_{iM} - \bar{v}_i], \quad 1 \le i \le N, \qquad (3.4)$$

where $\bar{v}_i = \frac{1}{M}\sum_{j=1}^{M} v_{ij}$. Then, we construct a covariance matrix that represents how the components of the parameter vectors vary with respect to each other:

$$C_{N \times N} = \begin{bmatrix} \mathrm{cov}(\bar{P}_1, \bar{P}_1) & \mathrm{cov}(\bar{P}_1, \bar{P}_2) & \cdots & \mathrm{cov}(\bar{P}_1, \bar{P}_N) \\ \mathrm{cov}(\bar{P}_2, \bar{P}_1) & \mathrm{cov}(\bar{P}_2, \bar{P}_2) & \cdots & \mathrm{cov}(\bar{P}_2, \bar{P}_N) \\ \vdots & \vdots & \ddots & \vdots \\ \mathrm{cov}(\bar{P}_N, \bar{P}_1) & \mathrm{cov}(\bar{P}_N, \bar{P}_2) & \cdots & \mathrm{cov}(\bar{P}_N, \bar{P}_N) \end{bmatrix}, \qquad (3.5)$$

where $\mathrm{cov}(\bar{P}_i, \bar{P}_j)$ is the covariance between two vectors $\bar{P}_i$ and $\bar{P}_j$. Since this covariance matrix is square and symmetric, we can calculate its eigenvectors and corresponding eigenvalues. These eigenvectors, called the principal components of the parameter vectors, represent the principal axes that characterize the parameter vectors. In our experiments, we used the mathematical formulations provided in [20] to obtain the eigenvectors and their corresponding eigenvalues.
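The covariance construction above, together with the eigenvector truncation it feeds (Equations (3.2)–(3.7)), can be sketched with NumPy's symmetric eigendecomposition in place of the formulations of [20]; the data, the threshold, and the function names are illustrative assumptions:

```python
import numpy as np

def pca_feature_matrix(V, threshold=1e-8):
    """V: M x N matrix whose rows are the displacement vectors V_j.
    Returns the per-component means and the feature matrix F (N' x N)
    whose rows are the retained eigenvectors."""
    mean = V.mean(axis=0)                    # \bar{v}_i for each component
    C = np.cov(V - mean, rowvar=False)       # N x N covariance matrix (Eq. 3.5)
    eigvals, eigvecs = np.linalg.eigh(C)     # symmetric matrix -> eigh
    order = np.argsort(eigvals)[::-1]        # most significant axes first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    F = eigvecs[:, eigvals > threshold].T    # drop insignificant eigenvectors
    return mean, F

def reduce_vector(v, mean, F):
    """Project a mean-adjusted displacement vector into the reduced space."""
    return F @ (v - mean)

# Illustrative data: M = 6 example displacement vectors of dimension N = 60.
rng = np.random.default_rng(1)
V = rng.random((6, 60))
mean, F = pca_feature_matrix(V)
R = np.array([reduce_vector(v, mean, F) for v in V])
assert F.shape[0] <= 5            # at most M - 1 significant axes remain
assert R.shape == (6, F.shape[0])
```

Note that with only M example models, the sample covariance has rank at most M − 1, so the reduced dimensionality never exceeds the number of examples minus one.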
Once the eigenvectors and eigenvalues of the covariance matrix are found, we reduce the dimensionality of the parameter space by removing the less significant eigenvectors, that is, those whose eigenvalues are very small. A threshold value or the standard deviation of the eigenvalues can be used to remove the components of less significance. In our experiments, we set
the threshold value empirically to remove such eigenvectors. Discarding these eigenvalues loses some of the information contained in the original parameter vectors, but if the eigenvalues are small enough, the lost information is negligible.

To transform the original $N$-dimensional parameter vectors into reduced parameter vectors, we construct an $N' \times N$ matrix $F$, called the feature matrix, whose rows are the retained eigenvectors:

$$F = \begin{bmatrix} e_1 \\ e_2 \\ \vdots \\ e_{N'} \end{bmatrix}, \qquad (3.6)$$

where $e_i$ is the $i$th retained eigenvector and $N'$ is the reduced dimensionality. Based on the feature matrix $F$, we derive the reduced parameter vector $R_j$ from each mean-adjusted displacement vector $\bar{V}_j$:

$$R_j = F\bar{V}_j, \quad j = 1, 2, \ldots, M. \qquad (3.7)$$

Computing $R_j$, $1 \le j \le M$, reduces the dimensionality of the parameter space from $N$ to $N'$. Later, in the blending part, we also use the feature matrix $F$ to compute a reduced parameter vector from the displacement vector of each input frame.

3.4 Weight Extraction

Given a new position in the reduced parameter space, we need to calculate a weight value for each of the target example models and create a new face model by blending the target example models with those weight values. For this purpose, we employ the multidimensional scattered data interpolation proposed by Sloan et al. [23]. Adopting cardinal basis functions, their method first defines the weight functions for all example models and then computes their weight values at runtime to blend the target example models. Given a parameter vector $p$, the weight $w_i(p)$ for the $i$th target example model is defined as follows:

$$w_i(p) = \sum_{l=0}^{N'} a_{il} A_l(p) + \sum_{j=1}^{M} r_{ji} R_j(p). \qquad (3.8)$$

Here $A_l(p)$ and $a_{il}$ are the linear basis functions and their linear coefficients, respectively. Similarly, $R_j(p)$ and $r_{ji}$ are the radial basis functions and their radial coefficients. As defined previously, $N'$ and $M$ indicate the number of reduced parameters and target example
models, respectively. For the weights to interpolate the target example models exactly, $w_i(p)$ must satisfy the following constraint:

$$w_i(p_j) = \begin{cases} 1 & \text{if } i = j, \\ 0 & \text{if } i \ne j. \end{cases} \qquad (3.9)$$

We first approximate the weight value by the linear basis functions and then resolve the errors of the approximation by the radial basis functions. Figure 3.5 summarizes the weight computation used for each of the target example models. In the following subsections, we discuss the linear approximation and the radial basis interpolation in more detail.

3.4.1 Linear Approximation

Finding the best linear approximation for each weight value of a target example model is the problem of finding the hyperplane in the parameter space that passes closest to the weight values of the target example models while satisfying Equation (3.9). We evaluate one hyperplane for each example model in the $(N'+1)$-dimensional space ($N'$ dimensions for the reduced parameters and one for the weight). Ignoring the second term of Equation (3.8), we evaluate the $N'+1$ unknown linear coefficients $a_{il}$:

$$w_i(p) = \sum_{l=0}^{N'} a_{il} A_l(p). \qquad (3.10)$$

The linear bases are simply $A_l(p) = p_l$, where $p_l$ is the $l$th component of $p$ and $A_0(p) = 1$. Using the reduced parameter vector $p_i$ of each target example model and its weight $w_i(p_i)$, we employ a least-squares method provided in [20] to evaluate the unknown linear coefficients $a_{il}$ of the linear bases. Figure 3.5(a) illustrates the linear approximation defined by four example models in a simple one-dimensional parameter space. The four example models are placed in order, and the straight line best fits the weight values of the four example models.

3.4.2 Radial Basis Interpolation

Having defined the linear approximation in the previous section, we need to account for the errors of the approximation, called residuals, between the weight value of each target example model and the approximated hyperplane, as shown in Figure 3.5(b).
We use radial basis functions to account for the residuals, which are given by

$$\tilde{w}_i(p) = w_i(p) - \sum_{l=0}^{N'} a_{il} A_l(p), \quad \text{for all } i. \qquad (3.11)$$
Figure 3.5: Defining a weight function for each of the target example models: (a) linear approximation (the linear part of the cardinal basis function for the first example model, $T_1$); (b) radial basis interpolation (the linear and radial parts of the cardinal basis function for $T_1$); and (c) the cardinal basis functions for $T_1$.
With these residuals, we solve for the radial coefficients $r_{ji}$ in Equation (3.8). This leaves us with the problem of choosing specific radial bases and determining their coefficients. For its simplicity, we choose radial bases with the cross-section of the cubic B-spline [21]. Thus, provided with the example model parameter vectors $p_j$, $1 \le j \le M$, the radial basis functions in the parameter space are defined as follows:

$$R_j(p) = B\!\left(\frac{\|p - p_j\|}{\alpha}\right), \quad \text{for } 1 \le j \le M, \qquad (3.12)$$

where $B(\cdot)$ is the cubic B-spline function, $\alpha$ is the dilation factor, and $\|p - p_j\|$ denotes the Euclidean distance between $p$ and $p_j$. For each example model in the parameter space, we choose the dilation factor $\alpha$ such that the radius of the B-spline equals twice the Euclidean distance to the nearest other example model. The radial coefficients can now be found by solving the matrix system

$$\mathbf{R}\,r = \tilde{w}, \qquad (3.13)$$

where $r$ is an $M \times M$ matrix of the unknown radial coefficients $r_{ji}$, and $\mathbf{R}$ and $\tilde{w}$ are matrices of the same size defined by the radial bases and by the residuals, respectively, such that $\mathbf{R}_{ij} = R_i(p_j)$ and $\tilde{w}_{ij} = \tilde{w}_i(p_j)$. Note that $\mathbf{R}$ contains the value of the unscaled radial basis function centered on the $i$th target example model, evaluated at the location specified by each example's parameter vector. The diagonal terms are all $2/3$, since this is the value of the generic cubic B-spline at its center. Many of the off-diagonal terms are zero, since the B-spline cross-sections drop to zero at twice the distance to the nearest target example model.

Referring back to Figure 3.5(b), we see the four radial basis functions associated with the first of the four example models. If these radial basis functions are summed with the linear approximation, we get the cardinal basis function shown in Figure 3.5(c). Note that it passes through one at the location of the first example model, $T_1$, and is zero at the other example model locations, $T_2$, $T_3$, and $T_4$.
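Under our reading of Equations (3.8)–(3.13), the precomputation of the cardinal basis weight functions can be sketched as follows. This is an illustrative implementation, not the thesis code: a NumPy least-squares fit stands in for the method of [20], and the per-example dilation factor is set so that the B-spline support (radius $2\alpha$) equals twice the distance to the nearest other example:

```python
import numpy as np

def cubic_bspline(t):
    """Cross-section of the generic cubic B-spline: B(0) = 2/3, zero for |t| >= 2."""
    t = np.abs(np.asarray(t, dtype=float))
    out = np.where(t < 1.0, 2.0/3.0 - t**2 + 0.5*t**3, 0.0)
    return np.where((t >= 1.0) & (t < 2.0), (2.0 - t)**3 / 6.0, out)

def fit_cardinal_weights(P):
    """P: M x N' reduced parameter vectors of the example models.
    Returns linear coefficients a, radial coefficients r, and the
    per-example dilation factors alpha."""
    M = P.shape[0]
    A = np.hstack([np.ones((M, 1)), P])        # A_0(p) = 1, A_l(p) = p_l
    target = np.eye(M)                         # w_i(p_j) = delta_ij (Eq. 3.9)
    a, *_ = np.linalg.lstsq(A, target, rcond=None)
    D = np.linalg.norm(P[:, None] - P[None, :], axis=2)
    alpha = np.where(np.eye(M, dtype=bool), np.inf, D).min(axis=1)
    R = cubic_bspline(D / alpha[None, :])      # radial bases at the examples
    residual = target - A @ a                  # Eq. (3.11)
    r = np.linalg.solve(R, residual)           # Eq. (3.13)
    return a, r, alpha

def weights(p, P, a, r, alpha):
    """Eq. (3.8): evaluate all M example weights at parameter vector p."""
    lin = np.concatenate([[1.0], p]) @ a
    rad = cubic_bspline(np.linalg.norm(p - P, axis=1) / alpha) @ r
    return lin + rad

# Illustrative check: the weights interpolate the examples exactly.
rng = np.random.default_rng(2)
P = rng.random((4, 2))                         # four examples, N' = 2
a, r, alpha = fit_cardinal_weights(P)
for j in range(4):
    assert np.allclose(weights(P[j], P, a, r, alpha), np.eye(4)[j], atol=1e-8)
```

Because the off-diagonal entries of the radial-basis matrix never exceed $B(1) = 1/6$ under this dilation choice, the matrix is diagonally dominant and the solve in Equation (3.13) is well conditioned.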
The same is true for the cardinal bases of the other three models, $T_2$, $T_3$, and $T_4$.

3.5 Example Blending

Given the solutions for $a_{il}$ and $r_{ji}$, we are now able to obtain the weight value for each of the target example models from Equation (3.8) at runtime. For each input frame of the source animation, we first obtain the displacement vector $P_{in}$ of the face model in the input frame, as explained in Section 3.2. To reduce this $N$-dimensional parameter vector to the dimensionality $N'$ of the reduced parameter space, we subtract each component
Figure 3.6: Generating a new face model by blending target example models.

of $P_{in}$ by the corresponding mean $\bar{v}_i$ obtained in Equation (3.4) and then multiply the mean-adjusted displacement vector $\bar{P}_{in}$ by the feature matrix $F$ from Equation (3.6). As a result, we obtain the reduced parameter vector $R_{in}$ as in Equation (3.7). Given the predefined weight functions for the target example models $T_i$ from Equation (3.8), we generate a new face model $T_{new}$ using the parameter vector $R_{in}$ as follows:

$$T_{new}(R_{in}) = T_B + \sum_{i=1}^{M} w_i(R_{in})\,(T_i - T_B), \qquad (3.14)$$

where $T_{new}(R_{in})$ is the new face model located at $R_{in}$ in the reduced parameter space and $T_B$ is the target base model corresponding to the source base model $S_B$. As with the source base model $S_B$, we use a neutral key-expression of the target model as the target base model $T_B$. Figure 3.6 illustrates how a new face model is generated for the output animation.
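Equation (3.14) then blends the target examples as offsets from the target base model; a minimal sketch, in which the vertex arrays and counts are illustrative:

```python
import numpy as np

def blend_targets(T_B, T_examples, w):
    """Eq. (3.14): T_new = T_B + sum_i w_i * (T_i - T_B)."""
    T_B = np.asarray(T_B, dtype=float)
    T_new = T_B.copy()
    for w_i, T_i in zip(w, T_examples):
        T_new += w_i * (np.asarray(T_i, dtype=float) - T_B)
    return T_new

# Illustrative check: by the interpolation constraint of Eq. (3.9),
# a one-hot weight vector reproduces the corresponding target example.
rng = np.random.default_rng(3)
T_B = rng.random((100, 3))                                  # target base vertices
T_examples = [T_B + 0.1 * rng.random((100, 3)) for _ in range(4)]
T_new = blend_targets(T_B, T_examples, [0.0, 1.0, 0.0, 0.0])
assert np.allclose(T_new, T_examples[1])
```

This one-hot check mirrors the cardinal property of the weight functions: when the input frame coincides with a source example, the output is exactly the sculpted target example.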
4. Experimental Results

Figures 4.1 and 4.2 respectively show the source and target models used in our experiments, and Table 4.1 gives the number of vertices and polygons in each model. For the source models, Man A and Man B, we manually selected 20 feature points on the models, as described in Section 3.2. As source animations, we used two different facial animations. The facial animation of Man A exhibits various exaggerated expressions such as opening and widening the mouth, bulging cheeks, and so on, while the facial animation of Man B shows verbal movements with emotional expressions.

In the first experiment, we used Man A as the source model and the baby model as the target model. To clone the facial expressions of Man A in the source animation to the baby model, we used six example models for the source and target models, respectively. Figure 4.3(a) shows sample expressions of Man A in the source animation and of the baby model in the cloned animation. We also cloned the facial expressions of the same model to the monkey model, as shown in Figure 4.3(b). The expressions of the source model are cloned nicely to the target models while reflecting the characteristic expressions of the target models, which have different geometry and mesh structures.

In the next experiment, we used Man B as the source model and the woman model as the target model to clone a speech animation. For the speech animation, we prepared a total of seventy-seven example models: twelve visemes (one neutral, eight vowel, and three consonant) for each of the six key-expressions (neutrality, happiness, anger, sadness, surprise, and fear), plus the five key-expressions (excluding neutrality) containing no visemes. As

Table 4.1: Models used for the experiments

Model  | Vertices | Polygons
Man A  |          |
Man B  |          |
Baby   |          |
Monkey |          |
Woman  |          |
Figure 4.1: Source models with 20 manually selected feature points: (a) Man A, (b) Man B.

Figure 4.2: Target models: (a) Baby, (b) Monkey, (c) Woman.
Figure 4.3: Cloning expressions from the source model to the target model: (a) Man A to Baby, (b) Man A to Monkey, (c) Man B to Woman.
Table 4.2: Average errors of cloned animations from source to source

   | Man A to Man A                     | Man B to Man B
   | Example-based | Expression cloning | Example-based | Expression cloning
x  | 0.110%        | 0.234%             | 0.051%        | 0.176%
y  | 0.111%        | 0.196%             | 0.057%        | 0.133%
z  | 0.050%        | 0.077%             | 0.100%        | 0.213%

Table 4.3: Average errors of cloned animations from source to target to source

   | Man A to Baby to Man A             | Man B to Woman to Man B
   | Example-based | Expression cloning | Example-based | Expression cloning
x  | 0.112%        | 2.120%             | 0.118%        | 3.076%
y  | 0.113%        | 1.936%             | 0.214%        | 3.893%
z  | 0.051%        | 1.004%             | 0.268%        | 4.183%

shown in Figure 4.3(c), the verbal expressions of the source model are convincingly reproduced on the target model.

In the last experiment, we measured the quantitative accuracy of our approach by cloning the source model to itself. For precise measurement, we took two different approaches: first, we used the same face model as both the source and target model; second, we cloned a source model to a different target model and then cloned the result back to the source model. In both approaches, we reproduced the source animation as the resulting animation and compared it with the original source animation to measure the quantitative accuracy of our approach. First, we performed the experiment for Man A with the same models used in the first experiment. As shown in Figure 4.4(a), the source expressions look visually the same as those in the cloned animation. We repeated the same experiment for Man B with the same models used in the second experiment. The cloned samples are given in Figure 4.4(b). The error for each individual frame of the cloned animation is measured as follows:

$$e = \frac{\sum_{j=1}^{N} |v_j - v'_j|}{\sum_{j=1}^{N} |v_j|} \times 100, \qquad (4.1)$$

where $v_j$ and $v'_j$ are the positions of the $j$th vertex in the source model and of the corresponding vertex in the cloned model, respectively, and $N$ is the number of vertices of the source model; the error is measured separately for each of the x, y, and z coordinates. For an animation, the error is measured as the average over all its constituent
Figure 4.4: Cloning expressions with the same source and target models. (a) Man A to Man A; (b) Man B to Man B.
frames. Ideally, the vertex positions of a cloned model in the resulting animation should be identical to those of the corresponding model in the source animation. Tables 4.2 and 4.3 show the average errors of cloned animations for the x, y, and z coordinates. The tables indicate that our example-based approach produces lower average errors than the expression cloning approach. Since the average errors of the cloned animations are very small, the visual difference between the source and cloned animations is hard to perceive.

The performance of our approach is summarized in Table 4.4.

Table 4.4: Computation time

                                        Figure 4.3(a)      Figure 4.3(b)       Figure 4.3(c)
                                        (Man A to Baby)    (Man A to Monkey)   (Man B to Woman)
    Number of frames
    Number of example models
    Total time for defining
      weight functions                  32 ms              30 ms               62 ms
    Total weight extraction time        326 ms             266 ms              548 ms
    Average time per frame              0.30 ms            0.25 ms             0.69 ms

The experiments were implemented in C++ and OpenGL on an Intel Pentium 4 PC (2.4 GHz processor, 512 MB RAM, and a GeForce 4 graphics card). The timing data were obtained by varying the source and target models, the number of example models, and the number of source animation frames. As the table shows, cloning takes less than one millisecond per frame in all experiments, that is, over 1000 frames per second, which guarantees real-time performance.
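The accuracy measure of Eq. (4.1) and its per-animation average can be sketched in a few lines. This is an illustrative Python version only (the thesis implementation is in C++, and the function names here are our own):

```python
import math

def frame_error_percent(source_verts, cloned_verts):
    """Per-frame error of Eq. (4.1): summed vertex displacement magnitudes
    divided by summed vertex magnitudes, as a percentage."""
    def norm(v):
        return math.sqrt(sum(x * x for x in v))
    numer = sum(norm([a - b for a, b in zip(v, w)])
                for v, w in zip(source_verts, cloned_verts))
    denom = sum(norm(v) for v in source_verts)
    return 100.0 * numer / denom

def animation_error_percent(source_frames, cloned_frames):
    """Animation error: the average of the per-frame errors."""
    errs = [frame_error_percent(s, c)
            for s, c in zip(source_frames, cloned_frames)]
    return sum(errs) / len(errs)
```

For identical source and cloned animations the error is exactly zero; the small percentages in Tables 4.2 and 4.3 correspond to sub-perceptual vertex displacements.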
5. Conclusion

In this thesis, we have presented an example-based approach for cloning the expressions of a source model onto a new model while preserving the characteristic features of the new model. The basic idea of our approach is to blend user-defined example models so as to clone the input facial animation. To instantiate this idea, we have addressed several issues: we provide a simple but effective parameterization scheme that places the target example models in a parameter space, whose dimensionality can be reduced by employing PCA. To blend the parameterized target example models, we adopt multi-dimensional scattered data interpolation: inspired by the work of Sloan et al. [23], we predefine a weight function for each target example model and evaluate the weight of each target example model at runtime. The experimental results demonstrate that our approach can generate convincing animations in real time. The applications of our approach are diverse: 3D facial animation for movies and computer games, internet broadcasting, and personalized avatars in virtual environments. With limited resources for creating facial animations, our approach can save much of the human effort and time required to carry the realism of the original animation over to new models.

According to research in psychology, the face can be split into several regions that behave as coherent units [3]. For example, the upper part of a human face (eyes, eyebrows, and forehead) is mainly used for emotional expressions, while the lower part (mouth, cheeks, and chin) is mainly used for verbal expressions. We could accordingly partition each example model into two such regions. If we prepared example expressions for each region separately, we could generate more diverse expressions from a smaller number of example models. To achieve this, we would need an effective way to seamlessly combine separately-generated animations for the individual regions.
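The predefined weight functions summarized above can be sketched as follows: radial basis functions are fit so that each weight function takes the value 1 at its own example's parameter vector and 0 at every other example's, and at runtime the weights are evaluated at the input model's parameter vector and used to blend the target examples. This is a minimal sketch with Gaussian kernels only (the formulation of Sloan et al. [23] also carries a linear polynomial term, omitted here); all names are our own:

```python
import math

def gauss_solve(a, b):
    """Solve the linear system a * x = b by Gaussian elimination with
    partial pivoting (adequate for the small systems used here)."""
    n = len(a)
    m = [row[:] + [bv] for row, bv in zip(a, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def make_weight_functions(example_params, sigma=1.0):
    """Precompute cardinal RBF weight functions: the k-th weight function
    is 1 at the k-th example's parameter vector and 0 at all others."""
    n = len(example_params)

    def phi(p, q):  # Gaussian radial basis function
        d2 = sum((x - y) ** 2 for x, y in zip(p, q))
        return math.exp(-d2 / (2.0 * sigma ** 2))

    a = [[phi(pi, pj) for pj in example_params] for pi in example_params]
    # The coefficients of w_k reproduce the k-th Kronecker delta over the examples.
    coeffs = [gauss_solve(a, [1.0 if j == k else 0.0 for j in range(n)])
              for k in range(n)]

    def weights(p):
        basis = [phi(p, q) for q in example_params]
        return [sum(c * b for c, b in zip(coeffs[k], basis)) for k in range(n)]

    return weights
```

A cloned model is then obtained by blending the target example models with the evaluated weights, e.g. each vertex position is the weighted sum of the corresponding vertices of the target examples.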
References

[1] B. Allen, B. Curless, and Z. Popovic. Articulated body deformation from range scan data. In Proceedings of SIGGRAPH 02, pages ,

[2] C. Bregler, L. Loeb, E. Chuang, and H. Deshpande. Turning to the masters: Motion capturing cartoons. In Proceedings of SIGGRAPH 02, pages ,

[3] P. Ekman and W. V. Friesen. Unmasking the Face: A Guide to Recognizing Emotions from Facial Clues. Prentice-Hall Inc., Englewood Cliffs, New Jersey,

[4] M. Gleicher. Retargetting motion to new characters. In Proceedings of SIGGRAPH 98, pages 33-42,

[5] B. Guenter, C. Grimm, D. Wood, H. Malvar, and F. Pighin. Making faces. In Proceedings of SIGGRAPH 98, pages 55-67,

[6] I. T. Jolliffe. Principal Component Analysis. Springer, New York,

[7] P. Kalra, A. Mangili, N. M. Thalmann, and D. Thalmann. Simulation of facial muscle actions based on rational free form deformations. In Proceedings of Eurographics 92, 11(3):59-69,

[8] J. Lee and S. Y. Shin. A hierarchical approach to interactive motion editing for human-like figures. In Proceedings of SIGGRAPH 99, pages 39-48,

[9] S. Lee, G. Wolberg, and S. Y. Shin. Polymorph: An algorithm for morphing among multiple images. IEEE Computer Graphics and Applications, 18(1):58-71,

[10] Y. C. Lee, D. Terzopoulos, and K. Waters. Realistic modeling for facial animation. In Proceedings of SIGGRAPH 95, pages 55-62,

[11] J. P. Lewis, M. Cordner, and N. Fong. Pose space deformation: A unified approach to shape interpolation and skeleton-driven deformation. In Proceedings of SIGGRAPH 00, pages ,

[12] J. Y. Noh and U. Neumann. Expression cloning. In Proceedings of SIGGRAPH 01, pages ,
[13] S. I. Park, H. J. Shin, and S. Y. Shin. On-line locomotion generation based on motion blending. In ACM SIGGRAPH Symposium on Computer Animation, pages ,

[14] F. I. Parke. Computer generated animation of faces. Master's thesis, University of Utah, Salt Lake City, UT, June

[15] F. I. Parke. Parameterized models for facial animation. IEEE Computer Graphics and Applications, 2(9):61-68, November

[16] F. I. Parke and K. Waters. Computer Facial Animation. A K Peters, 289 Linden Street, Wellesley, MA 02181,

[17] F. Pighin, J. Hecker, D. Lischinski, R. Szeliski, and D. H. Salesin. Synthesizing realistic facial expressions from photographs. In Proceedings of SIGGRAPH 98, pages 75-84,

[18] S. M. Platt and N. I. Badler. Animating facial expressions. Computer Graphics, 15(3): , July

[19] Z. Popovic and A. Witkin. Physically based motion transformation. In Proceedings of SIGGRAPH 99, pages 11-20,

[20] W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery. Numerical Recipes in C. Cambridge University Press, 40 West 20th Street, New York, NY,

[21] C. Rose, M. F. Cohen, and B. Bodenheimer. Verbs and adverbs: Multidimensional motion interpolation. IEEE Computer Graphics and Applications, 18(5):32-40, September

[22] J. A. Russell. A circumplex model of affect. J. Personality and Social Psychology, 39: ,

[23] P.-P. Sloan, C. F. Rose, and M. F. Cohen. Shape by example. In Proceedings of the 2001 Symposium on Interactive 3D Graphics, pages ,

[24] D. Terzopoulos and K. Waters. Physically-based facial modeling, analysis, and animation. Journal of Visualization and Computer Animation, 1(4):73-80, March

[25] K. Waters. A muscle model for animating three-dimensional facial expressions. Computer Graphics (SIGGRAPH 87), 21(4):17-24, July
[26] L. Williams. Performance driven facial animation. In Proceedings of SIGGRAPH 90, pages ,
Learning-Based Facial Rearticulation Using Streams of 3D Scans Robert Bargmann MPI Informatik Saarbrücken, Germany Bargmann@mpi-inf.mpg.de Volker Blanz Universität Siegen Germany Blanz@informatik.uni-siegen.de
More informationanimation computer graphics animation 2009 fabio pellacini 1
animation computer graphics animation 2009 fabio pellacini 1 animation shape specification as a function of time computer graphics animation 2009 fabio pellacini 2 animation representation many ways to
More informationStylistic Reuse of View-Dependent Animations
Stylistic Reuse of View-Dependent Animations Parag Chaudhuri Ashwani Jindal Prem Kalra Subhashis Banerjee Department of Computer Science and Engineering, Indian Institute of Technology Delhi, Hauz Khas,
More informationAbstract We present a system which automatically generates a 3D face model from a single frontal image of a face. Our system consists of two component
A Fully Automatic System To Model Faces From a Single Image Zicheng Liu Microsoft Research August 2003 Technical Report MSR-TR-2003-55 Microsoft Research Microsoft Corporation One Microsoft Way Redmond,
More information