A Region-based Facial Expression Cloning

Bongcheol Park and Sung Yong Shin

CS/TR, April 24, 2006

KAIST, Department of Computer Science

A Region-based Facial Expression Cloning

Bongcheol Park and Sung Yong Shin

Abstract

In this paper, we propose a region-based blend shape approach to facial expression cloning that automatically extracts a set of coherently-moving regions of the source face model and their counterparts in the target face model, without any extra input data except the source and target key face models and their correspondence. We represent a face mesh as a mass-spring network to define two measures, potential energy and movement coherency, which quantify the spring energy at a mass point and the movement similarity between a pair of mass points, respectively. Based on these measures, we develop three schemes that together facilitate our approach: the automatic extraction, correspondence establishment, and grouping of the feature points on the source and target face models. From the user's point of view, our approach greatly reduces the animator's workload while enjoying the advantages inherent in blend shape-based facial expression cloning.

1 Introduction

1.1 Motivation

Facial expressions are most effective for delivering emotions, as evidenced by the recent popularity of facial animation in movies, computer games, and information booths, to name a few, not to mention everyday human life. As facial animation libraries grow richer, the reusability of existing animation data has become a recurring issue. This trend paves the way for the data-driven paradigm in facial animation.

Noh and Neumann [16] presented a data-driven approach to facial animation known as expression cloning, which transfers a source model's facial expressions in an input animation to a target face model. Assuming that an animation similar to the one in mind is available in a library for a different face model, the usefulness of expression cloning is rather apparent. This situation draws closer as animation databases expand their repertoires. The basic idea of this approach is first to derive facial motion vectors based on 3D morphing between the source and target face models and then to deform the target model using those vectors. The approach works well without any key face models as long as the source and target models share the same topology. However, facial expressions cannot be transferred across face models with different topologies. Moreover, the target motion vectors are determined by morphing, and thus are solely

dependent on the geometrical characteristics of the source and target models. Thus, it is hard to incorporate the animator's intention into the output animation.

To remedy these drawbacks, Pyun et al. [20] adopted a blend shape approach based on scattered data interpolation. Given source key models and their corresponding target key models, the face model at each frame of an input animation is expressed as a weighted sum of source key models, and the weight values of the source key models are applied to their corresponding target key models to obtain the face model at the same frame of the output animation. By providing target key models, this approach allows the animator to incorporate her imagination into the output animation. By blending the target key models with the weights passed by the input face model, the approach also allows the source and target models to have different topologies. In addition, the approach is computationally stable and efficient, as pointed out in [15]. However, the number of key models grows combinatorially as the number of facial aspects such as emotions, phonemes, and facial gestures increases. Therefore, with this approach, it is hard in general to clone dynamic facial movements such as asymmetric gestures.

In order to generate diverse facial expressions with a small number of key models while still enjoying the advantages of the blend shape approach, Park et al. [17] proposed a feature-based approach. After segmenting the source and target face models into regions, this approach applies the blend shape scheme [20] to each of the regions separately and combines the results to obtain an output face model. The approach requires as input data not only the feature points but also their correspondence between the source and target key models. Furthermore, the feature points are manually grouped for region segmentation. These requirements raise the following fundamental issues for effective facial expression cloning:

- How to automatically extract the feature points from key models,
- How to automatically establish a meaningful correspondence of the feature points between the source and target face models,
- How to automatically group the feature points for region segmentation.

The motivation of this paper is to address each of these issues so as to automate the whole process of feature-based expression cloning, provided with the source and target key models, while keeping the inherent advantages of blend shape-based expression cloning such as topological independence, the user friendliness of imbuing the animator's imagination, and the generation of diverse expressions with fewer key models [17]. Our conjecture is that these issues can be settled by exploiting the information embedded in the key models. The requirement for the source and target key models and their correspondence as input data is ascribed to our implicit assumption that no example animations are available for the target face model. Otherwise, the style of the output animation could be learned from the existing example animations.

1.2 Basic idea

The feature-based approach of Park et al. [17] provides a nice framework for expression cloning, although its potential has not been fully explored. To utilize the full potential while keeping this framework, we mainly deal with the three issues raised in the previous section without any additional input data except the key models.

First, to extract the feature points from the source key models, we define a deformation measure for every vertex of a key model with respect to that of the neutral key model. By analyzing the measure distribution over the mesh for every key model, the vertices whose measures attain local maxima in one or more key models are chosen as the feature points of the source face model. The feature points of the target face model are chosen in a similar way. Interpreting a mesh as a mass-spring network, our intuition is that the feature points selected in this way have enough potential energy to contract or stretch their connected springs to cause facial deformation.

Next, to establish a (cross-model) correspondence of the feature points between the source and target models, we develop a notion of movement coherency for each pair of feature points, one from the source face model and the other from the target model, that measures how coherently the pair moves. Our hidden assumption is that a pair of corresponding points move coherently even if they lie in different models. Interpreting coherency as preference, we can make a preference list of each feature point in the source (target) model to the feature points in the target (source) model, to formulate the feature point correspondence problem as a stable matching problem (a stable marriage problem) [9], which can be solved in $O(p^2 \log p)$ time, where $p$ is the number of feature points.

Finally, in order to group the feature points of the source face model for region segmentation, we compute the movement coherency of every pair of feature points in the source face model. Using these coherency measures, we formulate the feature point grouping problem as the classical graph problem of finding the connected components of an undirected graph, after establishing edge connectivity by thresholding the pairwise movement coherency among the feature points. This problem can be solved in $O(p^2)$ time with a standard graph algorithm [6]. Here, our intuition is that a group of feature points in the same region move coherently, and thus form a connected component of the graph. The source face model is segmented by classifying the vertices of the model into regions according to the movement coherency of each vertex with respect to the feature points. To segment the target key model, we transfer the results of the source feature point grouping to the target model via the feature point correspondence that we have established.

1.3 Overview

Inherited from blend shape approaches based on scattered data interpolation [20, 15, 17], our approach consists of two parts, analysis and synthesis, as shown in Figure 1. Our main contributions lie in the analysis part, which fills technical holes left by this

line of approaches. The analysis part is for preprocessing and comprises three steps: feature point extraction, region segmentation, and parameterization. The first step extracts two sets of feature points from the source and target key models, respectively, which settles the first issue stated in Section 1.1. The second step segments the source and target models into coherently-moving regions. In [17], region segmentation imposes extensive manual interaction on the animator to specify input data: the facial features, a set of feature points for each facial feature, and the feature point correspondence between the source and target models. We automate region segmentation by addressing the second and third issues without any extra input data except the two sets of feature points automatically extracted in the first step. The third step is the parameterization of each segmented region, which is a standard technique for blend shape-based facial expression cloning; thus we describe this step only briefly.

Figure 1: Overall structure. (Analysis: feature point extraction, region segmentation, parameterization. Synthesis: head motion & gaze direction extraction, parameter extraction, key shape blending, region composition.)

The second part is for run-time expression transfer from the source face model to the target face model, which is composed of four steps: head motion and gaze direction extraction, parameter extraction, key shape blending, and region composition. Except for the first step and the weight composition in the third step, this part is treated rather well in [17]. Thus, we minimize our efforts by briefly providing the main ideas and differences.

The remainder of this paper is organized as follows: In Section 2, we review related work. Sections 3 and 4 describe the analysis part and the synthesis part, respectively. We show results in Section 5 and discuss limitations and weaknesses of our approach in Section 6. Finally, we conclude this paper and suggest further research in Section 7.

2 Related work

Since Parke's pioneering work [18], there have been extensive results in facial animation. We focus on recent results closely related to facial expression cloning, besides those already mentioned in Section 1. The origin of facial expression cloning can

probably be traced back to Williams [24], who proposed performance-driven facial animation, although the problem itself was posed by Noh and Neumann [16]. In fact, performance-driven facial animation can be regarded as a type of expression cloning from an input image sequence to a 3D face model.

Blend shape scheme: Following Williams' work, there have been many approaches to performance-based animation [13, 19, 2, 8, 4, 14, 1, 5, 3]. For our purposes, the most notable are blend shape approaches [13, 19, 2, 8, 1, 5, 3], in which a set of base models is blended to obtain an output model. In general, the blending weights are computed by least squares fitting [13, 19, 2, 8, 1, 5, 3]. From the observation that the deformation space of a face model is well approximated by a low-dimensional linear space, a series of research results on facial expression cloning have been presented based on a blend shape scheme with scattered data interpolation [20, 15, 17]. For the reasons stated in Section 1, we also follow this line of research.

Region segmentation: While blend shape approaches are robust and efficient, their main difficulty is the exponential growth of the number of key models with respect to the number of facial attributes. Kleiser [12] applied a blend shape scheme to manually-segmented regions and then combined the results. Joshi et al. [11] automatically segmented a face model based on a deformation map. Inspired by these approaches, Park et al. [17] proposed a method for segmenting a face model into a predefined number of regions, provided with a set of feature points manually specified on each face feature. The idea was to classify the vertices into the regions, each containing a face feature, according to the movement coherency of each vertex with respect to the feature points. We further explore this idea for automatic segmentation without any extra input data except the source and target key models.

Multi-linear model: Vlasic et al. [23] proposed a method based on a multi-linear human face model to map video-recorded performances of one individual to facial animations of another. This method is a generalization of blend shape approaches and thus is trivially adapted to facial expression cloning. Being a general data-driven tool, a reasonable multi-linear model requires a large number of face models with different attributes. Moreover, the multi-linear model is not quite adequate for addressing specific issues in facial expression cloning such as asymmetric facial gestures and topological independence between the source and target face models.

Mesh deformation transfer: Sumner and Popović [22] proposed a method to transfer the deformation of a triangular mesh to another mesh. This method can be applied to facial expression cloning. Unlike blend shape approaches [20, 15, 17], the method does not require key models. Instead, the animator manually provides facial feature points and their correspondence between the source and target models. Without using key face models, however, it is hard to incorporate the animator's intention into the output animation. Another limitation is that the source and target models should be topologically equivalent, although their meshes may differ in both vertex count and connectivity.

3 Key model analysis

Figure 2: Examples of key models.

In this section, we analyze user-provided key models to segment them into regions, which may partially overlap. As illustrated in Figure 2, the set of source key models consists of fourteen face models: the face model with a neutral expression, six key models for emotional expressions, and seven key models for verbal expressions called visemes. Each source key model corresponds to a target key model. The neutral source and target face models are also called the source and target (base) face models, respectively. The source face models share the same mesh configuration, as do the target face models. However, the source and target face models, in general, may have different mesh configurations and even different topologies. For example, the source and target models could be triangular meshes topologically equivalent to a 3D ball and a 2D disk, respectively.

3.1 Feature point extraction

Regarding the triangular mesh of a face model as a mass-spring network, a vertex and an edge of the mesh correspond to a mass point and a spring of the network, respectively. We first define the potential energy at every mass point and then extract the feature points of the mesh based on the energy distribution over the network. In what follows, we mainly describe how to deal with the source key models; the target key models can be handled in a similar manner.

Let $G_i = (V_i, E_i)$ be an undirected graph representing the mesh of source key model $i$, where $V_i$ and $E_i$ denote the vertex and edge sets of key model $i$, respectively. The corresponding mass-spring network is also referred to as $G_i$ due to the structural similarity between the mesh and the network. $G_0 = (V_0, E_0)$ denotes the base mesh (network), that is, the mesh for the key model with the neutral expression. We assume that the springs of the base network are in the rest state. Let $l^i_{jk}$ be the length of the spring connecting mass points $j$ and $k$ in key model $i$. Then, the potential energy $\varepsilon^i_j$ of mass point $j$ is defined as follows:

$$\varepsilon^i_j = \sum_{k \in I_j} (l^i_{jk} - l^0_{jk})^2, \qquad (1)$$

where $I_j$ denotes the index set of the vertices representing the mass points connected to mass point $j$ by single springs. A vertex $v_j$ is said to be a feature point if $\varepsilon^i_j$ is a local maximum for any key model $i$, that is, if there is a key model $i$ such that

$$\varepsilon^i_j \ge \varepsilon^i_k \quad \text{for all } k \in I_j. \qquad (2)$$

In general, a face model is symmetric with respect to the vertical bisecting plane. In other words, every vertex on the left-hand side of the plane has its corresponding vertex on the right-hand side, and vice versa, except those lying on the plane. Assuming that both halves of the face model have similar deformation capability, we duplicate every feature point on both sides. That is, if a vertex on one side is chosen as a feature point, its corresponding vertex on the other side is also chosen as a feature point, regardless of its energy. Figure 3 shows results of feature point extraction.

Figure 3: Feature point extraction.
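To make the extraction rule concrete, here is a minimal Python/NumPy sketch of Equations 1 and 2. The data layout (one vertex array per key model plus an edge list) and all names are our own illustration under the stated assumptions, not the authors' implementation:

```python
import numpy as np

def extract_feature_points(key_models, edges):
    """Select feature points: vertices whose spring potential energy (Eq. 1)
    attains a local maximum over their one-ring in some key model (Eq. 2).

    key_models : list of (n, 3) arrays; key_models[0] is the neutral model.
    edges      : list of (j, k) vertex-index pairs, one per mesh spring.
    """
    n = key_models[0].shape[0]
    neighbors = [[] for _ in range(n)]          # index sets I_j
    for j, k in edges:
        neighbors[j].append(k)
        neighbors[k].append(j)

    def spring_lengths(V):
        return np.array([np.linalg.norm(V[j] - V[k]) for j, k in edges])

    l0 = spring_lengths(key_models[0])          # rest lengths of the base network
    features = set()
    for V in key_models[1:]:
        # Per-vertex potential energy: squared length changes of incident springs.
        energy = np.zeros(n)
        for (j, k), e in zip(edges, (spring_lengths(V) - l0) ** 2):
            energy[j] += e
            energy[k] += e
        # Local-maximum test of Eq. 2; skip vertices at rest, which would
        # otherwise trivially satisfy the non-strict inequality.
        for j in range(n):
            if energy[j] > 0 and all(energy[j] >= energy[k] for k in neighbors[j]):
                features.add(j)
    return sorted(features)
```

The left-right duplication of feature points described above would then be applied as a separate pass, given the model's symmetry map.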

3.2 Region segmentation

We first segment the source face model and then propagate the results to the target face model via the feature point correspondence between the source and target face models. The underlying idea of region segmentation is to classify the vertices of the face mesh into regions, each containing a facial feature such as an eye, the mouth, or a cheek. Unlike in [17], neither the number of regions nor the set of feature points for each region is specified by the user. We start by defining the movement coherency for each pair of vertices. With slight modifications depending on the task to perform, we use this measure consistently for feature point grouping, correspondence establishment, and vertex classification.

Movement coherency: The movement coherency $c_{jk}$ for a pair of vertices $v_j$ and $v_k$ of the face model is defined as follows:

$$c_{jk} = \left[ \frac{1}{N} \sum_{i=0}^{N-1} s^i_{jk} \right]^{w_1} \left[ \frac{1}{N} \sum_{i=0}^{N-1} \theta^i_{jk} \right]^{w_2} \left[ d^0_{jk} \right]^{w_3}, \qquad (3)$$

where

$$s^i_{jk} = \begin{cases} 1 & \text{if } v^i_j = v^0_j \text{ and } v^i_k = v^0_k, \\ 1 - \dfrac{\left|\, \|v^i_j - v^0_j\| - \|v^i_k - v^0_k\| \,\right|}{\max\{ \|v^i_j - v^0_j\|,\ \|v^i_k - v^0_k\| \}} & \text{otherwise,} \end{cases}$$

$$\theta^i_{jk} = \begin{cases} 1 & \text{if } v^i_j = v^0_j \text{ and } v^i_k = v^0_k, \\ 0 & \text{if } v^i_j = v^0_j \text{ or } v^i_k = v^0_k \text{ (but not both),} \\ \max\left\{ \dfrac{(v^i_j - v^0_j) \cdot (v^i_k - v^0_k)}{\|v^i_j - v^0_j\|\, \|v^i_k - v^0_k\|},\ 0 \right\} & \text{otherwise,} \end{cases}$$

$$d^0_{jk} = \max\left\{ 1 - \frac{\|v^0_k - v^0_j\|}{D},\ 0 \right\}.$$

Here, $N$ is the number of source key models, and $w_l$, $l = 1, 2, 3$, is the weight for each multiplicative term, which is given by the user. In particular, we empirically set $w_1 = 2^{-1}$, $w_2 = 2^{0}$, and $w_3 = 2^{-2}$. We set $D$ to be the minimum Euclidean distance such that the vertices form a single connected component when we connect every pair of vertices whose Euclidean distance is not greater than $D$. $D$ can be obtained by binary search, starting from the maximum distance over all pairs of vertices.

As shown in Equation 3, $c_{jk}$ consists of three multiplicative terms. The first term is the average of $s^i_{jk}$ over all key models, where $s^i_{jk}$ measures the similarity of the moving speeds of vertices $v^i_j$ and $v^i_k$. The second term is the average of $\theta^i_{jk}$, which gives the similarity of their moving directions. Finally, the third term measures the geometrical proximity of the pair of vertices $v^0_j$ and $v^0_k$ in the base face model (key model 0) that correspond to $v^i_j$ and $v^i_k$, respectively. Note that every term takes on a value between zero and one, inclusive. Thus, the movement coherency $c_{jk}$ also takes on a value in the same range.

Feature point grouping: Given a set of feature points, we partition them into groups according to their movement coherency such that the feature points in the same group move more coherently than the others. Letting $F = \{ f_1, f_2, \ldots, f_p \}$ be the set of feature points of a face model, we define an index set $I_F$ as follows:

$$I_F = \{\, k \mid f_l = v_k \text{ for some } l,\ 1 \le l \le p \,\}. \qquad (4)$$

Since every feature point is a vertex of the face model, $I_F$ is well-defined. We compute the movement coherency $c_{jk}$ between every pair of feature points $v_j$ and $v_k$ for $j, k \in I_F$ using Equation 3.
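As a concrete reference, the following Python/NumPy sketch evaluates Equation 3 for one vertex pair. The array layout and function name are our own illustration, with the empirical weights above as defaults (assuming the exponents $2^{-1}$, $2^{0}$, $2^{-2}$):

```python
import numpy as np

def movement_coherency(Vj, Vk, D, w=(0.5, 1.0, 0.25)):
    """Movement coherency c_jk of Eq. 3 for one vertex pair.

    Vj, Vk : (N, 3) arrays of the pair's positions over the N key models,
             row 0 being the neutral (base) model.
    D      : normalization distance for the proximity term.
    """
    dj, dk = Vj - Vj[0], Vk - Vk[0]                      # displacements
    nj, nk = np.linalg.norm(dj, axis=1), np.linalg.norm(dk, axis=1)

    s = np.ones(len(dj))                                 # speed similarity s^i_jk
    moving = np.maximum(nj, nk) > 0
    s[moving] = 1.0 - np.abs(nj - nk)[moving] / np.maximum(nj, nk)[moving]

    theta = np.ones(len(dj))                             # direction similarity
    theta[(nj > 0) ^ (nk > 0)] = 0.0                     # exactly one vertex moved
    both = (nj > 0) & (nk > 0)
    dots = np.einsum('ij,ij->i', dj, dk)
    theta[both] = np.maximum(dots[both] / (nj[both] * nk[both]), 0.0)

    prox = max(1.0 - np.linalg.norm(Vk[0] - Vj[0]) / D, 0.0)   # d^0_jk
    w1, w2, w3 = w
    return (s.mean() ** w1) * (theta.mean() ** w2) * (prox ** w3)
```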

To do this, we set $D$ as the minimum distance such that the feature points form a single connected component if we connect all the pairs of feature points whose distances are not greater than $D$. We now construct a graph $G_F = (V_F, E_F)$, where the vertex set $V_F$ consists of the feature points. A pair of feature points are connected by an edge of $E_F$ if their movement coherency is greater than or equal to a given threshold $\gamma$. Our implicit assumption for thresholding is that two feature points connected by an edge belong to the same group. Under this assumption, the feature point grouping problem is reduced to the problem of finding all connected components of the undirected graph $G_F$, which can be solved in $O(p^2)$ time using a standard graph algorithm [6]. Empirically, we found that our scheme works well with $\gamma = 0.05$.

Exploiting the left-right symmetry of a face model, we solve the connected component finding problem for one half of the face model and reflect the results about the bisecting plane to obtain the solution. This scheme is not only efficient but also effective for handling asymmetric facial gestures such as winking, although the number of groups tends to increase. Suppose that the set $F$ of feature points is partitioned into $g$ groups $F_l$, $1 \le l \le g$. Then, each feature point is contained in one group $F_l$ for some $l$ unless it lies on the bisecting plane; otherwise, it belongs to two groups, one lying on each side of the plane. Figure 4 exhibits results of feature point grouping.
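A minimal sketch of this grouping step, using a union-find forest rather than the textbook graph traversal of [6] (the choice is ours; both find connected components within the stated bound):

```python
def group_feature_points(F, coherency, gamma=0.05):
    """Partition feature points into coherently-moving groups: the connected
    components of the graph G_F whose edges join pairs with c_jk >= gamma.

    F         : list of feature point (vertex) indices.
    coherency : dict mapping frozenset({j, k}) to the coherency c_jk.
    """
    parent = {j: j for j in F}                # union-find forest

    def find(j):
        while parent[j] != j:
            parent[j] = parent[parent[j]]     # path halving
            j = parent[j]
        return j

    for a in range(len(F)):
        for b in range(a + 1, len(F)):
            j, k = F[a], F[b]
            if coherency[frozenset((j, k))] >= gamma:
                parent[find(j)] = find(k)     # thresholded edge: merge groups

    groups = {}
    for j in F:
        groups.setdefault(find(j), []).append(j)
    return list(groups.values())
```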

Figure 4: Feature point grouping.

Vertex classification: Now, we are ready to segment the source face model into (possibly overlapping) regions. Our strategy is to classify the vertices of the face model into regions, each containing a group of feature points, by exploiting the movement coherency of every vertex with respect to the feature points. Specifically, each vertex is classified into the region containing a group $F_l$ of feature points if it moves coherently with $F_l$. To derive the movement coherency $c_{jF_l}$ of a vertex $v_j$ with respect to $F_l$, we first compute the movement coherency $c_{jk}$ of $v_j$ with respect to every feature point $v_k$ for $k \in I_F$ using Equation 3. To do this, we use the same $D$ as originally defined. Given $c_{jk}$ for all $k \in I_{F_l}$, we then choose as $c_{jF_l}$ the maximum of $c_{jk}$ over all $k \in I_{F_l}$, that is,

$$c_{jF_l} = \max_{k \in I_{F_l}} \{ c_{jk} \}, \qquad (5)$$

where $I_{F_l}$ is the index set for the feature points in $F_l$, defined analogously to $I_F$. By thresholding $c_{jF_l}$ for each $l$, $1 \le l \le g$, we classify the vertices into one or more regions, each containing a group $F_l$ of feature points. In other words, a vertex $v_j$ is classified into the region containing $F_l$ if $c_{jF_l}$ is greater than or equal to a threshold value $\gamma$. We use the same value of $\gamma$ that we have used for feature point grouping. Note that each vertex can be classified into two or more regions, and thus we have to take this into account later for run-time key model blending.

Feature point correspondence: To segment the target face model, we could repeat the same procedure that we have developed for the source face model. However, the resulting regions on the target model may not match those on the source model. Instead, we transfer the results of feature point grouping from the source model to the target model by feature point correspondence. The rest of the region segmentation procedure for the target model is the same as for the source model.

Figure 5: Feature point correspondence.

We now describe how to establish the feature point correspondence. The source and target face models may be significantly different in their sizes and positions. As preprocessing, we scale the target model so that its iso-axis bounding box is the same as

that of the source model, and translate the target model so that its center coincides with that of the source model. To set $D$ in Equation 3, we first choose the six feature points on the source face model that have the maximum and minimum $x$, $y$, and $z$ coordinates, respectively. Their corresponding feature points on the target face model are chosen in the same manner to compute the Euclidean distance between each corresponding pair of points. Let $D_0$ be the maximum over the resulting six distance values. Then, we set $D$ as follows:

$$D = \max\{ D_0, D_S, D_T \}, \qquad (6)$$

where $D_S$ is the minimum Euclidean distance such that the feature points of the source face model form a single connected component if we connect every pair of feature points whose distance is not greater than $D_S$. $D_T$ is defined in a symmetrical manner for the feature points of the target face model. Using Equation 3, we then compute the movement coherency $c_{jk}$ between every pair of feature points $v_j$ and $v_k$ for $j \in I_{F_S}$, $k \in I_{F_T}$. Here, $I_{F_S}$ and $I_{F_T}$ are the index sets for the source and target feature points, respectively. That is, a vertex $v_j$, $j \in I_{F_S}$, is a feature point in the source feature point set $F_S$, and $v_k$, $k \in I_{F_T}$, is a target feature point in $F_T$.

Interpreting the source and target feature points as men and women, respectively, the movement coherency $c_{jk}$ of a feature point $v_j$, $j \in I_{F_S}$, to a feature point $v_k$, $k \in I_{F_T}$ (and vice versa), can be regarded as the degree of preference of a man $v_j$ to a woman $v_k$. We create a preference list of each man (each feature point of one model) to every woman (every feature point of the other model) and vice versa, to reduce our feature point correspondence problem to a stable matching problem (a stable marriage problem), which can be solved in $O(p^2 \log p)$ time [9, 10]. In our stable matching problem, monogamy is assumed. Since the number of source feature points may differ from that of target feature points, not every feature point has a counterpart. However, the algorithm guarantees a list of $h$ matching pairs of feature points, where $h = \min\{ m_S, m_T \}$, $m_S$ is the number of source feature points, and $m_T$ is that of target feature points. Moreover, the preference list of a feature point in a face model yields only a relative ordering of the feature points in the other face model in terms of movement coherency. Therefore, some matching pairs may have low coherency values. As postprocessing, the list of matching pairs is scanned to filter out those with movement coherency values less than a threshold $\gamma$. Empirically, filtering works well with $\gamma = 0.05$.
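The matching itself can be computed with the Gale-Shapley proposal algorithm [9]. Below is a minimal Python sketch in which sources propose, coherency serves as the mutual preference, and the final filter applies the threshold above; the data layout and names are our own illustration:

```python
def match_feature_points(source, target, coherency, gamma=0.05):
    """Stable matching of source to target feature points.

    coherency : dict mapping (s, t) to c_st for s in source, t in target.
    Returns the matched pairs whose coherency is at least gamma.
    """
    # Each source point proposes in decreasing order of preference.
    prefs = {s: sorted(target, key=lambda t: -coherency[(s, t)]) for s in source}
    next_choice = {s: 0 for s in source}
    partner = {}                              # target point -> source point
    free = list(source)
    while free:
        s = free.pop()
        if next_choice[s] >= len(prefs[s]):
            continue                          # s stays unmatched (|F_S| != |F_T|)
        t = prefs[s][next_choice[s]]
        next_choice[s] += 1
        if t not in partner:
            partner[t] = s
        elif coherency[(partner[t], t)] < coherency[(s, t)]:
            free.append(partner[t])           # t prefers s; old partner is freed
            partner[t] = s
        else:
            free.append(s)                    # t rejects s; s proposes again
    return [(s, t) for t, s in partner.items() if coherency[(s, t)] >= gamma]
```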

Provided with a collection of target feature point groups, each corresponding to a source feature point group, we classify the vertices of the target face model into regions for region segmentation. For later reference, we denote the sets of source and target regions by $R_S$ and $R_T$, respectively, where

$$R_S = \{ R_{S_1}, R_{S_2}, \ldots, R_{S_g} \} \quad \text{and} \quad R_T = \{ R_{T_1}, R_{T_2}, \ldots, R_{T_g} \}. \qquad (7)$$

$R_{S_l}$ and $R_{T_l}$, $1 \le l \le g$, have been derived from the source and target feature point groups $F_{S_l}$ and $F_{T_l}$, respectively. Figure 6 shows results of region segmentation.

Figure 6: Region segmentation.

3.3 Parameterization

Using the set $R_S$ of the regions of the source face model, we segment each of the source key face models into regions, which can be done trivially since the source key face models share the same mesh configuration with the source (base) face model. We remind the readers that the source face model itself is the source key face model with a neutral expression. The target key models are segmented in a symmetrical manner. Every region $R_{S_l}$, $1 \le l \le g$, of the source face model gives rise to a set of source key regions. We place these key regions in their parameter space and derive a weight function to blend the corresponding target key regions at run time. As described in [17], we first construct the feature vector of each source key region by concatenating the 3D coordinates of its feature points, and then apply principal component analysis (PCA) to the feature vectors to compute the parameter vector of the key region. Finally, the weight function for the region is derived using cardinal basis functions [21].
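The PCA step can be sketched as follows in Python/NumPy. The cardinal-basis weight functions of [21] are omitted, and the array layout, the choice of parameter dimension, and all names are our own illustration:

```python
import numpy as np

def parameterize_region(key_feature_coords, dims=2):
    """Project each key region's feature vector (the concatenated 3D feature
    point coordinates) into a low-dimensional PCA parameter space.

    key_feature_coords : (K, 3m) array, one row per key model, for a region
                         with m feature points; assumes dims <= min(K, 3m).
    """
    mean = key_feature_coords.mean(axis=0)
    X = key_feature_coords - mean
    _, _, Vt = np.linalg.svd(X, full_matrices=False)   # principal axes
    basis = Vt[:dims]                                  # (dims, 3m)
    params = X @ basis.T                               # parameter vector per key model
    return mean, basis, params

def region_parameter(feature_vector, mean, basis):
    """Parameter vector of an input region at run time."""
    return (feature_vector - mean) @ basis.T
```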

4 Run-time expression transfer

At run time, the head motion of the input face model, its gaze direction, and the blending weight for each vertex are estimated frame by frame and applied to the target face model. The head motion is a rigid motion determined by the rotation matrices of the skull and neck joints, as shown in Figure 7. Based on a typical skinning scheme [7], the relationship between the vertex positions of the head and these matrices can be formulated as an over-constrained linear system of equations that can be solved by least squares approximation. Given the head motion, the rotation matrix of an eyeball can also be estimated in a similar manner. For details on the extraction of the head motion and the gaze direction, we refer the readers to the work in [7].

Figure 7: Skull joint (A), neck joint (B), and gaze direction (CD).

To compute the blending weights of the vertices of the source key models for a vertex of the input face model, we adopt the feature-based blend shape scheme based on scattered data interpolation, which is well explained in [17]. We segment the input face model into regions and compute the parameter vector of every input region. Using this vector, we evaluate the blending functions of the source key regions for the input region to obtain their blending weights.

Given the blending weights of the source key regions for each region of the input face model, we could apply the weights to their respective target key regions to obtain an output region and simply stitch all output regions thus synthesized to obtain the output face model at a frame. However, this naive scheme brings in unexpected visual artifacts, since vertices may be duplicated in two or more regions of the face model. To overcome this anomaly, we instead adjust the blending weight for every duplicated vertex. Suppose that a vertex belongs to two or more regions. Then, we take as its blending weight the weighted sum of the blending weights of the key regions containing the vertex. The weight for the blending weight of each key region is inversely proportional to the Euclidean distance from the vertex to the center of the region, which is the average position of the feature points in the region. Results of expression transfer are exhibited in Figure 8.
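A minimal sketch of this per-vertex composition; the data layout and names are our own illustration, not the authors' implementation:

```python
import numpy as np

def composite_vertex(region_ids, centers, blend_weights, key_positions, v0):
    """Output position of a vertex that belongs to several overlapping regions.

    region_ids    : indices l of the regions containing the vertex.
    centers       : dict l -> center of region l (mean of its feature points).
    blend_weights : dict l -> (K,) blending weights of region l's key regions.
    key_positions : dict l -> (K, 3) positions of the vertex in the K target
                    key models, restricted to region l.
    v0            : the vertex position in the base model.
    """
    pos, total = np.zeros(3), 0.0
    for l in region_ids:
        # Region l's blend shape result for this vertex.
        p_l = np.einsum('k,kj->j', blend_weights[l], key_positions[l])
        # Region weight inversely proportional to the distance to the
        # region center (guarded against a zero distance).
        a = 1.0 / max(np.linalg.norm(v0 - centers[l]), 1e-8)
        pos += a * p_l
        total += a
    return pos / total
```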

Figure 8: Results of expression transfer.

5 Results

In this section, we exhibit results obtained from four sets of experiments: one set for key model analysis and the other three sets for run-time facial expression transfer. The experiments were performed on an Intel Pentium 4 PC (2.8 GHz processor; 2 GB RAM; RADEON X800). We used eight face models, as shown in Figure 9. Table 1 shows the numbers of vertices and polygons in each face model, together with the number of key models used. The first two models have also been used for explanation purposes (see Figures 3, 4, 5, 6, and 7).

The first set of experiments was performed to show the effectiveness of our key region extraction scheme. We set $\gamma = 0.05$ identically for feature point grouping, feature point correspondence establishment, and vertex classification. For movement coherency, we set $w_1 = 2^{-1}$, $w_2 = 2^{0}$, and $w_3 = 2^{-2}$ in Equation 3. Table 2 gives the number of feature points and that of segmented regions for each face model. For the latter, we counted the regions extracted from a face model when the model was used as a source face model. The number of segmented regions for MIT-face is the smallest over all face models, even though it has the largest number of extracted feature points. We guess that this counterintuitive result was probably caused by the lack of available key models for MIT-face (see Table 1).

The next two sets of experiments were conducted to demonstrate the accuracy and efficiency of our approach. To measure the accuracy, self-cloning was done for the face model Man in two ways: direct self-cloning (from Man to Man) and indirect self-cloning (first from Man to X and then from X back to Man). The cloning error $\varepsilon$ is measured as follows:

$$\varepsilon = \frac{\sum_{j=1}^{n} \| x'_j - x_j \|}{\sum_{j=1}^{n} \| x_j \|}, \qquad (8)$$

where $x_j$ and $x'_j$ are the original and cloned positions of a vertex $v_j$ of the face model Man, and $n$ is the number of vertices in the model. The input animation for Man consists of 1500 frames. The results are collected in Table 3.

For comparison with the work of Park et al. [17], we use the pair of models Guy and Lady, since Park et al. also used these models. The input animation consists of 1820 frames. The results are summarized in Table 4. Even with no user-provided information except the key models, our approach sacrificed little in cloning accuracy or efficiency.
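For reference, Equation 8 amounts to the following few lines of Python/NumPy (our own illustration):

```python
import numpy as np

def cloning_error(original, cloned):
    """Relative cloning error of Eq. 8: total displacement of the cloned
    vertices divided by the total magnitude of the original positions.

    original, cloned : (n, 3) arrays of vertex positions.
    """
    return (np.linalg.norm(cloned - original, axis=1).sum()
            / np.linalg.norm(original, axis=1).sum())
```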

Figure 9: Face models. (a) Man, (b) Roney, (c) MIT-face, (d) Gorilla, (e) Toy, (f) Cartoon (2D), (g) Guy, and (h) Lady.

Table 1: Key model specification

  Model      Appearance     # vertices   # polygons   # key models
  Man        Figure 9(a)
  Roney      Figure 9(b)
  MIT-face   Figure 9(c)
  Gorilla    Figure 9(d)
  Toy        Figure 9(e)
  Cartoon    Figure 9(f)
  Guy        Figure 9(g)
  Lady       Figure 9(h)

  † neutral, joy, surprise, anger, sadness, and disgust.
  ‡ neutral, joy, surprise, anger, sadness, disgust, and sleepiness.

Table 2: Key model analysis

  Model      # feature points   # regions
  Man
  Roney
  MIT-face
  Gorilla
  Toy        66                 4
  Cartoon
  Guy
  Lady

Table 3: Self-cloning errors for Man

  Type                    Intermediate face model   Error (%)
  direct self-cloning
  indirect self-cloning   Roney
                          MIT-face
                          Gorilla
                          Toy
                          Cartoon

Table 4: Comparison with Park et al.'s approach

  Type                    Approach        Errors    Time (key model analysis)   Time (run-time transfer)
  direct cloning          Ours            0.094%    3.11 sec                    msec. (1041)
  (Lady to Lady)          Park et al.'s   0.072%    0.96 sec                    msec. (1176)
  indirect cloning        Ours            0.298%    7.06 sec                    msec. (735)
  (Lady to Guy to Lady)   Park et al.'s   0.244%    2.24 sec                    msec. (806)

  ( ): average frames per second

The last set of experiments was performed to demonstrate the visual quality of cloned animations. In the first five experiments, the model Man was used as the source face model, and the five models MIT-face, Roney, Gorilla, Toy, and Cartoon were used as the target models. Note that Toy and Cartoon are respectively a 3D object and a 2D picture, whose topological characteristics differ from those of the source model Man. In the next experiment, the transfer of asymmetric facial gestures was emphasized; the face models Lady and Guy were used as the source and target models, respectively. Finally, we show a comprehensive demonstration of facial expression cloning including facial expressions, asymmetric facial gestures, head motions, and gaze directions. The results are given in the accompanying movie file.

6 Discussion

In our approach, the contents and style of the output animation are specified by the input animation and the key models, respectively. From the user's point of view, correspondence establishment between the source and target key models is rather trivial if these models have to be created manually. However, our question is: are key models really needed in blend shape-based expression cloning? For example, the style of an output animation could be learned from an already-existing example animation instead of user-provided key models, if a facial animation library were rich enough to have a facial animation for any face model. We believe that such a library is unlikely ever to be realized, although the question itself is an excellent research topic. A crucial assumption behind our approach is that there is no information source from which to learn the style of the output animation except the animator herself. In particular, we assume that no example animation for the target face model is available.

We have developed a scheme for automatic feature point extraction, interpreting a face mesh as a mass-spring network. Obviously, our mass-spring network cannot capture the face movement caused by jaw rotation. However, we have not experienced any problems with the feature points thus extracted. Our conjecture is that the vertex classification by movement coherency absorbs this modeling deficiency. Further investigation is needed to verify this conjecture.

To evaluate Equation 3 for the computation of movement coherency, the user manually specifies the multiplicative weight parameters $w_1$, $w_2$, and $w_3$. Through trial and error, we found that the parameter values given in Section 5 work well with minor adjustments. The user also manually specifies the threshold parameter $\gamma$ for key model analysis, including feature point grouping, feature point correspondence establishment, and vertex classification. Again, the threshold value given in Section 5 works with minor adjustments.

7 Conclusions

Initiated by our conjecture that the source and target key models, together with their correspondence, encode the information on an animation style, we have successfully addressed the three issues raised in the beginning of this paper: the extraction, correspondence establishment, and grouping of the feature points on the source and target face models. The solutions together facilitate automatic region segmentation and registration while preserving the inherent capability of blend shape approaches [20, 15, 17], which greatly reduces the animator's burden. Based on the notions of spring energy and movement coherency, our success is mainly ascribed to formulating the issues so that classical results in combinatorics can be applied. Our eventual research goal is to remove the key models to further reduce the animator's workload. We believe that the style of an animation could be learned from example animations, if available, rather than from key models.

References

[1] C. Bregler, L. Loeb, E. Chuang, and H. Deshpande. Turning to the masters: Motion capturing cartoons. In Proc. ACM SIGGRAPH, 2002.

[2] I. Buck, A. Finkelstein, and C. Jacobs. Performance-driven hand-drawn animation. In Symposium on Non-Photorealistic Animation and Rendering, 2000.

[3] Jin Chai, Jing Xiao, and Jessica Hodgins. Vision-based control of 3D facial animation. In ACM SIGGRAPH/Eurographics Symposium on Computer Animation, 2003.

[4] Byoungwon Choe, Hanook Lee, and Hyeongseok Ko. Performance-driven muscle-based facial animation. Journal of Visualization and Computer Animation, 12(2):67-79, 2001.

[5] Erika Chuang and Chris Bregler. Performance driven facial animation using blendshape interpolation. Stanford University Computer Science Technical Report CS-TR-2002-02, 2002.

[6] Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms. The MIT Press, 2001.

[7] Randima Fernando. GPU Gems. Addison Wesley, 2004.

[8] Douglas Fidaleo, Junyong Noh, Taeyong Kim, Reyes Enciso, and Ulrich Neumann. Classification and volume morphing for performance-driven facial animation. In International Workshop on Digital and Computational Video.

[9] D. Gale and L. S. Shapley. College admissions and the stability of marriage. American Mathematical Monthly, 69:9-15, 1962.

[10] D. Gusfield and R. W. Irving. The Stable Marriage Problem: Structure and Algorithms. MIT Press, Cambridge, 1989.

[11] Pushkar Joshi, Wen C. Tien, Mathieu Desbrun, and F. Pighin. Learning controls for blend shape based realistic facial animation. In ACM SIGGRAPH/Eurographics Symposium on Computer Animation, 2003.

[12] J. Kleiser. A fast, efficient, accurate way to represent the human face. In ACM SIGGRAPH '89 Course #22 Notes, 1989.

[13] Cyriaque Kouadio, Pierre Poulin, and Pierre Lachapelle. Real-time facial animation based upon a bank of 3D facial expressions. In Computer Animation, 1998.

[14] I-Chen Lin, Jeng-Sheng Yeh, and Ming Ouhyoung. Realistic 3D facial animation parameters from mirror-reflected multi-view video. In IEEE Computer Animation, 2001.

[15] K. Na and M. Jung. Hierarchical retargetting of fine facial motions. Computer Graphics Forum, 23(3), 2004.

[16] J. Noh and U. Neumann. Expression cloning. In Proc. ACM SIGGRAPH, 2001.

[17] Bongcheol Park, Heejin Chung, Tomoyuki Nishita, and Sung Yong Shin. A feature-based approach to facial expression cloning. Computer Animation and Virtual Worlds, 16(3-4), 2005.

[18] F. I. Parke. Computer generated animation of faces. In Proc. ACM National Conference, 1972.

[19] F. Pighin, R. Szeliski, and D. H. Salesin. Resynthesizing facial animation through 3D model-based tracking. In IEEE International Conference on Computer Vision, 1999.

[20] H. Pyun, Y. Kim, W. Chae, H. Y. Kang, and S. Y. Shin. An example-based approach for facial expression cloning. In ACM SIGGRAPH/Eurographics Symposium on Computer Animation, 2003.

[21] P.-P. Sloan, C. F. Rose, and M. F. Cohen. Shape by example. In Symposium on Interactive 3D Graphics, 2001.

[22] Robert W. Sumner and Jovan Popović. Deformation transfer for triangle meshes. In Proc. ACM SIGGRAPH, 2004.

[23] Daniel Vlasic, Matthew Brand, Hanspeter Pfister, and Jovan Popović. Face transfer with multilinear models. In Proc. ACM SIGGRAPH, 2005.

[24] L. Williams. Performance-driven facial animation. Computer Graphics (Proc. SIGGRAPH '90), 24(4), 1990.


TEXTURE OVERLAY ONTO NON-RIGID SURFACE USING COMMODITY DEPTH CAMERA

TEXTURE OVERLAY ONTO NON-RIGID SURFACE USING COMMODITY DEPTH CAMERA TEXTURE OVERLAY ONTO NON-RIGID SURFACE USING COMMODITY DEPTH CAMERA Tomoki Hayashi 1, Francois de Sorbier 1 and Hideo Saito 1 1 Graduate School of Science and Technology, Keio University, 3-14-1 Hiyoshi,

More information

Skeletal deformation

Skeletal deformation CS 523: Computer Graphics, Spring 2009 Shape Modeling Skeletal deformation 4/22/2009 1 Believable character animation Computers games and movies Skeleton: intuitive, low dimensional subspace Clip courtesy

More information

Occluded Facial Expression Tracking

Occluded Facial Expression Tracking Occluded Facial Expression Tracking Hugo Mercier 1, Julien Peyras 2, and Patrice Dalle 1 1 Institut de Recherche en Informatique de Toulouse 118, route de Narbonne, F-31062 Toulouse Cedex 9 2 Dipartimento

More information

Video based Animation Synthesis with the Essential Graph. Adnane Boukhayma, Edmond Boyer MORPHEO INRIA Grenoble Rhône-Alpes

Video based Animation Synthesis with the Essential Graph. Adnane Boukhayma, Edmond Boyer MORPHEO INRIA Grenoble Rhône-Alpes Video based Animation Synthesis with the Essential Graph Adnane Boukhayma, Edmond Boyer MORPHEO INRIA Grenoble Rhône-Alpes Goal Given a set of 4D models, how to generate realistic motion from user specified

More information

Fast Facial Motion Cloning in MPEG-4

Fast Facial Motion Cloning in MPEG-4 Fast Facial Motion Cloning in MPEG-4 Marco Fratarcangeli and Marco Schaerf Department of Computer and Systems Science University of Rome La Sapienza frat,schaerf@dis.uniroma1.it Abstract Facial Motion

More information

Reading. Animation principles. Required:

Reading. Animation principles. Required: Reading Required: Animation principles John Lasseter. Principles of traditional animation applied to 3D computer animation. Proceedings of SIGGRAPH (Computer Graphics) 21(4): 35-44, July 1987. Recommended:

More information

S U N G - E U I YO O N, K A I S T R E N D E R I N G F R E E LY A VA I L A B L E O N T H E I N T E R N E T

S U N G - E U I YO O N, K A I S T R E N D E R I N G F R E E LY A VA I L A B L E O N T H E I N T E R N E T S U N G - E U I YO O N, K A I S T R E N D E R I N G F R E E LY A VA I L A B L E O N T H E I N T E R N E T Copyright 2018 Sung-eui Yoon, KAIST freely available on the internet http://sglab.kaist.ac.kr/~sungeui/render

More information

Facial Expression Recognition using Principal Component Analysis with Singular Value Decomposition

Facial Expression Recognition using Principal Component Analysis with Singular Value Decomposition ISSN: 2321-7782 (Online) Volume 1, Issue 6, November 2013 International Journal of Advance Research in Computer Science and Management Studies Research Paper Available online at: www.ijarcsms.com Facial

More information

A GRAPH FROM THE VIEWPOINT OF ALGEBRAIC TOPOLOGY

A GRAPH FROM THE VIEWPOINT OF ALGEBRAIC TOPOLOGY A GRAPH FROM THE VIEWPOINT OF ALGEBRAIC TOPOLOGY KARL L. STRATOS Abstract. The conventional method of describing a graph as a pair (V, E), where V and E repectively denote the sets of vertices and edges,

More information

Motion Texture. Harriet Pashley Advisor: Yanxi Liu Ph.D. Student: James Hays. 1. Introduction

Motion Texture. Harriet Pashley Advisor: Yanxi Liu Ph.D. Student: James Hays. 1. Introduction Motion Texture Harriet Pashley Advisor: Yanxi Liu Ph.D. Student: James Hays 1. Introduction Motion capture data is often used in movies and video games because it is able to realistically depict human

More information

Abstract We present a system which automatically generates a 3D face model from a single frontal image of a face. Our system consists of two component

Abstract We present a system which automatically generates a 3D face model from a single frontal image of a face. Our system consists of two component A Fully Automatic System To Model Faces From a Single Image Zicheng Liu Microsoft Research August 2003 Technical Report MSR-TR-2003-55 Microsoft Research Microsoft Corporation One Microsoft Way Redmond,

More information

CS443: Digital Imaging and Multimedia Perceptual Grouping Detecting Lines and Simple Curves

CS443: Digital Imaging and Multimedia Perceptual Grouping Detecting Lines and Simple Curves CS443: Digital Imaging and Multimedia Perceptual Grouping Detecting Lines and Simple Curves Spring 2008 Ahmed Elgammal Dept. of Computer Science Rutgers University Outlines Perceptual Grouping and Segmentation

More information

3-D Morphing by Direct Mapping between Mesh Models Using Self-organizing Deformable Model

3-D Morphing by Direct Mapping between Mesh Models Using Self-organizing Deformable Model 3-D Morphing by Direct Mapping between Mesh Models Using Self-organizing Deformable Model Shun Matsui Ken ichi Morooka Hiroshi Nagahashi Tokyo Institute of Technology Kyushu University Tokyo Institute

More information

Motion Interpretation and Synthesis by ICA

Motion Interpretation and Synthesis by ICA Motion Interpretation and Synthesis by ICA Renqiang Min Department of Computer Science, University of Toronto, 1 King s College Road, Toronto, ON M5S3G4, Canada Abstract. It is known that high-dimensional

More information

03 - Reconstruction. Acknowledgements: Olga Sorkine-Hornung. CSCI-GA Geometric Modeling - Spring 17 - Daniele Panozzo

03 - Reconstruction. Acknowledgements: Olga Sorkine-Hornung. CSCI-GA Geometric Modeling - Spring 17 - Daniele Panozzo 3 - Reconstruction Acknowledgements: Olga Sorkine-Hornung Geometry Acquisition Pipeline Scanning: results in range images Registration: bring all range images to one coordinate system Stitching/ reconstruction:

More information

VIDEO STABILIZATION WITH L1-L2 OPTIMIZATION. Hui Qu, Li Song

VIDEO STABILIZATION WITH L1-L2 OPTIMIZATION. Hui Qu, Li Song VIDEO STABILIZATION WITH L-L2 OPTIMIZATION Hui Qu, Li Song Institute of Image Communication and Network Engineering, Shanghai Jiao Tong University ABSTRACT Digital videos often suffer from undesirable

More information

Interactive Scientific Visualization of Polygonal Knots

Interactive Scientific Visualization of Polygonal Knots Interactive Scientific Visualization of Polygonal Knots Abstract Dr. Kenny Hunt Computer Science Department The University of Wisconsin La Crosse hunt@mail.uwlax.edu Eric Lunde Computer Science Department

More information

Question 2: Linear algebra and transformations matrices rotation vectors linear transformation T=U*D*VT

Question 2: Linear algebra and transformations matrices rotation vectors linear transformation T=U*D*VT You must answer all questions. For full credit, an answer must be both correct and well-presented (clear and concise). If you feel a question is ambiguous, state any assumptions that you need to make.

More information

An Efficient Approach for Emphasizing Regions of Interest in Ray-Casting based Volume Rendering

An Efficient Approach for Emphasizing Regions of Interest in Ray-Casting based Volume Rendering An Efficient Approach for Emphasizing Regions of Interest in Ray-Casting based Volume Rendering T. Ropinski, F. Steinicke, K. Hinrichs Institut für Informatik, Westfälische Wilhelms-Universität Münster

More information

Mobile Cloud Multimedia Services Using Enhance Blind Online Scheduling Algorithm

Mobile Cloud Multimedia Services Using Enhance Blind Online Scheduling Algorithm Mobile Cloud Multimedia Services Using Enhance Blind Online Scheduling Algorithm Saiyad Sharik Kaji Prof.M.B.Chandak WCOEM, Nagpur RBCOE. Nagpur Department of Computer Science, Nagpur University, Nagpur-441111

More information

Iterative Estimation of 3D Transformations for Object Alignment

Iterative Estimation of 3D Transformations for Object Alignment Iterative Estimation of 3D Transformations for Object Alignment Tao Wang and Anup Basu Department of Computing Science, Univ. of Alberta, Edmonton, AB T6G 2E8, Canada Abstract. An Iterative Estimation

More information

Virtual Marionettes: A System and Paradigm for Real-Time 3D Animation

Virtual Marionettes: A System and Paradigm for Real-Time 3D Animation Virtual Marionettes: A System and Paradigm for Real-Time 3D Animation Adi Bar-Lev, Alfred M. Bruckstein, Gershon Elber Computer Science Department Technion, I.I.T. 32000 Haifa, Israel Abstract This paper

More information

Study of Panelization Techniques to Inform Freeform Architecture

Study of Panelization Techniques to Inform Freeform Architecture Study of Panelization Techniques to Inform Freeform Architecture Daniel Hambleton, Crispin Howes, Jonathan Hendricks, John Kooymans Halcrow Yolles Keywords 1 = Freeform geometry 2 = Planar quadrilateral

More information

Vision-based Control of 3D Facial Animation

Vision-based Control of 3D Facial Animation Eurographics/SIGGRAPH Symposium on Computer Animation (2003) D. Breen, M. Lin (Editors) Vision-based Control of 3D Facial Animation Jin-xiang Chai,1 Jing Xiao1 and Jessica Hodgins1 1 The Robotics Institute,

More information

Edge Equalized Treemaps

Edge Equalized Treemaps Edge Equalized Treemaps Aimi Kobayashi Department of Computer Science University of Tsukuba Ibaraki, Japan kobayashi@iplab.cs.tsukuba.ac.jp Kazuo Misue Faculty of Engineering, Information and Systems University

More information

3D Physics Engine for Elastic and Deformable Bodies. Liliya Kharevych and Rafi (Mohammad) Khan Advisor: David Mount

3D Physics Engine for Elastic and Deformable Bodies. Liliya Kharevych and Rafi (Mohammad) Khan Advisor: David Mount 3D Physics Engine for Elastic and Deformable Bodies Liliya Kharevych and Rafi (Mohammad) Khan Advisor: David Mount University of Maryland, College Park December 2002 Abstract The purpose of this project

More information

Image Base Rendering: An Introduction

Image Base Rendering: An Introduction Image Base Rendering: An Introduction Cliff Lindsay CS563 Spring 03, WPI 1. Introduction Up to this point, we have focused on showing 3D objects in the form of polygons. This is not the only approach to

More information

Animating cuts with on-the-fly re-meshing

Animating cuts with on-the-fly re-meshing EUROGRAPHICS 2001 / Jonathan C. Roberts Short Presentations Animating cuts with on-the-fly re-meshing F. Ganovelli and C. O Sullivan Image Synthesis Group, Computer Science Department, Trinity College

More information

Efficient Rendering of Glossy Reflection Using Graphics Hardware

Efficient Rendering of Glossy Reflection Using Graphics Hardware Efficient Rendering of Glossy Reflection Using Graphics Hardware Yoshinori Dobashi Yuki Yamada Tsuyoshi Yamamoto Hokkaido University Kita-ku Kita 14, Nishi 9, Sapporo 060-0814, Japan Phone: +81.11.706.6530,

More information

Meshless Modeling, Animating, and Simulating Point-Based Geometry

Meshless Modeling, Animating, and Simulating Point-Based Geometry Meshless Modeling, Animating, and Simulating Point-Based Geometry Xiaohu Guo SUNY @ Stony Brook Email: xguo@cs.sunysb.edu http://www.cs.sunysb.edu/~xguo Graphics Primitives - Points The emergence of points

More information

Human Body Shape Deformation from. Front and Side Images

Human Body Shape Deformation from. Front and Side Images Human Body Shape Deformation from Front and Side Images Yueh-Ling Lin 1 and Mao-Jiun J. Wang 2 Department of Industrial Engineering and Engineering Management, National Tsing Hua University, Hsinchu, Taiwan

More information

Depth Estimation for View Synthesis in Multiview Video Coding

Depth Estimation for View Synthesis in Multiview Video Coding MITSUBISHI ELECTRIC RESEARCH LABORATORIES http://www.merl.com Depth Estimation for View Synthesis in Multiview Video Coding Serdar Ince, Emin Martinian, Sehoon Yea, Anthony Vetro TR2007-025 June 2007 Abstract

More information

Rigging / Skinning. based on Taku Komura, Jehee Lee and Charles B.Own's slides

Rigging / Skinning. based on Taku Komura, Jehee Lee and Charles B.Own's slides Rigging / Skinning based on Taku Komura, Jehee Lee and Charles B.Own's slides Skeletal Animation Victoria 2 CSE 872 Dr. Charles B. Owen Advanced Computer Graphics Skinning http://www.youtube.com/watch?

More information

Bending Circle Limits

Bending Circle Limits Proceedings of Bridges 2013: Mathematics, Music, Art, Architecture, Culture Bending Circle Limits Vladimir Bulatov Corvallis Oregon, USA info@bulatov.org Abstract M.C.Escher s hyperbolic tessellations

More information

Chapter 2 Basic Structure of High-Dimensional Spaces

Chapter 2 Basic Structure of High-Dimensional Spaces Chapter 2 Basic Structure of High-Dimensional Spaces Data is naturally represented geometrically by associating each record with a point in the space spanned by the attributes. This idea, although simple,

More information

Methods and Models for Combinatorial Optimization Exact methods for the Traveling Salesman Problem

Methods and Models for Combinatorial Optimization Exact methods for the Traveling Salesman Problem Methods and Models for Combinatorial Optimization Exact methods for the Traveling Salesman Problem L. De Giovanni M. Di Summa The Traveling Salesman Problem (TSP) is an optimization problem on a directed

More information

Singularity Analysis of an Extensible Kinematic Architecture: Assur Class N, Order N 1

Singularity Analysis of an Extensible Kinematic Architecture: Assur Class N, Order N 1 David H. Myszka e-mail: dmyszka@udayton.edu Andrew P. Murray e-mail: murray@notes.udayton.edu University of Dayton, Dayton, OH 45469 James P. Schmiedeler The Ohio State University, Columbus, OH 43210 e-mail:

More information

Human Character Animation in 3D-Graphics: The EMOTE System as a Plug-in for Maya

Human Character Animation in 3D-Graphics: The EMOTE System as a Plug-in for Maya Hartmann - 1 Bjoern Hartman Advisor: Dr. Norm Badler Applied Senior Design Project - Final Report Human Character Animation in 3D-Graphics: The EMOTE System as a Plug-in for Maya Introduction Realistic

More information

Topic: 1-One to Five

Topic: 1-One to Five Mathematics Curriculum Kindergarten Suggested Blocks of Instruction: 12 days /September Topic: 1-One to Five Know number names and the count sequence. K.CC.3. Write numbers from 0 to 20. Represent a number

More information

Reducing Blendshape Interference by Selected Motion Attenuation

Reducing Blendshape Interference by Selected Motion Attenuation Reducing Blendshape Interference by Selected Motion Attenuation J.P. Lewis, Jonathan Mooser, Zhigang Deng, and Ulrich Neumann Computer Graphics and Immersive Technology Lab University of Southern California

More information

Simple Silhouettes for Complex Surfaces

Simple Silhouettes for Complex Surfaces Eurographics Symposium on Geometry Processing(2003) L. Kobbelt, P. Schröder, H. Hoppe (Editors) Simple Silhouettes for Complex Surfaces D. Kirsanov, P. V. Sander, and S. J. Gortler Harvard University Abstract

More information

3D Editing System for Captured Real Scenes

3D Editing System for Captured Real Scenes 3D Editing System for Captured Real Scenes Inwoo Ha, Yong Beom Lee and James D.K. Kim Samsung Advanced Institute of Technology, Youngin, South Korea E-mail: {iw.ha, leey, jamesdk.kim}@samsung.com Tel:

More information

Distributed minimum spanning tree problem

Distributed minimum spanning tree problem Distributed minimum spanning tree problem Juho-Kustaa Kangas 24th November 2012 Abstract Given a connected weighted undirected graph, the minimum spanning tree problem asks for a spanning subtree with

More information

Sketching Articulation and Pose for Facial Meshes

Sketching Articulation and Pose for Facial Meshes Sketching Articulation and Pose for Facial Meshes Edwin Chang Brown University Advisor: Odest Chadwicke Jenkins Brown University Figure 1: A reference curve (green) and target curve (blue) are sketched

More information

Simulation in Computer Graphics. Deformable Objects. Matthias Teschner. Computer Science Department University of Freiburg

Simulation in Computer Graphics. Deformable Objects. Matthias Teschner. Computer Science Department University of Freiburg Simulation in Computer Graphics Deformable Objects Matthias Teschner Computer Science Department University of Freiburg Outline introduction forces performance collision handling visualization University

More information

Motion Synthesis and Editing. Yisheng Chen

Motion Synthesis and Editing. Yisheng Chen Motion Synthesis and Editing Yisheng Chen Overview Data driven motion synthesis automatically generate motion from a motion capture database, offline or interactive User inputs Large, high-dimensional

More information