3D Morphable Model Based Face Replacement in Video


Yi-Ting Cheng, Virginia Tzeng, Yung-Yu Chuang, Ming Ouhyoung
Dept. of Computer Science and Information Engineering, National Taiwan University
{zzz88213, mantic, cyy,

ABSTRACT

Face replacement in video is a useful application in the entertainment and special-effects industries. However, the frame-by-frame manipulation process in conventional software is time-consuming and labor-intensive. We present a system that replaces the face of a target subject in a target video with the face of a source subject, under similar pose, expression, and illumination. Our approach is based on a 3D morphable model and an expression database, and the input required of the source subject is reduced to one or two images. The face replacement procedure requires little user interference. To demonstrate the effectiveness of our system, we test our implementation on both movie clips and video that we captured ourselves.

Keywords: Face replacement, 3D morphable model, Face alignment, Face relighting.

1. INTRODUCTION

With rapid advances in information technology, users can access a huge number of videos, either from digital cameras or from the Internet. Editing and modifying these videos to create new digital works has therefore become an interesting and important issue. This paper focuses on one problem in this area: replacing faces in video. The ability to replace the face in a video with another person's face is useful in the entertainment and special-effects industries. Most digital processing software can perform face replacement in images only when the poses of the source and target faces are similar, and the manipulation process is time-consuming and labor-intensive. With these drawbacks, performing replacement in video with such software is almost impossible.

There are plenty of related works on face replacement, but most of them apply only to images, while we focus on face replacement in video. The most naive method is to ask the source subject to act the same as the target subject under a similar lighting condition, so that the pose, expression, and lighting match. However, the acquisition complexity of this method is so high that it is impracticable. Thus, we propose a 3D-model-based approach to the face replacement problem; it handles pose and expression naturally and reduces the acquisition complexity of the source subject to one or two images.

In this paper, we present a system for face replacement in video that replaces the face of the target subject in the target video with the face of the source subject, under similar pose, expression, and illumination. Our model-based approach reduces the data required of the source subject to one or two images. The approach is based on the 3D morphable model [6] and an expression model database to deal with expressions.

2. RELATED WORK

2.1. Image-Based Modeling

Modeling human faces from images has been a computer graphics research topic for years. Some approaches rely on manual assistance for matching a deformable 3D face model to images. Pighin et al. [11] employ a user-assisted technique to fit the particular geometry of the subject's face. They are able to generate realistic face models, but with a manually intensive procedure. Liu et al. [8] develop a system that constructs textured 3D face models from videos and images with minimal user interaction. Their system requires the user to label several points on two base images, and then uses feature extraction and tracking techniques to reconstruct the 3D model.

Our work is based on Blanz and Vetter's work [6]. They use an example set of 3D face models to derive a morphable face model, and use the morphable model to reconstruct a 3D face from single or multiple images.

2.2. Expression Manipulation

Noh and Neumann [13] present an approach to retargeting existing facial expressions of one model to another. Based on 3D geometry morphing between the source and target face models, their approach transfers the facial motion vectors from a source model to a target model in order to clone expressions. Pyun et al. [12] present an example-based approach for cloning facial expressions while reflecting the characteristic features of the target model. We refer to their observation about the relation between emotional and verbal expressions, and to their method of parameterization and expression blending. Blanz et al. [4] present a method for photo-realistic animation in an image or a video. Based on a set of static laser scans of one person, the system transfers mouth movements and expressions across individuals, and automatically estimates 3D shape and rendering parameters to reanimate the model with new mouth movements.

2.3. Face Replacement

Bitouk et al. [3] present a system for automatic face replacement in images. They use a large database of face images from the Internet and select a candidate face from it. In contrast to their image-based method, our face replacement algorithm is based on a 3D morphable model, and our input for the source subject is simply one or two images. Another work, by Blanz et al. [5], replaces the faces in images by reconstructing 3D morphable models of both the source and target subject faces. They replace the face region with the source subject model rendered under the pose and lighting parameters of the target subject.

3. SYSTEM OVERVIEW

Figure 1 shows an overview of our approach. The system takes a target video and one source image as input, and the output is the video with the target subject's face replaced by the source subject's face.

Figure 1: Flowchart for our face replacement approach.

Given the source image, we reconstruct the 3D model of the source subject's face using a 3D morphable model [6]. Our 3D face synthesizer derives a morphable face that fits the input image, and maps the texture from the image onto the derived 3D face model (Section 4). A face alignment algorithm is applied to the target video to detect the detailed facial features and outlines of the target subject's face [7]. A pose estimator exploits the face alignment results to estimate the head pose parameters of the target subject's face (Section 6). We employ a 3D facial expression database to clone expressions onto the source face model; to fit the expressions to the target video, we propose an algorithm that extracts the expression parameters (Section 5). In some videos, directly rendering the source subject's face model onto the target frame results in illumination inconsistency, so a relighting algorithm relights the rendered source face for illumination consistency (Section 6). Finally, we seamlessly composite the rendered source model with the target frame using Poisson blending [10].
The output is a video with the target face replaced by the source face, with similar pose, expression, and lighting.
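The paper gives no code, but the final compositing step maps directly onto OpenCV's built-in seamless cloning, which implements the Poisson blending of Pérez et al. [10]. The sketch below is a minimal illustration under that assumption; the file names and the mask construction are hypothetical.

```python
import cv2
import numpy as np

# Hypothetical inputs: a frame with the rendered source face, the original
# target frame, and a binary mask covering the rendered face region.
rendered_face = cv2.imread("rendered_source_face.png")    # rendered source model
target_frame = cv2.imread("target_frame.png")             # frame from target video
mask = cv2.imread("face_mask.png", cv2.IMREAD_GRAYSCALE)  # 255 inside the face

# Poisson blending expects the center of the region to paste into.
ys, xs = np.nonzero(mask)
center = (int(xs.mean()), int(ys.mean()))

# cv2.seamlessClone solves the Poisson equation so that the gradients of the
# source are preserved while its colors blend smoothly into the target frame.
composite = cv2.seamlessClone(rendered_face, target_frame, mask,
                              center, cv2.NORMAL_CLONE)
cv2.imwrite("composite_frame.png", composite)
```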

4. 3D FACE RECONSTRUCTION

The morphable model of 3D faces [6] is a vector space of 3D shapes and textures spanned by a set of example faces. Our morphable model is derived from structured-light 3D scans of 117 adults (92 males and 25 females). A correspondence algorithm establishes full correspondence among all the faces. Each 3D face model is represented by vertices with textures.

4.1. Morphable 3D Face Model

A face model is represented by a 3n-dimensional shape vector $S = (x_1, y_1, z_1, \ldots, x_n, y_n, z_n)^T$ and a texture vector $T = (R_1, G_1, B_1, \ldots, R_n, G_n, B_n)^T$. Each of the 117 example faces is represented by a shape vector $S_i$ and a texture vector $T_i$. We fit a multivariate normal distribution to our example faces, based on the mean shape $\mu_S = \frac{1}{m}\sum_{i=1}^{m} S_i$ and mean texture $\mu_T = \frac{1}{m}\sum_{i=1}^{m} T_i$, and then calculate the covariance matrices $C_S$ and $C_T$. Principal Component Analysis (PCA), a common technique for data compression, is used to perform a basis transformation to an orthogonal coordinate system formed by the eigenvectors $e_{S,i}$ and $e_{T,i}$, sorted in descending order of the eigenvalues $\sigma_{S,i}^2$ and $\sigma_{T,i}^2$ of the covariance matrices. Shape and texture are analyzed separately. The morphable model is

$$S_m = \mu_S + \sum_{i=1}^{m-1} \alpha_i e_{S,i}, \qquad T_m = \mu_T + \sum_{i=1}^{m-1} \beta_i e_{T,i},$$

and the prior probabilities for the coefficients $\alpha$ and $\beta$ are

$$p(\alpha) \sim e^{-\frac{1}{2}\sum_{i=1}^{m-1} \alpha_i^2 / \sigma_{S,i}^2}, \qquad p(\beta) \sim e^{-\frac{1}{2}\sum_{i=1}^{m-1} \beta_i^2 / \sigma_{T,i}^2}.$$
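As a concrete illustration of the PCA construction above, the following sketch (not from the paper) builds the shape basis from a stack of example shape vectors with NumPy; the same code applies to texture vectors. The function names and the SVD shortcut are our assumptions.

```python
import numpy as np

def build_morphable_model(shape_vectors):
    """PCA basis from example shape vectors (one 3n-dim row per face).

    A minimal sketch of the construction in Section 4.1; shape_vectors
    is an (m, 3n) array with one example face per row.
    """
    mu = shape_vectors.mean(axis=0)                  # mean shape mu_S
    X = shape_vectors - mu                           # centered data
    # SVD of the centered data yields the covariance eigenvectors without
    # ever forming the (3n x 3n) covariance matrix explicitly.
    _, s, vt = np.linalg.svd(X, full_matrices=False)
    eigvecs = vt                                     # rows e_{S,i}, descending
    eigvals = (s ** 2) / (len(shape_vectors) - 1)    # sigma^2_{S,i}
    return mu, eigvecs, eigvals

def synthesize(mu, eigvecs, alpha):
    """S_m = mu + sum_i alpha_i e_{S,i} for the leading k coefficients."""
    return mu + alpha @ eigvecs[: len(alpha)]
```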
4.2. Matching a Morphable Model to Images

Since the acquisition process for the source subject is under our control, we assume the face in the source image $I_{input}$ is illuminated fully and evenly, so we can ignore shading in the fitting. The face in the source image should be a neutral face without any expression, with eyes open, and without any occlusion. To match the morphable model to the input image more accurately, we label 13 facial feature points: the four corners of the eyes, the two corners of the mouth, the philtrum, the two wings of the nose, the bottom of the jaw, the bottom of the lower lip, and the bottoms of the two eyes. The fitting algorithm not only optimizes the model coefficients $\alpha$ and $\beta$, but also estimates the head pose.

Cost Function

A rigid-body transformation maps the object-space coordinates $x_k = (x_k, y_k, z_k)^T$ of each vertex to camera-space coordinates $x'_k = R_\gamma R_\theta R_\phi x_k + t$. The angles $\theta$, $\phi$, and $\gamma$ represent rotations around the three axes, and $t$ is a translation. A perspective projection then maps the camera coordinates to image coordinates. Given an input image, the goal is to minimize the Euclidean distance, over all color channels and all pixels, between the input image $I_{input}$ and the image $I_{model}$ synthesized from the current model:

$$E_I = \sum_{x,y} \left\| I_{input}(x, y) - I_{model}(x, y) \right\|^2.$$

To match the geometry of the model better, we exploit the labeled feature points $(q_{x,i}, q_{y,i})$ and the image-plane positions $(p_{x,k_i}, p_{y,k_i})$ of the corresponding vertices $k_i$ in an additional feature term

$$E_F = \sum_i \left\| \begin{pmatrix} q_{x,i} \\ q_{y,i} \end{pmatrix} - \begin{pmatrix} p_{x,k_i} \\ p_{y,k_i} \end{pmatrix} \right\|^2.$$

Furthermore, to match the profile line of the model better, we define an energy term on the profile line between the heights of the eyes and the jaw. At each horizontal scan line, this term pulls the x coordinate of the profile line in $I_{input}$ toward the one in $I_{model}$:

$$E_P = \sum_{i=y_{jaw}}^{y_{eye}} \left\| x_{p,i} - x'_{p,i} \right\|^2,$$

where $x_{p,i}$ and $x'_{p,i}$ are the x coordinates of the profile line on the $i$-th horizontal scan line in $I_{input}$ and $I_{model}$ respectively. Minimizing these energy functions with respect to $\alpha$, $\beta$, and the pose parameters $\rho$ may cause overfitting. Therefore, we employ a maximum a posteriori (MAP) estimator: the posterior probability is maximized by minimizing

$$E = \frac{1}{\sigma_I^2} E_I + \frac{1}{\sigma_F^2} E_F + \frac{1}{\sigma_P^2} E_P + \sum_i \frac{\alpha_i^2}{\sigma_{S,i}^2} + \sum_i \frac{\beta_i^2}{\sigma_{T,i}^2}.$$
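A schematic sketch of how this MAP energy might be assembled and handed to a derivative-free optimizer, anticipating the Nelder-Mead procedure described next. The renderer and the three data terms are stubbed out with hypothetical placeholders; only the overall structure is meant to be faithful.

```python
import numpy as np
from scipy.optimize import minimize

# Placeholder stubs standing in for the real renderer and error terms;
# a real system plugs its rasterizer in here. All of these names are
# hypothetical, not the paper's API.
render_model = lambda alpha, beta, rho: None
image_error = lambda rendered, target: 0.0           # E_I
feature_error = lambda alpha, rho, landmarks: 0.0    # E_F
profile_error = lambda rendered, target: 0.0         # E_P
SIGMA_I2 = SIGMA_F2 = SIGMA_P2 = 1.0                 # term weights (assumed)

def map_energy(params, k, eigvals_s, eigvals_t, target_image, landmarks):
    """MAP energy of Section 4.2: weighted data terms plus PCA priors."""
    alpha, beta, rho = params[:k], params[k:2 * k], params[2 * k:]
    rendered = render_model(alpha, beta, rho)        # synthesize I_model
    e = (image_error(rendered, target_image) / SIGMA_I2
         + feature_error(alpha, rho, landmarks) / SIGMA_F2
         + profile_error(rendered, target_image) / SIGMA_P2)
    # Gaussian PCA priors (the alpha/beta terms) keep the face plausible.
    e += np.sum(alpha**2 / eigvals_s[:k]) + np.sum(beta**2 / eigvals_t[:k])
    return e

# Start from the mean face (zero coefficients) plus a rough user-supplied
# pose, then refine with the derivative-free Nelder-Mead simplex method.
K = 32
rho_init = np.zeros(6)                               # pose guess (assumed)
x0 = np.concatenate([np.zeros(2 * K), rho_init])
res = minimize(map_energy, x0,
               args=(K, np.ones(K), np.ones(K), None, None),
               method="Nelder-Mead", options={"maxiter": 500})
```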

Optimization

In each analysis-by-synthesis loop, the algorithm reconstructs the 3D face model from the current model coefficients, renders it with the current rendering parameters, and compares the rendered image $I_{model}$ with the input image $I_{input}$ to compute the current energy. Since the partial derivatives of the cost function are hard to compute analytically and expensive to compute numerically, we choose a derivative-free optimizer, the NMsimplex (Nelder-Mead simplex) algorithm [9], to minimize the energy function. Our iterative optimization starts from the mean face, with the rendering parameters roughly initialized by the user. The iterations optimize only the first k shape PCA coefficients and the first k texture PCA coefficients, $\alpha_i$ and $\beta_i$, along with all the rendering parameters; typically we choose k between 16 and 32. After convergence, we map the pixels of the input image onto the reconstructed 3D model. Since we assume the input image is lit evenly, the mapped texture can be treated as the albedo of the 3D model. When using a single image as input and optimizing the first 32 shape and texture coefficients, the algorithm runs for about 500 iterations. The fitting process takes about half a minute on a PC with an Intel Core 2 Quad 2.4 GHz CPU and an Nvidia GeForce 8800GT GPU.

5. EXPRESSION MATCHING

Since the example face models in the database are neutral faces, the reconstructed 3D models are neutral as well. We therefore have to extend the neutral source face model to an expressional model. We employ a 3D facial expression database to clone a set of basis expressions onto the source face model, and we then morph the expressional model to match the expression of the target subject in each frame. In this section, we introduce the method for expression cloning and the algorithm for matching the expressions to the target video.

5.1. Expression Cloning

In our system, we employ an expression 3D model database with 13 key expressions and a neutral face. Five of them are emotional expressions: angry, smiling, happy, sad, and surprised. Six are verbal key expressions, pronouncing "a", "e", "uh", "m", "o", and "u". The remaining two are the expressions of closing the left and right eyes. Each model in the expression database is represented by 436 vertices, and we manually match these vertices to the vertices of the reconstructed model.

Figure 2: The result of expression cloning. The left shows the neutral face and key expressions of the database; the right shows the Cyy model after expression cloning.

Since all the models are human faces, we can assume that they have similar proportions. We simply scale the models to the same size and apply the displacements of the expression database to the source model. The geometry of the $i$-th expression face model in the expression database is represented by $\tilde{E}_i = (\tilde{v}^i_1, \tilde{v}^i_2, \ldots, \tilde{v}^i_n)$, and the neutral face of the expression database is $\tilde{N} = (\tilde{v}_1, \tilde{v}_2, \ldots, \tilde{v}_n)$. The $i$-th expression in the expression database can then be represented by a displacement vector $\Delta\tilde{E}_i = \tilde{E}_i - \tilde{N}$. The geometry of the source neutral face model is represented by $N = (v_1, v_2, \ldots, v_n)$. We displace each vertex of the source model by the displacement vector $\Delta\tilde{E}_i$ to obtain the $i$-th expression of the source face:

$$E_i = N + \Delta\tilde{E}_i.$$

5.2. Matching Expressions to Target Video

After expression cloning, we have a set of key expressions $E_i$ of the source model. Consider a hyperspace spanned by the key expressions of the source model: any novel expression can be approximated by a linear combination of these key expressions. The goal of expression matching is to find a trajectory in this hyperspace that generates a sequence of expressions similar to the target video. We exploit the face alignment result and the head pose parameters to estimate the expression parameters. The face alignment algorithm labels 87 feature points on each face (Figure 3).
Among the facial features, the feature points at the eyebrows and the face outline are the most ambiguous, since it is hard to define the feature points in these areas precisely. The feature points at the nose are the least correlated with facial expressions. Thus, our algorithms focus on the movement of the eyelids and the mouth.
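Before turning to the matching algorithm, the cloning step of Section 5.1 is simple enough to sketch directly. This is a minimal NumPy illustration, not the paper's code; the uniform scale factor from vertical extents is an assumption.

```python
import numpy as np

def clone_expressions(source_neutral, db_neutral, db_expressions):
    """Transfer expression displacements from the database to the source.

    source_neutral: (n, 3) vertices of the reconstructed neutral face N.
    db_neutral:     (n, 3) vertices of the database neutral face, already
                    in correspondence with the source model.
    db_expressions: list of (n, 3) arrays, the key expressions E~_i.
    """
    # Bring the database model to the scale of the source model; here a
    # single uniform factor from vertical extents (an assumption).
    scale = source_neutral[:, 1].ptp() / db_neutral[:, 1].ptp()
    cloned = []
    for expr in db_expressions:
        delta = (expr - db_neutral) * scale    # displacement dE~_i, rescaled
        cloned.append(source_neutral + delta)  # E_i = N + dE~_i
    return cloned
```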

Figure 3: The face alignment algorithm detects the detailed facial features and outlines, labeling 87 feature points on each face.

Matching the Mouth

Among the 13 key expressions $E_i$, there are 5 emotional expressions and 6 verbal expressions that produce mouth movements. However, mouth movements are mainly constrained by the verbal expressions, which produce accurate pronunciations [12]. Based on this observation, we use the verbal key expressions to interpolate the mouth expression at the current frame. In each frame $I_t$ at time $t$, we represent the $n$ feature points around the mouth as a 2n-dimensional vector $s_t = (x^t_1, y^t_1, x^t_2, y^t_2, \ldots, x^t_n, y^t_n)^T$. The pose parameters of the current frame are represented by a 6-dimensional vector $\rho_t = (\theta_x, \theta_y, \theta_z, s, d_x, d_y)$, which contains the rotations around the three axes, a scaling, and an image-plane translation. We define an $m$-dimensional weight vector $w = (w_1, w_2, \ldots, w_m)$ and linearly combine the $m$ key expressions $E_i$ with $w$ as weights to construct a current model $E_c = \sum_{i=1}^{m} w_i E_i$. We employ an iterative optimizer to find, in each frame $I_t$, the weights $w_t = (w^t_1, w^t_2, \ldots, w^t_m)$ producing a model $E^t_c$ that best fits $s_t$.

At each iteration, we use the current weights $w$ to construct the current model $E_c$ and transform $E_c$ with $\rho_t$. After the perspective projection, the $n$ corresponding vertices are projected onto the image plane, and their 2D coordinates form a 2n-dimensional vector $v = (\hat{x}_1, \hat{y}_1, \hat{x}_2, \hat{y}_2, \ldots, \hat{x}_n, \hat{y}_n)$. We calculate the normalized displacement vector of the face alignment feature points,

$$\delta s_t = \left( \frac{x^t_1 - x^t_c}{l_{target}}, \frac{y^t_1 - y^t_c}{l_{target}}, \ldots, \frac{x^t_n - x^t_c}{l_{target}}, \frac{y^t_n - y^t_c}{l_{target}} \right)^T,$$

where $x^t_c = \frac{1}{n}\sum_{i=1}^{n} x_i$ and $y^t_c = \frac{1}{n}\sum_{i=1}^{n} y_i$ give the center of the target mouth, and $l_{target}$ is the width of the mouth. We normalize the offset vector by the mouth width because we assume the scale of mouth movement is proportional to the size of the mouth. The components of $v$ are adjusted in the same manner, giving the normalized displacement vector

$$\delta v = \left( \frac{\hat{x}_1 - \hat{x}_c}{l_{source}}, \frac{\hat{y}_1 - \hat{y}_c}{l_{source}}, \ldots, \frac{\hat{x}_n - \hat{x}_c}{l_{source}}, \frac{\hat{y}_n - \hat{y}_c}{l_{source}} \right)^T.$$

Now we can define the energy function of the optimization with respect to the normalized displacement vectors $\delta v$ and $\delta s_t$. The data term is the L1 norm between $\delta v$ and $\delta s_t$:

$$f_{data} = \| \delta v - \delta s_t \|_1.$$

Optimizing the data energy frame by frame would cause serious flicker in the video, because temporal coherence is not yet considered. Thus, we add a simple smoothness term to keep the weight vector similar to that of the previous frame:

$$f_{smooth} = \| w - w^{t-1} \|.$$

Combining the data term and the smoothness term with a regularization scalar $\lambda$, the final objective function is

$$O = f_{data} + \lambda f_{smooth},$$

where $\lambda$ weights the smoothness term relative to the data-fitting term. $\lambda$ should be chosen appropriately; it can be determined according to the distance between the current normalized displacement vector $\delta s_t$ and the previous one, $\delta s_{t-1}$. Finally, the NMsimplex iterative minimizer finds the best weights:

$$w_t = \arg\min_w O.$$
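A rough sketch of the per-frame mouth-matching objective, under the assumption that the pose transform and projection are provided elsewhere (stubbed here as a hypothetical project_mouth):

```python
import numpy as np
from scipy.optimize import minimize

# Placeholder stub for the pose transform plus perspective projection of
# the mouth vertices; a real system plugs its camera model in here.
project_mouth = lambda model, pose: model[:, :2]

def normalized_displacement(points):
    """Center the mouth points and normalize by the mouth width."""
    center = points.mean(axis=0)
    width = points[:, 0].max() - points[:, 0].min()  # l_target or l_source
    return ((points - center) / width).ravel()

def objective(w, key_exprs, pose, s_norm, w_prev, lam):
    """O = f_data + lambda * f_smooth from Section 5.2."""
    e_c = np.tensordot(w, key_exprs, axes=1)         # E_c = sum_i w_i E_i
    v_norm = normalized_displacement(project_mouth(e_c, pose))
    f_data = np.abs(v_norm - s_norm).sum()           # L1 data term
    f_smooth = np.linalg.norm(w - w_prev)            # temporal coherence
    return f_data + lam * f_smooth

# Per frame, start from the previous frame's weights and refine with the
# same Nelder-Mead minimizer used for model fitting, e.g.:
# w_t = minimize(objective, w_prev,
#                args=(key_exprs, pose_t, s_norm_t, w_prev, lam),
#                method="Nelder-Mead").x
```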
Matching the Blinking

Besides the mouth motion, we match blinking. In our expression database, there are two key expressions for the movement of the eyelids: closing the left eye and closing the right eye, respectively. Conceptually, blink matching could be handled with the algorithm of the previous section. However, since there is only one key expression per eye, running that optimization would be unnecessary. Instead, we match blinking heuristically: we measure the degree of eyelid closure to determine whether the target subject is blinking.
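The paper does not spell the heuristic out; one plausible minimal version, using the eyelid feature points from the face alignment result, is sketched below. The openness ratio and threshold are assumptions that would need per-subject calibration.

```python
import numpy as np

def eye_openness(eye_points):
    """Openness proxy: vertical extent over horizontal extent of the eye.

    eye_points: (k, 2) face alignment feature points around one eye.
    """
    height = eye_points[:, 1].max() - eye_points[:, 1].min()
    width = eye_points[:, 0].max() - eye_points[:, 0].min()
    return height / width

def blink_weight(eye_points, open_ratio=0.30):
    """Weight in [0, 1] for the eye-closing key expression of this eye.

    open_ratio is the openness of a fully open eye, an assumed constant
    that would be calibrated per subject in practice.
    """
    ratio = eye_openness(eye_points)
    return float(np.clip(1.0 - ratio / open_ratio, 0.0, 1.0))
```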

Figure 4: The result of blinking and mouth movement mapping.

6. POSE AND LIGHTING ESTIMATION

In this section, we introduce the head pose estimator first, and then describe the lighting estimator and the face relighting module in detail.

6.1. Pose Estimation

First, we use the model fitting algorithm introduced in Section 4 to reconstruct the 3D face model of the target subject. Based on the set of 87 feature points detected by the face alignment module and the corresponding pre-selected feature points in the target face model, we can estimate the pose parameters (rotation angles around the three axes, scaling, and translation) by minimizing the error

$$E = \sum_{i=1}^{87} w_i \left\| \begin{pmatrix} q_{x,i} \\ q_{y,i} \end{pmatrix} - \begin{pmatrix} p_{x,i} \\ p_{y,i} \end{pmatrix} \right\|^2,$$

where $w_i$ is the weight of the $i$-th feature point, $(q_{x,i}, q_{y,i})$ is the position of the $i$-th face alignment feature point, and $(p_{x,i}, p_{y,i})$ is the projected position of the $i$-th feature point of the target model. We estimate the pose parameters frame by frame, and then smooth the estimated parameters with a box filter. Finally, we apply hysteresis smoothing to remove minor fluctuations while preserving major transients.

6.2. Face Relighting

If the source face and the target face are under different illumination, the replacement result appears perceptually unreasonable, so we need to adjust the skin color and lighting of the source face. We estimate the lighting parameters of the source and target faces, and then use them to relight the source face model. We use a face relighting method similar to the one used in [3]. Assuming the face has constant albedo and a Lambertian surface, the image intensity can be approximated by $\tilde{I}_c$, a linear combination of nine spherical harmonics [2]:

$$\tilde{I}_c(x, y) = \rho_c \sum_{k=1}^{9} a_{c,k} H_k(n(x, y)), \qquad c \in \{R, G, B\},$$

where $\rho_c$ is the average color of each color channel, the $a_{c,k}$ are the spherical harmonic coefficients that we estimate as lighting parameters, the $H_k$ are the spherical harmonics, and $n(x, y)$ is the surface normal at image location $(x, y)$. We use the 3D face models to render normal maps of both the target face and the source face. We then solve for the lighting parameters $a^{(s,t)}_{c,k}$ of the source and target faces, and use them to construct the source and target lighting images

$$\tilde{I}^{(s,t)}_c(x, y) = \rho^{(s,t)}_c \sum_{k=1}^{9} a^{(s,t)}_{c,k} H_k(n_s(x, y)),$$

both evaluated over the source face model. Finally, the relit image $R_c$ is obtained by dividing the source image $I^s_c$ by the source lighting image $\tilde{I}^s_c$ and multiplying by the target lighting image $\tilde{I}^t_c$:

$$R_c = I^s_c \cdot \frac{\tilde{I}^t_c}{\tilde{I}^s_c}, \qquad c \in \{R, G, B\}.$$
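A compact sketch of this relighting computation, assuming the face pixels and per-pixel normals have already been extracted as flat arrays. The spherical-harmonic constant factors and the per-channel albedo ρ_c are folded into the least-squares coefficients, a simplification of the formulation above.

```python
import numpy as np

def sh_basis(normals):
    """Nine real spherical-harmonic basis functions of unit normals (order 2).

    normals: (npix, 3). The constant scale factors of the standard real SH
    convention are folded into the fitted coefficients and omitted here.
    """
    x, y, z = normals[:, 0], normals[:, 1], normals[:, 2]
    return np.stack([np.ones_like(x), y, z, x,
                     x * y, y * z, 3 * z**2 - 1, x * z, x**2 - y**2], axis=1)

def fit_lighting(intensity, normals):
    """Least-squares estimate of the coefficients a_{c,k} per color channel.

    intensity: (npix, 3) face pixels; normals: (npix, 3) matching normals.
    """
    H = sh_basis(normals)                            # (npix, 9)
    coeffs, *_ = np.linalg.lstsq(H, intensity, rcond=None)
    return coeffs                                    # (9, 3)

def relight(source_pixels, source_normals, target_pixels, target_normals,
            eps=1e-4):
    """Ratio-image relighting R_c = I^s_c * (I~^t_c / I~^s_c).

    Both lighting images are evaluated on the source normals, as in
    Section 6.2; eps guards the per-pixel division.
    """
    a_s = fit_lighting(source_pixels, source_normals)
    a_t = fit_lighting(target_pixels, target_normals)
    H_s = sh_basis(source_normals)
    light_s = H_s @ a_s                              # source lighting image
    light_t = H_s @ a_t                              # target lighting image
    return source_pixels * light_t / np.maximum(light_s, eps)
```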

7. RESULTS

To show the effectiveness of our face replacement algorithm, we test our method on video clips from movies and other sources.

Figure 5: The face replacement process. Given the source image (a), we reconstruct the source face model (b). In the target frame (c), the head pose (d) is estimated, and then the expression (e) is matched. Finally, Poisson blending seamlessly composites the output frame (f).

The video Prestige01 is a clip from the movie The Prestige [1]. In Figure 6, we replace the Cyy source model into the Prestige01 video. Since there is no obviously sharp illumination in the target video, the generated face replacement result looks natural even without relighting, and the pose and expressions are similar to those of the target subject.

The video ObamaTalk is a clip that we downloaded from YouTube. It is hard test data because the illumination is pronounced and there are many wrinkles on the face, which tend to result in flickering in the composited result. We replace the Wildmb source model into it. Since the illumination of the target face is pronounced, relighting is necessary in this case. In Figure 7, the final result looks satisfactory, even though the wrinkles on the target face cause some artifacts.

8. CONCLUSION

In this paper, we present a system that replaces the face of the target subject in a target video with the face of a source subject. We reduce the amount of user interference required, and reduce the acquisition complexity of the source subject to one or two images. Our system uses the source image to reconstruct the 3D face model of the source subject, and we propose an algorithm to effectively match the expression to the target frames. Combined with a face alignment algorithm, a lighting and pose estimator, and a composition procedure, we can naturally replace the faces in videos under similar poses, expressions, and illumination.

Our face replacement system works well in many cases, but limitations remain. The tolerance to pose variation is still limited by the robustness of the face alignment algorithm. Besides, properties of the target video such as sharp lighting, violent movement, and wrinkles may result in undesirable artifacts. In future work, we plan to enhance the robustness of our system to avoid these limitations, and the accuracy of the reconstructed 3D model should be improved. We will also extend the expression matching algorithm to deal with richer expressions beyond mouth movement and blinking.

REFERENCES

[1] The Prestige, 2006.
[2] R. Basri and D. W. Jacobs. Lambertian reflectance and linear subspaces. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(2):218-233, 2003.
[3] D. Bitouk, N. Kumar, S. Dhillon, P. N. Belhumeur, and S. K. Nayar. Face swapping: automatically replacing faces in photographs. ACM Transactions on Graphics (SIGGRAPH), 27(3), 2008.
[4] V. Blanz, C. Basso, T. Poggio, and T. Vetter. Reanimating faces in images and video. Computer Graphics Forum, 22(3):641-650, 2003.
[5] V. Blanz, K. Scherbaum, T. Vetter, and H.-P. Seidel. Exchanging faces in images. Computer Graphics Forum, 23(3):669-676, 2004.
[6] V. Blanz and T. Vetter. A morphable model for the synthesis of 3D faces. In Computer Graphics Proc. SIGGRAPH 99, pages 187-194, 1999.
[7] Y. Liang. Image based face replacement in video. Master's thesis, CSIE Department, National Taiwan University.
[8] Z. Liu, Z. Zhang, C. Jacobs, and M. Cohen. Rapid modeling of animated faces from video images. In Proceedings of ACM International Conference on Multimedia.
[9] J. A. Nelder and R. Mead. A simplex method for function minimization. Computer Journal, 7:308-313, 1965.
[10] P. Pérez, M. Gangnet, and A. Blake. Poisson image editing. ACM Transactions on Graphics (SIGGRAPH), 22(3):313-318, 2003.
[11] F. Pighin, J. Hecker, D. Lischinski, R. Szeliski, and D. H. Salesin. Synthesizing realistic facial expressions from photographs. In SIGGRAPH '06: ACM SIGGRAPH 2006 Courses, page 19, 2006.
[12] H. Pyun, Y. Kim, W. Chae, H. W. Kang, and S. Y. Shin. An example-based approach for facial expression cloning. In 2003 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, pages 167-176, 2003.
[13] J.-Y. Noh and U. Neumann. Expression cloning. In SIGGRAPH '06: ACM SIGGRAPH 2006 Courses, page 22, 2006.

Figure 6: The results of replacing the Cyy model into the Prestige01 video. The first row shows the input source image and the reconstructed face model. The remaining rows show frames of the target video and the corresponding results: the images in the first column are the target frames, and the images in the second column are the face replacement results.

Figure 7: The results of replacing the Wildmb model into the ObamaTalk video. The first row shows the input source image and the reconstructed face model. In the remaining rows, the first column shows the target frames and the second column shows the face replacement results.
