Rapid 3D Face Modeling using a Frontal Face and a Profile Face for Accurate 2D Pose Synthesis
Jingu Heo and Marios Savvides CyLab Biometrics Center Carnegie Mellon University Pittsburgh, PA jheo@cmu.edu, msavvid@ri.cmu.edu Abstract This paper proposes an efficient way of modeling 3D faces by using only two images - a frontal and a profile face. Although it is desirable to utilize only a single image for 3D face modeling, more accurate depth information can be obtained if we additionally use a profile face image. Despite this seemingly straightforward task, however, no standard solutions for 3D face modeling with two images have yet been reported. To tackle this problem, in our work, we first extract facial shape information from each image and then align these two shapes in order to obtain a sparse 3D face. Then, the observed sparse 3D face is combined with generic dense depth information. By doing so, we reflect both the observed sparse 3D depth information and smooth depth changes around facial areas in our reconstructed 3D shape. Finally, the intensity of the frontal image is texture-mapped onto the reconstructed 3D shape for realistic 3D modeling. Unlike other 3D modeling methods, our proposed work is extremely fast (within a few seconds) and does not require any complex hardware settings or calibration. We illustrate our 3D modeling results on the MPIE database and demonstrate the effectiveness of the proposed approach. I. INTRODUCTION 3D modeling from a single or a set of 2D face images is one of the most difficult and challenging tasks in computer vision, due to the many visual changes typically occurring in human faces. 3D Morphable Models (3DMMs) [5] have proven to be an essential tool in a variety of interdisciplinary research areas, such as computer vision, computer graphics, human computer interaction and face recognition.
However, many real world applications, such as access control, entertainment, online gaming (3D avatars), and video conferencing, need alternative methods for 3D face modeling due to the computational burdens associated with 3DMMs. Although there are several approaches to modeling 3D faces efficiently [3] [4], most current methods require user interaction, camera calibration, and strong prior knowledge of a 3D model, which may limit their ability to model arbitrary faces under various conditions. Recently, Generic Elastic Models (GEMs) [6] were introduced as a new efficient method to generate 3D models from a single 2D image. GEM assumes that the depth information of a human face is not very discriminative among people and can be synthesized by using other people's depth information, or generic depth information. Based on the observed (x, y) spatial information, generic depth information is elastically deformed to fit the observed information for realistic 3D face modeling. The 3D models generated from the GEM framework can be obtained at a very low computational expense (1-2 seconds) compared to traditional 3DMMs, which take 4-5 minutes and require manual feature annotation. However, we believe that there are many human faces that deviate from a typical face due to significant depth changes. In this case, more depth observations, especially those obtained by utilizing a profile face, are important in 3D face modeling. A closely related research topic for 3D reconstruction using multiple 2D observations is known as Structure from Motion (SfM), which utilizes a set of 2D images for 3D reconstruction. As a special case of SfM, 3D reconstruction using a minimum of two images with known camera positions might seem a relatively straightforward task.
Despite this seemingly easy task, however, no standard solutions for 3D face modeling with two images have yet been reported, due to the difficulty associated with registering corresponding points across these images. In this paper, we provide an efficient solution to tackle this problem by using two images - a frontal and a profile face. We select these two views since they contain the most essential and significant information for 3D reconstruction. Although it is desirable to utilize only a single frontal image for 3D modeling, we believe that more accurate depth information can be obtained if we use an additional profile face image. The steps of our proposed work are as follows. First, we extract shape information by using a frontal Active Shape Model (ASM) [8] - a variant of the conventional ASM [9] - from the frontal image and a profile ASM from the profile image, respectively. Then, we align these two shapes for scale and rotation normalization in order to obtain a single sparse 3D face, followed by a step in which the observed sparse 3D face is combined with generic dense depth information. In this way, we reflect both the observed sparse 3D features and smooth depth changes around other facial areas in the reconstructed 3D shape. Finally, the intensity of the frontal image is texture-mapped onto the reconstructed 3D shape for realistic 3D modeling. More accurate 2D pose synthesis results are obtained by using the proposed work. This paper is organized as follows. In Section II, we briefly review related work. In Section III, we provide an efficient solution for 3D modeling by using two images. In Section IV, we demonstrate 2D face synthesis results by using the
proposed approach. Finally, in Section V, we summarize our results and discuss future work. II. BACKGROUND Due to recent developments in 3D sensing techniques, either using 3D scanners [10] [5] or multi-camera 2D setups [11], 3D face acquisition has become easier and provides a decent quality of 3D faces. However, there are many scenarios where one may want to model 3D faces realistically by utilizing one or a minimum number of face images, without the need for 3D sensors or cumbersome hardware setups. Common approaches to 3D face modeling from 2D face images without camera calibration and hardware setups can be largely divided into two categories: one approach needs only sparse face shape information, while the other requires dense shape information. One popular example of the sparse approach is 3D Active Appearance Models (3DAAMs) [12], while that of the dense approach is 3DMMs. Due to the problems associated with both approaches (AAMs have difficulty rendering under novel illumination changes, while 3DMMs require huge computational expense), an alternative solution was proposed by Heo [6][7], known as GEM. The author of [6] claimed that the depth information z of faces is not that discriminative and can be approximated from either another person's depth or from generic depth information, assuming we have accurate correspondence with all the spatial 2D (x, y) facial features. In this section, we first briefly review and compare the aforementioned well-known techniques for 3D face modeling from a single image: AAMs and 3DMMs. Then we summarize other techniques which can also achieve 3D face modeling. Finally, we introduce the concept of the GEM approach. As leading methods in face modeling, AAMs [13] [14] [15] and 3DMMs [5] have become increasingly popular in computer graphics for realistically modeling human faces.
Although face modeling can be achieved more efficiently by AAMs compared to 3DMMs, large rotations (particularly out-of-plane) cannot be generated by the 2D warping technique [13] used in AAMs, due to occlusions of facial regions. In order to handle such large pose changes, view-based models [16] [17][18] or 3D Active Appearance Models (3DAAMs) [12] should be used. However, 3DAAMs still have difficulty synthesizing images under novel illumination conditions due to their sparse 3D shape representation. 3DMMs can overcome these problems because the appearance model of a 3DMM is defined for each 3D vertex (point), allowing us to understand image formation of faces under various lighting and pose variations. AAMs and 3DMMs use similar shape and appearance representations. The representation space of AAMs is 2D, whereas the representation space of 3DMMs is 3D. Additionally, AAMs and 3DMMs use similar functional fitting procedures, which can be described by minimizing the following cost function:

E = ||I_input - I_model||^2 (1)

where I_input is the input image, I_model is the reconstructed image obtained by the model instance, and ||.|| indicates the L2-norm. In the case of AAMs, I_model considers 2D pose (scale, rotation, and translation), 2D shape, and 2D appearance parameters. AAMs try to find a shape and texture which minimize the above cost function by iteratively changing these parameters in 2D. On the other hand, in the case of 3DMMs, I_model includes 3D-2D perspective projection, rendering, 3D rotation, 3D shape, and 3D appearance parameters. Therefore, 3DMMs try to find a 3D shape with a 3D illumination-normalized texture so that it generates the input image as closely as possible after a 3D rotation and the 3D-2D projection of the 3D shape. It is well known that the fitting procedure of 3DMMs is computationally expensive due to the problem of estimating dense 3D shapes.
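As a concrete illustration, the cost in Eq. (1) can be evaluated as follows. This is a minimal sketch of the objective only - the iterative parameter-update loop of an actual AAM or 3DMM fitter is omitted, and the toy images are hypothetical:

```python
import numpy as np

def fitting_cost(I_input, I_model):
    """Squared L2-norm cost between input and model-rendered image (Eq. 1)."""
    return np.sum((I_input.astype(float) - I_model.astype(float)) ** 2)

# Toy example: a model instance that differs from the input in one pixel.
I_input = np.zeros((4, 4))
I_model = np.zeros((4, 4))
I_model[0, 0] = 2.0                       # one mismatched pixel
print(fitting_cost(I_input, I_model))     # 4.0
```

A fitter would repeatedly re-render I_model under updated pose, shape, and appearance parameters and keep the update only if this cost decreases.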
Instead of utilizing statistical shape information for face representation, the Shape from Shading (SfS) approach focuses on recovering depth purely from image intensities. See [20] [21] [19] for details. It is known that SfS techniques have room for improvement and can typically be combined with symmetry [25] and statistical shape information for enhanced 3D modeling [22]. On the other hand, the SfM technique [26] requires multiple images for depth recovery. Due to the difficulties associated with correspondence, occlusion, and non-rigid changes in face images, several improved techniques have been developed [12] [23] [24]. However, the shapes recovered by these methods are not dense enough to be used for rendering under different illumination conditions (a problem similar to that of 3DAAMs). The recently proposed GEM approach can be an attractive solution for 3D modeling since it provides dense shape information in a computationally feasible manner. We briefly review the GEM approach and propose an efficient way to achieve more accurate 3D face modeling by adding a profile image as an additional observation, since the original GEM framework is based on a single frontal image. A. Generic Elastic Model GEM utilizes depth information after aligning (x, y) positions. In other words, in order to model depth changes only, it is necessary to sample all depth information at the same relative spatial locations of (x, y). This sampling step can be considered a warping process from an input (x, y, z) to a mean shape (x̄, ȳ). We write this as:

Z_sf = W(x, y, z; x̄, ȳ) (2)

where Z_sf is the shape-free (in terms of x and y) depth-map. This allows us to model only depth changes. These depth-maps are synthesized from all faces in the USF database [5][6]. Based on this depth-map, along with the spatial locations of important facial features, GEM can reconstruct dense 3D facial information. The original GEM procedure is shown in Fig. 1.
Formally, this problem can be stated as follows. Given a face image
Fig. 1: GEM for 3D modeling from a single image. Each point in the input image has an exact corresponding point in the GEM depth-map, and the intensity of the GEM depth-map can be used for the estimation of depth in the input image. A piecewise affine transformation (W) is used for warping the GEM depth-map (D), sampled at the spatial locations of M, onto the input triangle mesh (P) in order to estimate depth information. Finally, the reconstructed 3D model can be interpolated by using the intensity of the input image I(P(x, y)) sampled at the spatial locations of P(x, y). The more iterations in the subdivision method, the better the quality of the obtained 3D face, particularly for high-resolution 2D face images.

(I), automatically extract input face landmarks and assign depth information (z only). Then, each face (I) is partitioned into a mesh of triangular polygons (P). Similarly, the generic depth-map (D) is partitioned into a mesh (M) from predefined landmark points. After registering points between the input image and the generic depth-map, we increase the point density of both simultaneously using the Loop subdivision method [1]. The subdivision method used here can be considered an intermediate step for establishing dense correspondence between the input mesh and the depth-map. A piecewise affine transform W [15] is used for warping the depth-map (D), sampled at the spatial locations of M, onto the input triangle mesh (P) in order to estimate depth information. Each point in the input image has an exact corresponding point in the depth-map, and the intensity of the depth-map can be used for the estimation of depth in the input image. Finally, the intensity of the input image I(P(x, y)), sampled at the spatial locations of P(x, y), is mapped onto the 3D shape. Therefore, the reconstructed 3D face can be represented by:

S_r = (x, y, z = D(M(x̃, ỹ))) (3)
T_r = I(P(x, y, z)) = (R_{x,y,z}, G_{x,y,z}, B_{x,y,z})

where x̃ and ỹ in M are the registered points x and y in image P.
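The piecewise affine depth-warping step above can be sketched per triangle using barycentric coordinates: a point in the input mesh triangle is expressed in barycentric coordinates, mapped to the corresponding depth-map triangle, and the depth-map intensity at that location is read off. This is a simplified single-triangle illustration with a synthetic depth-map, not the full GEM pipeline (ASM landmark extraction and Loop subdivision are omitted):

```python
import numpy as np

def barycentric(p, a, b, c):
    """Barycentric coordinates of point p w.r.t. triangle (a, b, c)."""
    T = np.array([[b[0] - a[0], c[0] - a[0]],
                  [b[1] - a[1], c[1] - a[1]]], dtype=float)
    w1, w2 = np.linalg.solve(T, np.asarray(p, float) - np.asarray(a, float))
    return 1.0 - w1 - w2, w1, w2

def warp_depth(p, tri_input, tri_depthmap, D):
    """Estimate depth at input-mesh point p by mapping it into the generic
    depth-map D via the affine map defined by corresponding triangles."""
    w0, w1, w2 = barycentric(p, *tri_input)
    a, b, c = (np.asarray(v, float) for v in tri_depthmap)
    q = w0 * a + w1 * b + w2 * c      # corresponding point in the depth-map
    return D[int(round(q[1])), int(round(q[0]))]

# Toy generic depth-map with a vertical depth gradient: D[row, col] = row.
D = np.tile(np.arange(10.0), (10, 1)).T
tri_in = [(0, 0), (4, 0), (0, 4)]     # triangle P in the input mesh
tri_dm = [(0, 0), (8, 0), (0, 8)]     # corresponding triangle M in the depth-map
print(warp_depth((0, 2), tri_in, tri_dm, D))  # 4.0
```

In the actual framework this lookup is performed for every subdivided vertex of every triangle, yielding the dense z estimates of Eq. (3).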
III. PROPOSED APPROACH

We utilize a frontal and a profile face image in order to improve the quality of 3D models. In contrast to the original GEM framework, which utilizes only a single frontal image for 3D modeling, a profile image can provide more accurate depth information for important facial features, as we discussed earlier. In order to obtain sparse shape information from profile faces, we develop a profile face alignment scheme based on Active Shape Models (ASMs), similar to the frontal ASM used in the original GEM framework. Although there is room for improvement in achieving very precise alignment across subjects, our evaluations on the MPIE database session 1 [27] show encouraging results, with an average error of 2.0 pixels (for 100 by 100 face sizes) for both frontal and profile images. For faces with errors above this average, we manually adjust the landmarks to obtain better 3D modeling results. Since our main contribution in this paper focuses on developing methods applied after alignment, reducing errors in facial alignment is beyond the scope of our contributions. An overview of our proposed approach is illustrated in Fig. 2. Exactly the same procedure used in the standard GEM framework can be applied, except for the use of sparse 3D information, which can be obtained from a set of (x, y) information from the frontal face and a set of (y, z) information from the profile face. Therefore, our 3D modeling method can be largely divided into two steps. We first introduce our sparse 3D reconstruction method after facial alignment (Fig. 4) and then explain how to merge the observed depth information into the generic depth information (Fig. 5). Based on these two steps, we can achieve realistic 3D modeling by texture-mapping the frontal image onto the reconstructed 3D shape. It is important to note that no texture information from the profile image is utilized throughout this paper; only its sparse shape information is used.

Fig. 2: Overview of the 3D modeling approach using a frontal face and a profile face image.

A. Sparse 3D Reconstruction

Based on the landmarks retrieved from each image, our aim is to reconstruct a sparse 3D face shape. We use S_2xn for 2D and S_3xn for 3D shapes, where n indicates the number of vertices. In this paper, we use n = 79 for frontal faces (S^F_79) and n = 46 for profile faces (S^P_46). We define the 2D
Fig. 3: Face landmarks used in frontal (a) and profile (b) faces. We use a lookup table to associate the correspondences between frontal and profile face images. Point 17 in (b) has two corresponding points (points 1 and 15) in (a). Important features used to normalize scales between the two faces are shown in (c).

TABLE I: Procedure for registering a frontal and a profile face shape for sparse 3D shape reconstruction.

Fig. 4: Overview of the proposed sparse 3D shape reconstruction method using a frontal and a profile face. After aligning each shape using view-based ASMs, we normalize each shape to compensate for scale and rotation in order to obtain a single sparse 3D shape, following the procedure depicted in Table I.

1. Extract S^F_79 from the frontal face and S^P_46 from the profile face.
2. Rotate S^F_79 to 0 degrees based on the angle θ1 = arctan(S(:, 30)^F, S(:, 22)^F).
3. Rotate S^P_46 to π/2 based on the angle θ2 = arctan(S(:, 1)^P, S(:, 26)^P).
4. Compute the distances d1 = ||S(:, 79)^F - S(:, 35)^F||_2 and d2 = ||S(:, 1)^P - S(:, 26)^P||_2.
5. Normalize the rotated S^P_46 as S^P_46 = S^P_46 · (d1 / d2).
6. Convert the new S^P_46 into S^P_79 using the lookup table: S_4x79(x, y, y, z) = [S^F_2x79(x, y); S^P_2x79(y, z)].
7. Remove the 3rd row in S_4x79(x, y, y, z): S_3x79(x, y, z) = [S^F_2x79(x, y); S^P_1x79(z)].

shape matrix S_2xn as the 2D coordinates (x, y) of the n vertices:

S_2xn = [x1 x2 ... xn; y1 y2 ... yn] (4)

where each column contains a vector of (x, y) coordinates. Similarly, the 3D shape matrix can be represented by the 3D coordinates (x, y, z):

S_3xn = [x1 x2 ... xn; y1 y2 ... yn; z1 z2 ... zn] (5)

where each column contains a vector of (x, y, z) coordinates. In order to reconstruct S_3xn (n = 79) from a single frontal face and a profile face, we extract (x, y)_n from the frontal face and z_n from a profile face of the same person.
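The registration procedure of Table I can be sketched as follows. This is an illustrative stand-in: the landmark indices, the tiny 3-point shapes, and the lookup table are hypothetical, not the actual 79/46-point schemes of Fig. 3:

```python
import numpy as np

def rotate_2d(S, theta):
    """Rotate a 2xN shape matrix in-plane by -theta (radians), undoing theta."""
    c, s = np.cos(-theta), np.sin(-theta)
    return np.array([[c, -s], [s, c]]) @ S

def register(S_front, S_prof, i_f, j_f, i_p, j_p, lookup):
    """Table I sketch: rotation- and scale-normalize, then merge into 3D.
    S_front is 2xNf with rows (x, y); S_prof is 2xNp with rows (y, z).
    (i_f, j_f) / (i_p, j_p) are the landmark pairs used for rotation and
    scale; lookup[k] gives the profile counterpart of frontal landmark k."""
    # Steps 2-3: in-plane rotation normalization of each view.
    d = S_front[:, i_f] - S_front[:, j_f]
    S_front = rotate_2d(S_front, np.arctan2(d[1], d[0]))
    d = S_prof[:, i_p] - S_prof[:, j_p]
    S_prof = rotate_2d(S_prof, np.arctan2(d[1], d[0]) - np.pi / 2)
    # Steps 4-5: scale the profile shape to the frontal shape's scale.
    d1 = np.linalg.norm(S_front[:, i_f] - S_front[:, j_f])
    d2 = np.linalg.norm(S_prof[:, i_p] - S_prof[:, j_p])
    S_prof = S_prof * (d1 / d2)
    # Steps 6-7: keep (x, y) from the frontal view and z from the profile view.
    z = S_prof[1, [lookup[k] for k in range(S_front.shape[1])]]
    return np.vstack([S_front, z])          # 3xN sparse shape

# Tiny illustrative example with 3 frontal and 3 profile landmarks.
S_front = np.array([[0.0, 2.0, 1.0],        # x
                    [0.0, 0.0, 2.0]])       # y
S_prof = np.array([[0.0, 0.0, 4.0],         # y (vertical in the profile view)
                   [0.0, 1.0, 0.0]])        # z (depth)
S3 = register(S_front, S_prof, 0, 1, 0, 2, lookup={0: 0, 1: 1, 2: 2})
print(S3.shape)                             # (3, 3)
```

The real procedure uses the eyelid pair (F22, F30) and the eye/lip pair (P1, P26) for rotation, and the (F79, F35) / (P1, P26) distances for scale.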
Since these two images of the same person may be acquired independently (taken at different times or under different illumination), each face should be aligned using the corresponding view-based ASM, i.e., a frontal ASM and a profile ASM, respectively. We use the landmark schemes defined in Fig. 3. In order to associate the landmarks between frontal and profile faces, we use a lookup table which establishes correspondences between these two images.

Fig. 5: Generation of a new depth-map which considers both the input observations and the generic depth information. By combining the information in D_o and D_g, more accurate and smooth 3D faces can be obtained.

In the case of occlusions, which typically occur in profile faces, we duplicate points by using the face symmetry property. For example, the spatial positions (x, y) of point 17 (y, z) in the profile face can be assigned to both point 1 and point 15 in the frontal face. In this way, we can extract the (x, y) positions from the frontal face and the (y, z) positions from the profile face with the same number of landmarks. However, the y observations, which occur in both views, should be normalized for scale and rotation in order to combine these two faces into a single 3D face, because the two faces may be acquired under different scales and rotations. To achieve this normalization, we perform in-plane 2D rotations based on the lower eyelid coordinates on the frontal face (the 22nd and 30th points (F22 and F30) in Fig. 3 (c)) and based on the center between the eyes and the center of the upper lip (the 1st and 26th points (P1 and P26) in Fig. 3 (c)) on the profile face, respectively. In this way, we can eliminate distortions caused by in-plane 2D rotations. Then, based on the length between point 79 and point 35 (F79 and F35) in the frontal image, we re-scale the profile shape. Finally, we use the lookup table to associate the correspondences between these two faces. To enhance readability, we summarize the overall registration procedure in Table I, where S(:, i) indicates the ith point in S_2xn, F denotes a frontal face, and P denotes a profile face. The reconstructed 3D shape is therefore aligned along both the x and y axes. This step is important for view synthesis, since it removes distortions caused by rotation changes. Based on the above registration step, we utilize the GEM framework for enhanced 3D modeling. Although the original GEM framework can be used for 2D pose synthesis, it has difficulty modeling people whose depth information differs substantially from a common face. Therefore, in this case, the range of angles which the original GEM framework can cover may be limited. If we use additional depth information, more accurate 3D modeling with improved generalization can be obtained. An overview of this sparse 3D reconstruction procedure is shown in Fig. 4. We expect that our proposed sparse 3D method can be applied in far less constrained scenarios, as long as we have two (frontal and profile) images, even when taken under completely different illumination conditions and with expression changes, since it does not require any multi-view geometry constraints or camera setups [11].

B. GEM with Sparse 3D Information

This section introduces how to combine the sparse 3D information into the GEM framework after sparse 3D reconstruction. We first increase the point density of the observed sparse 3D points by a subdivision method [1], and then synthesize a new depth-map by using cubic spline interpolation [2]. Here, the intensity of the depth-map represents the depth of the input face.
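The merging of the observed depth-map (D_o) and the generic depth-map (D_g) used in this step reduces to a per-pixel linear blend. A minimal sketch with constant weights follows; the toy depth values are arbitrary:

```python
import numpy as np

# Toy depth-maps: D_o from the sparse observations, D_g the generic depth.
D_o = np.full((4, 4), 10.0)           # observed depth (e.g. a more prominent nose)
D_g = np.full((4, 4), 6.0)            # generic depth
U = np.full_like(D_g, 0.5)            # constant per-pixel weights, as in the paper

D_new = (1.0 - U) * D_o + U * D_g     # per-pixel linear blend of the two maps
print(D_new[0, 0])                    # 8.0
```

With U = 1/2 everywhere, every pixel of the new depth-map is simply the average of the observed and generic depths; a spatially varying U would let observed depth dominate near reliable landmarks.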
Then, we generate a third depth-map (D_new), which considers both depth-maps (the observed depth-map (D_o) and the generic depth-map (D_g)). We define the third depth-map as a linear combination of the two:

D_new = (1 - U)D_o + U D_g (6)

where U is a matrix which assigns a weight to each pixel of these depth-maps. We use constant values (1/2) in U for our implementation, and find that these simple weights still achieve reasonable 3D modeling. A more sophisticated choice of weights might improve the quality of the 3D models; we leave this to future work. A visual example of intermediate depth-maps is illustrated in Fig. 5, containing three different rendered depth-maps: observation-only, generic, and new. The observation-only depth-map preserves the reconstructed 3D shape information around facial feature areas, due to the sparse observations in the input face, while the generic depth-map provides smooth changes in these areas. Incorporating the generic depth information is a necessary step to improve the quality of the reconstructed shape.

Fig. 6: Comparison of the original GEM, observation-only, and our proposed approach (GEM + observation). The first row contains frontal images of each 3D model, the middle row contains the intermediate reconstructed 3D shapes, and the third row contains synthesized profile faces based on the frontal image.

On the other hand, the newly synthesized depth information reflects both the observed sparse 3D depth information and smooth depth changes around facial areas in our reconstructed 3D shape. Based on the newly generated depth-map, we map the texture of the frontal image. By using the visually correct dense 3D shape information and the input frontal image, a wide range of 2D pose synthesis can be achieved extremely quickly (3-4 seconds) compared to state-of-the-art approaches. IV.
EXPERIMENTAL RESULTS

In order to evaluate the proposed 3D modeling approach, we compare 3D modeling results obtained by the original GEM approach, the observation-only method, and our proposed method (GEM + observation), showing the intermediate 3D shapes and synthesized frontal and profile images. Due to the difficulty of generating 3D ground truth data from the MPIE database, we evaluate our proposed method qualitatively throughout this section. We emphasize showing the results of 2D pose synthesis, since the majority of the problems in face recognition lie in how to match a test image taken under uncontrolled conditions (pose, illumination, expression, and age) with the images
taken under controlled conditions (frontal and well-lit) in the database. We believe that the ability to synthesize 2D facial images across a wide range of angles is a key step towards unconstrained face recognition. Fig. 6 shows comparison results obtained by the aforementioned methods. Although the frontal and profile images of the same person were taken at the same time, there are illumination differences between them; we assume that a reasonable match distance can be computed after preprocessing for illumination. Since our main contribution in this paper is synthesizing visually correct novel 2D poses across a wide range of views, developing illumination normalization or re-rendering schemes is beyond the scope of our contributions and will be addressed in future work. Rather, we focus on evaluating in a qualitative manner, as mentioned earlier. More comparison examples are illustrated in Fig. 7. As evidenced by these figures, the proposed approach, which utilizes both GEM and observations, tends to model 3D faces more reasonably than the original GEM and observation-only methods. In addition, we provide a wide range of 2D pose synthesis results using our proposed method. As shown in Fig. 8, reasonable pose synthesis results are obtained by using the two-image observations and generic depth information. In the case of expression changes, we can still achieve reasonable 3D faces for 2D pose synthesis, as demonstrated in Fig. 9. Similar results are obtained for all other people (249 in total) from the MPIE database session 1, with and without expression changes.

Fig. 7: More comparison results for the original GEM, observation-only, and our proposed approach (GEM + observation).

Fig. 8: Novel pose synthesis results. The synthesized images span a wide range of pose changes.

Fig. 9: Novel pose synthesis results with expression changes.
V. DISCUSSION AND FUTURE WORK

In this paper, we have shown that we can successfully model 3D faces using a frontal and a profile face image extremely quickly. The proposed method does not need any calibration, which is a necessary step in multi-view geometry, and only requires two face images taken by any standard camera from a reasonable distance. We observed that single-image 3D face modeling approaches have difficulty modeling people whose depth information differs substantially from a common face, especially around the nose area. We have shown that additional depth observations, easily obtained from profile images, can solve this problem. Although we focus on the use of profile images in our sparse 3D reconstruction, the proposed approach can be used in generic SfM problems, which need multiple image observations, as long as a sparse 3D reconstruction is achievable. Our ongoing work will address rendering faces using our dense 3D shape models in order to compensate for illumination problems, and more thorough evaluations will be conducted towards pose, illumination, and expression invariant face recognition.

ACKNOWLEDGMENT

We would like to thank our sponsors 1.

REFERENCES

[1] C. T. Loop, Smooth Subdivision Surfaces Based on Triangles, M.S. Thesis, Department of Mathematics, University of Utah, August.
[2] C. de Boor, A Practical Guide to Splines, Applied Mathematical Sciences, Vol. 27.
[3] S. F. Wang and S. H. Lai, Efficient 3D Face Reconstruction from a Single 2D Image by Combining Statistical and Geometrical Information, Asian Conf. on Computer Vision.
[4] D. Fidaleo and G. Medioni, Model-Assisted 3D Face Reconstruction from Video, IEEE Int'l Workshop on Analysis and Modeling of Faces and Gestures (AMFG).
[5] V. Blanz and T. Vetter, Face recognition based on fitting a 3D morphable model, IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 25, no.
9.
[6] J. Heo, Generic Elastic Models for 2D Pose Synthesis and Face Recognition, Ph.D. thesis, Carnegie Mellon University.
[7] J. Heo and M. Savvides, In Between 3D Active Appearance Models and 3D Morphable Models, IEEE Conf. on Computer Vision and Pattern Recognition Workshops.
[8] K. Seshadri and M. Savvides, Robust modified active shape model for automatic facial landmark annotation of frontal faces, Proc. of the 3rd IEEE Int'l Conf. on Biometrics: Theory, Applications and Systems.
[9] T. F. Cootes, C. J. Taylor, D. Cooper, and J. Graham, Active Shape Models: Their Training and Application, Computer Vision and Image Understanding, vol. 61, no. 1.
[10] retrieved at
[11] U. Lin, G. Medioni, and J. Choi, Accurate 3D Face Reconstruction from Weakly Calibrated Wide Baseline Images with Profile Contours, Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition.
[12] J. Xiao, S. Baker, I. Matthews, and T. Kanade, Real-Time Combined 2D+3D Active Appearance Models, Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition.
[13] T. Cootes, G. Edwards, and C. Taylor, Active appearance models, Proc. of the European Conf. on Computer Vision, vol. 2.
[14] X. Hou, S. Z. Li, H. Zhang, and Q. Cheng, Direct appearance models, Proc. IEEE Conf. on Computer Vision and Pattern Recognition, vol. 1.
[15] I. Matthews and S. Baker, Active Appearance Models Revisited, Int'l Journal of Computer Vision, vol. 60, no. 2.
[16] T. Cootes, K. Walker, and C. Taylor, View-based active appearance models, IEEE Int'l Conf. on Automatic Face and Gesture Recognition.
[17] A. Pentland, B. Moghaddam, and T. Starner, View-Based and Modular Eigenspaces for Face Recognition, Proc. of the IEEE Computer Society Conf. on Computer Vision and Pattern Recognition.
[18] S. Romdhani, S. Gong, and A.
Psarrou, A multi-view non-linear active shape model using kernel PCA, 10th British Machine Vision Conf., vol. 2.
[19] J. J. Atick, P. A. Griffin, and A. N. Redlich, Statistical Approach to Shape from Shading: Reconstruction of 3D Face Surfaces from Single 2D Images, Network: Computation in Neural Systems, vol. 7, no. 1.
[20] B. K. P. Horn, Shape from Shading: A Method for Obtaining the Shape of a Smooth Opaque Object from One View, Ph.D. thesis, MIT.
[21] R. Zhang, P. S. Tsai, J. E. Cryer, and M. Shah, Shape from Shading: A Survey, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 21, no. 8.
[22] R. Dovgard and R. Basri, Statistical Symmetric Shape from Shading for 3D Structure Recovery of Faces, European Conference on Computer Vision (ECCV), vol. 3022.
[23] J. Xiao, J. Chai, and T. Kanade, A closed-form solution to non-rigid shape and motion recovery, International Journal of Computer Vision (IJCV), vol. 67.
[24] R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, Cambridge University Press.
[25] W. Zhao and R. Chellappa, Robust Face Recognition using Symmetric Shape-from-Shading, University of Maryland, CAR-TR-919.
[26] C. Tomasi and T. Kanade, Shape and motion from image streams under orthography: A factorization method, Int. Journal of Computer Vision, vol. 9, no. 2.
[27] R. Gross, I. Matthews, J. Cohn, T. Kanade, and S. Baker, Multi-PIE, Proc. of Int'l Conf. on Automatic Face and Gesture Recognition.

1 This research was funded by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), through the Army Research Laboratory (ARL). All statements of fact, opinion or conclusions contained herein are those of the authors and should not be construed as representing the official views or policies of IARPA, the ODNI, or the U.S. Government.
More informationA Survey of Light Source Detection Methods
A Survey of Light Source Detection Methods Nathan Funk University of Alberta Mini-Project for CMPUT 603 November 30, 2003 Abstract This paper provides an overview of the most prominent techniques for light
More informationGeneric Face Alignment Using an Improved Active Shape Model
Generic Face Alignment Using an Improved Active Shape Model Liting Wang, Xiaoqing Ding, Chi Fang Electronic Engineering Department, Tsinghua University, Beijing, China {wanglt, dxq, fangchi} @ocrserv.ee.tsinghua.edu.cn
More informationFace View Synthesis Across Large Angles
Face View Synthesis Across Large Angles Jiang Ni and Henry Schneiderman Robotics Institute, Carnegie Mellon University, Pittsburgh, PA 1513, USA Abstract. Pose variations, especially large out-of-plane
More informationAbstract We present a system which automatically generates a 3D face model from a single frontal image of a face. Our system consists of two component
A Fully Automatic System To Model Faces From a Single Image Zicheng Liu Microsoft Research August 2003 Technical Report MSR-TR-2003-55 Microsoft Research Microsoft Corporation One Microsoft Way Redmond,
More informationMulti-View AAM Fitting and Camera Calibration
To appear in the IEEE International Conference on Computer Vision Multi-View AAM Fitting and Camera Calibration Seth Koterba, Simon Baker, Iain Matthews, Changbo Hu, Jing Xiao, Jeffrey Cohn, and Takeo
More informationAccurate 3D Face and Body Modeling from a Single Fixed Kinect
Accurate 3D Face and Body Modeling from a Single Fixed Kinect Ruizhe Wang*, Matthias Hernandez*, Jongmoo Choi, Gérard Medioni Computer Vision Lab, IRIS University of Southern California Abstract In this
More informationTEXTURE OVERLAY ONTO NON-RIGID SURFACE USING COMMODITY DEPTH CAMERA
TEXTURE OVERLAY ONTO NON-RIGID SURFACE USING COMMODITY DEPTH CAMERA Tomoki Hayashi 1, Francois de Sorbier 1 and Hideo Saito 1 1 Graduate School of Science and Technology, Keio University, 3-14-1 Hiyoshi,
More informationFitting a Single Active Appearance Model Simultaneously to Multiple Images
Fitting a Single Active Appearance Model Simultaneously to Multiple Images Changbo Hu, Jing Xiao, Iain Matthews, Simon Baker, Jeff Cohn, and Takeo Kanade The Robotics Institute, Carnegie Mellon University
More informationDISTANCE MAPS: A ROBUST ILLUMINATION PREPROCESSING FOR ACTIVE APPEARANCE MODELS
DISTANCE MAPS: A ROBUST ILLUMINATION PREPROCESSING FOR ACTIVE APPEARANCE MODELS Sylvain Le Gallou*, Gaspard Breton*, Christophe Garcia*, Renaud Séguier** * France Telecom R&D - TECH/IRIS 4 rue du clos
More informationREAL-TIME FACE SWAPPING IN VIDEO SEQUENCES: MAGIC MIRROR
REAL-TIME FACE SWAPPING IN VIDEO SEQUENCES: MAGIC MIRROR Nuri Murat Arar1, Fatma Gu ney1, Nasuh Kaan Bekmezci1, Hua Gao2 and Hazım Kemal Ekenel1,2,3 1 Department of Computer Engineering, Bogazici University,
More informationImage Coding with Active Appearance Models
Image Coding with Active Appearance Models Simon Baker, Iain Matthews, and Jeff Schneider CMU-RI-TR-03-13 The Robotics Institute Carnegie Mellon University Abstract Image coding is the task of representing
More informationIllumination invariant face recognition and impostor rejection using different MINACE filter algorithms
Illumination invariant face recognition and impostor rejection using different MINACE filter algorithms Rohit Patnaik and David Casasent Dept. of Electrical and Computer Engineering, Carnegie Mellon University,
More informationModel-based 3D Shape Recovery from Single Images of Unknown Pose and Illumination using a Small Number of Feature Points
Model-based 3D Shape Recovery from Single Images of Unknown Pose and Illumination using a Small Number of Feature Points Ham M. Rara and Aly A. Farag CVIP Laboratory, University of Louisville {hmrara01,
More informationarxiv: v1 [cs.cv] 28 Sep 2018
Camera Pose Estimation from Sequence of Calibrated Images arxiv:1809.11066v1 [cs.cv] 28 Sep 2018 Jacek Komorowski 1 and Przemyslaw Rokita 2 1 Maria Curie-Sklodowska University, Institute of Computer Science,
More informationAutomatic Construction of Active Appearance Models as an Image Coding Problem
Automatic Construction of Active Appearance Models as an Image Coding Problem Simon Baker, Iain Matthews, and Jeff Schneider The Robotics Institute Carnegie Mellon University Pittsburgh, PA 1213 Abstract
More informationParametric Manifold of an Object under Different Viewing Directions
Parametric Manifold of an Object under Different Viewing Directions Xiaozheng Zhang 1,2, Yongsheng Gao 1,2, and Terry Caelli 3 1 Biosecurity Group, Queensland Research Laboratory, National ICT Australia
More informationCHAPTER 3 DISPARITY AND DEPTH MAP COMPUTATION
CHAPTER 3 DISPARITY AND DEPTH MAP COMPUTATION In this chapter we will discuss the process of disparity computation. It plays an important role in our caricature system because all 3D coordinates of nodes
More informationEnhanced Active Shape Models with Global Texture Constraints for Image Analysis
Enhanced Active Shape Models with Global Texture Constraints for Image Analysis Shiguang Shan, Wen Gao, Wei Wang, Debin Zhao, Baocai Yin Institute of Computing Technology, Chinese Academy of Sciences,
More informationFace Alignment Under Various Poses and Expressions
Face Alignment Under Various Poses and Expressions Shengjun Xin and Haizhou Ai Computer Science and Technology Department, Tsinghua University, Beijing 100084, China ahz@mail.tsinghua.edu.cn Abstract.
More informationTEXTURE OVERLAY ONTO NON-RIGID SURFACE USING COMMODITY DEPTH CAMERA
TEXTURE OVERLAY ONTO NON-RIGID SURFACE USING COMMODITY DEPTH CAMERA Tomoki Hayashi, Francois de Sorbier and Hideo Saito Graduate School of Science and Technology, Keio University, 3-14-1 Hiyoshi, Kohoku-ku,
More informationFacial Feature Points Tracking Based on AAM with Optical Flow Constrained Initialization
Journal of Pattern Recognition Research 7 (2012) 72-79 Received Oct 24, 2011. Revised Jan 16, 2012. Accepted Mar 2, 2012. Facial Feature Points Tracking Based on AAM with Optical Flow Constrained Initialization
More informationVehicle Dimensions Estimation Scheme Using AAM on Stereoscopic Video
Workshop on Vehicle Retrieval in Surveillance (VRS) in conjunction with 2013 10th IEEE International Conference on Advanced Video and Signal Based Surveillance Vehicle Dimensions Estimation Scheme Using
More informationVideo-Based Online Face Recognition Using Identity Surfaces
Video-Based Online Face Recognition Using Identity Surfaces Yongmin Li, Shaogang Gong and Heather Liddell Department of Computer Science, Queen Mary, University of London, London E1 4NS, UK Email: yongmin,sgg,heather
More informationRecovering 3D Facial Shape via Coupled 2D/3D Space Learning
Recovering 3D Facial hape via Coupled 2D/3D pace Learning Annan Li 1,2, higuang han 1, ilin Chen 1, iujuan Chai 3, and Wen Gao 4,1 1 Key Lab of Intelligent Information Processing of CA, Institute of Computing
More informationActive Wavelet Networks for Face Alignment
Active Wavelet Networks for Face Alignment Changbo Hu, Rogerio Feris, Matthew Turk Dept. Computer Science, University of California, Santa Barbara {cbhu,rferis,mturk}@cs.ucsb.edu Abstract The active appearance
More informationIntensity-Depth Face Alignment Using Cascade Shape Regression
Intensity-Depth Face Alignment Using Cascade Shape Regression Yang Cao 1 and Bao-Liang Lu 1,2 1 Center for Brain-like Computing and Machine Intelligence Department of Computer Science and Engineering Shanghai
More informationOn the Dimensionality of Deformable Face Models
On the Dimensionality of Deformable Face Models CMU-RI-TR-06-12 Iain Matthews, Jing Xiao, and Simon Baker The Robotics Institute Carnegie Mellon University 5000 Forbes Avenue Pittsburgh, PA 15213 Abstract
More informationPassive 3D Photography
SIGGRAPH 2000 Course on 3D Photography Passive 3D Photography Steve Seitz Carnegie Mellon University University of Washington http://www.cs cs.cmu.edu/~ /~seitz Visual Cues Shading Merle Norman Cosmetics,
More informationFlexible Calibration of a Portable Structured Light System through Surface Plane
Vol. 34, No. 11 ACTA AUTOMATICA SINICA November, 2008 Flexible Calibration of a Portable Structured Light System through Surface Plane GAO Wei 1 WANG Liang 1 HU Zhan-Yi 1 Abstract For a portable structured
More informationFace Re-Lighting from a Single Image under Harsh Lighting Conditions
Face Re-Lighting from a Single Image under Harsh Lighting Conditions Yang Wang 1, Zicheng Liu 2, Gang Hua 3, Zhen Wen 4, Zhengyou Zhang 2, Dimitris Samaras 5 1 The Robotics Institute, Carnegie Mellon University,
More informationIncreasing the Density of Active Appearance Models
Increasing the Density of Active Appearance Models Krishnan Ramnath ObjectVideo, Inc. Simon Baker Microsoft Research Iain Matthews Weta Digital Ltd. Deva Ramanan UC Irvine Abstract Active Appearance Models
More informationLight Field Appearance Manifolds
Light Field Appearance Manifolds Chris Mario Christoudias, Louis-Philippe Morency, and Trevor Darrell Computer Science and Artificial Intelligence Laboratory Massachussetts Institute of Technology Cambridge,
More informationUsing the Orthographic Projection Model to Approximate the Perspective Projection Model for 3D Facial Reconstruction
Using the Orthographic Projection Model to Approximate the Perspective Projection Model for 3D Facial Reconstruction Jin-Yi Wu and Jenn-Jier James Lien Department of Computer Science and Information Engineering,
More informationSynthesizing Realistic Facial Expressions from Photographs
Synthesizing Realistic Facial Expressions from Photographs 1998 F. Pighin, J Hecker, D. Lischinskiy, R. Szeliskiz and D. H. Salesin University of Washington, The Hebrew University Microsoft Research 1
More informationComputer Vision Lecture 17
Computer Vision Lecture 17 Epipolar Geometry & Stereo Basics 13.01.2015 Bastian Leibe RWTH Aachen http://www.vision.rwth-aachen.de leibe@vision.rwth-aachen.de Announcements Seminar in the summer semester
More informationStructured Light II. Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov
Structured Light II Johannes Köhler Johannes.koehler@dfki.de Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov Introduction Previous lecture: Structured Light I Active Scanning Camera/emitter
More informationComputer Vision Lecture 17
Announcements Computer Vision Lecture 17 Epipolar Geometry & Stereo Basics Seminar in the summer semester Current Topics in Computer Vision and Machine Learning Block seminar, presentations in 1 st week
More informationFactorization Method Using Interpolated Feature Tracking via Projective Geometry
Factorization Method Using Interpolated Feature Tracking via Projective Geometry Hideo Saito, Shigeharu Kamijima Department of Information and Computer Science, Keio University Yokohama-City, 223-8522,
More informationThe Template Update Problem
The Template Update Problem Iain Matthews, Takahiro Ishikawa, and Simon Baker The Robotics Institute Carnegie Mellon University Abstract Template tracking dates back to the 1981 Lucas-Kanade algorithm.
More informationStatistical Symmetric Shape from Shading for 3D Structure Recovery of Faces
Statistical Symmetric Shape from Shading for 3D Structure Recovery of Faces Roman Dovgard and Ronen Basri Dept. of Applied Mathematics and Computer Science, Weizmann Institute of Science, Rehovot 76100,
More informationAAM Based Facial Feature Tracking with Kinect
BULGARIAN ACADEMY OF SCIENCES CYBERNETICS AND INFORMATION TECHNOLOGIES Volume 15, No 3 Sofia 2015 Print ISSN: 1311-9702; Online ISSN: 1314-4081 DOI: 10.1515/cait-2015-0046 AAM Based Facial Feature Tracking
More informationSingle view-based 3D face reconstruction robust to self-occlusion
Lee et al. EURASIP Journal on Advances in Signal Processing 2012, 2012:176 RESEARCH Open Access Single view-based 3D face reconstruction robust to self-occlusion Youn Joo Lee 1, Sung Joo Lee 2, Kang Ryoung
More informationHead Frontal-View Identification Using Extended LLE
Head Frontal-View Identification Using Extended LLE Chao Wang Center for Spoken Language Understanding, Oregon Health and Science University Abstract Automatic head frontal-view identification is challenging
More informationA Factorization Method for Structure from Planar Motion
A Factorization Method for Structure from Planar Motion Jian Li and Rama Chellappa Center for Automation Research (CfAR) and Department of Electrical and Computer Engineering University of Maryland, College
More informationActive Appearance Models
Active Appearance Models Edwards, Taylor, and Cootes Presented by Bryan Russell Overview Overview of Appearance Models Combined Appearance Models Active Appearance Model Search Results Constrained Active
More informationHaresh D. Chande #, Zankhana H. Shah *
Illumination Invariant Face Recognition System Haresh D. Chande #, Zankhana H. Shah * # Computer Engineering Department, Birla Vishvakarma Mahavidyalaya, Gujarat Technological University, India * Information
More informationIllumination-Robust Face Recognition based on Gabor Feature Face Intrinsic Identity PCA Model
Illumination-Robust Face Recognition based on Gabor Feature Face Intrinsic Identity PCA Model TAE IN SEOL*, SUN-TAE CHUNG*, SUNHO KI**, SEONGWON CHO**, YUN-KWANG HONG*** *School of Electronic Engineering
More informationIEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 17, NO. 8, AUGUST /$ IEEE
IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 17, NO. 8, AUGUST 2008 1331 A Subspace Model-Based Approach to Face Relighting Under Unknown Lighting and Poses Hyunjung Shim, Student Member, IEEE, Jiebo Luo,
More information3D Computer Vision. Structured Light II. Prof. Didier Stricker. Kaiserlautern University.
3D Computer Vision Structured Light II Prof. Didier Stricker Kaiserlautern University http://ags.cs.uni-kl.de/ DFKI Deutsches Forschungszentrum für Künstliche Intelligenz http://av.dfki.de 1 Introduction
More informationIMPROVED FACE RECOGNITION USING ICP TECHNIQUES INCAMERA SURVEILLANCE SYSTEMS. Kirthiga, M.E-Communication system, PREC, Thanjavur
IMPROVED FACE RECOGNITION USING ICP TECHNIQUES INCAMERA SURVEILLANCE SYSTEMS Kirthiga, M.E-Communication system, PREC, Thanjavur R.Kannan,Assistant professor,prec Abstract: Face Recognition is important
More informationAn Active Illumination and Appearance (AIA) Model for Face Alignment
An Active Illumination and Appearance (AIA) Model for Face Alignment Fatih Kahraman, Muhittin Gokmen Istanbul Technical University, Computer Science Dept., Turkey {fkahraman, gokmen}@itu.edu.tr Sune Darkner,
More information3D Face Texture Modeling from Uncalibrated Frontal and Profile Images
3D Face Texture Modeling from Uncalibrated Frontal and Profile Images Hu Han and Anil K. Jain Department of Computer Science and Engineering Michigan State University, East Lansing, MI, U.S.A. {hhan,jain}@cse.msu.edu
More informationReal-time non-rigid driver head tracking for driver mental state estimation
Carnegie Mellon University Research Showcase @ CMU Robotics Institute School of Computer Science 2004 Real-time non-rigid driver head tracking for driver mental state estimation Simon Baker Carnegie Mellon
More informationDepth-Layer-Based Patient Motion Compensation for the Overlay of 3D Volumes onto X-Ray Sequences
Depth-Layer-Based Patient Motion Compensation for the Overlay of 3D Volumes onto X-Ray Sequences Jian Wang 1,2, Anja Borsdorf 2, Joachim Hornegger 1,3 1 Pattern Recognition Lab, Friedrich-Alexander-Universität
More information3D Morphable Model Parameter Estimation
3D Morphable Model Parameter Estimation Nathan Faggian 1, Andrew P. Paplinski 1, and Jamie Sherrah 2 1 Monash University, Australia, Faculty of Information Technology, Clayton 2 Clarity Visual Intelligence,
More informationPose Normalization via Learned 2D Warping for Fully Automatic Face Recognition
A ASTHANA ET AL: POSE NORMALIZATION VIA LEARNED D WARPING 1 Pose Normalization via Learned D Warping for Fully Automatic Face Recognition Akshay Asthana 1, aasthana@rsiseanueduau Michael J Jones 1 and
More informationObject. Radiance. Viewpoint v
Fisher Light-Fields for Face Recognition Across Pose and Illumination Ralph Gross, Iain Matthews, and Simon Baker The Robotics Institute, Carnegie Mellon University 5000 Forbes Avenue, Pittsburgh, PA 15213
More informationGeneral Pose Face Recognition Using Frontal Face Model
General Pose Face Recognition Using Frontal Face Model Jean-Yves Guillemaut 1, Josef Kittler 1, Mohammad T. Sadeghi 2, and William J. Christmas 1 1 School of Electronics and Physical Sciences, University
More informationSparse Shape Registration for Occluded Facial Feature Localization
Shape Registration for Occluded Facial Feature Localization Fei Yang, Junzhou Huang and Dimitris Metaxas Abstract This paper proposes a sparsity driven shape registration method for occluded facial feature
More informationLOCAL APPEARANCE BASED FACE RECOGNITION USING DISCRETE COSINE TRANSFORM
LOCAL APPEARANCE BASED FACE RECOGNITION USING DISCRETE COSINE TRANSFORM Hazim Kemal Ekenel, Rainer Stiefelhagen Interactive Systems Labs, University of Karlsruhe Am Fasanengarten 5, 76131, Karlsruhe, Germany
More informationHand-Eye Calibration from Image Derivatives
Hand-Eye Calibration from Image Derivatives Abstract In this paper it is shown how to perform hand-eye calibration using only the normal flow field and knowledge about the motion of the hand. The proposed
More informationHuman pose estimation using Active Shape Models
Human pose estimation using Active Shape Models Changhyuk Jang and Keechul Jung Abstract Human pose estimation can be executed using Active Shape Models. The existing techniques for applying to human-body
More informationProject Updates Short lecture Volumetric Modeling +2 papers
Volumetric Modeling Schedule (tentative) Feb 20 Feb 27 Mar 5 Introduction Lecture: Geometry, Camera Model, Calibration Lecture: Features, Tracking/Matching Mar 12 Mar 19 Mar 26 Apr 2 Apr 9 Apr 16 Apr 23
More informationFace analysis : identity vs. expressions
Face analysis : identity vs. expressions Hugo Mercier 1,2 Patrice Dalle 1 1 IRIT - Université Paul Sabatier 118 Route de Narbonne, F-31062 Toulouse Cedex 9, France 2 Websourd 3, passage André Maurois -
More informationSupplementary Material for Synthesizing Normalized Faces from Facial Identity Features
Supplementary Material for Synthesizing Normalized Faces from Facial Identity Features Forrester Cole 1 David Belanger 1,2 Dilip Krishnan 1 Aaron Sarna 1 Inbar Mosseri 1 William T. Freeman 1,3 1 Google,
More informationRENDERING AND ANALYSIS OF FACES USING MULTIPLE IMAGES WITH 3D GEOMETRY. Peter Eisert and Jürgen Rurainsky
RENDERING AND ANALYSIS OF FACES USING MULTIPLE IMAGES WITH 3D GEOMETRY Peter Eisert and Jürgen Rurainsky Fraunhofer Institute for Telecommunications, Heinrich-Hertz-Institute Image Processing Department
More informationFacial Image Synthesis 1 Barry-John Theobald and Jeffrey F. Cohn
Facial Image Synthesis Page 1 of 5 Facial Image Synthesis 1 Barry-John Theobald and Jeffrey F. Cohn 1 Introduction Facial expression has been central to the
More informationOn-line, Incremental Learning of a Robust Active Shape Model
On-line, Incremental Learning of a Robust Active Shape Model Michael Fussenegger 1, Peter M. Roth 2, Horst Bischof 2, Axel Pinz 1 1 Institute of Electrical Measurement and Measurement Signal Processing
More informationTranslation Symmetry Detection: A Repetitive Pattern Analysis Approach
2013 IEEE Conference on Computer Vision and Pattern Recognition Workshops Translation Symmetry Detection: A Repetitive Pattern Analysis Approach Yunliang Cai and George Baciu GAMA Lab, Department of Computing
More information3D-MAM: 3D Morphable Appearance Model for Efficient Fine Head Pose Estimation from Still Images
3D-MAM: 3D Morphable Appearance Model for Efficient Fine Head Pose Estimation from Still Images Markus Storer, Martin Urschler and Horst Bischof Institute for Computer Graphics and Vision, Graz University
More informationFACIAL ANIMATION FROM SEVERAL IMAGES
International Archives of Photogrammetry and Remote Sensing. Vol. XXXII, Part 5. Hakodate 1998 FACIAL ANIMATION FROM SEVERAL IMAGES Yasuhiro MUKAIGAWAt Yuichi NAKAMURA+ Yuichi OHTA+ t Department of Information
More informationLight-Invariant Fitting of Active Appearance Models
Light-Invariant Fitting of Active Appearance Models Daniel Pizarro Alcalà University- Madrid Julien Peyras LASMEA- Clermont-Ferrand Adrien Bartoli LASMEA- Clermont-Ferrand Abstract This paper deals with
More informationOcclusion Robust Multi-Camera Face Tracking
Occlusion Robust Multi-Camera Face Tracking Josh Harguess, Changbo Hu, J. K. Aggarwal Computer & Vision Research Center / Department of ECE The University of Texas at Austin harguess@utexas.edu, changbo.hu@gmail.com,
More informationFacial Recognition Using Active Shape Models, Local Patches and Support Vector Machines
Facial Recognition Using Active Shape Models, Local Patches and Support Vector Machines Utsav Prabhu ECE Department Carnegie Mellon University 5000 Forbes Avenue Pittsburgh, PA-15213 uprabhu@andrew.cmu.edu
More informationcoding of various parts showing different features, the possibility of rotation or of hiding covering parts of the object's surface to gain an insight
Three-Dimensional Object Reconstruction from Layered Spatial Data Michael Dangl and Robert Sablatnig Vienna University of Technology, Institute of Computer Aided Automation, Pattern Recognition and Image
More informationAccurate Reconstruction by Interpolation
Accurate Reconstruction by Interpolation Leow Wee Kheng Department of Computer Science School of Computing National University of Singapore International Conference on Inverse Problems and Related Topics
More informationStereo. 11/02/2012 CS129, Brown James Hays. Slides by Kristen Grauman
Stereo 11/02/2012 CS129, Brown James Hays Slides by Kristen Grauman Multiple views Multi-view geometry, matching, invariant features, stereo vision Lowe Hartley and Zisserman Why multiple views? Structure
More informationFACE RECOGNITION USING INDEPENDENT COMPONENT
Chapter 5 FACE RECOGNITION USING INDEPENDENT COMPONENT ANALYSIS OF GABORJET (GABORJET-ICA) 5.1 INTRODUCTION PCA is probably the most widely used subspace projection technique for face recognition. A major
More information3D Face Modelling Under Unconstrained Pose & Illumination
David Bryan Ottawa-Carleton Institute for Biomedical Engineering Department of Systems and Computer Engineering Carleton University January 12, 2009 Agenda Problem Overview 3D Morphable Model Fitting Model
More informationOn Modeling Variations for Face Authentication
On Modeling Variations for Face Authentication Xiaoming Liu Tsuhan Chen B.V.K. Vijaya Kumar Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA 15213 xiaoming@andrew.cmu.edu
More informationPassive driver gaze tracking with active appearance models
Carnegie Mellon University Research Showcase @ CMU Robotics Institute School of Computer Science 2004 Passive driver gaze tracking with active appearance models Takahiro Ishikawa Carnegie Mellon University
More informationFacial Feature Detection
Facial Feature Detection Rainer Stiefelhagen 21.12.2009 Interactive Systems Laboratories, Universität Karlsruhe (TH) Overview Resear rch Group, Universität Karlsruhe (TH H) Introduction Review of already
More informationAn Algorithm for Seamless Image Stitching and Its Application
An Algorithm for Seamless Image Stitching and Its Application Jing Xing, Zhenjiang Miao, and Jing Chen Institute of Information Science, Beijing JiaoTong University, Beijing 100044, P.R. China Abstract.
More informationRobust Estimation of Albedo for Illumination-invariant Matching and Shape Recovery
Robust Estimation of Albedo for Illumination-invariant Matching and Shape Recovery Soma Biswas, Gaurav Aggarwal and Rama Chellappa Center for Automation Research, UMIACS Dept. of ECE, Dept. of Computer
More informationFace Recognition Under Varying Illumination Based on MAP Estimation Incorporating Correlation Between Surface Points
Face Recognition Under Varying Illumination Based on MAP Estimation Incorporating Correlation Between Surface Points Mihoko Shimano 1, Kenji Nagao 1, Takahiro Okabe 2,ImariSato 3, and Yoichi Sato 2 1 Panasonic
More informationTextureless Layers CMU-RI-TR Qifa Ke, Simon Baker, and Takeo Kanade
Textureless Layers CMU-RI-TR-04-17 Qifa Ke, Simon Baker, and Takeo Kanade The Robotics Institute Carnegie Mellon University 5000 Forbes Avenue Pittsburgh, PA 15213 Abstract Layers are one of the most well
More informationBoosting Sex Identification Performance
Boosting Sex Identification Performance Shumeet Baluja, 2 Henry Rowley shumeet@google.com har@google.com Google, Inc. 2 Carnegie Mellon University, Computer Science Department Abstract This paper presents
More informationSUBDIVISION ALGORITHMS FOR MOTION DESIGN BASED ON HOMOLOGOUS POINTS
SUBDIVISION ALGORITHMS FOR MOTION DESIGN BASED ON HOMOLOGOUS POINTS M. Hofer and H. Pottmann Institute of Geometry Vienna University of Technology, Vienna, Austria hofer@geometrie.tuwien.ac.at, pottmann@geometrie.tuwien.ac.at
More informationFace Recognition Technology Based On Image Processing Chen Xin, Yajuan Li, Zhimin Tian
4th International Conference on Machinery, Materials and Computing Technology (ICMMCT 2016) Face Recognition Technology Based On Image Processing Chen Xin, Yajuan Li, Zhimin Tian Hebei Engineering and
More informationSegmentation and Tracking of Partial Planar Templates
Segmentation and Tracking of Partial Planar Templates Abdelsalam Masoud William Hoff Colorado School of Mines Colorado School of Mines Golden, CO 800 Golden, CO 800 amasoud@mines.edu whoff@mines.edu Abstract
More informationA Method of Automated Landmark Generation for Automated 3D PDM Construction
A Method of Automated Landmark Generation for Automated 3D PDM Construction A. D. Brett and C. J. Taylor Department of Medical Biophysics University of Manchester Manchester M13 9PT, Uk adb@sv1.smb.man.ac.uk
More informationDTU Technical Report: ARTTS
DTU Technical Report: ARTTS Title: Author: Project: Face pose tracking and recognition and 3D cameras Rasmus Larsen ARTTS Date: February 10 th, 2006 Contents Contents...2 Introduction...2 State-of-the
More information