Symmetry-based Face Pose Estimation from a Single Uncalibrated View


Vinod Pathangay and Sukhendu Das
Visualization and Perception Laboratory, Indian Institute of Technology Madras, Chennai, India
vinod@cse.iitm.ernet.in, sdas@iitm.ac.in

T. Greiner
Faculty of Engineering, Pforzheim University, Tiefenbronner Str., Pforzheim, Germany
thomas.greiner@hs-pforzheim.de

Abstract

In this paper, a geometric method for estimating the face pose (roll and yaw angles) from a single uncalibrated view is presented. The symmetric structure of the human face is exploited by taking the mirror image (horizontal flip) of a test face image as a virtual second view. Facial feature point correspondences are established between the given test image and its mirror image using an active appearance model. Thus, the face pose estimation problem is cast as a two-view rotation estimation problem. By using the bilateral symmetry, roll and yaw angles are estimated without the need for camera calibration. The proposed pose estimation method is evaluated on synthetic and natural face datasets, and the results are compared with an eigenspace-based method. The proposed symmetry-based method shows performance comparable to the eigenspace-based method on both synthetic and real face image datasets.

1. Introduction

Computer vision algorithms that analyze human faces have to deal with significant variations in pose. The human ability to maintain visual constancy across different poses of the face is one that algorithms for recognizing faces, facial expressions and gestures must emulate. The pose of the face (in terms of pitch, yaw and roll angles) can thus be a significant input to applications that analyze human faces. The human face exhibits near bilateral symmetry: although facial symmetry does not hold at a fine level of detail, it is apparent at a coarser level. In this paper, we exploit this coarse symmetry of faces to estimate the roll and yaw angles. In order to exploit symmetry, the test face image is flipped about the vertical axis to get a mirror image.
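As a minimal numpy sketch (illustrative only, not the authors' implementation; the function names `mirror_view` and `mirror_x` are ours), the virtual second view is just a left-right flip, and a feature at column x maps to column W − 1 − x in the mirror image:

```python
import numpy as np

def mirror_view(image):
    """Virtual second view: flip the image about its vertical (center) axis."""
    return image[:, ::-1]

def mirror_x(x, width):
    """A feature point at column x maps to column width - 1 - x in the mirror."""
    return width - 1 - x

img = np.arange(12).reshape(3, 4)                  # toy 3x4 single-channel "image"
flipped = mirror_view(img)
assert np.array_equal(flipped[:, 0], img[:, 3])    # leftmost column <- rightmost
assert mirror_x(0, img.shape[1]) == 3
```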
This mirror image is treated as a virtual second view of the face after rotation. The pose estimation problem is therefore cast as a rotation estimation problem from two views (either with the face stationary and two convergent cameras, or with a single camera and a rotating face). The block diagram of the proposed method is shown in Fig. 1. The input image (obtained after applying a face detection algorithm [19]) is flipped about the vertical axis at the center of the image to obtain a corresponding mirror image. Facial feature points are located on both images using an active appearance model [12] to obtain correspondences, which are used as input to the rotation model for estimating the roll and yaw angles.

This paper is organized as follows. In Section 2, an overview of the related work on face pose estimation is given. In Section 3, the rotation model for pose estimation is derived. In Section 4, we discuss the experimental results of the proposed pose estimation technique on synthetic and real face image (public domain) datasets; we also compare the accuracy of the proposed method against a baseline eigenspace-based pose estimation method for each dataset. In Section 5, we discuss the limitations of the proposed method as well as further possible improvements of this work.

Figure 1: Block diagram of the proposed method (input image → flip → feature point localization on test and mirror images → correspondences → rotation → roll, yaw).

2. Related work

An extensive list of previous work on face pose estimation can be obtained from [1]. Most work on face pose estimation can be classified into appearance-based, feature-based and geometry-based methods. Appearance-based methods use linear subspace decomposition and other non-linear variants to model appearances of different poses of the face [2], [3], [4], [5], [6].
The method presented in [2] uses a weighted linear combination of local linear models in a pose parameter space. In [3], an independent component analysis (ICA) based approach is presented for learning view-specific subspace representations for different face poses in an unsupervised manner. A nonlinear interpolation method to estimate the head pose between two training views using Fisher manifold learning is presented in [5]. In [6], biased manifold embedding is proposed for estimating face pose, under the assumption that face images lie on a smooth lower-dimensional manifold in a higher-dimensional feature space. Appearance-based methods can be used for large pose angles (−90º to +90º) with a step of 2º (as reported in [6]); however, training images are necessary for each step angle. In feature-based methods, the pose-specific properties of certain points on the face (eyes, nose, etc.) are represented as feature templates. One of the commonly used features is the Gabor jet [7], which is derived from a set of linear filter operations in the form of convolutions of the image with a set of Gabor wavelets with parameterized orientations and wavelengths. Such points and their features can be connected to form pose-specific attribute graphs or bunch graphs [8], [7]. Another feature-based method uses the iterative closest point algorithm to match facial feature curves and edges with a generic template [20]. Several low-level features, such as colour, area, and moments of the segmented face region, have also been used [21]. Geometric methods, on the other hand, depend on the relationship between the locations of certain feature points in the face images, and derive an analytical solution for the face pose angles. Epipolar geometry and a weak-perspective camera model are used to obtain face pose by tracking facial feature points in [9]. Facial asymmetry is used for coarse-pose estimation in [10], where an iterative 2D-to-3D geometric model-based matching is used to obtain the fine pose.
In [11], a relation between the facial feature locations in the image and the facial normal and gaze direction is derived from ratios of lengths between the feature points, using a weak-perspective projection. Geometric methods have been used to obtain continuous values of face pose. Some geometric methods (e.g. [9], [24]) use multiple views of the moving face and track features between views to estimate rotation. In [24], the symmetry of the face is used to compute the rotation angles from differences in left-right feature projections. Other methods ([10], [11]) locate specific facial features and estimate pose from a single view based on the spatial relationship between the feature locations. In this paper, we present a geometric method for estimating pose from a single view using the symmetric property of the face. We use an active appearance model (AAM) for localization of facial feature points [12] and a rotation model to estimate roll and yaw. In the following section, we discuss the rotation model that exploits the symmetry of the face for pose estimation.

3. Rotation model

As shown in the block diagram in Fig. 1, the second virtual view is synthesized by flipping the given test face image about its vertical axis. Here we use the property of bilateral symmetry of the face, whereby reflection appears to be equivalent to a rotational transformation. Using this assumption, we describe two rotation models for estimating the roll and yaw angles of the face.

3.1 Roll estimation

Roll is essentially in-plane rotation of the face, and is modeled as a 2D rotation. Fig. 2 shows the in-plane 2D rotation between the test image and its mirror image; the roll angle with respect to the vertical axis (shown dashed) is half the rotation angle (α/2). Let the facial feature point correspondences from the test image and its mirror image be represented as x_i and x'_i (image coordinates in homogeneous form).
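The constrained fit of the 2D rotation can be sketched as follows (numpy; illustrative, not the authors' code — here the constraint a² + b² = 1 is imposed by normalizing after an unconstrained least-squares solve, one simple way among several):

```python
import numpy as np

def estimate_roll(pts, pts_mirror):
    """Fit x'_i = R(alpha) x_i with R = [[a, -b], [b, a]] by least squares,
    then enforce a^2 + b^2 = 1 by normalizing; roll is alpha / 2 (degrees)."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.zeros((2 * len(pts), 2))
    A[0::2, 0], A[0::2, 1] = x, -y            # rows for x' = a*x - b*y
    A[1::2, 0], A[1::2, 1] = y, x             # rows for y' = b*x + a*y
    (a, b), *_ = np.linalg.lstsq(A, pts_mirror.reshape(-1), rcond=None)
    a, b = np.array([a, b]) / np.hypot(a, b)  # constraint a^2 + b^2 = 1
    return np.degrees(np.arctan2(b, a)) / 2.0 # roll = alpha / 2

# synthetic check: correspondences rotated by 30 degrees -> roll of 15 degrees
theta = np.radians(30.0)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
pts = np.random.default_rng(0).normal(size=(10, 2))
assert abs(estimate_roll(pts, pts @ R.T) - 15.0) < 1e-6
```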
The relation between correspondences for pure roll is given by

    x'_i = [ a, -b, 0;  b, a, 0;  0, 0, 1 ] x_i    (1)

where a and b are the cosine and sine of the 2D rotation angle α (as shown in Fig. 2). With a sufficient number of correspondences, an over-determined linear system of equations can be set up to estimate a and b, with the additional constraint that a² + b² = 1.

Figure 2: Calculation of roll: α is the 2D rotation angle between the test image (left) and the mirror image (right). The roll angle with respect to the vertical axis is half the rotation angle (α/2).

3.2 Yaw estimation

Fig. 4 illustrates the symmetry assumption used for estimating yaw. The test image and its mirror image are

considered as two different views of the face: either taken from two different convergent cameras, or taken from a stationary camera while the face undergoes a 3D yaw rotation. We use the latter assumption of a stationary camera and rotating face, as it makes the formulation simpler.

Figure 3: Relative placement of camera and face; the camera center is at the origin.

Figure 4: Calculation of yaw: β is the 3D rotation angle between the test image (left) and the mirror image (right). The yaw angle with respect to the frontal axis (dashed) is half the rotation angle (β/2).

Unlike roll, yaw rotations are in-depth, and therefore we use a camera model, since our measurements of points on the 3D face are made in 2D images. We use a canonical perspective camera model that is general enough for most types of facial images. Also, we do not use any calibration information, as most face images extracted from the web or from personal image collections are taken with various cameras of unknown calibration. We follow the notation of [14]. A 3D point X on the face is imaged as x and x', before and after flipping (or rotation) respectively. As shown in Fig. 3, we place the camera such that its center coincides with the origin of the world coordinates. The face is located along the negative Z-axis at a displacement t_z. We assume that, due to symmetry, flipping the face image is equivalent to a rotation of the face about an axis parallel to the Y-axis located at t_z from the origin. This motion involves: i) translation of the face to the origin along the Z-axis (M_1), ii) rotation of the face about the Y-axis (M_2, Fig. 3), and iii) translation back to the original location at a distance t_z from the origin along the negative Z-axis (M_3). (M_1, M_2 and M_3 are 4×4 motion matrices.)
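As a quick numerical check of the three-step motion just listed (numpy; illustrative, with hypothetical values for β and t_z), composing the matrices and projecting through the canonical camera shows that the induced image-to-image map does not depend on t_z:

```python
import numpy as np

def yaw_motion(beta, t_z):
    """Compose the three motions: translate the face (at z = -t_z) to the
    origin (M1), rotate about the Y-axis (M2), translate back (M3).
    M1 acts first, so the product is M3 @ M2 @ M1."""
    c, d = np.cos(beta), np.sin(beta)
    M1 = np.eye(4); M1[2, 3] = +t_z
    M2 = np.eye(4); M2[0, 0] = c; M2[0, 2] = d; M2[2, 0] = -d; M2[2, 2] = c
    M3 = np.eye(4); M3[2, 3] = -t_z
    return M3 @ M2 @ M1

# canonical camera P = [I | 0] and its pseudo-inverse
P = np.hstack([np.eye(3), np.zeros((3, 1))])
P_pinv = np.linalg.pinv(P)

beta = np.radians(40.0)
H_near = P @ yaw_motion(beta, t_z=5.0) @ P_pinv
H_far = P @ yaw_motion(beta, t_z=50.0) @ P_pinv
assert np.allclose(H_near, H_far)          # the 2D relation is independent of t_z
c, d = np.cos(beta), np.sin(beta)
assert np.allclose(H_near, [[c, 0, d], [0, 1, 0], [-d, 0, c]])  # pure-yaw rotation
```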
Therefore the motion matrix is of the form M = M_1 M_2 M_3, which is expanded as follows:

    M = [ c, 0, d, d t_z;  0, 1, 0, 0;  -d, 0, c, (c-1) t_z;  0, 0, 0, 1 ]    (2)

Here c and d are the cosine and sine of the 3D rotation angle β (as shown in Fig. 4). The projection of a 3D point X on the test image is x = PX, where P is the 3×4 canonical perspective camera matrix of the form [I | 0]. After rotation, we get x' = PMX. The relation between the correspondences x and x' is given by

    x' = P M P⁺ x    (3)

where P⁺ is the pseudo-inverse of the matrix P. On substituting M and P in (3), we get

    x' = [ c, 0, d;  0, 1, 0;  -d, 0, c ] x    (4)

This shows that the relationship between x and x' is independent of t_z. In order to have a robust estimate of yaw, we use a bi-directional error function given by

    e = Σ_i ( ||x'_i - R x_i||² + ||x_i - Rᵀ x'_i||² )    (5)

where i is the point index and R is the 3×3 rotation matrix for pure yaw as in (4). The error function is iteratively minimized over a range of possible yaw angles (−45º to +45º). This range has been considered because all the facial features about the line of symmetry of the face (not of the image) can be localized in this range. For yaw angles greater than 45º, the natural self-occlusion of the face hinders localization of facial features on both sides of the nose (or line of symmetry). This has been observed empirically over a set of images/faces. For the angle β corresponding to the minimum error, the yaw angle with respect to the camera principal axis is β/2 (as shown in Fig. 4). In the following section, we show experimental results of the proposed method.

4. Experimental results

We evaluated the proposed pose estimation method on two synthetic and one natural face image dataset with labeled pose variations. We also compared our yaw estimation results with a baseline eigenspace-based pose estimation method, a simplified version of [15].

4.1 Face datasets used

Different datasets were used to estimate roll and yaw. As the roll estimation method is relatively simple compared to yaw, we present results mainly for yaw estimation and show limited results for roll estimation. For evaluating roll estimation, we used rotated frontal images as shown in Fig. 5. For evaluating yaw estimation, three face datasets were used: i) the MPI face dataset [16], consisting of 100 male and 100 female synthetic face images in three poses (−30º, 0º and +30º); we used images of two male and two female subjects for training and the rest for testing. ii) the MIT-CBCL face recognition dataset [17], consisting of 10 subjects with face pose (yaw) ranging from −32º to 0º in steps of 4º; we used the synthetically rendered face images from the training-synthetic directory with constant lighting. iii) the INRIA head pose dataset [18], consisting of real image sequences of 10 subjects, two sessions each, with combined yaw and pitch variations; we used the subset of images with pure yaw in the range −45º to +45º in steps of 15º. For both the MIT-CBCL and INRIA head pose datasets, we used the first two subjects for training and the rest for testing.

Figure 5: Synthetic images used for testing roll estimation.

Figure 6: Result of facial feature detection using an active appearance model on a test image (left) and its mirror image (right). The yaw angle in this case is 44º.
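The minimization of the bidirectional error (5) described in Section 3.2 can be sketched as a simple grid search (numpy; illustrative, not the authors' code — the 0.5º step is our choice, the paper only specifies iterative minimization over −45º to +45º):

```python
import numpy as np

def estimate_yaw(x, x_mirror, step_deg=0.5):
    """Grid search for the yaw rotation minimizing the bidirectional error.

    x, x_mirror: (N, 3) homogeneous feature coordinates from the test image
    and its mirror view. Returns the estimated yaw (beta/2) in degrees."""
    best_beta, best_err = 0.0, np.inf
    for beta_deg in np.arange(-45.0, 45.0 + step_deg, step_deg):
        b = np.radians(beta_deg)
        c, d = np.cos(b), np.sin(b)
        R = np.array([[c, 0.0, d], [0.0, 1.0, 0.0], [-d, 0.0, c]])
        err = (np.sum((x_mirror - x @ R.T) ** 2)     # ||x'_i - R x_i||^2 terms
               + np.sum((x - x_mirror @ R) ** 2))    # ||x_i - R^T x'_i||^2 terms
        if err < best_err:
            best_beta, best_err = beta_deg, err
    return best_beta / 2.0

# synthetic check: a face rotated by beta = 20 degrees should give yaw = 10
b = np.radians(20.0)
R_true = np.array([[np.cos(b), 0, np.sin(b)], [0, 1, 0], [-np.sin(b), 0, np.cos(b)]])
pts = np.concatenate([np.random.default_rng(1).normal(size=(8, 2)),
                      np.ones((8, 1))], axis=1)
assert abs(estimate_yaw(pts, pts @ R_true.T) - 10.0) < 1e-9
```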
4.2 Pre-processing: Facial feature localization

The face region is automatically detected using the Haar-cascade technique [19]; we use the public-domain OpenCV implementation for extracting faces. Next, facial features are localized and correspondences established between the test face and its mirror image. Feature-point matching using the Harris [22] and SIFT [23] methods produces highly erroneous matches for faces and their mirror images. In order to avoid such errors, we localize facial features using active appearance models (AAM), which locate feature points on the face image based on global structural constraints. We use a public-domain implementation of active appearance models (aam-api [13]) trained on face contours for facial feature localization. We trained the AAM with 18-point contours (eyebrows, eyes and lips) for the range of poses in which both eyes are visible (i.e. the magnitude of the yaw angle is up to 45º). We did not use any contours on the nose, as the nose exhibits the largest variation across poses. Fig. 6 shows the result of the AAM search applied on a face and its mirror image, with the correspondences overlaid. It can be observed that point 1 is reflected to point 10 in the mirror image. The AAM technique however maintains the structural relationship, and thus point 1 in both images corresponds to the first point on the left eyebrow (as viewed). The X-coordinate of the respective (face-centric) axial line (passing through the lower or upper middle lip point) is subtracted from the X-coordinates of all feature points in each image; this normalizes any relative translation induced by flipping. It may also be noted that the AAM search is performed (twice) on both images, and therefore the correspondences between the feature points localized on the test and mirror images are not related by a mere reflection.
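The axial-line normalization described above can be sketched as follows (numpy; illustrative only — the axial index and toy coordinates are hypothetical, and in the paper the axial point is the lower or upper middle lip point):

```python
import numpy as np

def normalize_to_axial_line(points, axial_idx):
    """Subtract the X-coordinate of the face-centric axial point from every
    feature point's X-coordinate, removing the translation induced by flipping."""
    pts = np.asarray(points, dtype=float).copy()
    pts[:, 0] -= pts[axial_idx, 0]
    return pts

pts = np.array([[10.0, 5.0], [30.0, 40.0], [20.0, 60.0]])
norm = normalize_to_axial_line(pts, axial_idx=2)   # treat point 2 as the axial point
assert norm[2, 0] == 0.0                           # axial point is now the X-origin
assert np.array_equal(norm[:, 1], pts[:, 1])       # Y-coordinates are unchanged
```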

Figure 7: Results of the symmetry-based roll estimation method on different rotations of a frontal face. (Blue o show the estimated roll and the red + show the ground truth.)

4.3 Roll estimation

Fig. 7 shows the results of roll estimation on a set of face images rotated at 20º intervals. An active appearance model is used to localize facial feature points in the test image and its mirror image. The correspondences are used to estimate the in-plane rotation, and the roll angle is taken as half the rotation angle, as discussed in Sec. 3. It can also be observed in Fig. 7 that there is an offset between the estimated roll and the ground truth. This is because of the tilt of the center face image towards the left side, which propagates to all images.

4.4 Yaw estimation

The baseline system used for comparison is an eigenspace-based method (similar to [15]) that classifies the face into classes of different yaw angles. The training face images for the eigenspace-based method are the same as those used for training the AAM in the symmetry-based method. For each face dataset, training face images with different values of yaw are resized to 32×32 and reordered to produce a 1024-dimensional column vector. Principal component analysis (PCA) is used to reduce the dimensionality of the feature vector: the projections of a face on the first six eigenvectors form its feature vector, and nearest-neighbour classification is used for assigning the pose of a test face image.

Fig. 8 shows the result of yaw estimation for a single subject of the MPI face dataset; the blue o indicate the result of the symmetry-based method and the red + indicate the ground truth.

Figure 8: Result of yaw estimation on a single subject of the MPI face dataset. (Yaw angles in degrees.)

Table 1 shows a comparison of the symmetry-based and eigenspace-based methods. The errors
in yaw estimation are averaged over 196 subjects (4 of the 200 subjects are used for training). It can be noted that the eigenspace-based method performs better than the symmetry-based method on the MPI dataset. This is because faces with −30º, 0º and +30º yaw form three well-separated classes in eigenspace that are easily discriminated by the nearest-neighbour classifier. Fig. 9 shows the result of yaw estimation for subject 5 of the MIT-CBCL face recognition database. It can be observed that the faces with yaw angles −32º and −28º are wrongly estimated, because the facial features are not correctly localized by the AAM. Therefore this

leads to wrong correspondences and thus an erroneous yaw estimate.

In Table 2, a comparison of the average yaw estimation error (averaged over all subjects) for the proposed symmetry-based method and the eigenspace-based method is shown; the first four subjects were used for training the AAM and the eigenspace-based method. It can be observed that the error for the eigenspace-based method is almost the same (around 6º) for all yaw angles. This is because the pose angles are estimated by nearest-neighbour classification, and the misclassification rate (error in the yaw estimate) remains almost the same due to the similarity between faces at 4º yaw intervals. The symmetry-based method, on the other hand, shows a higher average error for larger yaw angles.

Fig. 10 shows the yaw estimates for all test subjects in the INRIA head pose dataset. It can be noted that there is a larger error for larger angles. This trend is also observed for the MIT-CBCL dataset (Table 2), and can be justified as follows: i) for larger yaw angles, the face images deviate from the ideal canonical camera assumption, due to the larger effect of foreshortening; ii) a further source of error is inaccuracy in the correspondences at large angles. Hence larger errors are observed for larger angles.

Figure 10: Results of yaw estimation on different subjects of the INRIA head pose dataset. Blue circles show estimated values and red + show ground truth. (Yaw angles in degrees.)

The comparison of the proposed symmetry-based method with the eigenspace-based method is shown in Table 3, in terms of the average error in yaw estimation over all subjects. Here the performance of the baseline eigenspace-based method is relatively poor compared to the other datasets.
Unlike with the previous two synthetic datasets, the lack of uniformity in face alignment after the face detection step contributed to the relatively low performance of the baseline method. The proposed symmetry-based method, however, is not affected by misalignment during face detection, as the AAM can still localize feature points in such cases.

Figure 9: Result of yaw estimation on a single subject of the MIT-CBCL face dataset. Blue circles show estimated values and red + show ground truth. The inset face image shows an error in the yaw estimate due to wrong facial feature localization.

The average time required for pre-processing is around 19 secs, whereas the times required for roll and yaw estimation are 0.1 and 0.4 secs respectively (evaluated with no optimizations on a 1.73 GHz CPU). Therefore, with a

faster pre-processing stage, the proposed symmetry-based pose estimation method could be used in real-time applications.

Table 1: Average yaw estimation errors (degrees) for the MPI dataset (columns: −30º, 0º, +30º; rows: symmetry-based, eigenspace-based).

Table 2: Average yaw estimation errors (degrees) for the MIT-CBCL dataset (columns: 0º to −32º in steps of 4º; rows: symmetry-based, eigenspace-based).

Table 3: Average yaw estimation errors (degrees) for the INRIA head pose dataset (columns: −45º to +45º in steps of 15º; rows: symmetry-based, eigenspace-based).

5. Conclusions and future work

In this paper, we proposed a symmetry-based face pose estimation method, where the problem of estimating the face pose is cast as a two-view rotation estimation problem. The roll angle is half the 2D in-plane rotation estimated between the test face image and its mirror image; the yaw angle is half the 3D in-depth rotation between the two views generated similarly. The rotation angles are estimated using facial feature correspondences obtained with an active appearance model. Our experiments show that the eigenspace-based method performs better than the symmetry-based method at discriminating between coarse pose angles (such as −30º, 0º, +30º); for finer angles, the symmetry-based method outperforms the eigenspace-based method. The proposed method is also invariant to misalignments in the output of the face detection step, as the AAM can still localize feature points. It is further observed that the proposed method has larger errors at larger yaw angles, whereas the eigenspace-based method has almost constant error across angles. We do not consider tilt in this paper, although it should be possible to estimate roll and yaw for faces with very small tilt. The work presented in this paper estimates pure roll and yaw angles independently.
This method can be considered as a coarse yaw and an accurate roll angle estimator, providing an initial estimate for any iterative process required for a finer search (in the presence of additional information). We are extending the current work to estimate more accurate pose angles with combined roll, yaw and pitch for face image sequences using a multi-view framework.

References

[1] Keith Price Bibliography: Face Pose, Head Pose.
[2] K. Okada and C. von der Malsburg, "Analysis and Synthesis of Human Faces with Pose Variations by a Parametric Piecewise Linear Subspace Method", in Proc. Computer Vision and Pattern Recognition (CVPR).
[3] S. Z. Li, X. Lu, X. Hou, X. Peng and Q. Cheng, "Learning Multiview Face Subspaces and Facial Pose Estimation Using Independent Component Analysis", IEEE Trans. Image Processing, vol. 14, no. 6, Jun. 2005.
[4] S. Srinivasan and K. L. Boyer, "Head pose estimation using view based eigenspaces", in Proc. Int. Conf. Pattern Recognition (ICPR).
[5] L. Chen, L. Zhang, Y. X. Hu, M. J. Li and H. J. Zhang, "Head pose estimation using Fisher manifold learning", in Proc. Int. Workshop on Analysis and Modeling of Faces and Gestures (AMFG).
[6] V. N. Balasubramanian, J. Ye and S. Panchanathan, "Biased Manifold Embedding: A Framework for Person-Independent Head Pose Estimation", in Proc. Computer Vision and Pattern Recognition (CVPR).
[7] J. W. Wu and M. M. Trivedi, "A two-stage head pose estimation framework and evaluation", Pattern Recognition, vol. 41, no. 3, Mar. 2008.
[8] N. Kruger, M. Potzsch and C. von der Malsburg, "Determination of Face Position and Pose with a Learned Representation Based on Labeled Graphs", Image and Vision Computing, vol. 15, no. 8, Aug. 1997.
[9] T. Otsuka and J. Ohya, "Real-time Estimation of Head Motion Using Weak Perspective Epipolar Geometry", in Proc. Workshop on Applications of Computer Vision (WACV).
[10] Y. X. Hu, L. B. Chen, Y. Zhou and H. J. Zhang, "Estimating face pose by facial asymmetry and geometry", in Proc.
Automatic Face and Gesture Recognition (AFGR).
[11] A. Gee and R. Cipolla, "Determining the Gaze of Faces in Images", Image and Vision Computing, vol. 12, no. 10, Dec. 1994.
[12] G. J. Edwards, C. J. Taylor and T. F. Cootes, "Interpreting Face Images using Active Appearance

Models", in Proc. Int. Conf. on Face and Gesture Recognition, 1998.
[13] M. B. Stegmann, Active Appearance Models (aam-api).
[14] R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, 2nd Edition.
[15] T. Darrell, B. Moghaddam and A. Pentland, "Active Face Tracking and Pose Estimation in an Interactive Room", in Proc. Computer Vision and Pattern Recognition (CVPR).
[16] B. Weyrauch, J. Huang, B. Heisele and V. Blanz, "Component-based Face Recognition with 3D Morphable Models", First IEEE Workshop on Face Processing in Video, Washington, D.C.
[17] N. Troje and H. H. Bülthoff, "Face recognition under varying poses: The role of texture and shape", Vision Research, vol. 36 (1996).
[18] N. Gourier, D. Hall and J. L. Crowley, "Estimating Face Orientation from Robust Detection of Salient Facial Features", in Proc. Pointing 2004, ICPR International Workshop on Visual Observation of Deictic Gestures, Cambridge, UK.
[19] P. Viola and M. Jones, "Rapid Object Detection using a Boosted Cascade of Simple Features", in Proc. Computer Vision and Pattern Recognition (CVPR), 2001.
[20] I. Shimizu, Z. Zhang, S. Akamatsu and K. Deguchi, "Head Pose Determination from One Image Using a Generic Model", in Proc. Automatic Face and Gesture Recognition (AFGR).
[21] Q. Chen, H. Wu, T. Fukumoto and M. Yachida, "3D Head Pose Estimation without Feature Tracking", in Proc. Automatic Face and Gesture Recognition (AFGR).
[22] C. Harris and M. Stephens, "A combined corner and edge detector", in Proc. 4th Alvey Vision Conference, 1988.
[23] K. Mikolajczyk and C. Schmid, "Scale and affine invariant interest point detectors", Int. Journal of Computer Vision, vol. 60, no. 1.
[24] T. Horprasert, Y. Yacoob and L. Davis, "Computing 3-D Head Orientation from a Monocular Image Sequence", in Proc. Int. Conf. Face and Gesture Recognition, 1996.


More information

Eye Detection by Haar wavelets and cascaded Support Vector Machine

Eye Detection by Haar wavelets and cascaded Support Vector Machine Eye Detection by Haar wavelets and cascaded Support Vector Machine Vishal Agrawal B.Tech 4th Year Guide: Simant Dubey / Amitabha Mukherjee Dept of Computer Science and Engineering IIT Kanpur - 208 016

More information

Determining pose of a human face from a single monocular image

Determining pose of a human face from a single monocular image Determining pose of a human face from a single monocular image Jian-Gang Wang 1, Eric Sung 2, Ronda Venkateswarlu 1 1 Institute for Infocomm Research 21 Heng Mui Keng Terrace, Singapore 119613 2 Nanyang

More information

arxiv: v1 [cs.cv] 28 Sep 2018

arxiv: v1 [cs.cv] 28 Sep 2018 Camera Pose Estimation from Sequence of Calibrated Images arxiv:1809.11066v1 [cs.cv] 28 Sep 2018 Jacek Komorowski 1 and Przemyslaw Rokita 2 1 Maria Curie-Sklodowska University, Institute of Computer Science,

More information

Visualization 2D-to-3D Photo Rendering for 3D Displays

Visualization 2D-to-3D Photo Rendering for 3D Displays Visualization 2D-to-3D Photo Rendering for 3D Displays Sumit K Chauhan 1, Divyesh R Bajpai 2, Vatsal H Shah 3 1 Information Technology, Birla Vishvakarma mahavidhyalaya,sumitskc51@gmail.com 2 Information

More information

DISTANCE MAPS: A ROBUST ILLUMINATION PREPROCESSING FOR ACTIVE APPEARANCE MODELS

DISTANCE MAPS: A ROBUST ILLUMINATION PREPROCESSING FOR ACTIVE APPEARANCE MODELS DISTANCE MAPS: A ROBUST ILLUMINATION PREPROCESSING FOR ACTIVE APPEARANCE MODELS Sylvain Le Gallou*, Gaspard Breton*, Christophe Garcia*, Renaud Séguier** * France Telecom R&D - TECH/IRIS 4 rue du clos

More information

Mouse Pointer Tracking with Eyes

Mouse Pointer Tracking with Eyes Mouse Pointer Tracking with Eyes H. Mhamdi, N. Hamrouni, A. Temimi, and M. Bouhlel Abstract In this article, we expose our research work in Human-machine Interaction. The research consists in manipulating

More information

Object and Class Recognition I:

Object and Class Recognition I: Object and Class Recognition I: Object Recognition Lectures 10 Sources ICCV 2005 short courses Li Fei-Fei (UIUC), Rob Fergus (Oxford-MIT), Antonio Torralba (MIT) http://people.csail.mit.edu/torralba/iccv2005

More information

Vehicle Dimensions Estimation Scheme Using AAM on Stereoscopic Video

Vehicle Dimensions Estimation Scheme Using AAM on Stereoscopic Video Workshop on Vehicle Retrieval in Surveillance (VRS) in conjunction with 2013 10th IEEE International Conference on Advanced Video and Signal Based Surveillance Vehicle Dimensions Estimation Scheme Using

More information

Selection of Location, Frequency and Orientation Parameters of 2D Gabor Wavelets for Face Recognition

Selection of Location, Frequency and Orientation Parameters of 2D Gabor Wavelets for Face Recognition Selection of Location, Frequency and Orientation Parameters of 2D Gabor Wavelets for Face Recognition Berk Gökberk, M.O. İrfanoğlu, Lale Akarun, and Ethem Alpaydın Boğaziçi University, Department of Computer

More information

Classification of Face Images for Gender, Age, Facial Expression, and Identity 1

Classification of Face Images for Gender, Age, Facial Expression, and Identity 1 Proc. Int. Conf. on Artificial Neural Networks (ICANN 05), Warsaw, LNCS 3696, vol. I, pp. 569-574, Springer Verlag 2005 Classification of Face Images for Gender, Age, Facial Expression, and Identity 1

More information

A Real Time Facial Expression Classification System Using Local Binary Patterns

A Real Time Facial Expression Classification System Using Local Binary Patterns A Real Time Facial Expression Classification System Using Local Binary Patterns S L Happy, Anjith George, and Aurobinda Routray Department of Electrical Engineering, IIT Kharagpur, India Abstract Facial

More information

Announcements. Recognition I. Gradient Space (p,q) What is the reflectance map?

Announcements. Recognition I. Gradient Space (p,q) What is the reflectance map? Announcements I HW 3 due 12 noon, tomorrow. HW 4 to be posted soon recognition Lecture plan recognition for next two lectures, then video and motion. Introduction to Computer Vision CSE 152 Lecture 17

More information

Learning to Recognize Faces in Realistic Conditions

Learning to Recognize Faces in Realistic Conditions 000 001 002 003 004 005 006 007 008 009 010 011 012 013 014 015 016 017 018 019 020 021 022 023 024 025 026 027 028 029 030 031 032 033 034 035 036 037 038 039 040 041 042 043 044 045 046 047 048 049 050

More information

CHAPTER 1 Introduction 1. CHAPTER 2 Images, Sampling and Frequency Domain Processing 37

CHAPTER 1 Introduction 1. CHAPTER 2 Images, Sampling and Frequency Domain Processing 37 Extended Contents List Preface... xi About the authors... xvii CHAPTER 1 Introduction 1 1.1 Overview... 1 1.2 Human and Computer Vision... 2 1.3 The Human Vision System... 4 1.3.1 The Eye... 5 1.3.2 The

More information

Face detection and recognition. Many slides adapted from K. Grauman and D. Lowe

Face detection and recognition. Many slides adapted from K. Grauman and D. Lowe Face detection and recognition Many slides adapted from K. Grauman and D. Lowe Face detection and recognition Detection Recognition Sally History Early face recognition systems: based on features and distances

More information

Using Geometric Blur for Point Correspondence

Using Geometric Blur for Point Correspondence 1 Using Geometric Blur for Point Correspondence Nisarg Vyas Electrical and Computer Engineering Department, Carnegie Mellon University, Pittsburgh, PA Abstract In computer vision applications, point correspondence

More information

Feature Extraction and Image Processing, 2 nd Edition. Contents. Preface

Feature Extraction and Image Processing, 2 nd Edition. Contents. Preface , 2 nd Edition Preface ix 1 Introduction 1 1.1 Overview 1 1.2 Human and Computer Vision 1 1.3 The Human Vision System 3 1.3.1 The Eye 4 1.3.2 The Neural System 7 1.3.3 Processing 7 1.4 Computer Vision

More information

Real-time facial feature point extraction

Real-time facial feature point extraction University of Wollongong Research Online Faculty of Informatics - Papers (Archive) Faculty of Engineering and Information Sciences 2007 Real-time facial feature point extraction Ce Zhan University of Wollongong,

More information

Fast Outlier Rejection by Using Parallax-Based Rigidity Constraint for Epipolar Geometry Estimation

Fast Outlier Rejection by Using Parallax-Based Rigidity Constraint for Epipolar Geometry Estimation Fast Outlier Rejection by Using Parallax-Based Rigidity Constraint for Epipolar Geometry Estimation Engin Tola 1 and A. Aydın Alatan 2 1 Computer Vision Laboratory, Ecóle Polytechnique Fédéral de Lausanne

More information

Object. Radiance. Viewpoint v

Object. Radiance. Viewpoint v Fisher Light-Fields for Face Recognition Across Pose and Illumination Ralph Gross, Iain Matthews, and Simon Baker The Robotics Institute, Carnegie Mellon University 5000 Forbes Avenue, Pittsburgh, PA 15213

More information

Accurate 3D Face and Body Modeling from a Single Fixed Kinect

Accurate 3D Face and Body Modeling from a Single Fixed Kinect Accurate 3D Face and Body Modeling from a Single Fixed Kinect Ruizhe Wang*, Matthias Hernandez*, Jongmoo Choi, Gérard Medioni Computer Vision Lab, IRIS University of Southern California Abstract In this

More information

AAM Based Facial Feature Tracking with Kinect

AAM Based Facial Feature Tracking with Kinect BULGARIAN ACADEMY OF SCIENCES CYBERNETICS AND INFORMATION TECHNOLOGIES Volume 15, No 3 Sofia 2015 Print ISSN: 1311-9702; Online ISSN: 1314-4081 DOI: 10.1515/cait-2015-0046 AAM Based Facial Feature Tracking

More information

Announcements. Recognition (Part 3) Model-Based Vision. A Rough Recognition Spectrum. Pose consistency. Recognition by Hypothesize and Test

Announcements. Recognition (Part 3) Model-Based Vision. A Rough Recognition Spectrum. Pose consistency. Recognition by Hypothesize and Test Announcements (Part 3) CSE 152 Lecture 16 Homework 3 is due today, 11:59 PM Homework 4 will be assigned today Due Sat, Jun 4, 11:59 PM Reading: Chapter 15: Learning to Classify Chapter 16: Classifying

More information

Abstract. 1 Introduction. 2 Motivation. Information and Communication Engineering October 29th 2010

Abstract. 1 Introduction. 2 Motivation. Information and Communication Engineering October 29th 2010 Information and Communication Engineering October 29th 2010 A Survey on Head Pose Estimation from Low Resolution Image Sato Laboratory M1, 48-106416, Isarun CHAMVEHA Abstract Recognizing the head pose

More information

Rapid 3D Face Modeling using a Frontal Face and a Profile Face for Accurate 2D Pose Synthesis

Rapid 3D Face Modeling using a Frontal Face and a Profile Face for Accurate 2D Pose Synthesis Rapid 3D Face Modeling using a Frontal Face and a Profile Face for Accurate 2D Pose Synthesis Jingu Heo and Marios Savvides CyLab Biometrics Center Carnegie Mellon University Pittsburgh, PA 15213 jheo@cmu.edu,

More information

Categorization by Learning and Combining Object Parts

Categorization by Learning and Combining Object Parts Categorization by Learning and Combining Object Parts Bernd Heisele yz Thomas Serre y Massimiliano Pontil x Thomas Vetter Λ Tomaso Poggio y y Center for Biological and Computational Learning, M.I.T., Cambridge,

More information

Real Time Face Tracking and Pose Estimation Using an Adaptive Correlation Filter for Human-Robot Interaction

Real Time Face Tracking and Pose Estimation Using an Adaptive Correlation Filter for Human-Robot Interaction Real Time Face Tracking and Pose Estimation Using an Adaptive Correlation Filter for Human-Robot Interaction Vo Duc My and Andreas Zell Abstract In this paper, we present a real time algorithm for mobile

More information

Multiple-Choice Questionnaire Group C

Multiple-Choice Questionnaire Group C Family name: Vision and Machine-Learning Given name: 1/28/2011 Multiple-Choice naire Group C No documents authorized. There can be several right answers to a question. Marking-scheme: 2 points if all right

More information

Estimation of common groundplane based on co-motion statistics

Estimation of common groundplane based on co-motion statistics Estimation of common groundplane based on co-motion statistics Zoltan Szlavik, Laszlo Havasi 2, Tamas Sziranyi Analogical and Neural Computing Laboratory, Computer and Automation Research Institute of

More information

Postprint.

Postprint. http://www.diva-portal.org Postprint This is the accepted version of a paper presented at 14th International Conference of the Biometrics Special Interest Group, BIOSIG, Darmstadt, Germany, 9-11 September,

More information

Hand Posture Recognition Using Adaboost with SIFT for Human Robot Interaction

Hand Posture Recognition Using Adaboost with SIFT for Human Robot Interaction Hand Posture Recognition Using Adaboost with SIFT for Human Robot Interaction Chieh-Chih Wang and Ko-Chih Wang Department of Computer Science and Information Engineering Graduate Institute of Networking

More information

Semi-Supervised PCA-based Face Recognition Using Self-Training

Semi-Supervised PCA-based Face Recognition Using Self-Training Semi-Supervised PCA-based Face Recognition Using Self-Training Fabio Roli and Gian Luca Marcialis Dept. of Electrical and Electronic Engineering, University of Cagliari Piazza d Armi, 09123 Cagliari, Italy

More information

Selection of Scale-Invariant Parts for Object Class Recognition

Selection of Scale-Invariant Parts for Object Class Recognition Selection of Scale-Invariant Parts for Object Class Recognition Gy. Dorkó and C. Schmid INRIA Rhône-Alpes, GRAVIR-CNRS 655, av. de l Europe, 3833 Montbonnot, France fdorko,schmidg@inrialpes.fr Abstract

More information

CS 4495 Computer Vision A. Bobick. Motion and Optic Flow. Stereo Matching

CS 4495 Computer Vision A. Bobick. Motion and Optic Flow. Stereo Matching Stereo Matching Fundamental matrix Let p be a point in left image, p in right image l l Epipolar relation p maps to epipolar line l p maps to epipolar line l p p Epipolar mapping described by a 3x3 matrix

More information

Illumination Invariant Face Alignment Using Multi-band Active Appearance Model

Illumination Invariant Face Alignment Using Multi-band Active Appearance Model Illumination Invariant Face Alignment Using Multi-band Active Appearance Model Fatih Kahraman 1 and Muhittin Gökmen 2 1 Istanbul Technical University, Institute of Informatics, Computer Science, 80626

More information

Active Wavelet Networks for Face Alignment

Active Wavelet Networks for Face Alignment Active Wavelet Networks for Face Alignment Changbo Hu, Rogerio Feris, Matthew Turk Dept. Computer Science, University of California, Santa Barbara {cbhu,rferis,mturk}@cs.ucsb.edu Abstract The active appearance

More information

The Isometric Self-Organizing Map for 3D Hand Pose Estimation

The Isometric Self-Organizing Map for 3D Hand Pose Estimation The Isometric Self-Organizing Map for 3D Hand Pose Estimation Haiying Guan, Rogerio S. Feris, and Matthew Turk Computer Science Department University of California, Santa Barbara {haiying,rferis,mturk}@cs.ucsb.edu

More information

Facial Feature Extraction Based On FPD and GLCM Algorithms

Facial Feature Extraction Based On FPD and GLCM Algorithms Facial Feature Extraction Based On FPD and GLCM Algorithms Dr. S. Vijayarani 1, S. Priyatharsini 2 Assistant Professor, Department of Computer Science, School of Computer Science and Engineering, Bharathiar

More information

Specular 3D Object Tracking by View Generative Learning

Specular 3D Object Tracking by View Generative Learning Specular 3D Object Tracking by View Generative Learning Yukiko Shinozuka, Francois de Sorbier and Hideo Saito Keio University 3-14-1 Hiyoshi, Kohoku-ku 223-8522 Yokohama, Japan shinozuka@hvrl.ics.keio.ac.jp

More information

Enhanced Active Shape Models with Global Texture Constraints for Image Analysis

Enhanced Active Shape Models with Global Texture Constraints for Image Analysis Enhanced Active Shape Models with Global Texture Constraints for Image Analysis Shiguang Shan, Wen Gao, Wei Wang, Debin Zhao, Baocai Yin Institute of Computing Technology, Chinese Academy of Sciences,

More information

Unconstrained Face Recognition using MRF Priors and Manifold Traversing

Unconstrained Face Recognition using MRF Priors and Manifold Traversing Unconstrained Face Recognition using MRF Priors and Manifold Traversing Ricardo N. Rodrigues, Greyce N. Schroeder, Jason J. Corso and Venu Govindaraju Abstract In this paper, we explore new methods to

More information

Prof. Fanny Ficuciello Robotics for Bioengineering Visual Servoing

Prof. Fanny Ficuciello Robotics for Bioengineering Visual Servoing Visual servoing vision allows a robotic system to obtain geometrical and qualitative information on the surrounding environment high level control motion planning (look-and-move visual grasping) low level

More information

3D Reconstruction from Scene Knowledge

3D Reconstruction from Scene Knowledge Multiple-View Reconstruction from Scene Knowledge 3D Reconstruction from Scene Knowledge SYMMETRY & MULTIPLE-VIEW GEOMETRY Fundamental types of symmetry Equivalent views Symmetry based reconstruction MUTIPLE-VIEW

More information

Oriented Filters for Object Recognition: an empirical study

Oriented Filters for Object Recognition: an empirical study Oriented Filters for Object Recognition: an empirical study Jerry Jun Yokono Tomaso Poggio Center for Biological and Computational Learning, M.I.T. E5-0, 45 Carleton St., Cambridge, MA 04, USA Sony Corporation,

More information

Schedule for Rest of Semester

Schedule for Rest of Semester Schedule for Rest of Semester Date Lecture Topic 11/20 24 Texture 11/27 25 Review of Statistics & Linear Algebra, Eigenvectors 11/29 26 Eigenvector expansions, Pattern Recognition 12/4 27 Cameras & calibration

More information

CS 223B Computer Vision Problem Set 3

CS 223B Computer Vision Problem Set 3 CS 223B Computer Vision Problem Set 3 Due: Feb. 22 nd, 2011 1 Probabilistic Recursion for Tracking In this problem you will derive a method for tracking a point of interest through a sequence of images.

More information

Chaplin, Modern Times, 1936

Chaplin, Modern Times, 1936 Chaplin, Modern Times, 1936 [A Bucket of Water and a Glass Matte: Special Effects in Modern Times; bonus feature on The Criterion Collection set] Multi-view geometry problems Structure: Given projections

More information

Fitting a Single Active Appearance Model Simultaneously to Multiple Images

Fitting a Single Active Appearance Model Simultaneously to Multiple Images Fitting a Single Active Appearance Model Simultaneously to Multiple Images Changbo Hu, Jing Xiao, Iain Matthews, Simon Baker, Jeff Cohn, and Takeo Kanade The Robotics Institute, Carnegie Mellon University

More information

University of Southern California, 1590 the Alameda #200 Los Angeles, CA San Jose, CA Abstract

University of Southern California, 1590 the Alameda #200 Los Angeles, CA San Jose, CA Abstract Mirror Symmetry 2-View Stereo Geometry Alexandre R.J. François +, Gérard G. Medioni + and Roman Waupotitsch * + Institute for Robotics and Intelligent Systems * Geometrix Inc. University of Southern California,

More information

Pose estimation using a variety of techniques

Pose estimation using a variety of techniques Pose estimation using a variety of techniques Keegan Go Stanford University keegango@stanford.edu Abstract Vision is an integral part robotic systems a component that is needed for robots to interact robustly

More information

Mobile Face Recognization

Mobile Face Recognization Mobile Face Recognization CS4670 Final Project Cooper Bills and Jason Yosinski {csb88,jy495}@cornell.edu December 12, 2010 Abstract We created a mobile based system for detecting faces within a picture

More information

A NEW FEATURE BASED IMAGE REGISTRATION ALGORITHM INTRODUCTION

A NEW FEATURE BASED IMAGE REGISTRATION ALGORITHM INTRODUCTION A NEW FEATURE BASED IMAGE REGISTRATION ALGORITHM Karthik Krish Stuart Heinrich Wesley E. Snyder Halil Cakir Siamak Khorram North Carolina State University Raleigh, 27695 kkrish@ncsu.edu sbheinri@ncsu.edu

More information

Adaptive Skin Color Classifier for Face Outline Models

Adaptive Skin Color Classifier for Face Outline Models Adaptive Skin Color Classifier for Face Outline Models M. Wimmer, B. Radig, M. Beetz Informatik IX, Technische Universität München, Germany Boltzmannstr. 3, 87548 Garching, Germany [wimmerm, radig, beetz]@informatik.tu-muenchen.de

More information

NOWADAYS, there are many human jobs that can. Face Recognition Performance in Facing Pose Variation

NOWADAYS, there are many human jobs that can. Face Recognition Performance in Facing Pose Variation CommIT (Communication & Information Technology) Journal 11(1), 1 7, 2017 Face Recognition Performance in Facing Pose Variation Alexander A. S. Gunawan 1 and Reza A. Prasetyo 2 1,2 School of Computer Science,

More information

A Survey of Light Source Detection Methods

A Survey of Light Source Detection Methods A Survey of Light Source Detection Methods Nathan Funk University of Alberta Mini-Project for CMPUT 603 November 30, 2003 Abstract This paper provides an overview of the most prominent techniques for light

More information

Recognition (Part 4) Introduction to Computer Vision CSE 152 Lecture 17

Recognition (Part 4) Introduction to Computer Vision CSE 152 Lecture 17 Recognition (Part 4) CSE 152 Lecture 17 Announcements Homework 5 is due June 9, 11:59 PM Reading: Chapter 15: Learning to Classify Chapter 16: Classifying Images Chapter 17: Detecting Objects in Images

More information

Applying Synthetic Images to Learning Grasping Orientation from Single Monocular Images

Applying Synthetic Images to Learning Grasping Orientation from Single Monocular Images Applying Synthetic Images to Learning Grasping Orientation from Single Monocular Images 1 Introduction - Steve Chuang and Eric Shan - Determining object orientation in images is a well-established topic

More information

On Modeling Variations for Face Authentication

On Modeling Variations for Face Authentication On Modeling Variations for Face Authentication Xiaoming Liu Tsuhan Chen B.V.K. Vijaya Kumar Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA 15213 xiaoming@andrew.cmu.edu

More information

In Between 3D Active Appearance Models and 3D Morphable Models

In Between 3D Active Appearance Models and 3D Morphable Models In Between 3D Active Appearance Models and 3D Morphable Models Jingu Heo and Marios Savvides Biometrics Lab, CyLab Carnegie Mellon University Pittsburgh, PA 15213 jheo@cmu.edu, msavvid@ri.cmu.edu Abstract

More information

Eugene Borovikov. Human Head Pose Estimation by Facial Features Location. UMCP Human Head Pose Estimation by Facial Features Location

Eugene Borovikov. Human Head Pose Estimation by Facial Features Location. UMCP Human Head Pose Estimation by Facial Features Location Human Head Pose Estimation by Facial Features Location Abstract: Eugene Borovikov University of Maryland Institute for Computer Studies, College Park, MD 20742 4/21/1998 We describe a method for estimating

More information

Dynamic Time Warping for Binocular Hand Tracking and Reconstruction

Dynamic Time Warping for Binocular Hand Tracking and Reconstruction Dynamic Time Warping for Binocular Hand Tracking and Reconstruction Javier Romero, Danica Kragic Ville Kyrki Antonis Argyros CAS-CVAP-CSC Dept. of Information Technology Institute of Computer Science KTH,

More information

Optical flow and tracking

Optical flow and tracking EECS 442 Computer vision Optical flow and tracking Intro Optical flow and feature tracking Lucas-Kanade algorithm Motion segmentation Segments of this lectures are courtesy of Profs S. Lazebnik S. Seitz,

More information

SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS

SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS Cognitive Robotics Original: David G. Lowe, 004 Summary: Coen van Leeuwen, s1460919 Abstract: This article presents a method to extract

More information

COMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION

COMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION COMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION Mr.V.SRINIVASA RAO 1 Prof.A.SATYA KALYAN 2 DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING PRASAD V POTLURI SIDDHARTHA

More information

Occluded Facial Expression Tracking

Occluded Facial Expression Tracking Occluded Facial Expression Tracking Hugo Mercier 1, Julien Peyras 2, and Patrice Dalle 1 1 Institut de Recherche en Informatique de Toulouse 118, route de Narbonne, F-31062 Toulouse Cedex 9 2 Dipartimento

More information

Face Alignment Across Large Poses: A 3D Solution

Face Alignment Across Large Poses: A 3D Solution Face Alignment Across Large Poses: A 3D Solution Outline Face Alignment Related Works 3D Morphable Model Projected Normalized Coordinate Code Network Structure 3D Image Rotation Performance on Datasets

More information

EigenTracking: Robust Matching and Tracking of Articulated Objects Using a View-Based Representation

EigenTracking: Robust Matching and Tracking of Articulated Objects Using a View-Based Representation EigenTracking: Robust Matching and Tracking of Articulated Objects Using a View-Based Representation Michael J. Black and Allan D. Jepson Xerox Palo Alto Research Center, 3333 Coyote Hill Road, Palo Alto,

More information

2013, IJARCSSE All Rights Reserved Page 213

2013, IJARCSSE All Rights Reserved Page 213 Volume 3, Issue 9, September 2013 ISSN: 2277 128X International Journal of Advanced Research in Computer Science and Software Engineering Research Paper Available online at: www.ijarcsse.com International

More information

CS 4495 Computer Vision A. Bobick. Motion and Optic Flow. Stereo Matching

CS 4495 Computer Vision A. Bobick. Motion and Optic Flow. Stereo Matching Stereo Matching Fundamental matrix Let p be a point in left image, p in right image l l Epipolar relation p maps to epipolar line l p maps to epipolar line l p p Epipolar mapping described by a 3x3 matrix

More information

Image Processing Pipeline for Facial Expression Recognition under Variable Lighting

Image Processing Pipeline for Facial Expression Recognition under Variable Lighting Image Processing Pipeline for Facial Expression Recognition under Variable Lighting Ralph Ma, Amr Mohamed ralphma@stanford.edu, amr1@stanford.edu Abstract Much research has been done in the field of automated

More information

Texture Features in Facial Image Analysis

Texture Features in Facial Image Analysis Texture Features in Facial Image Analysis Matti Pietikäinen and Abdenour Hadid Machine Vision Group Infotech Oulu and Department of Electrical and Information Engineering P.O. Box 4500, FI-90014 University

More information

Accurate and Dense Wide-Baseline Stereo Matching Using SW-POC

Accurate and Dense Wide-Baseline Stereo Matching Using SW-POC Accurate and Dense Wide-Baseline Stereo Matching Using SW-POC Shuji Sakai, Koichi Ito, Takafumi Aoki Graduate School of Information Sciences, Tohoku University, Sendai, 980 8579, Japan Email: sakai@aoki.ecei.tohoku.ac.jp

More information

Multi-stable Perception. Necker Cube

Multi-stable Perception. Necker Cube Multi-stable Perception Necker Cube Spinning dancer illusion, Nobuyuki Kayahara Multiple view geometry Stereo vision Epipolar geometry Lowe Hartley and Zisserman Depth map extraction Essential matrix

More information

Last week. Multi-Frame Structure from Motion: Multi-View Stereo. Unknown camera viewpoints

Last week. Multi-Frame Structure from Motion: Multi-View Stereo. Unknown camera viewpoints Last week Multi-Frame Structure from Motion: Multi-View Stereo Unknown camera viewpoints Last week PCA Today Recognition Today Recognition Recognition problems What is it? Object detection Who is it? Recognizing

More information

Pose Estimation using 3D View-Based Eigenspaces

Pose Estimation using 3D View-Based Eigenspaces Pose Estimation using 3D View-Based Eigenspaces Louis-Philippe Morency Patrik Sundberg Trevor Darrell MIT Artificial Intelligence Laboratory Cambridge, MA 02139 Abstract In this paper we present a method

More information

CS 231A Computer Vision (Fall 2012) Problem Set 3

CS 231A Computer Vision (Fall 2012) Problem Set 3 CS 231A Computer Vision (Fall 2012) Problem Set 3 Due: Nov. 13 th, 2012 (2:15pm) 1 Probabilistic Recursion for Tracking (20 points) In this problem you will derive a method for tracking a point of interest

More information

Image Transfer Methods. Satya Prakash Mallick Jan 28 th, 2003

Image Transfer Methods. Satya Prakash Mallick Jan 28 th, 2003 Image Transfer Methods Satya Prakash Mallick Jan 28 th, 2003 Objective Given two or more images of the same scene, the objective is to synthesize a novel view of the scene from a view point where there

More information

Facial Processing Projects at the Intelligent Systems Lab

Facial Processing Projects at the Intelligent Systems Lab Facial Processing Projects at the Intelligent Systems Lab Qiang Ji Intelligent Systems Laboratory (ISL) Department of Electrical, Computer, and System Eng. Rensselaer Polytechnic Institute jiq@rpi.edu

More information

Mathematics of a Multiple Omni-Directional System

Mathematics of a Multiple Omni-Directional System Mathematics of a Multiple Omni-Directional System A. Torii A. Sugimoto A. Imiya, School of Science and National Institute of Institute of Media and Technology, Informatics, Information Technology, Chiba

More information