Face Recognition in Low-Resolution Videos Using Learning-Based Likelihood Measurement Model


Soma Biswas, Gaurav Aggarwal and Patrick J. Flynn
Department of Computer Science and Engineering, University of Notre Dame, Notre Dame
{sbiswas, gaggarwa,

This research was funded by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), through the Army Research Laboratory (ARL). The views and conclusions contained in this document are those of the authors and should not be interpreted as representing official policies, either expressed or implied, of IARPA, the ODNI, the Army Research Laboratory, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.

Abstract

Low-resolution surveillance videos with uncontrolled pose and illumination present a significant challenge to both face tracking and recognition algorithms. The considerable appearance difference between the probe videos and the high-resolution controlled images in the gallery acquired during enrollment makes the problem even harder. In this paper, we extend the simultaneous tracking and recognition framework [22] to address the problem of matching high-resolution gallery images with surveillance-quality probe videos. We propose a learning-based likelihood measurement model to handle the large appearance and resolution difference between the gallery images and the probe videos. The measurement model consists of a mapping which transforms the gallery and probe features to a space in which their pairwise Euclidean distances approximate the distances that would have been obtained had all the descriptors been computed from good-quality frontal images. Experimental results on real surveillance-quality videos and comparisons with related approaches show the effectiveness of the proposed framework.

1. Introduction

The wide range of applications in law enforcement and security has made face recognition (FR) a very important area of research in computer vision and pattern recognition. The ubiquitous use of surveillance cameras for improved security has shifted the focus of face recognition from controlled scenarios to the uncontrolled environment typical of surveillance settings [17]. Typically, the images or videos captured by surveillance systems have non-frontal pose and uncontrolled illumination, in addition to low resolution due to the distance of the subjects from the cameras. On the other hand, good high-resolution images of the subjects may be present in the gallery from enrollment. This presents the challenge of matching gallery and probe images or videos which differ significantly in resolution, pose and illumination. In this paper, we consider the scenario in which the gallery consists of one or more high-resolution frontal images, while the probe consists of low-resolution videos with uncontrolled pose and illumination, as is typically obtained in surveillance systems.

Most of the research in video-based face recognition has focused on dealing with one or more challenges such as uncontrolled pose, illumination, etc. [23], but there are very few approaches which simultaneously deal with all of these challenges. Some of the recent approaches which handle the resolution difference between the gallery and the probe are either restricted to frontal images [6] or require videos for enrollment [2].
For video-based FR, a tracking-then-recognition paradigm is typically followed, in which the faces are first tracked and then used for recognition. But both tracking and recognition are very challenging for low-quality videos with low resolution and significant variations in pose and illumination. In this paper, we extend the simultaneous tracking and recognition framework [22], which performs the two tasks of tracking and recognition in a single unified framework, to address these challenges. We propose distance-learning-based techniques for better modeling the appearance changes between the frames of the low-resolution probe videos and the high-resolution gallery images, leading to better recognition and tracking accuracy. Multidimensional Scaling [4] is used to learn a mapping from training images which transforms the gallery and probe features to a space in which their pairwise Euclidean distances approximate the distances that would have been obtained had all the descriptors been computed from high-resolution frontal images.

We evaluate the effectiveness of the proposed approach on surveillance-quality videos from the MBGC data [16]. We observe that the proposed approach performs significantly better in terms of both tracking and recognition accuracy as compared to standard appearance modeling approaches.

The rest of the paper is organized as follows. An overview of the related approaches is given in Section 2. The details of the proposed approach are provided in Section 3. The results of experimental evaluation are presented in Section 4. The paper concludes with a brief summary and discussion.

2. Previous Work

In this section, we discuss the related work in the literature. For brevity, we will refer to high-resolution as HR and low-resolution as LR. There has been a considerable amount of work in general video-based FR addressing two kinds of scenarios: (1) both the gallery and probe are video sequences [11] [13] [10] [18], and (2) the probe videos are compared with one or multiple still images in the gallery [22]. For tracking and recognizing faces in real-world, noisy videos, Kim et al. [10] propose a tracker that adaptively builds a target model reflecting changes in appearance typical of a video setting. In the subsequent recognition phase, the identity of the tracked subject is established by fusing pose-discriminant and person-discriminant features over the duration of the video sequence. Stallkamp et al. [18] classify faces using a local appearance-based FR algorithm for real-time video-based face identification. The confidence scores obtained from each classification are progressively combined to provide an identity estimate for the entire sequence. Many researchers have also addressed the problem of video-based FR by treating the videos as image sets [20].

Most of the current approaches which address the problem of LR still-image face recognition follow a super-resolution approach. Given an LR face image, Jia and Gong [8] propose directly computing a maximum likelihood identity parameter vector in the HR tensor space which can be used for recognition and reconstruction of HR face images. Liu et al. [12] propose a two-step statistical modeling approach for hallucinating an HR face image from an LR input. The relationship between the HR images and their corresponding LR images is learned using a global linear model, and the residual high-frequency content is modeled by a patch-based non-parametric Markov network. Several other super-resolution techniques have also been proposed [5] [9]. The main aim of these techniques is to produce a high-resolution image from the low-resolution input using assumptions about the image content, and they are usually not designed from a matching perspective. A Multidimensional Scaling (MDS)-based approach has recently been proposed to improve the matching of still LR images, but it does not deal with matching an HR gallery image with an LR probe video [3]. Recently, Hennings-Yeomans et al. [6] proposed an approach to perform super-resolution and recognition simultaneously. Using features from the face and super-resolution priors, they extract an HR template that simultaneously fits the super-resolution as well as the face-feature constraints. The formulation was extended to use multiple frames, and the authors showed that it can also be generalized to use multiple image formation processes, modeling different cameras [7].
But this approach assumes that the probe and gallery images are in the same pose, making it not directly applicable to more general scenarios. Arandjelovic and Cipolla [2] propose a generative model for separating the illumination and down-sampling effects for the problem of matching a face in an LR query video sequence against a set of HR gallery sequences. It is an extension of the Generic Shape-Illumination Manifold framework [1], which was used to describe the appearance variations due to the combined effects of facial shape and illumination. As noted in [7], a limitation of this approach is that it requires a video sequence at enrollment.

3. Proposed Approach

For matching LR probe videos exhibiting significant pose and illumination variations against HR frontal gallery images, we propose to use learning-based appearance modeling in a simultaneous tracking and recognition framework.

3.1. Simultaneous Tracking and Recognition

First, we briefly describe the tracking and recognition framework [22], which uses a modified version of the CONDENSATION algorithm for tracking the facial features across the frames of the poor-quality probe video and for recognition. The filtering framework consists of a motion model which characterizes the motion of the subject in the video. The overall state vector of this unified tracking and recognition framework consists of an identity variable in addition to the usual motion parameters. The observation model determines the measurement likelihood, i.e., the likelihood of observing the particular measurement given the current state consisting of the motion and identity variables.

Motion Model: The motion model is given by the first-order Markov chain

    θ_t = θ_{t-1} + u_t,  t ≥ 1    (1)

Here affine motion parameters are used, so θ = (a_1, a_2, a_3, a_4, t_x, t_y), where {a_1, a_2, a_3, a_4} are deformation parameters and {t_x, t_y} are 2D translation parameters. u_t is the noise in the motion model.
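
As a toy illustration (a Python/NumPy sketch, not the implementation used in the paper), one step of this random-walk motion model for a single particle could look as follows; the noise scales are placeholder values chosen only for the example:

import numpy as np

# State vector theta = (a1, a2, a3, a4, tx, ty): four affine deformation
# parameters followed by the 2D translation, as in Equation (1).
def propagate_motion(theta_prev, rng, deform_sigma=0.01, trans_sigma=2.0):
    # One step of the random walk theta_t = theta_{t-1} + u_t.
    u_t = np.concatenate([
        rng.normal(0.0, deform_sigma, size=4),   # noise on a1..a4
        rng.normal(0.0, trans_sigma, size=2),    # noise on tx, ty (in pixels)
    ])
    return theta_prev + u_t

rng = np.random.default_rng(0)
theta_0 = np.array([1.0, 0.0, 0.0, 1.0, 40.0, 30.0])   # identity deformation at (40, 30)
theta_1 = propagate_motion(theta_0, rng)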

Identity Equation: Assuming that the identity does not change as time proceeds, the identity equation is given by

    n_t = n_{t-1},  t ≥ 1    (2)

Observation Model: Assuming that the transformed observation is a noise-corrupted version of some still template in the gallery, the observation equation can be written as

    T_{θ_t}{z_t} = I_{n_t} + v_t,  t ≥ 1    (3)

where v_t is the observation noise at time t and T_{θ_t}{z_t} is a transformed version of the observation z_t. Here T_{θ_t}{z_t} is composed of (1) an affine transform of z_t using {a_1, a_2, a_3, a_4}, (2) cropping the region of interest at position {t_x, t_y} with the same size as the still templates, and (3) performing zero-mean, unit-variance normalization.

In this modified version of the CONDENSATION algorithm, random samples are propagated on the motion vector while the samples on the identity variable are kept fixed. Although only the marginal distribution is propagated for motion tracking, the joint distribution is propagated for recognition purposes. This results in a considerable reduction in computation compared to propagating random samples on both the motion vector and the identity variable for large databases. The different steps of the simultaneous tracking and recognition framework are given in Figure 1. The mean of the Gaussian-distributed prior comes from the initial detector, and its covariance matrix is manually specified. Please refer to [22] for more details of the algorithm.

Initialize a sample set S_0 = {θ^(j)_0}_{j=1}^J according to the prior distribution p(θ_0 | z_0), which is assumed to be Gaussian. The particle weights for each subject, {w^(j)_{0,n}}_{j=1}^J, n = 1, ..., N, are initialized to 1. J and N denote the number of particles and subjects respectively.
1. Predict: draw θ^(j)_t from the motion state transition probability p(θ_t | θ^(j)_{t-1}) and compute the transformed image T_{θ^(j)_t}{z_t} corresponding to the predicted sample.
2. Update: update the weights using α^(j)_{t,n} = w^(j)_{t-1,n} p(z_t | n, θ^(j)_t) (the measurement likelihood) for each subject in the gallery. The normalized weights are given by w^(j)_{t,n} = α^(j)_{t,n} / Σ_{n=1}^N Σ_{j=1}^J α^(j)_{t,n}. The measurement likelihood is learned from a set of HR training images (Section 3.3).
3. Resample: particles for all subjects are reweighted to obtain samples with weights w^(j)_{t,n} / w^(j)_t, where the denominator is given by w^(j)_t = Σ_{n=1}^N w^(j)_{t,n}.
Marginalize over θ_t to obtain the weights for n_t and hence the probe identity.

Figure 1. Simultaneous tracking and recognition framework [22].
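
A compact Python/NumPy sketch of these steps is given below; it is a simplified rendering of Figure 1 rather than the authors' implementation. The measurement likelihood function, the noise scales and the resampling rule are placeholders supplied by the caller:

import numpy as np

def track_and_recognize(frames, theta_init, likelihood, num_ids,
                        num_particles=200, init_sigma=1.0, motion_sigma=0.02, seed=0):
    # Sketch of the Figure 1 steps. likelihood(frame, n, theta) must return the
    # measurement likelihood p(z_t | n, theta) for gallery identity n; it is the
    # only problem-specific component and is supplied by the caller.
    rng = np.random.default_rng(seed)
    dim = len(theta_init)
    # Initialize particles from the Gaussian prior centred on the detector output.
    thetas = theta_init + rng.normal(0.0, init_sigma, size=(num_particles, dim))
    w = np.full((num_particles, num_ids), 1.0)     # joint weights over (particle, identity)

    for frame in frames:
        # 1. Predict: random-walk propagation of the affine motion parameters.
        thetas = thetas + rng.normal(0.0, motion_sigma, size=thetas.shape)
        # 2. Update: multiply each weight by the measurement likelihood, then normalize.
        for j in range(num_particles):
            for n in range(num_ids):
                w[j, n] *= likelihood(frame, n, thetas[j])
        w /= w.sum()
        # 3. Resample particles according to their weight marginalized over identities
        #    (a simplified stand-in for the reweighting step in Figure 1).
        marginal = w.sum(axis=1)
        idx = rng.choice(num_particles, size=num_particles, p=marginal)
        thetas, w = thetas[idx], w[idx] / w[idx].sum()

    # Marginalize over the motion parameters to obtain the posterior over identities.
    id_posterior = w.sum(axis=0)
    return int(np.argmax(id_posterior)), id_posterior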
3.2. Traditional Likelihood Measurement

If there is no significant facial appearance difference between the probe frames and the gallery templates, a simple likelihood measurement like a truncated Laplacian is sufficient [22]. More sophisticated likelihood measurement models, like the probabilistic subspace density approach, are required to handle greater appearance differences between the probe and the gallery [22]. In that approach, the intra-personal variations are learned using the available gallery and one frame of the video sequences. Usually, surveillance videos have very poor resolution, in addition to large variations in pose and illumination, which results in a decrease in both tracking and recognition performance. Here we propose a multidimensional scaling (MDS)-based approach for computing the measurement likelihood, which results in better modeling of the appearance difference between the gallery and the probe, and hence in better tracking and recognition.

3.3. Learning-Based Likelihood Measurement

In this work, we use local SIFT features [14] at seven fiducial locations for representing a face (Figure 2). SIFT descriptors are fairly robust to modest variations in pose and resolution, and this kind of representation has been shown to be useful for matching facial images in uncontrolled scenarios. But the large variations in pose, illumination and resolution observed in surveillance-quality videos result in a significant decrease in recognition performance using SIFT descriptors. The MDS-based approach transforms the SIFT descriptors extracted from the gallery/probe images to a space in which their pairwise Euclidean distances approximate the distances had all the descriptors been computed from HR frontal images. The transformation is learned from a set of HR and corresponding LR training images.

Figure 2. SIFT features at fiducial locations used for representing the face.
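
Descriptors of this kind can, for instance, be computed with OpenCV by placing keypoints at known fiducial locations and concatenating the resulting SIFT vectors. The fiducial coordinates, patch size and file name below are placeholders, and this is only one way to realize such a representation, not necessarily the exact pipeline used in the paper:

import cv2
import numpy as np

def fiducial_sift_descriptor(gray_face, fiducials, patch_size=16.0):
    # Compute a SIFT descriptor at each fixed fiducial (x, y) location and
    # concatenate them into a single face descriptor (7 x 128 = 896-D for 7 points).
    sift = cv2.SIFT_create()
    keypoints = [cv2.KeyPoint(float(x), float(y), patch_size) for (x, y) in fiducials]
    _, descriptors = sift.compute(gray_face, keypoints)
    return descriptors.reshape(-1)

# Placeholder fiducial locations for a 64x64 face crop (eye centres, nose tip,
# mouth corners, outer eye corners); in practice these come from a fiducial detector.
fiducials = [(20, 24), (44, 24), (32, 38), (24, 50), (40, 50), (12, 22), (52, 22)]
face = cv2.imread("face_crop.png", cv2.IMREAD_GRAYSCALE)   # hypothetical cropped face image
if face is not None:
    descriptor = fiducial_sift_descriptor(face, fiducials)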

Let the HR frontal images be denoted by I^{(h,f)} and the LR non-frontal images by I^{(l,p)}. The corresponding SIFT-based feature descriptors are denoted by x^{(h,f)} and x^{(l,p)}. Let f : R^d → R^m denote the mapping from the input feature space R^d to the embedded Euclidean space R^m,

    f(x; W) = W^T φ(x)    (4)

Here φ(x) can be a linear or non-linear function of the feature vectors, and W is the matrix of weights to be determined. The goal is to simultaneously transform the feature vectors from I^{(h,f)}_i and I^{(l,p)}_j such that the Euclidean distance between the transformed feature vectors approximates d^{(h,f)}_{i,j} (the distance if both images were frontal and high-resolution). Thus the objective function to be minimized is given by the distance-preserving term J_DP, which ensures that the distance between the transformed feature vectors approximates d^{(h,f)}_{i,j}:

    J_DP(W) = Σ_i Σ_j (q_{ij}(W) − d^{(h,f)}_{i,j})^2    (5)

Here q_{ij}(W) is the distance between the transformed feature vectors of the images I^{(h,f)}_i and I^{(l,p)}_j. An optional class separability term J_CS can also be incorporated in the objective function to further facilitate discriminability:

    J_CS(W) = Σ_i Σ_j δ(ω_i, ω_j) q_{ij}^2(W)    (6)

This term tries to minimize the distance between feature vectors belonging to the same class [21]. Here δ(ω_i, ω_j) = 1 when ω_i = ω_j and 0 otherwise (ω_i denotes the class label of the i-th image). Combining the above two terms, the transformation is obtained by minimizing the following objective function:

    J(W) = λ J_DP(W) + (1 − λ) J_CS(W)    (7)

The relative effect of the two terms in the objective function is controlled by the parameter λ. The iterative majorization algorithm [21] is used to minimize the objective function (7) and solve for the transformation matrix W. To compute the measurement likelihood, the SIFT descriptors of the gallery image and of the affine-transformed probe frame are mapped using the learned transformation W, followed by computation of the Euclidean distance between the transformed features:

    p(z_t | n_t, θ_t) = ‖ W^T [ φ(T_{θ_t}{z_t}) − φ(x_{n_t}) ] ‖    (8)

Figure 3 shows a flow chart of the proposed learning-based simultaneous tracking and recognition framework.

Figure 3. Flow chart showing the steps of the proposed algorithm.
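
A simplified sketch of the objective in (7) and of the distance underlying (8) is given below, with φ(x) = x as in the experiments. It only evaluates the objective for a candidate W; the iterative majorization solver of [21] is not reproduced, and the variable names are illustrative:

import numpy as np

def mds_objective(W, X_gallery, X_probe, d_target, labels, lam=0.8):
    # J(W) = lam * J_DP(W) + (1 - lam) * J_CS(W) from Equation (7), with phi(x) = x.
    # X_gallery: (N, d) HR frontal descriptors, X_probe: (N, d) LR non-frontal descriptors,
    # d_target[i, j]: distance d^{(h,f)}_{i,j} had both images been HR and frontal,
    # labels: (N,) array of class labels of the training images.
    G = X_gallery @ W                          # transformed gallery features W^T phi(x)
    P = X_probe @ W                            # transformed probe features
    q = np.linalg.norm(G[:, None, :] - P[None, :, :], axis=2)        # q_ij(W)
    j_dp = np.sum((q - d_target) ** 2)                               # Equation (5)
    same_class = (labels[:, None] == labels[None, :]).astype(float)  # delta(omega_i, omega_j)
    j_cs = np.sum(same_class * q ** 2)                               # Equation (6)
    return lam * j_dp + (1.0 - lam) * j_cs

def learned_likelihood_distance(W, probe_descriptor, gallery_descriptor):
    # Distance underlying the measurement likelihood in Equation (8), with phi(x) = x.
    return np.linalg.norm(W.T @ (probe_descriptor - gallery_descriptor))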

4. Experimental Evaluation

In this section, we discuss in detail the experimental evaluation of the proposed approach.

4.1. Dataset Used

For our experiments, we use 50 surveillance-quality videos of 50 subjects from the Multiple Biometric Grand Challenge (MBGC) [16] video challenge data as the probe videos. Figure 4 shows some sample frames from a video sequence. Since the MBGC video challenge data does not contain the high-resolution frontal still images needed to form the HR gallery set, we select images of the same subjects from the FRGC data, which has considerable subject overlap with the MBGC data. Figure 5 (top row) shows some sample gallery images from the dataset used, and the bottom row shows cropped face regions from the corresponding probe videos. We see that there is a considerable difference in pose, illumination and resolution between the gallery images and the probe videos.

Figure 4. Example frames from the MBGC video challenge [16].

Figure 5. (Top) Example high-resolution gallery images; (Bottom) Cropped facial regions from the corresponding low-resolution probe videos.

4.2. Recognition and Tracking Accuracy

Here we report both the tracking and the recognition performance of the proposed approach. The proposed learning-based likelihood measurement model is compared with the following two approaches for computing the likelihood measurement [22]:

1. Truncated Laplacian likelihood: Here the likelihood measurement model is given by [22]

    p(z_t | n_t, θ_t) = LAP(‖T_{θ_t}{z_t} − I_{n_t}‖; σ_1, τ_1)    (9)

Here ‖·‖ is the absolute distance and

    LAP(x; σ, τ) = σ^{-1} exp(−x/σ)  if x ≤ τσ,
                   σ^{-1} exp(−τ)    otherwise.

2. Probabilistic subspace density based likelihood: To handle significant appearance differences between the facial images in the gallery and the probe, Zhou et al. [22] proposed using the probabilistic subspace density approach of Moghaddam et al. [15] due to its computational efficiency and high recognition accuracy. The available gallery and one video frame are used for constructing the intra-personal space (IPS). Using this approach, the measurement likelihood can be written as

    p(z_t | n_t, θ_t) = PS(T_{θ_t}{z_t} − I_{n_t})    (10)

where

    PS(x) = exp(−(1/2) Σ_{i=1}^s y_i^2 / λ_i) / [ (2π)^{s/2} Π_{i=1}^s λ_i^{1/2} ]

Here {λ_i, e_i}_{i=1}^s are the top s eigenvalues and the corresponding eigenvectors obtained by performing regular Principal Component Analysis [19] on the IPS, and y_i = e_i^T x is the i-th principal component of x.
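
For reference, these two baseline likelihoods can be sketched in Python as follows. This is a minimal reading of (9) and (10), not the original code; σ_1, τ_1 and the IPS eigen-decomposition are assumed to come from training, and the default parameter values are illustrative:

import numpy as np

def truncated_laplacian_likelihood(probe_patch, gallery_template, sigma=1.0, tau=3.0):
    # Equation (9): LAP(||T_theta{z_t} - I_n||; sigma, tau) with the absolute (L1) distance.
    x = np.abs(probe_patch - gallery_template).sum()
    if x <= tau * sigma:
        return np.exp(-x / sigma) / sigma
    return np.exp(-tau) / sigma

def subspace_density_likelihood(probe_patch, gallery_template, eigvals, eigvecs):
    # Equation (10): probabilistic subspace density PS(x) evaluated on the difference
    # x = T_theta{z_t} - I_n, using the top-s eigenpairs of the intra-personal space.
    # eigvals: (s,) top eigenvalues; eigvecs: (d, s) corresponding eigenvectors.
    x = (probe_patch - gallery_template).reshape(-1)
    y = eigvecs.T @ x                                   # principal components y_i = e_i^T x
    s = len(eigvals)
    norm_const = (2.0 * np.pi) ** (s / 2.0) * np.prod(np.sqrt(eigvals))
    return float(np.exp(-0.5 * np.sum(y ** 2 / eigvals)) / norm_const)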

We build upon the code provided on the authors' website. For all experiments, the kernel mapping φ is set to identity (i.e., φ(x) = x) to highlight just the performance improvement due to the proposed learning approach. Training is done using images from a separate set of 50 subjects. For the computation of the transformation matrix using the iterative majorization algorithm, we observe that the objective function decreases until around 20 iterations and then stabilizes. The value of the parameter λ is set to 0.8 and the output dimension m is set to 100. The number of particles for the particle filtering framework is taken to be 200.

The recognition performance of the proposed approach is shown in Table 1, along with comparisons with the two other likelihood models. The three approaches label each video as belonging to one of the subjects in the gallery. The recognition rate is calculated as the percentage of correct labels over all videos. We see that the recognition performance of the proposed learning-based simultaneous tracking and recognition framework is considerably better than that of the other approaches, due to better modeling of the appearance difference between the gallery and the probe images.

    Method                                             Rank-1 Recog. Accuracy    Tracking Accuracy (pixels/frame)
    Truncated Laplacian likelihood                     24%
    Probabilistic subspace density based likelihood    40%
    Proposed approach                                  68%

Table 1. Rank-1 recognition accuracy and tracking accuracy (pixels/frame) using the proposed approach. Comparisons with other approaches are also provided.

To compute the tracking error, we manually marked three fiducial locations (the centers of the two eyes and the bottom of the nose) in every fifth frame of each video. For each probe video, we measured the difference between the manually marked ground-truth locations and the locations given by the tracker. For a probe video, the tracking error is given by the average difference in the fiducial locations (averaged over all the frames). Figure 6 shows the tracking results for a few frames of a probe video for the proposed approach. Figure 7 shows the tracking error for the proposed approach and for the truncated Laplacian-based and probabilistic subspace density-based likelihood models. We see that for 49 out of 50 videos, the proposed approach achieves a lower tracking error than the other approaches. The mean tracking error (in pixels) over all the probe videos for each approach is shown in Table 1.

Figure 6. A few frames showing the tracking results obtained using the proposed approach. Here only the region of the frames containing the person is shown for better visualization.

Figure 7. Average tracking accuracy of the proposed learning-based approach. Comparisons with the other approaches are also provided.

5. Summary and Discussion

In this paper, we consider the problem of matching faces in low-resolution surveillance videos against good high-resolution images in the gallery. Tracking and recognizing faces in low-resolution videos with considerable variations in pose, illumination, expression, etc. is a very challenging problem. Performing tracking and recognition simultaneously in a unified framework, as opposed to first performing tracking and then recognition, has been shown to improve both tracking and recognition performance. But simple likelihood measurement models like the truncated Laplacian, IPS, etc. fail to give satisfactory performance in cases where there is a significant difference between the appearance of the gallery images and the faces in the probe videos. In this paper, we propose using a learning-based likelihood measurement model to improve both the recognition and tracking accuracy for surveillance-quality videos. In the training stage, a transformation is learned to simultaneously transform the features from the poor-quality probe images and the high-quality gallery images in such a manner that the distances between them approximate the distances had the probe videos been captured under the same conditions as the gallery images. In the testing stage, the learned transformation matrix is used to transform the features from the gallery images and from the different particles to compute the likelihood of each particle in the modified particle-filtering framework. Experiments on surveillance-quality videos show the usefulness of the proposed approach.

References

[1] O. Arandjelovic and R. Cipolla. Face recognition from video using the generic shape-illumination manifold. In European Conf. on Computer Vision, pages 27-40.
[2] O. Arandjelovic and R. Cipolla. A manifold approach to face recognition from low quality video across illumination and pose using implicit super-resolution. In IEEE International Conf. on Computer Vision.
[3] S. Biswas, K. W. Bowyer, and P. J. Flynn. Multidimensional scaling for matching low-resolution facial images. In IEEE International Conf. on Biometrics: Theory, Applications and Systems.
[4] I. Borg and P. Groenen. Modern Multidimensional Scaling: Theory and Applications. Springer, Second Edition, New York, NY.
[5] B. Gunturk, A. Batur, Y. Altunbasak, M. Hayes, and R. Mersereau. Eigenface-domain super-resolution for face recognition. IEEE Trans. on Image Processing, 12(5), May.
[6] P. Hennings-Yeomans, S. Baker, and B. Kumar. Simultaneous super-resolution and feature extraction for recognition of low-resolution faces. In IEEE Conf. on Computer Vision and Pattern Recognition, pages 1-8.

[7] P. Hennings-Yeomans, B. Kumar, and S. Baker. Recognition of low-resolution faces using multiple still images and multiple cameras. In IEEE International Conf. on Biometrics: Theory, Applications and Systems, pages 1-6.
[8] K. Jia and S. Gong. Multi-modal tensor face for simultaneous super-resolution and recognition. In IEEE International Conf. on Computer Vision.
[9] K. Jia and S. Gong. Generalized face super-resolution. IEEE Trans. on Image Processing, 17(6), June.
[10] M. Kim, S. Kumar, V. Pavlovic, and H. Rowley. Face tracking and recognition with visual constraints in real-world videos. In IEEE Conf. on Computer Vision and Pattern Recognition, pages 1-8.
[11] K. C. Lee, J. Ho, M. H. Yang, and D. Kriegman. Video-based face recognition using probabilistic appearance manifolds. In IEEE Conf. on Computer Vision and Pattern Recognition.
[12] C. Liu, H. Y. Shum, and W. T. Freeman. Face hallucination: Theory and practice. International Journal of Computer Vision, 75(1).
[13] X. Liu and T. Chen. Video-based face recognition using adaptive hidden Markov models. In IEEE Conf. on Computer Vision and Pattern Recognition.
[14] D. G. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2):91-110.
[15] B. Moghaddam. Principal manifolds and probabilistic subspaces for visual recognition. IEEE Trans. on Pattern Analysis and Machine Intelligence, 24(6), June.
[16] P. J. Phillips, P. J. Flynn, J. R. Beveridge, W. T. Scruggs, A. J. O'Toole, D. S. Bolme, K. W. Bowyer, B. A. Draper, G. H. Givens, Y. M. Lui, H. Sahibzada, J. A. Scallan, and S. Weimer. Overview of the multiple biometrics grand challenge. In International Conference on Biometrics.
[17] P. J. Phillips, W. T. Scruggs, A. J. O'Toole, P. J. Flynn, K. W. Bowyer, C. L. Schott, and M. Sharpe. FRVT 2006 and ICE 2006 large-scale experimental results. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(5).
[18] J. Stallkamp, H. K. Ekenel, and R. Stiefelhagen. Video-based face recognition on real-world data. In IEEE International Conf. on Computer Vision.
[19] M. Turk and A. Pentland. Eigenfaces for recognition. Journal of Cognitive Neuroscience, 3(1):71-86.
[20] R. Wang, S. Shan, X. Chen, and W. Gao. Manifold-manifold distance with application to face recognition based on image set. In IEEE Conf. on Computer Vision and Pattern Recognition.
[21] A. Webb. Multidimensional scaling by iterative majorization using radial basis functions. Pattern Recognition, 28(5), May.
[22] S. K. Zhou, V. Krueger, and R. Chellappa. Probabilistic recognition of human faces from video. Computer Vision and Image Understanding, 91.
[23] W. Zhao, R. Chellappa, P. Phillips, and A. Rosenfeld. Face recognition: A literature survey. ACM Computing Surveys, 35(4).


More information

Automatic 3D Face Detection, Normalization and Recognition

Automatic 3D Face Detection, Normalization and Recognition Automatic 3D Face Detection, Normalization and Recognition Ajmal Mian, Mohammed Bennamoun and Robyn Owens School of Computer Science and Software Engineering The University of Western Australia 35 Stirling

More information

Distance-driven Fusion of Gait and Face for Human Identification in Video

Distance-driven Fusion of Gait and Face for Human Identification in Video X. Geng, L. Wang, M. Li, Q. Wu, K. Smith-Miles, Distance-Driven Fusion of Gait and Face for Human Identification in Video, Proceedings of Image and Vision Computing New Zealand 2007, pp. 19 24, Hamilton,

More information

Decorrelated Local Binary Pattern for Robust Face Recognition

Decorrelated Local Binary Pattern for Robust Face Recognition International Journal of Advanced Biotechnology and Research (IJBR) ISSN 0976-2612, Online ISSN 2278 599X, Vol-7, Special Issue-Number5-July, 2016, pp1283-1291 http://www.bipublication.com Research Article

More information

Learning a Manifold as an Atlas Supplementary Material

Learning a Manifold as an Atlas Supplementary Material Learning a Manifold as an Atlas Supplementary Material Nikolaos Pitelis Chris Russell School of EECS, Queen Mary, University of London [nikolaos.pitelis,chrisr,lourdes]@eecs.qmul.ac.uk Lourdes Agapito

More information

Thermal Face Recognition using Local Interest Points and Descriptors for HRI Applications *

Thermal Face Recognition using Local Interest Points and Descriptors for HRI Applications * Thermal Face Recognition using Local Interest Points and Descriptors for HRI Applications * G. Hermosilla, P. Loncomilla, J. Ruiz-del-Solar Department of Electrical Engineering, Universidad de Chile Center

More information

Boosting face recognition via neural Super-Resolution

Boosting face recognition via neural Super-Resolution Boosting face recognition via neural Super-Resolution Guillaume Berger, Cle ment Peyrard and Moez Baccouche Orange Labs - 4 rue du Clos Courtel, 35510 Cesson-Se vigne - France Abstract. We propose a two-step

More information

An Adaptive Threshold LBP Algorithm for Face Recognition

An Adaptive Threshold LBP Algorithm for Face Recognition An Adaptive Threshold LBP Algorithm for Face Recognition Xiaoping Jiang 1, Chuyu Guo 1,*, Hua Zhang 1, and Chenghua Li 1 1 College of Electronics and Information Engineering, Hubei Key Laboratory of Intelligent

More information

Using 3D Models to Recognize 2D Faces in the Wild

Using 3D Models to Recognize 2D Faces in the Wild 23 IEEE Conference on Computer Vision and Pattern Recognition Workshops Using 3D Models to Recognize 2D Faces in the Wild Iacopo Masi, Giuseppe Lisanti, Andrew D. Bagdanov, Pietro Pala and Alberto Del

More information

3D Active Appearance Model for Aligning Faces in 2D Images

3D Active Appearance Model for Aligning Faces in 2D Images 3D Active Appearance Model for Aligning Faces in 2D Images Chun-Wei Chen and Chieh-Chih Wang Abstract Perceiving human faces is one of the most important functions for human robot interaction. The active

More information

Face Recognition from Images with High Pose Variations by Transform Vector Quantization

Face Recognition from Images with High Pose Variations by Transform Vector Quantization Face Recognition from Images with High Pose Variations by Transform Vector Quantization Amitava Das, Manoj Balwani 1, Rahul Thota 1, and Prasanta Ghosh 2 Microsoft Research India. Bangalore, India amitavd@microsoft.com

More information