Laplacian Faces: A Face Recognition Tool
Prof. Sami M. Halwani, Prof. M. V. Ramana Murthy, Prof. S. B. Thorat
Faculty of Computing and Information Technology, King Abdul Aziz University, Rabigh, KSA, E-mail: mv.rm50@gmail.com
Director, Institute of Technology & Management, VIP Road, Nanded, India, E-mail: suryakant_thorat@yahoo.com

Abstract

The purpose of this study is face recognition. Human face recognition systems have gained considerable attention during the last few years. Few applications are as demanding with respect to security, sensitivity and accuracy. Face recognition systems are built on computer programs that analyze images of human faces in order to identify them. In this paper the Laplacianfaces method is used for face recognition. Each face image in the image space is mapped to a low-dimensional face subspace using Locality Preserving Projection (LPP), which is characterized by a set of feature images called Laplacianfaces. The result of this study is a system for recognizing faces by matching them against an image database. This system can be used in security systems and can be compared to fingerprint or iris recognition systems.

Index Terms: Face Recognition, Laplacianfaces, Locality Preserving Projection

1. Introduction

Laplacianfaces is an unsupervised, appearance-based approach to human face recognition. The approach uses Locality Preserving Projections (LPP), in which face images are mapped into a face subspace for analysis. LPP uses the local manifold structure rather than the global Euclidean structure used by principal component analysis (PCA) and linear discriminant analysis (LDA). Specifically, Laplacianfaces are the optimal linear approximations to the eigenfunctions of the Laplace-Beltrami operator on the face manifold. The approach of using Laplacianfaces for face recognition was proposed by Xiaofei He et al. (2005). Laplacianfaces offers a new approach to face analysis which explicitly considers the manifold structure.
To be specific, the manifold structure is modeled by a nearest-neighbor graph which preserves the local structure of the image space. A face subspace is obtained by LPP. Each face image in the image space is mapped to a low-dimensional face subspace, which is characterized by a set of feature images called Laplacianfaces. The face subspace preserves local structure and seems to have more discriminating power than the PCA approach for classification purposes.

2. Locality Preserving Projection

Locality Preserving Projection (LPP) is an algorithm for learning a locality preserving subspace. LPP seeks to preserve the intrinsic geometry and local structure of the data. The objective function of LPP is

min Σ_ij (y_i − y_j)² S_ij

where y_i is the one-dimensional representation of image x_i and the matrix S is a similarity matrix. A possible way of defining S is

S_ij = exp(−‖x_i − x_j‖² / t)   if ‖x_i − x_j‖² < ε
S_ij = 0                        otherwise

where ε > 0 is sufficiently small; ε defines the radius of the local neighborhood, in other words the locality. Alternatively, S_ij = exp(−‖x_i − x_j‖² / t) when x_i is among the k nearest neighbors of x_j or x_j is among the k nearest neighbors of x_i. The objective function with our choice of symmetric weights S_ij (S_ij = S_ji) incurs a heavy penalty if neighboring points x_i and x_j are mapped far apart, i.e. if (y_i − y_j)² is large. Minimizing it is therefore an attempt to ensure that if x_i and x_j are close, then y_i and y_j are close as well. Following some simple algebraic steps, we see that

International Journal of Networked Computing and Advanced Information Management (IJNCM), volume 2, number 1, April 2012
(1/2) Σ_ij (y_i − y_j)² S_ij
  = (1/2) Σ_ij (w^T x_i − w^T x_j)² S_ij
  = Σ_ij w^T x_i S_ij x_i^T w − Σ_ij w^T x_i S_ij x_j^T w
  = Σ_i w^T x_i D_ii x_i^T w − w^T X S X^T w
  = w^T X D X^T w − w^T X S X^T w
  = w^T X (D − S) X^T w
  = w^T X L X^T w

where X = [x_1, x_2, ..., x_n] and D is a diagonal matrix whose entries are the column (or row, since S is symmetric) sums of S, D_ii = Σ_j S_ji. L = D − S is the Laplacian matrix. Matrix D provides a natural measure on the data points: the bigger the value D_ii (corresponding to y_i), the more important y_i is. Therefore, we impose the constraint

y^T D y = 1, i.e. w^T X D X^T w = 1

Finally, the minimization problem reduces to finding

arg min_w w^T X L X^T w   subject to   w^T X D X^T w = 1

The transformation vector w that minimizes the objective function is given by the minimum-eigenvalue solution to the generalized eigenvalue problem

X L X^T w = λ X D X^T w

Note that the two matrices X L X^T and X D X^T are both symmetric and positive semi-definite, since the Laplacian matrix L and the diagonal matrix D are both symmetric and positive semi-definite.

3. Laplacianfaces

LPP is a general method for manifold learning. It is obtained by finding the optimal linear approximations to the eigenfunctions of the Laplace-Beltrami operator on the manifold. Therefore, though it is still a linear technique, it seems to recover important aspects of the intrinsic nonlinear manifold structure by preserving local structure. Based on LPP, we describe our Laplacianfaces method for face representation in a locality preserving subspace. In the face analysis and recognition problem one is confronted with the difficulty that the matrix X D X^T is sometimes singular. This stems from the fact that the number of images in the training set, n, is often much smaller than the number of pixels in each image, m. In such a case the rank of X D X^T is at most n, while X D X^T is an m × m matrix, which implies that X D X^T is singular.
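Putting the pieces above together, the weight matrix S and the generalized eigenproblem can be sketched in NumPy/SciPy. This is a minimal illustration, not the paper's code: the function names and defaults are assumptions, and a small ridge term stands in for the paper's PCA preprocessing to keep X D X^T non-singular.

```python
import numpy as np
from scipy.linalg import eigh

def similarity_matrix(X, k=5, t=1.0):
    """Heat-kernel weights: S_ij = exp(-||x_i - x_j||^2 / t) when x_i and
    x_j are k-nearest neighbors of each other (symmetrized), else 0.
    X : (n, d) array, one image vector per row."""
    n = X.shape[0]
    # pairwise squared Euclidean distances
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    S = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]          # k nearest, skipping self
        S[i, nbrs] = np.exp(-d2[i, nbrs] / t)
    return np.maximum(S, S.T)                       # enforce S_ij = S_ji

def lpp_directions(X, S, n_components=2, ridge=1e-8):
    """Minimum-eigenvalue solutions of X L X^T w = lambda X D X^T w.
    X : (d, n) data matrix, one image vector per column."""
    D = np.diag(S.sum(axis=0))                      # D_ii = sum_j S_ji
    L = D - S                                       # Laplacian L = D - S
    A = X @ L @ X.T                                 # X L X^T
    B = X @ D @ X.T + ridge * np.eye(X.shape[0])    # keep B positive definite
    evals, evecs = eigh(A, B)                       # ascending eigenvalues
    return evecs[:, :n_components]
```

Here scipy.linalg.eigh(A, B) solves the generalized symmetric problem A w = λ B w with eigenvalues in ascending order, so the leading columns are the minimum-eigenvalue directions the derivation calls for.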
To overcome the complication of a singular X D X^T, we first project the image set to a PCA subspace so that the resulting matrix X D X^T is non-singular. Another reason for using PCA as preprocessing is noise reduction. This method, which we call Laplacianfaces, can learn an optimal subspace for face representation and recognition. The algorithmic procedure of Laplacianfaces is formally stated below:

1. PCA projection: We project the image set {x_i} into the PCA subspace by throwing away the smallest principal components. For simplicity, we still use x to denote the images in the PCA subspace in the following steps. We denote the transformation matrix of PCA by W_PCA.

2. Constructing the nearest-neighbor graph: Let G denote a graph with n nodes. The i-th node corresponds to the face image x_i. We put an edge between nodes i and j if x_i and x_j are close, i.e. x_i is among the k nearest neighbors of x_j or x_j is among the k nearest neighbors of x_i. The constructed nearest-neighbor graph is an approximation of the local manifold structure. Note that here we do not use the ε-neighborhood to construct the graph, simply because it is often difficult to choose the optimal ε in real-world applications, while the k-nearest-neighbor graph can be
constructed more stably. The disadvantage is that the k-nearest-neighbor search increases the computational complexity of the algorithm. When computational complexity is a major concern, one can switch to the ε-neighborhood.

3. Choosing the weights: If nodes i and j are connected, put

S_ij = e^(−‖x_i − x_j‖² / t)

where t is a suitable constant. Otherwise, put S_ij = 0. The weight matrix S of graph G models the face manifold structure by preserving local structure.

4. Eigenmap: Compute the eigenvectors and eigenvalues for the generalized eigenvector problem

X L X^T w = λ X D X^T w     (1)

where D is a diagonal matrix whose entries are the column (or row, since S is symmetric) sums of S, D_ii = Σ_j S_ji, and L = D − S is the Laplacian matrix. The i-th column of matrix X is x_i. Let w_0, w_1, ..., w_{k−1} be the solutions of equation (1), ordered according to their eigenvalues, 0 ≤ λ_0 ≤ λ_1 ≤ ... ≤ λ_{k−1}. These eigenvalues are greater than or equal to zero because the matrices X L X^T and X D X^T are both symmetric and positive semi-definite. Thus, the embedding is as follows:

x → y = W^T x,   W = W_PCA W_LPP,   W_LPP = [w_0, w_1, ..., w_{k−1}]

where y is a k-dimensional vector and W is the transformation matrix. This linear mapping best preserves the manifold's estimated intrinsic geometry in a linear sense. The column vectors of W are the so-called Laplacianfaces.

4. Algorithm for Finding Laplacianfaces

Assumptions:
1. M denotes the number of images in the training set.
2. K denotes the number of Laplacianfaces with which we approximate a face, K < M.
3. All images are N × N matrices, which can be represented as N² × 1 dimensional vectors.

Algorithm:
1. Obtain M training images I_1, I_2, ..., I_M. The training images must contain only the face and should be centered. In our system, an MS Access database is used which contains all the training images. The system can capture new faces and insert them into the database. Each new training face goes through processes such as face detection, resizing and rotation.
2. Represent each image I_i as a vector Γ_i, as shown in Fig. 1.

3. Find the average face vector:

Ψ = (1/M) Σ_{i=1}^{M} Γ_i

4. Subtract the mean face from each face vector Γ_i to get a set of vectors Φ_i. Thus the common features are removed and we are left with only the distinguishing features of each face:

Φ_i = Γ_i − Ψ

5. Find the covariance matrix C = A A^T, where A = [Φ_1, Φ_2, ..., Φ_M]. Note that C is an N² × N² matrix and A is an N² × M matrix.

6. We now need to calculate the eigenvectors u_i of C. However, note that C is an N² × N² matrix, so it would yield N² eigenvectors, each N²-dimensional. For an image this number is huge; the computations required would easily make the system run out of memory. How do we get around this problem?
7. To solve this problem, instead of the matrix A A^T consider the matrix A^T A. As A is an N² × M matrix, A^T A is an M × M matrix. We can easily find the eigenvectors of this matrix without the system running out of memory. This yields M eigenvectors, each of dimension M × 1; call these eigenvectors v_i. From the properties of matrices we have u_i = A v_i. Having found the v_i, we can thus find the M largest eigenvectors of A A^T.

Figure 1. Concatenation of a 2D image (an N × N matrix of pixels a_11, ..., a_NN) into a 1D vector of dimension N² × 1.

Next, select the best K eigenvectors. The simplest way is to sort all eigenvectors by their corresponding eigenvalues and remove those with the lowest values. The choice of K depends on the application; if reconstruction of faces is a concern, then keeping 98% of the data is recommended. The matrix of the selected K eigenvectors is called W_PCA.

8. Find the k-nearest-neighbor graph based on the W_PCA created earlier. Create a graph G as an M × M matrix. Then compute the Euclidean distance between each image vector of the training data set and the rest, using the formula

‖v_i − v_j‖

where v_i and v_j are the i-th and j-th columns of the W_PCA matrix respectively. For each v_i, the distance to all other vectors of the training data set should be calculated. Next, select the k lowest Euclidean distances and insert each into its corresponding cell (i-th row, j-th column) of matrix G.

9. Choose the weights based on the k-nearest-neighbor graph. Create a weight matrix S of size M × M and calculate the weights as follows: if there is a link between nodes i and j in graph G, then

S_ij = e^(−‖x_i − x_j‖² / t)

where t is a suitable constant; otherwise, put S_ij = 0. The weight matrix S of graph G models the face manifold structure by preserving local structure.

10. Construct the Laplacian matrix L. First, we need to construct the diagonal matrix D using the formula
D_ii = Σ_j S_ji

where S is the weight matrix created earlier. Next, we can construct the Laplacian matrix L from the formula

L = D − S

11. Compute the eigenvectors and eigenvalues for the generalized eigenvector problem: calculate the eigenvectors and eigenvalues of A L A^T, where A is the N² × M matrix created in step 5 and L is the M × M matrix created in step 10. A L A^T is an N² × N² matrix, which yields N² eigenvectors, each N²-dimensional; for an image this number is huge and makes the system run out of memory during computation. To solve this problem we use the same matrix property as in steps 6-7: instead of A L A^T we consider the M × M matrix L A^T A. This matrix yields M eigenvectors, each M-dimensional; call these eigenvectors v. Since (A L A^T)(A v) = A (L A^T A) v, the eigenvectors of A L A^T are obtained by simply multiplying A with v. Thus, we can find the M eigenvectors of the matrix A L A^T using v. Call these eigenvectors W_LPP; the column vectors of W_LPP are the so-called Laplacianfaces.
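The small-matrix trick of steps 7 and 11 can be sketched as follows. It rests on the identity (A L A^T)(A v) = A (L A^T A) v: an eigenvector v of the M × M matrix L A^T A lifts to an eigenvector u = A v of the huge N² × N² matrix A L A^T, and taking L = I recovers the PCA case of step 7. This is an illustrative NumPy sketch under those assumptions, not the paper's code:

```python
import numpy as np

def small_matrix_eigenvectors(A, L, K):
    """Eigenvectors of A L A^T computed via the M x M matrix L A^T A.

    A : (N2, M) matrix of mean-subtracted image vectors, N2 >> M.
    L : (M, M) graph Laplacian (use np.eye(M) for the PCA case).
    Returns K normalized columns u = A v."""
    small = L @ (A.T @ A)                 # (M, M): cheap to diagonalize
    evals, V = np.linalg.eig(small)       # spectrum is real for L PSD, A^T A PD
    order = np.argsort(evals.real)[:K]    # K smallest eigenvalues
    U = (A @ V[:, order]).real            # lift: u_i = A v_i
    U /= np.linalg.norm(U, axis=0)        # normalize columns
    return U
```

With L = I the returned columns are eigenvectors of the covariance matrix A A^T, which is exactly the memory-saving eigenface computation of step 7.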
12. The last step is to find the embedding y using the formula

y = W_LPP^T X

where W_LPP is created in step 11 and X is created in step 2. Thus, y will be an M × M matrix.

Recognition task:
1. Obtain a test image T. Pass it to the face detection algorithm to remove extra pixels and get only the face; call the detected face I.
2. Resize the image I to the same size as the faces in the training data set.
3. Represent the image I as a vector Γ, as shown in Fig. 1.
4. Find the embedding for the test face using the formula

y_test = W_LPP^T Γ

5. After getting the embedding we need to classify it. For the classification task we could simply use some distance measure, or use a classifier such as a Support Vector Machine. In the case of a distance measure, particularly the Euclidean distance, we need to find the minimum Euclidean distance between the trained images and the test image:

r = min_i ‖y_test − y_i‖

If r < θ, where θ is a threshold value, then we can say that the test image is recognized as the image with which it gives the minimum distance. If however r > θ, then the test image does not belong to the database.

5. Distance Measures

Euclidean distance: The Euclidean distance is probably the most widely used distance metric. It is a special case of a general class of norms and is given as

‖x − y‖ = sqrt(Σ_i (x_i − y_i)²)

Mahalanobis distance: The Mahalanobis distance is a better distance measure when it comes to pattern recognition problems. It takes into account the covariance between the variables and hence removes the problems of scale and correlation that are inherent in the Euclidean distance. It is given as

d(x, y) = sqrt((x − y)^T C⁻¹ (x − y))

where C is the covariance matrix of the variables involved.

6. Deciding on the Threshold

Why is the threshold θ important? Suppose for simplicity we have only 5 images in the training set, as shown in Fig. 2, and a test image that is not in the training set comes up for the recognition task.
The score for each of the 5 images will be computed against the incoming test image, and even if the test image is not in the database, the system will still report it as recognized as the training image with which its score is lowest. Clearly this is an anomaly that we need to address; it is for this purpose that we set the threshold. The threshold θ is decided heuristically: we choose a large set of random images (both face and non-face), calculate the scores for images of people in the database as well as for this random set, and set the threshold θ accordingly.
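The recognition-and-threshold rule described above can be sketched as a nearest-neighbor search with rejection. The function names are illustrative assumptions; the Mahalanobis variant is included for completeness alongside the Euclidean rule:

```python
import numpy as np

def recognize(y_test, Y_train, theta):
    """Nearest-neighbor match with rejection threshold theta.

    Y_train : (k, M) matrix, column i = embedding y_i of training image i.
    Returns (index, distance) of the best match, or (None, distance)
    when the minimum distance r exceeds theta (face not in database)."""
    d = np.linalg.norm(Y_train - y_test[:, None], axis=0)  # Euclidean distances
    i = int(np.argmin(d))                                  # r = min_i ||y - y_i||
    return (i, d[i]) if d[i] < theta else (None, d[i])

def mahalanobis(x, y, C):
    """Mahalanobis distance d(x, y) = sqrt((x - y)^T C^-1 (x - y))."""
    diff = x - y
    return float(np.sqrt(diff @ np.linalg.solve(C, diff)))
```

A None result stands for the "r > θ, test image does not belong to the database" branch; θ itself would be tuned on a held-out set of face and non-face images as described above.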
6 Figure. non-face score 7. Experiment on AT & T Face DATABASE The system has been tested by AT&T face database. AT&T face database contains faces of 40 different people each having 10 images with different poses and rotation. fig.3 shows an example of the AT&T faces of five people. One image of each person from the database has been trained to the system and the rest has been kept for testing purpose. Thus, the system had 40 trained images. Then, 00 faces from the AT&T and some other probe images out of the database have been tested. And finally it has been found out that the system could recognize 80% of the faces accurately. Conclusion One of the central problems in face manifold learning is to estimate the intrinsic dimensionality of the nonlinear face manifold, or, degrees of freedom. We know that the dimensionality of the manifold is equal to the dimensionality of the local tangent space. Some previous works show that the local tangent space can be approximated using points in a neighbor set. Therefore, one possibility is to estimate the dimensionality of the tangent space. Face recognition methods are still far to address all the challenges like pose, scale, rotation and illumination. However, Laplacian faces is robust and can minimize some of the problems with pose, scale, rotation and illumination as it preserves the local structure of the face but it can not eliminate it. Laplacian faces is interesting for further research and development activities. Figure 3. AT & T face samples 6
References

[1] Amit, Y. (2002) 2D Object Detection and Recognition, 1st ed., The MIT Press, Cambridge: Library of Congress Cataloging-in-Publication Data
[2] Xiaofei He et al. (2005) Laplacianfaces, State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China
[3] Mahmood, M.T. (2006) Face Detection by Image Discriminating, unpublished thesis (M.Sc.), Blekinge Institute of Technology
[4] Smith, L.I. (2002) A Tutorial on Principal Component Analysis
[5] He, X., and Niyogi, P., Locality Preserving Projections, Department of Computer Science, The University of Chicago
[6] Yang, M.H., Kriegman, D.J., and Ahuja, N. (2002) Detecting Faces in Images: A Survey, IEEE Transactions on Pattern Analysis and Machine Intelligence
[7] Goldstein, A.J., Harmon, L.D., and Lesk, A.B. (1971) "Identification of human faces", Proc. IEEE 59
[8] Kanade, T. (1973) "Picture processing system by computer complex and recognition of human faces", Dept. of Information Science, Kyoto University
[9] Atalay, I., and Ballikaya, F. (1993) "Development of an Image Processing Environment for the 80x86 Platform", B.Sc. Thesis, Istanbul Technical University
[10] Chang, H., and Robles, U. (2000) EE368 Final Project Report: Face Detection [online], available: students.stanford.edu/robles/ee368/main.html [accessed 4 April 2011]
[11] Visual Statistics Studio, Variance and Covariance [online], available: http://www.visualstatistics.net/visual%20statistics%20multimedia/covariance.htm [accessed 4 April 2011]
[12] Swarthmore Institute of Higher Learning, Eigenvalues and Eigenvectors Introduction [online], available: www.swarthmore.edu/NatSci/echeeve1/Ref/LPSA/MtrxVibe/EigMat/MatrixEigen.html [accessed 4 April 2011]
[13] Trivedi, S. (2009) Face Recognition using Eigenfaces and Distance Classifiers: A Tutorial [online], available: onionesquereality.wordpress.com [accessed 4 April 2011]
CSC 411: Lecture 14: Principal Components Analysis & Autoencoders Raquel Urtasun & Rich Zemel University of Toronto Nov 4, 2015 Urtasun & Zemel (UofT) CSC 411: 14-PCA & Autoencoders Nov 4, 2015 1 / 18
More informationDIMENSION REDUCTION FOR HYPERSPECTRAL DATA USING RANDOMIZED PCA AND LAPLACIAN EIGENMAPS
DIMENSION REDUCTION FOR HYPERSPECTRAL DATA USING RANDOMIZED PCA AND LAPLACIAN EIGENMAPS YIRAN LI APPLIED MATHEMATICS, STATISTICS AND SCIENTIFIC COMPUTING ADVISOR: DR. WOJTEK CZAJA, DR. JOHN BENEDETTO DEPARTMENT
More informationImage Based Feature Extraction Technique For Multiple Face Detection and Recognition in Color Images
Image Based Feature Extraction Technique For Multiple Face Detection and Recognition in Color Images 1 Anusha Nandigam, 2 A.N. Lakshmipathi 1 Dept. of CSE, Sir C R Reddy College of Engineering, Eluru,
More informationGlobally and Locally Consistent Unsupervised Projection
Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence Globally and Locally Consistent Unsupervised Projection Hua Wang, Feiping Nie, Heng Huang Department of Electrical Engineering
More informationSIFT: SCALE INVARIANT FEATURE TRANSFORM SURF: SPEEDED UP ROBUST FEATURES BASHAR ALSADIK EOS DEPT. TOPMAP M13 3D GEOINFORMATION FROM IMAGES 2014
SIFT: SCALE INVARIANT FEATURE TRANSFORM SURF: SPEEDED UP ROBUST FEATURES BASHAR ALSADIK EOS DEPT. TOPMAP M13 3D GEOINFORMATION FROM IMAGES 2014 SIFT SIFT: Scale Invariant Feature Transform; transform image
More informationFace Detection and Recognition in an Image Sequence using Eigenedginess
Face Detection and Recognition in an Image Sequence using Eigenedginess B S Venkatesh, S Palanivel and B Yegnanarayana Department of Computer Science and Engineering. Indian Institute of Technology, Madras
More informationRobust Face Recognition Using Enhanced Local Binary Pattern
Bulletin of Electrical Engineering and Informatics Vol. 7, No. 1, March 2018, pp. 96~101 ISSN: 2302-9285, DOI: 10.11591/eei.v7i1.761 96 Robust Face Recognition Using Enhanced Local Binary Pattern Srinivasa
More informationDiscriminate Analysis
Discriminate Analysis Outline Introduction Linear Discriminant Analysis Examples 1 Introduction What is Discriminant Analysis? Statistical technique to classify objects into mutually exclusive and exhaustive
More informationLinear Discriminant Analysis for 3D Face Recognition System
Linear Discriminant Analysis for 3D Face Recognition System 3.1 Introduction Face recognition and verification have been at the top of the research agenda of the computer vision community in recent times.
More informationRobust Face Recognition via Sparse Representation
Robust Face Recognition via Sparse Representation Panqu Wang Department of Electrical and Computer Engineering University of California, San Diego La Jolla, CA 92092 pawang@ucsd.edu Can Xu Department of
More informationSmart Attendance System using Computer Vision and Machine Learning
Smart Attendance System using Computer Vision and Machine Learning Dipti Kumbhar #1, Prof. Dr. Y. S. Angal *2 # Department of Electronics and Telecommunication, BSIOTR, Wagholi, Pune, India 1 diptikumbhar37@gmail.com,
More informationNonlinear Dimensionality Reduction Applied to the Classification of Images
onlinear Dimensionality Reduction Applied to the Classification of Images Student: Chae A. Clark (cclark8 [at] math.umd.edu) Advisor: Dr. Kasso A. Okoudjou (kasso [at] math.umd.edu) orbert Wiener Center
More informationLearning a Spatially Smooth Subspace for Face Recognition
Learning a Spatially Smooth Subspace for Face Recognition Deng Cai Xiaofei He Yuxiao Hu Jiawei Han Thomas Huang University of Illinois at Urbana-Champaign Yahoo! Research Labs {dengcai2, hanj}@cs.uiuc.edu,
More informationDistance-driven Fusion of Gait and Face for Human Identification in Video
X. Geng, L. Wang, M. Li, Q. Wu, K. Smith-Miles, Distance-Driven Fusion of Gait and Face for Human Identification in Video, Proceedings of Image and Vision Computing New Zealand 2007, pp. 19 24, Hamilton,
More informationCS 195-5: Machine Learning Problem Set 5
CS 195-5: Machine Learning Problem Set 5 Douglas Lanman dlanman@brown.edu 26 November 26 1 Clustering and Vector Quantization Problem 1 Part 1: In this problem we will apply Vector Quantization (VQ) to
More informationUnsupervised Learning
Unsupervised Learning Learning without Class Labels (or correct outputs) Density Estimation Learn P(X) given training data for X Clustering Partition data into clusters Dimensionality Reduction Discover
More informationLocally Linear Landmarks for large-scale manifold learning
Locally Linear Landmarks for large-scale manifold learning Max Vladymyrov and Miguel Á. Carreira-Perpiñán Electrical Engineering and Computer Science University of California, Merced http://eecs.ucmerced.edu
More informationCSC 411: Lecture 14: Principal Components Analysis & Autoencoders
CSC 411: Lecture 14: Principal Components Analysis & Autoencoders Richard Zemel, Raquel Urtasun and Sanja Fidler University of Toronto Zemel, Urtasun, Fidler (UofT) CSC 411: 14-PCA & Autoencoders 1 / 18
More informationDiscriminative Locality Alignment
Discriminative Locality Alignment Tianhao Zhang 1, Dacheng Tao 2,3,andJieYang 1 1 Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, Shanghai, China 2 School of Computer
More informationLinear local tangent space alignment and application to face recognition
Neurocomputing 70 (2007) 1547 1553 Letters Linear local tangent space alignment and application to face recognition Tianhao Zhang, Jie Yang, Deli Zhao, inliang Ge Institute of Image Processing and Pattern
More informationLocal Linear Embedding. Katelyn Stringer ASTR 689 December 1, 2015
Local Linear Embedding Katelyn Stringer ASTR 689 December 1, 2015 Idea Behind LLE Good at making nonlinear high-dimensional data easier for computers to analyze Example: A high-dimensional surface Think
More informationFACE RECOGNITION FROM A SINGLE SAMPLE USING RLOG FILTER AND MANIFOLD ANALYSIS
FACE RECOGNITION FROM A SINGLE SAMPLE USING RLOG FILTER AND MANIFOLD ANALYSIS Jaya Susan Edith. S 1 and A.Usha Ruby 2 1 Department of Computer Science and Engineering,CSI College of Engineering, 2 Research
More informationLinear and Non-linear Dimentionality Reduction Applied to Gene Expression Data of Cancer Tissue Samples
Linear and Non-linear Dimentionality Reduction Applied to Gene Expression Data of Cancer Tissue Samples Franck Olivier Ndjakou Njeunje Applied Mathematics, Statistics, and Scientific Computation University
More informationData Preprocessing. Javier Béjar. URL - Spring 2018 CS - MAI 1/78 BY: $\
Data Preprocessing Javier Béjar BY: $\ URL - Spring 2018 C CS - MAI 1/78 Introduction Data representation Unstructured datasets: Examples described by a flat set of attributes: attribute-value matrix Structured
More information[Gaikwad *, 5(11): November 2018] ISSN DOI /zenodo Impact Factor
GLOBAL JOURNAL OF ENGINEERING SCIENCE AND RESEARCHES LBP AND PCA BASED ON FACE RECOGNITION SYSTEM Ashok T. Gaikwad Institute of Management Studies and Information Technology, Aurangabad, (M.S), India ABSTRACT
More informationUnsupervised Learning
Networks for Pattern Recognition, 2014 Networks for Single Linkage K-Means Soft DBSCAN PCA Networks for Kohonen Maps Linear Vector Quantization Networks for Problems/Approaches in Machine Learning Supervised
More informationData Mining Chapter 3: Visualizing and Exploring Data Fall 2011 Ming Li Department of Computer Science and Technology Nanjing University
Data Mining Chapter 3: Visualizing and Exploring Data Fall 2011 Ming Li Department of Computer Science and Technology Nanjing University Exploratory data analysis tasks Examine the data, in search of structures
More informationFunction approximation using RBF network. 10 basis functions and 25 data points.
1 Function approximation using RBF network F (x j ) = m 1 w i ϕ( x j t i ) i=1 j = 1... N, m 1 = 10, N = 25 10 basis functions and 25 data points. Basis function centers are plotted with circles and data
More informationData mining. Classification k-nn Classifier. Piotr Paszek. (Piotr Paszek) Data mining k-nn 1 / 20
Data mining Piotr Paszek Classification k-nn Classifier (Piotr Paszek) Data mining k-nn 1 / 20 Plan of the lecture 1 Lazy Learner 2 k-nearest Neighbor Classifier 1 Distance (metric) 2 How to Determine
More informationMultidirectional 2DPCA Based Face Recognition System
Multidirectional 2DPCA Based Face Recognition System Shilpi Soni 1, Raj Kumar Sahu 2 1 M.E. Scholar, Department of E&Tc Engg, CSIT, Durg 2 Associate Professor, Department of E&Tc Engg, CSIT, Durg Email:
More informationFuzzy Bidirectional Weighted Sum for Face Recognition
Send Orders for Reprints to reprints@benthamscience.ae The Open Automation and Control Systems Journal, 2014, 6, 447-452 447 Fuzzy Bidirectional Weighted Sum for Face Recognition Open Access Pengli Lu
More informationFace Recognition for Different Facial Expressions Using Principal Component analysis
Face Recognition for Different Facial Expressions Using Principal Component analysis ASHISH SHRIVASTAVA *, SHEETESH SAD # # Department of Electronics & Communications, CIIT, Indore Dewas Bypass Road, Arandiya
More informationAn Efficient Secure Multimodal Biometric Fusion Using Palmprint and Face Image
International Journal of Computer Science Issues, Vol. 2, 2009 ISSN (Online): 694-0784 ISSN (Print): 694-084 49 An Efficient Secure Multimodal Biometric Fusion Using Palmprint and Face Image Nageshkumar.M,
More informationECG782: Multidimensional Digital Signal Processing
Professor Brendan Morris, SEB 3216, brendan.morris@unlv.edu ECG782: Multidimensional Digital Signal Processing Spring 2014 TTh 14:30-15:45 CBC C313 Lecture 06 Image Structures 13/02/06 http://www.ee.unlv.edu/~b1morris/ecg782/
More informationOutline. Advanced Digital Image Processing and Others. Importance of Segmentation (Cont.) Importance of Segmentation
Advanced Digital Image Processing and Others Xiaojun Qi -- REU Site Program in CVIP (7 Summer) Outline Segmentation Strategies and Data Structures Algorithms Overview K-Means Algorithm Hidden Markov Model
More informationMETHODS FOR TARGET DETECTION IN SAR IMAGES
METHODS FOR TARGET DETECTION IN SAR IMAGES Kaan Duman Supervisor: Prof. Dr. A. Enis Çetin December 18, 2009 Bilkent University Dept. of Electrical and Electronics Engineering Outline Introduction Target
More informationAugmented Reality VU. Computer Vision 3D Registration (2) Prof. Vincent Lepetit
Augmented Reality VU Computer Vision 3D Registration (2) Prof. Vincent Lepetit Feature Point-Based 3D Tracking Feature Points for 3D Tracking Much less ambiguous than edges; Point-to-point reprojection
More informationCIS 520, Machine Learning, Fall 2015: Assignment 7 Due: Mon, Nov 16, :59pm, PDF to Canvas [100 points]
CIS 520, Machine Learning, Fall 2015: Assignment 7 Due: Mon, Nov 16, 2015. 11:59pm, PDF to Canvas [100 points] Instructions. Please write up your responses to the following problems clearly and concisely.
More informationDiffusion Wavelets for Natural Image Analysis
Diffusion Wavelets for Natural Image Analysis Tyrus Berry December 16, 2011 Contents 1 Project Description 2 2 Introduction to Diffusion Wavelets 2 2.1 Diffusion Multiresolution............................
More informationPrincipal motion: PCA-based reconstruction of motion histograms
Principal motion: PCA-based reconstruction of motion histograms Hugo Jair Escalante a and Isabelle Guyon b a INAOE, Puebla, 72840, Mexico, b CLOPINET, Berkeley, CA 94708, USA http://chalearn.org May 2012
More informationLocal Discriminant Embedding and Its Variants
Local Discriminant Embedding and Its Variants Hwann-Tzong Chen Huang-Wei Chang Tyng-Luh Liu Institute of Information Science, Academia Sinica Nankang, Taipei 115, Taiwan {pras, hwchang, liutyng}@iis.sinica.edu.tw
More information( ) =cov X Y = W PRINCIPAL COMPONENT ANALYSIS. Eigenvectors of the covariance matrix are the principal components
Review Lecture 14 ! PRINCIPAL COMPONENT ANALYSIS Eigenvectors of the covariance matrix are the principal components 1. =cov X Top K principal components are the eigenvectors with K largest eigenvalues
More informationFault detection with principal component pursuit method
Journal of Physics: Conference Series PAPER OPEN ACCESS Fault detection with principal component pursuit method Recent citations - Robust Principal Component Pursuit for Fault Detection in a Blast Furnace
More informationAnnouncements. Recognition I. Optical Flow: Where do pixels move to? dy dt. I + y. I = x. di dt. dx dt. = t
Announcements I Introduction to Computer Vision CSE 152 Lecture 18 Assignment 4: Due Toda Assignment 5: Posted toda Read: Trucco & Verri, Chapter 10 on recognition Final Eam: Wed, 6/9/04, 11:30-2:30, WLH
More informationA Supervised Non-linear Dimensionality Reduction Approach for Manifold Learning
A Supervised Non-linear Dimensionality Reduction Approach for Manifold Learning B. Raducanu 1 and F. Dornaika 2,3 1 Computer Vision Center, Barcelona, SPAIN 2 Department of Computer Science and Artificial
More informationLecture 2 September 3
EE 381V: Large Scale Optimization Fall 2012 Lecture 2 September 3 Lecturer: Caramanis & Sanghavi Scribe: Hongbo Si, Qiaoyang Ye 2.1 Overview of the last Lecture The focus of the last lecture was to give
More informationFace Recognition using Tensor Analysis. Prahlad R. Enuganti
Face Recognition using Tensor Analysis Prahlad R. Enuganti The University of Texas at Austin Final Report EE381K 14 Multidimensional Digital Signal Processing May 16, 2005 Submitted to Prof. Brian Evans
More informationFace/Flesh Detection and Face Recognition
Face/Flesh Detection and Face Recognition Linda Shapiro EE/CSE 576 1 What s Coming 1. Review of Bakic flesh detector 2. Fleck and Forsyth flesh detector 3. Details of Rowley face detector 4. The Viola
More information