Dual-Space Linear Discriminant Analysis for Face Recognition
Xiaogang Wang and Xiaoou Tang
Department of Information Engineering
The Chinese University of Hong Kong
{xgang1, …}

Abstract

Linear Discriminant Analysis (LDA) is a popular feature extraction technique for face recognition. However, it often suffers from the small sample size problem when dealing with high dimensional face data. Some approaches have been proposed to overcome this problem, but they are often unstable and have to discard some discriminative information. In this paper, a dual-space LDA approach for face recognition is proposed to take full advantage of the discriminative information in the face space. Based on a probabilistic visual model, the eigenvalue spectrum in the null space of the within-class scatter matrix is estimated, and discriminant analysis is simultaneously applied in the principal and null subspaces of the within-class scatter matrix. The two sets of discriminative features are then combined for recognition. It outperforms existing LDA approaches.

1. Introduction

LDA is a popular face recognition approach. It determines a set of projection vectors maximizing the between-class scatter matrix ($S_b$) while minimizing the within-class scatter matrix ($S_w$) in the projective feature space. However, LDA often suffers from the small sample size problem when dealing with high dimensional face data. When there are not enough training samples, $S_w$ may become singular, and it is difficult to compute the LDA vectors. Several approaches have been proposed to address this problem. In a two-stage PCA+LDA approach [1], the data dimensionality is first reduced by Principal Component Analysis (PCA), and LDA is then performed in the reduced PCA subspace, in which $S_w$ is non-singular. However, Chen et al. [2] suggested that the null space spanned by the eigenvectors of $S_w$ with zero eigenvalues contains the most discriminative information, and an LDA method in the null space of $S_w$ was proposed.
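The small sample size problem can be seen directly from the rank of the within-class scatter matrix: with $M$ training samples from $L$ classes, $\mathrm{rank}(S_w) \le M - L$. A minimal NumPy sketch (my own illustration, not from the paper) makes the singularity concrete:

```python
# Illustrative sketch: with M training samples from L classes in N dimensions,
# rank(S_w) <= M - L, so S_w is singular whenever N > M - L.
import numpy as np

rng = np.random.default_rng(0)
N, L, n_i = 100, 5, 3                      # 100-dim data, 5 classes, 3 samples each
X = rng.normal(size=(L * n_i, N))
labels = np.repeat(np.arange(L), n_i)

S_w = np.zeros((N, N))
for i in range(L):
    Xi = X[labels == i]
    Xc = Xi - Xi.mean(axis=0)              # center each class
    S_w += Xc.T @ Xc                       # within-class scatter

print(np.linalg.matrix_rank(S_w))          # at most M - L = 10, far below N = 100
```

With $N = 100$ and only 15 samples, $S_w$ has rank 10 and 90 zero eigenvalues; it is exactly this large null space that the approaches discussed in this paper treat differently.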
It chooses the projection vectors maximizing $S_b$ under the constraint that $S_w$ is zero. But this approach discards the discriminative information outside the null space of $S_w$. Yu et al. [3] proposed a direct LDA algorithm. It first removes the null space of $S_b$, assuming that no discriminative information exists in this space. Unfortunately, we can show that this assumption is incorrect. We will demonstrate that the optimal discriminant vectors do not necessarily lie in the subspace spanned by the class centers. A common problem with all of these LDA approaches is that they lose some discriminative information in the high dimensional face space.

In this paper, using the probabilistic visual model [4], the eigenvalue spectrum in the null space of $S_w$ is estimated. We then apply discriminant analysis in both the principal and null subspaces of $S_w$, and the two parts of discriminative features are combined in recognition. This dual-space LDA approach successfully resolves the small sample size problem. Compared with conventional LDA approaches, it is more stable and makes use of all the discriminative information in both the principal and null spaces. Experiments on the FERET database clearly demonstrate its efficacy.

2. Linear Discriminant Analysis

2.1. LDA

The LDA method tries to find a set of projection vectors $W$ maximizing the ratio of the determinant of $S_b$ to $S_w$,

$$W = \arg\max_W \frac{|W^T S_b W|}{|W^T S_w W|}. \quad (1)$$

Let the training set contain $L$ classes and each class $X_i$ have $n_i$ samples. $S_w$ and $S_b$ are defined as

$$S_w = \sum_{i=1}^{L} \sum_{x_k \in X_i} (x_k - m_i)(x_k - m_i)^T, \quad (2)$$

$$S_b = \sum_{i=1}^{L} n_i (m_i - m)(m_i - m)^T, \quad (3)$$

where $m$ is the center of the whole training set, $m_i$ is the center of class $X_i$, and $x_k$ is a sample belonging to class $X_i$. $W$ can be computed from the eigenvectors of $S_w^{-1} S_b$ [7]. However, when the small sample size problem occurs, $S_w$ becomes singular and it is difficult to compute $S_w^{-1}$. To avoid the singularity of $S_w$, a two-stage PCA+LDA approach is used in [1]. PCA is first used to project the high dimensional face data into a low dimensional feature space. Then LDA is performed in the reduced PCA subspace, in which $S_w$ is non-singular.

Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2004). © 2004 IEEE.

Fukunaga [7] proved that $W$ can also be computed from simultaneous diagonalization of $S_w$ and $S_b$. First, $S_w$ is whitened by

$$\Lambda^{-1/2} \Phi^T S_w \Phi \Lambda^{-1/2} = I, \quad (4)$$

where $\Phi$ and $\Lambda$ are the eigenvector matrix and eigenvalue matrix of $S_w$. To avoid singularity and overfitting to noise, only the eigenvectors with non-zero and non-trivial eigenvalues are selected in the enhanced LDA model proposed by Liu et al. [6]. Second, PCA is applied to the class centers of the transformed data. To do this, the class centers are projected onto $\Phi \Lambda^{-1/2}$, and the between-class scatter matrix is transformed to $K_b$,

$$K_b = \Lambda^{-1/2} \Phi^T S_b \Phi \Lambda^{-1/2}. \quad (5)$$

After computing the eigenvector matrix $\Psi$ and eigenvalue matrix of $K_b$, the overall projection vectors of LDA can be defined as

$$W = \Phi \Lambda^{-1/2} \Psi. \quad (6)$$

2.2. LDA in the Null Space of $S_w$

The LDA approaches described above are all performed in the principal subspace of $S_w$, in which $W^T S_w W \neq 0$. However, the null space of $S_w$, in which $W^T S_w W = 0$, also contains much discriminative information, since it is possible to find some projection vectors $W$ satisfying $W^T S_w W = 0$ and $W^T S_b W \neq 0$, for which the Fisher criterion in Eq. (1) reaches its maximum value. LDA in the null space of $S_w$ was proposed by Chen et al. [2]. First, the null space of $S_w$ is computed as

$$V^T S_w V = 0 \quad (V^T V = I). \quad (7)$$

Then $S_b$ is projected onto the null space of $S_w$,

$$\tilde{S}_b = V^T S_b V. \quad (8)$$

Choose the eigenvectors $U$ of $\tilde{S}_b$ with the largest eigenvalues,

$$U^T \tilde{S}_b U = \tilde{\Lambda}_b. \quad (9)$$

The LDA transformation matrix is defined as $W = V U$. As the rank of $S_w$ increases, the null space of $S_w$ becomes small, and much discriminative information outside the null space is discarded [2].

2.3. Direct LDA

Yu et al. [3] proposed a direct LDA method. $S_b$ is first diagonalized, and the null space of $S_b$ is removed,

$$Y^T S_b Y = D_b, \quad (10)$$

where $Y$ are the eigenvectors and $D_b$ the corresponding non-zero eigenvalues of $S_b$. $S_w$ is transformed to

$$K_w = D_b^{-1/2} Y^T S_w Y D_b^{-1/2}. \quad (11)$$

$K_w$ is diagonalized by eigenanalysis,

$$U^T K_w U = D_w. \quad (12)$$

The LDA transformation matrix for classification is defined as

$$W = Y D_b^{-1/2} U D_w^{-1/2}. \quad (13)$$

In direct LDA, the null space of $S_b$ is removed first, under the assumption that it contains no discriminative information. This assumption is not true. In direct LDA, the projection vectors are restricted to the subspace spanned by the class centers. However, the optimal discriminant vectors do not necessarily lie in this subspace. This point is illustrated in Figure 1: for a binary classification problem, the discriminant projection vector derived by direct LDA is constrained to the line passing through the two class centers, but according to the Fisher criterion, the optimal discriminant vector is a different line.

[Figure 1. Using direct LDA, the discriminant vector is constrained to the line passing through the two class centers $m_1$ and $m_2$, but according to the Fisher criterion, the optimal discriminant projection is a different line.]

2.4. Discussion

All of these LDA approaches lose some discriminative information in the face space. This point is summarized in Figure 2, where $\Omega_w$ denotes the principal subspace of $S_w$ and $\Omega_b$ the principal subspace of $S_b$. Since the total scatter matrix $S_t$ is equal to the sum of $S_w$ and $S_b$ [7],

$$S_t = S_w + S_b, \quad (14)$$

the face space is composed of $\Omega_w$ and $\Omega_b$. When $\Omega_b \subset \Omega_w$, as shown in Figure 2(a), LDA in the principal subspace of $S_w$ retains all the discriminative information in the face space. When $\Omega_w \subset \Omega_b$, as shown in Figure 2(b), direct LDA, working in the subspace $\Omega_b$, keeps all the discriminative information. When $\Omega_w \cap \Omega_b = \emptyset$, as shown in Figure 2(c), LDA in the null space of $S_w$ keeps all the discriminative information. Finally, when $\Omega_w$ and $\Omega_b$ only partially overlap, as shown in Figure 2(d), some discriminative information will definitely be lost by any of the existing LDA approaches.

Furthermore, conventional LDA approaches suffer from the overfitting problem, since the projection vectors are tuned to the training set in the presence of noise. As suggested in [8], an eigenvector is very sensitive to small perturbation if its eigenvalue is close to another eigenvalue of the same matrix, so the eigenvectors of $S_w$ with very small eigenvalues are unstable. In Eqs. (6) and (13), LDA in the principal subspace of $S_w$ and direct LDA both whiten the data using the inverse eigenvalues of $S_w$. Since the trivial eigenvalues, which are sensitive to noise, are not well estimated because of the small sample size problem, they can substantially change the projection vectors. If the data vector is whitened on noisy eigenvectors, overfitting will happen.

For LDA in the null space of $S_w$, the rank of $S_w$, $r_{S_w}$, is bounded by $\min(M - L, N)$, where $M$ is the total number of training samples, $L$ is the number of classes, and $N$ is the dimensionality of the face data. $r_{S_w}$ is almost equal to this bound because of the presence of noise. As shown by the experiments in [2], when the number of training samples is large, the null space of $S_w$ becomes small, so much discriminative information outside it will be lost.

3. Dual-Space LDA

3.1. Probabilistic model

In LDA, the main difficulty with the small sample size problem is that $S_w$, especially the eigenvalue spectrum in the null space of $S_w$, is not well estimated.
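As a concrete reference for the two conventional families reviewed in Section 2, the following NumPy sketch (my own hedged illustration; function names and the null-space tolerance are assumptions, not the authors' code) implements whitening-based LDA (Eqs. 4-6) and LDA in the null space of $S_w$ (Eqs. 7-9):

```python
# Hedged sketch of two conventional LDA variants: simultaneous-diagonalization
# (whitened) LDA, and LDA in the null space of S_w. Names/tolerances are mine.
import numpy as np

def scatter_matrices(X, labels):
    """Within-class (S_w) and between-class (S_b) scatter matrices."""
    m = X.mean(axis=0)
    N = X.shape[1]
    S_w = np.zeros((N, N))
    S_b = np.zeros((N, N))
    for c in np.unique(labels):
        Xi = X[labels == c]
        d = Xi.mean(axis=0) - m
        S_b += len(Xi) * np.outer(d, d)
        Xc = Xi - Xi.mean(axis=0)
        S_w += Xc.T @ Xc
    return S_w, S_b

def whitened_lda(S_w, S_b, n_keep, n_proj):
    """Whiten S_w with its n_keep non-trivial eigenvectors, then apply PCA
    to the transformed between-class scatter (simultaneous diagonalization)."""
    lam, Phi = np.linalg.eigh(S_w)
    idx = np.argsort(lam)[::-1][:n_keep]          # keep non-trivial eigenvalues
    White = Phi[:, idx] / np.sqrt(lam[idx])       # Phi Lambda^{-1/2}
    K_b = White.T @ S_b @ White
    _, Psi = np.linalg.eigh(K_b)
    return White @ Psi[:, ::-1][:, :n_proj]       # overall projection vectors

def null_space_lda(S_w, S_b, n_proj, tol=1e-10):
    """Project S_b into the null space of S_w and take its top eigenvectors."""
    lam, Phi = np.linalg.eigh(S_w)
    V = Phi[:, lam < tol * lam.max()]             # (numerical) null space of S_w
    Sb_t = V.T @ S_b @ V
    mu, U = np.linalg.eigh(Sb_t)
    return V @ U[:, ::-1][:, :n_proj]             # W = V U
```

In practice the eigendecompositions would be computed from the data matrices via SVD rather than by forming the $N \times N$ scatter matrices explicitly; the sketch keeps the matrix form to mirror the equations.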
In order to estimate the eigenvalue spectrum in the null space of $S_w$, we use the probabilistic visual model proposed in [4]. In classification, the likelihood of an input vector $x$ belonging to class $X$ is often estimated with a Gaussian density,

$$P(x \mid X) = \frac{1}{(2\pi)^{N/2} |\Sigma|^{1/2}} \exp\left( -\frac{1}{2} (x - m)^T \Sigma^{-1} (x - m) \right), \quad (15)$$

where $\Sigma$ is the covariance matrix of $X$, and $N$ is the dimensionality of the face vector. When there are not enough samples in each class, $\Sigma$ is replaced by $S_w$ [5],

$$P(x \mid X) = \frac{1}{(2\pi)^{N/2} |S_w|^{1/2}} \exp\left( -\frac{1}{2} (x - m)^T S_w^{-1} (x - m) \right). \quad (16)$$

This is called the Mahalanobis likelihood [5], characterized by the Mahalanobis distance

$$d(x) = \tilde{x}^T S_w^{-1} \tilde{x}, \quad (17)$$

where $\tilde{x} = x - m$. $S_w$ can be diagonalized as

$$S_w = \Phi \Lambda \Phi^T, \quad (18)$$

where $\Phi = [\phi_1, \ldots, \phi_N]$ is the eigenvector matrix of $S_w$ and $\Lambda$ the corresponding eigenvalue matrix. The Mahalanobis distance can be estimated as the sum of two independent parts, the distance-in-feature-space (DIFS) and the distance-from-feature-space (DFFS), corresponding to the principal subspace $F = \{\phi_i\}_{i=1}^{K}$ spanned by the $K$ eigenvectors with the largest eigenvalues, and its orthogonal complement $\bar{F} = \{\phi_i\}_{i=K+1}^{N}$:

$$d(x) = \sum_{i=1}^{K} \frac{y_i^2}{\lambda_i} + \frac{\epsilon^2(\tilde{x})}{\rho}, \quad (19)$$

where $y_i$ is the component projecting $\tilde{x}$ onto the $i$th eigenvector, $\lambda_i$ is the corresponding eigenvalue in $F$, and $\epsilon^2(\tilde{x})$ is the PCA reconstruction error of $\tilde{x}$ in $F$. The eigenvalues in $\bar{F}$ are not well estimated and have zero values. As suggested in [4], they are simply the estimated noise spectrum, and tend to be the flattest portion of the eigenvalue spectrum, so they can be estimated by a single value,

$$\rho = \frac{1}{N - K} \sum_{i=K+1}^{N} \lambda_i^*. \quad (20)$$

The unknown $\lambda_i^*$ in $\bar{F}$ are estimated by fitting a nonlinear function to the available portion of the eigenvalue spectrum in $F$. Now the within-class scatter matrix can be estimated as

$$\hat{S}_w = \sum_{i=1}^{K} \lambda_i \phi_i \phi_i^T + \rho \sum_{i=K+1}^{N} \phi_i \phi_i^T. \quad (21)$$

$\hat{S}_w$ is now non-singular.

[Figure 2. $\Omega_w$ is the principal subspace of $S_w$, and $\Omega_b$ is the principal subspace of $S_b$. In case (a), $\Omega_b \subset \Omega_w$, and LDA in the principal subspace of $S_w$ can keep all the discriminative information. In case (b), $\Omega_w \subset \Omega_b$, and direct LDA can keep all the discriminative information. In case (c), $\Omega_w \cap \Omega_b = \emptyset$, and LDA in the null space of $S_w$ can keep all the discriminative information. In case (d), $\Omega_w$ and $\Omega_b$ only partially intersect, so some discriminative information in the data space will definitely be lost by conventional LDA approaches.]

3.2. Discriminant Analysis in Dual Space

Both LDA and the Mahalanobis distance in Eq. (19) evaluate the distance from the input pattern to the class center after projection onto a subspace. Recall that LDA in the principal subspace of $S_w$ can be divided into two steps. In the first, whitening, step, the face data is projected onto $\Phi$ and normalized by $\Lambda^{-1/2}$. Most of the within-class variation concentrates on the few eigenvectors of $\Phi$ with the largest eigenvalues. Since $\Lambda$ represents the energy distribution of the within-class variation, the whitening process effectively reduces the within-class variation. This process is essentially the same as the Mahalanobis distance in $F$. Then, PCA is applied to the whitened class centers to find the dominant directions that best separate the class centers. This further reduces the noise and compacts the discriminative features onto a small number of principal components. It is also easy to see that the null space of $S_w$ is essentially identical to $\bar{F}$. LDA in the null space of $S_w$ extracts the discriminative features in $\bar{F}$. $\bar{F}$ has already removed most of the within-class variation, and LDA further separates the class centers by PCA.

The relationship between the Mahalanobis distance and LDA implies that LDA can be simultaneously applied to the principal and null subspaces of $S_w$, and the two parts of discriminative features can be combined to make full use of the discriminative information in the face space. Based on this observation, we develop a dual-space LDA algorithm.

At the training stage:

1. Compute $S_w$ and $S_b$ from the training set.

2. Apply PCA to $S_w$, and compute the principal subspace $F$, with $K$ eigenvectors $V = [\phi_1, \ldots, \phi_K]$, and its complementary subspace $\bar{F}$. Estimate the average eigenvalue $\rho$ in $\bar{F}$.

3. Project all of the class centers onto $F$ and normalize them by the $K$ eigenvalues. $S_b$ is transformed to

$$K_b = \Lambda^{-1/2} V^T S_b V \Lambda^{-1/2}, \quad (22)$$

where $\Lambda$ is the eigenvalue matrix for $F$. Apply PCA to $K_b$, and compute the $l$ eigenvectors $\Theta$ with the largest eigenvalues. The $l$ discriminative vectors in $F$ are defined as

$$W = V \Lambda^{-1/2} \Theta. \quad (23)$$

4. Project all the class centers onto $\bar{F}$ and compute the reconstruction difference

$$r = (I - V V^T) M, \quad (24)$$

where $M = [m_1, \ldots, m_L]$ is the class center matrix. In fact, $r$ is the projection of $M$ into $\bar{F}$. In $\bar{F}$, $S_b$ is transformed to

$$K_b^C = (I - V V^T) S_b (I - V V^T). \quad (25)$$

Compute the $l_C$ eigenvectors $\Theta_C$ of $K_b^C$ with the largest eigenvalues. The $l_C$ discriminative vectors in $\bar{F}$ are defined as

$$W_C = (I - V V^T) \Theta_C. \quad (26)$$

At the recognition stage:

1. All the face class centers $m$ in the gallery and the probe face data $x_t$ are projected onto the discriminant vectors in $F$ and $\bar{F}$ to get

$$a = W^T m, \quad (27)$$
$$a_C = W_C^T m, \quad (28)$$
$$a_t = W^T x_t, \quad (29)$$
$$a_{Ct} = W_C^T x_t. \quad (30)$$

2. The class is found to minimize the distance measure

$$d = \| a - a_t \|^2 + \| a_C - a_{Ct} \|^2 / \rho. \quad (31)$$

This dual-space LDA algorithm has several advantages over conventional LDA algorithms. First, it takes advantage of all the discriminative information in the full face space, while other LDA approaches all lose some discriminative information one way or the other. In the principal and null subspaces of $S_w$, the LDA vectors are computed using different criteria,

$$W = \arg\max_W \frac{|W^T S_b W|}{|W^T S_w W|}, \quad (32)$$

$$W_C = \arg\max_{W_C^T S_w W_C = 0} |W_C^T S_b W_C|. \quad (33)$$

Both sets of features contain discriminative information for recognition. However, the two subspaces have different metric scales. The principal subspace of $S_w$ has been whitened, so the projection vectors in $W$ are not orthonormal. It is not suitable, or at least not optimal, to combine the distances in the two subspaces directly. In dual-space LDA, the null space of $S_w$ is also whitened, by the average eigenvalue $\rho$. In Eq. (31), $\| a - a_t \|^2$ and $\| a_C - a_{Ct} \|^2 / \rho$ are computed under the same metric scale, and the distances in the two subspaces are similarly whitened by the eigenvalue spectrum of $S_w$.

The second advantage of the new algorithm is that it is more stable and less sensitive to noise than existing LDA approaches. Since the eigenvectors of $S_w$ with very small eigenvalues are unstable and sensitive to small perturbations, we avoid computing these unstable eigenvectors by grouping them into $\bar{F}$. In addition, the eigenvalue spectrum of $S_w$ is better estimated, so whitening with very small eigenvalues is avoided. Finally, this approach can be viewed as an improvement on the Mahalanobis likelihood computed from the subspace estimation.
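The training and matching steps above can be sketched in NumPy as follows. This is a hedged illustration, not the authors' implementation: function names and shapes are mine, and the floor applied to $\rho$ (used because the trailing eigenvalues may be numerically zero) stands in for the paper's nonlinear fit to the visible spectrum.

```python
# Hedged sketch of dual-space LDA training and matching. Names and the
# eigenvalue floor on rho are my assumptions, not the authors' code.
import numpy as np

def dual_space_lda(X, labels, K, l_f, l_c):
    """Return (W_f, W_c, rho): l_f discriminant vectors in the principal
    subspace F of S_w (Eq. 23), l_c vectors in its complement (Eq. 26),
    and the average eigenvalue rho of the complement (Eq. 20)."""
    m = X.mean(axis=0)
    N = X.shape[1]
    S_w = np.zeros((N, N))
    S_b = np.zeros((N, N))
    for c in np.unique(labels):                  # step 1: scatter matrices
        Xi = X[labels == c]
        d = Xi.mean(axis=0) - m
        S_b += len(Xi) * np.outer(d, d)
        Xc = Xi - Xi.mean(axis=0)
        S_w += Xc.T @ Xc
    # Step 2: principal subspace F (top-K eigenvectors of S_w); rho is the
    # mean of the remaining eigenvalues, floored since they may be numerically
    # zero (the paper instead fits a function to the visible spectrum).
    lam, Phi = np.linalg.eigh(S_w)
    lam, Phi = lam[::-1], Phi[:, ::-1]
    V, lam_K = Phi[:, :K], lam[:K]
    rho = max(lam[K:].mean(), 1e-6 * lam[0])
    # Step 3: whiten in F and apply PCA to the transformed S_b (Eqs. 22-23).
    White = V / np.sqrt(lam_K)
    _, Theta = np.linalg.eigh(White.T @ S_b @ White)
    W_f = White @ Theta[:, ::-1][:, :l_f]
    # Step 4: project S_b onto the complement of F and take its top
    # eigenvectors (Eqs. 25-26).
    P = np.eye(N) - V @ V.T
    _, Theta_c = np.linalg.eigh(P @ S_b @ P)
    W_c = P @ Theta_c[:, ::-1][:, :l_c]
    return W_f, W_c, rho

def dual_space_distance(x, center, W_f, W_c, rho):
    """Combined distance of Eq. (31): the principal-space part plus the
    complementary part rescaled by rho, so both share one metric scale."""
    d_f = np.sum((W_f.T @ (x - center)) ** 2)
    d_c = np.sum((W_c.T @ (x - center)) ** 2) / rho
    return d_f + d_c
```

In practice $K$, $l$, and $l_C$ would be chosen by validation; the gallery class centers are projected once at enrollment, and each probe only requires projections onto the $l + l_C$ discriminant vectors.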
It is more effective for classification and more efficient in computation. Besides effectively reducing the within-class variation like the Mahalanobis distance, it further separates the class centers, and removes some noise disturbance by compacting the discriminative features. It is also much faster than the Mahalanobis distance. Computing the reconstruction error $\epsilon^2(\tilde{x})$ in Eq. (19) is expensive; its computational cost is comparable to the correlation between two of the original high dimensional face data vectors. Our approach only needs to compute distances between vectors of $l + l_C$ dimensions.

4. Experiments

In this section, we apply dual-space LDA to face recognition and compare it with conventional approaches through experiments on data sets from the FERET face database [9]. The high dimensional image intensity vector is used as the input pattern for classification. All 1195 people from the FERET Fa/Fb data set are used in the experiment. There are two face images for each person. 495 people are used for training, and the remaining 700 people are used for testing. For each testing person, one face image is in the gallery and the other is the probe.

First, we compare dual-space LDA with the Mahalanobis distance estimated by the probabilistic visual model. Figure 3 reports their top 1 recognition accuracies with different feature numbers. The feature number for dual-space LDA is the sum of the discriminant feature numbers in $F$ and $\bar{F}$. The feature number for the Mahalanobis distance is the dimensionality $K$ of $F$. Three distance measures for the Mahalanobis distance, DIFS, DFFS, and DIFS+DFFS, are evaluated. Notice that only for DIFS does the feature number relate to the computational cost. Even for a small $K$, the computational cost of DFFS and DIFS+DFFS is very high, since they need to compute the reconstruction error $\epsilon^2(\tilde{x})$.

[Figure 3. Recognition accuracy comparison between dual-space LDA and the Mahalanobis distances.]

Dual-space LDA outperforms DIFS significantly at the same computational cost. It achieves over 96% recognition accuracy, while the best performance among the three Mahalanobis distance measures is about 93%. This is over a 40% reduction in recognition error rate.

Dual-space LDA also outperforms conventional LDA approaches. Figure 4 reports the cumulative matching scores comparing dual-space LDA with the Fisherface method (two-stage PCA+LDA), LDA in the null space of $S_w$, and direct LDA. The new method reduces the error rate by 50% compared with the conventional approaches.

[Figure 4. Cumulative matching scores for dual-space LDA, the Mahalanobis distance, Fisherface, LDA in the null space, and direct LDA on the FERET database.]

5. Conclusion

In this paper, a dual-space LDA approach for high dimensional data classification is proposed. Compared with existing LDA approaches, it is more stable and makes use of all the discriminative information in the face space. Experiments on the FERET face database have shown that the method is much more effective than existing LDA methods. In future study, we will investigate the application of this dual-space approach to unified subspace analysis [10] and face sketch recognition [11], and further compare it with random sampling LDA, which combines the two LDA subspaces at the decision level [12].

Acknowledgement

The work described in this paper was fully supported by grants from the Research Grants Council of the Hong Kong Special Administrative Region (Project nos. CUHK 419/1E and CUHK 44/3E).

References

[1] P. N. Belhumeur, J. Hespanha, and D. Kriegman, "Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection," IEEE Trans. on PAMI, Vol. 19, No. 7, pp. 711-720, July 1997.
[2] L. Chen, H. Liao, M. Ko, J. Lin, and G. Yu, "A New LDA-Based Face Recognition System Which Can Solve the Small Sample Size Problem," Pattern Recognition, Vol. 33, No. 10, pp. 1713-1726, Oct. 2000.
[3] H. Yu and J. Yang, "A Direct LDA Algorithm for High-Dimensional Data - with Application to Face Recognition," Pattern Recognition, Vol. 34, pp. 2067-2070, 2001.
[4] B. Moghaddam and A. Pentland, "Probabilistic Visual Learning for Object Representation," IEEE Trans. on PAMI, Vol. 19, No. 7, pp. 696-710, July 1997.
[5] W. Hwang and J. Weng, "Hierarchical Discriminant Regression," IEEE Trans. on PAMI, Vol. 22, No. 11, Nov. 2000.
[6] C. Liu and H. Wechsler, "Enhanced Fisher Linear Discriminant Models for Face Recognition," Proceedings of ICPR, 1998.
[7] K. Fukunaga, Introduction to Statistical Pattern Recognition, Academic Press, second edition, 1990.
[8] G. W. Stewart, Introduction to Matrix Computations, Academic Press, New York, 1973.
[9] P. J. Phillips, H. Moon, S. A. Rizvi, and P. J. Rauss, "The FERET Evaluation," in Face Recognition: From Theory to Applications, H. Wechsler, P. J. Phillips, V. Bruce, F. F. Soulie, and T. S. Huang, Eds., Berlin: Springer-Verlag, 1998.
[10] X. Wang and X. Tang, "Unified Subspace Analysis for Face Recognition," Proceedings of ICCV, Nice, France, Oct. 2003.
[11] X. Tang and X. Wang, "Face Sketch Synthesis and Recognition," Proceedings of ICCV, Nice, France, Oct. 2003.
[12] X. Wang and X. Tang, "Random Sampling LDA for Face Recognition," Proceedings of CVPR, Washington D.C., USA, 2004.
More informationEmotion Classification
Emotion Classification Shai Savir 038052395 Gil Sadeh 026511469 1. Abstract Automated facial expression recognition has received increased attention over the past two decades. Facial expressions convey
More informationFace View Synthesis Across Large Angles
Face View Synthesis Across Large Angles Jiang Ni and Henry Schneiderman Robotics Institute, Carnegie Mellon University, Pittsburgh, PA 1513, USA Abstract. Pose variations, especially large out-of-plane
More informationA Robust Method of Facial Feature Tracking for Moving Images
A Robust Method of Facial Feature Tracking for Moving Images Yuka Nomura* Graduate School of Interdisciplinary Information Studies, The University of Tokyo Takayuki Itoh Graduate School of Humanitics and
More informationRandom projection for non-gaussian mixture models
Random projection for non-gaussian mixture models Győző Gidófalvi Department of Computer Science and Engineering University of California, San Diego La Jolla, CA 92037 gyozo@cs.ucsd.edu Abstract Recently,
More informationIEEE TRANSACTIONS ON SYSTEM, MAN AND CYBERNETICS, PART B 1. Discriminant Subspace Analysis for Face. Recognition with Small Number of Training
IEEE TRANSACTIONS ON SYSTEM, MAN AND CYBERNETICS, PART B 1 Discriminant Subspace Analysis for Face Recognition with Small Number of Training Samples Hui Kong, Xuchun Li, Matthew Turk, and Chandra Kambhamettu
More informationTechnical Report. Title: Manifold learning and Random Projections for multi-view object recognition
Technical Report Title: Manifold learning and Random Projections for multi-view object recognition Authors: Grigorios Tsagkatakis 1 and Andreas Savakis 2 1 Center for Imaging Science, Rochester Institute
More informationNeighbourhood Components Analysis
Neighbourhood Components Analysis Jacob Goldberger, Sam Roweis, Geoff Hinton, Ruslan Salakhutdinov Department of Computer Science, University of Toronto {jacob,rsalakhu,hinton,roweis}@cs.toronto.edu Abstract
More informationHeat Kernel Based Local Binary Pattern for Face Representation
JOURNAL OF LATEX CLASS FILES 1 Heat Kernel Based Local Binary Pattern for Face Representation Xi Li, Weiming Hu, Zhongfei Zhang, Hanzi Wang Abstract Face classification has recently become a very hot research
More information22 October, 2012 MVA ENS Cachan. Lecture 5: Introduction to generative models Iasonas Kokkinos
Machine Learning for Computer Vision 1 22 October, 2012 MVA ENS Cachan Lecture 5: Introduction to generative models Iasonas Kokkinos Iasonas.kokkinos@ecp.fr Center for Visual Computing Ecole Centrale Paris
More informationFACE RECOGNITION UNDER LOSSY COMPRESSION. Mustafa Ersel Kamaşak and Bülent Sankur
FACE RECOGNITION UNDER LOSSY COMPRESSION Mustafa Ersel Kamaşak and Bülent Sankur Signal and Image Processing Laboratory (BUSIM) Department of Electrical and Electronics Engineering Boǧaziçi University
More informationApplication of Principal Components Analysis and Gaussian Mixture Models to Printer Identification
Application of Principal Components Analysis and Gaussian Mixture Models to Printer Identification Gazi. Ali, Pei-Ju Chiang Aravind K. Mikkilineni, George T. Chiu Edward J. Delp, and Jan P. Allebach School
More informationGenerative and discriminative classification techniques
Generative and discriminative classification techniques Machine Learning and Category Representation 013-014 Jakob Verbeek, December 13+0, 013 Course website: http://lear.inrialpes.fr/~verbeek/mlcr.13.14
More informationStudy and Comparison of Different Face Recognition Algorithms
, pp-05-09 Study and Comparison of Different Face Recognition Algorithms 1 1 2 3 4 Vaibhav Pathak and D.N. Patwari, P.B. Khanale, N.M. Tamboli and Vishal M. Pathak 1 Shri Shivaji College, Parbhani 2 D.S.M.
More informationSPEECH FEATURE EXTRACTION USING WEIGHTED HIGHER-ORDER LOCAL AUTO-CORRELATION
Far East Journal of Electronics and Communications Volume 3, Number 2, 2009, Pages 125-140 Published Online: September 14, 2009 This paper is available online at http://www.pphmj.com 2009 Pushpa Publishing
More informationIMAGE RETRIEVAL USING EFFICIENT FEATURE VECTORS GENERATED FROM COMPRESSED DOMAIN
IMAGE RERIEVAL USING EFFICIEN FEAURE VECORS GENERAED FROM COMPRESSED DOMAIN Daidi Zhong, Irek Defée Department of Information echnology, ampere University of echnology. P.O. Box 553, FIN-33101 ampere,
More informationFace detection and recognition. Many slides adapted from K. Grauman and D. Lowe
Face detection and recognition Many slides adapted from K. Grauman and D. Lowe Face detection and recognition Detection Recognition Sally History Early face recognition systems: based on features and distances
More informationA Discriminative Model for Age Invariant Face Recognition
To Appear, IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY A Discriminative Model for Age Invariant Face Recognition Zhifeng Li 1, Member, IEEE, Unsang Park 2, Member, IEEE, and Anil K. Jain 2,3,
More informationMSA220 - Statistical Learning for Big Data
MSA220 - Statistical Learning for Big Data Lecture 13 Rebecka Jörnsten Mathematical Sciences University of Gothenburg and Chalmers University of Technology Clustering Explorative analysis - finding groups
More informationDimension Reduction of Image Manifolds
Dimension Reduction of Image Manifolds Arian Maleki Department of Electrical Engineering Stanford University Stanford, CA, 9435, USA E-mail: arianm@stanford.edu I. INTRODUCTION Dimension reduction of datasets
More informationGraph Laplacian Kernels for Object Classification from a Single Example
Graph Laplacian Kernels for Object Classification from a Single Example Hong Chang & Dit-Yan Yeung Department of Computer Science, Hong Kong University of Science and Technology {hongch,dyyeung}@cs.ust.hk
More informationCluster Analysis. Mu-Chun Su. Department of Computer Science and Information Engineering National Central University 2003/3/11 1
Cluster Analysis Mu-Chun Su Department of Computer Science and Information Engineering National Central University 2003/3/11 1 Introduction Cluster analysis is the formal study of algorithms and methods
More informationRobust Kernel Methods in Clustering and Dimensionality Reduction Problems
Robust Kernel Methods in Clustering and Dimensionality Reduction Problems Jian Guo, Debadyuti Roy, Jing Wang University of Michigan, Department of Statistics Introduction In this report we propose robust
More informationAnnouncements. Recognition I. Gradient Space (p,q) What is the reflectance map?
Announcements I HW 3 due 12 noon, tomorrow. HW 4 to be posted soon recognition Lecture plan recognition for next two lectures, then video and motion. Introduction to Computer Vision CSE 152 Lecture 17
More informationScienceDirect. Evaluation of PCA, LDA and Fisherfaces in Appearance-Based Object Detection in Thermal Infra-Red Images with Incomplete Data
Available online at www.sciencedirect.com ScienceDirect Procedia Engineering 100 (2015 ) 1167 1173 25th DAAAM International Symposium on Intelligent Manufacturing and Automation, DAAAM 2014 Evaluation
More informationDeep Learning for Computer Vision
Deep Learning for Computer Vision Spring 2018 http://vllab.ee.ntu.edu.tw/dlcv.html (primary) https://ceiba.ntu.edu.tw/1062dlcv (grade, etc.) FB: DLCV Spring 2018 Yu Chiang Frank Wang 王鈺強, Associate Professor
More informationInternational Journal of Digital Application & Contemporary research Website: (Volume 1, Issue 8, March 2013)
Face Recognition using ICA for Biometric Security System Meenakshi A.D. Abstract An amount of current face recognition procedures use face representations originate by unsupervised statistical approaches.
More informationECE 484 Digital Image Processing Lec 17 - Part II Review & Final Projects Topics
ECE 484 Digital Image Processing Lec 17 - Part II Review & Final Projects opics Zhu Li Dept of CSEE, UMKC Office: FH560E, Email: lizhu@umkc.edu, Ph: x 2346. http://l.web.umkc.edu/lizhu slides created with
More informationImage-Based Face Recognition using Global Features
Image-Based Face Recognition using Global Features Xiaoyin xu Research Centre for Integrated Microsystems Electrical and Computer Engineering University of Windsor Supervisors: Dr. Ahmadi May 13, 2005
More informationAn Integrated Face Recognition Algorithm Based on Wavelet Subspace
, pp.20-25 http://dx.doi.org/0.4257/astl.204.48.20 An Integrated Face Recognition Algorithm Based on Wavelet Subspace Wenhui Li, Ning Ma, Zhiyan Wang College of computer science and technology, Jilin University,
More informationNeural Networks for Machine Learning. Lecture 15a From Principal Components Analysis to Autoencoders
Neural Networks for Machine Learning Lecture 15a From Principal Components Analysis to Autoencoders Geoffrey Hinton Nitish Srivastava, Kevin Swersky Tijmen Tieleman Abdel-rahman Mohamed Principal Components
More informationUnsupervised Learning
Unsupervised Learning Learning without Class Labels (or correct outputs) Density Estimation Learn P(X) given training data for X Clustering Partition data into clusters Dimensionality Reduction Discover
More informationA FRACTAL WATERMARKING SCHEME FOR IMAGE IN DWT DOMAIN
A FRACTAL WATERMARKING SCHEME FOR IMAGE IN DWT DOMAIN ABSTRACT A ne digital approach based on the fractal technolog in the Disperse Wavelet Transform domain is proposed in this paper. First e constructed
More informationVirtual Training Samples and CRC based Test Sample Reconstruction and Face Recognition Experiments Wei HUANG and Li-ming MIAO
7 nd International Conference on Computational Modeling, Simulation and Applied Mathematics (CMSAM 7) ISBN: 978--6595-499-8 Virtual raining Samples and CRC based est Sample Reconstruction and Face Recognition
More informationSegmentation of MR Images of a Beating Heart
Segmentation of MR Images of a Beating Heart Avinash Ravichandran Abstract Heart Arrhythmia is currently treated using invasive procedures. In order use to non invasive procedures accurate imaging modalities
More informationImplementation of a Turbo Encoder and Turbo Decoder on DSP Processor-TMS320C6713
International Journal of Engineering Research and Development e-issn : 2278-067X, p-issn : 2278-800X,.ijerd.com Volume 2, Issue 5 (July 2012), PP. 37-41 Implementation of a Turbo Encoder and Turbo Decoder
More informationSTUDY OF FACE AUTHENTICATION USING EUCLIDEAN AND MAHALANOBIS DISTANCE CLASSIFICATION METHOD
STUDY OF FACE AUTHENTICATION USING EUCLIDEAN AND MAHALANOBIS DISTANCE CLASSIFICATION METHOD M.Brindha 1, C.Raviraj 2, K.S.Srikanth 3 1 (Department of EIE, SNS College of Technology, Coimbatore, India,
More informationCombining Selective Search Segmentation and Random Forest for Image Classification
Combining Selective Search Segmentation and Random Forest for Image Classification Gediminas Bertasius November 24, 2013 1 Problem Statement Random Forest algorithm have been successfully used in many
More informationFACE RECOGNITION USING SUPPORT VECTOR MACHINES
FACE RECOGNITION USING SUPPORT VECTOR MACHINES Ashwin Swaminathan ashwins@umd.edu ENEE633: Statistical and Neural Pattern Recognition Instructor : Prof. Rama Chellappa Project 2, Part (b) 1. INTRODUCTION
More informationSome questions of consensus building using co-association
Some questions of consensus building using co-association VITALIY TAYANOV Polish-Japanese High School of Computer Technics Aleja Legionow, 4190, Bytom POLAND vtayanov@yahoo.com Abstract: In this paper
More informationCS 195-5: Machine Learning Problem Set 5
CS 195-5: Machine Learning Problem Set 5 Douglas Lanman dlanman@brown.edu 26 November 26 1 Clustering and Vector Quantization Problem 1 Part 1: In this problem we will apply Vector Quantization (VQ) to
More informationMULTIVARIATE TEXTURE DISCRIMINATION USING A PRINCIPAL GEODESIC CLASSIFIER
MULTIVARIATE TEXTURE DISCRIMINATION USING A PRINCIPAL GEODESIC CLASSIFIER A.Shabbir 1, 2 and G.Verdoolaege 1, 3 1 Department of Applied Physics, Ghent University, B-9000 Ghent, Belgium 2 Max Planck Institute
More informationLocality Preserving Projections (LPP) Abstract
Locality Preserving Projections (LPP) Xiaofei He Partha Niyogi Computer Science Department Computer Science Department The University of Chicago The University of Chicago Chicago, IL 60615 Chicago, IL
More informationPerturbation Estimation of the Subspaces for Structure from Motion with Noisy and Missing Data. Abstract. 1. Introduction
Perturbation Estimation of the Subspaces for Structure from Motion with Noisy and Missing Data Hongjun Jia, Jeff Fortuna and Aleix M. Martinez Department of Electrical and Computer Engineering The Ohio
More informationA new kernel Fisher discriminant algorithm with application to face recognition
Neurocomputing 56 (2004) 415 421 www.elsevier.com/locate/neucom Letters A new kernel Fisher discriminant algorithm with application to face recognition Jian Yang a;b;, Alejandro F. Frangi a, Jing-yu Yang
More informationContent-based Image Retrieval with Color and Texture Features in Neutrosophic Domain
3rd International Conference on Pattern Recognition and Image Analysis (IPRIA 2017) April 19-20, 2017 Content-based Image Retrieval ith Color and Texture Features in Neutrosophic Domain Abdolreza Rashno
More informationLearning to Recognize Faces in Realistic Conditions
000 001 002 003 004 005 006 007 008 009 010 011 012 013 014 015 016 017 018 019 020 021 022 023 024 025 026 027 028 029 030 031 032 033 034 035 036 037 038 039 040 041 042 043 044 045 046 047 048 049 050
More informationFace Recognition Based on LDA and Improved Pairwise-Constrained Multiple Metric Learning Method
Journal of Information Hiding and Multimedia Signal Processing c 2016 ISSN 2073-4212 Ubiquitous International Volume 7, Number 5, September 2016 Face Recognition ased on LDA and Improved Pairwise-Constrained
More informationDiscriminant Analysis of Haar Features for Accurate Eye Detection
Discriminant Analysis of Haar Features for Accurate Eye Detection Shuo Chen and Chengjun Liu Department of Computer Science, New Jersey Institute of Technology 323 M.L. King Boulevard, University Height,
More informationDimensionality Reduction, including by Feature Selection.
Dimensionality Reduction, including by Feature Selection www.cs.wisc.edu/~dpage/cs760 Goals for the lecture you should understand the following concepts filtering-based feature selection information gain
More informationAn Adaptive Threshold LBP Algorithm for Face Recognition
An Adaptive Threshold LBP Algorithm for Face Recognition Xiaoping Jiang 1, Chuyu Guo 1,*, Hua Zhang 1, and Chenghua Li 1 1 College of Electronics and Information Engineering, Hubei Key Laboratory of Intelligent
More informationImage Processing and Image Representations for Face Recognition
Image Processing and Image Representations for Face Recognition 1 Introduction Face recognition is an active area of research in image processing and pattern recognition. Since the general topic of face
More informationLearning based face hallucination techniques: A survey
Vol. 3 (2014-15) pp. 37-45. : A survey Premitha Premnath K Department of Computer Science & Engineering Vidya Academy of Science & Technology Thrissur - 680501, Kerala, India (email: premithakpnath@gmail.com)
More informationENEE633 Project Report SVM Implementation for Face Recognition
ENEE633 Project Report SVM Implementation for Face Recognition Ren Mao School of Electrical and Computer Engineering University of Maryland Email: neroam@umd.edu Abstract Support vector machine(svm) is
More informationRegularized discriminant analysis for the small sample size problem in face recognition
Pattern Recognition Letters 24 (2003) 3079 3087 www.elsevier.com/locate/patrec Regularized discriminant analysis for the small sample size problem in face recognition Juwei Lu *, K.N. Plataniotis, A.N.
More informationGlobally Stabilized 3L Curve Fitting
Globally Stabilized 3L Curve Fitting Turker Sahin and Mustafa Unel Department of Computer Engineering, Gebze Institute of Technology Cayirova Campus 44 Gebze/Kocaeli Turkey {htsahin,munel}@bilmuh.gyte.edu.tr
More information