ICP Fusion Techniques for 3D Face Recognition
Robert McKeon, University of Notre Dame, Notre Dame, Indiana, USA
Patrick Flynn, University of Notre Dame, Notre Dame, Indiana, USA

Abstract
The 3D shape of the face has been shown to be a viable and robust biometric for security applications. Many state-of-the-art techniques use Iterative Closest Point (ICP) to match one or more face regions to comparable regions stored in an enrollment database. In this paper, we propose and explore several optimizations of the ICP-based matching technique relating to the processing of multiple regions and the fusion of region matching scores obtained from ICP alignment. Some of these optimizations yielded improved recognition results. The optimizations explored included: (i) the symmetric use of probe and gallery shapes in ICP, enabling score fusion; (ii) gallery and probe region matching score normalization; (iii) region selection based on the face data centroid rather than the nose tip; and (iv) region weighting. As a result of these optimizations, the rank-one recognition rate for a canonical matching experiment improved from 96.4% to 98.6%, and the True Accept Rate (TAR) at 0.1% False Accept Rate (FAR) improved from 90.4% to 98.5%.

I. Introduction
The security of knowledge and buildings is increasingly important in many areas, and access must be restricted to authorized personnel in such situations. Researchers have explored the field of biometrics to establish identity and thus decide on access rights; many biometric signatures, such as the iris and fingerprint, are well known. Iris and fingerprint recognition require significant subject cooperation, so researchers have also investigated face recognition, which requires less subject cooperation. Due to its relative insensitivity to lighting variations, 3D face recognition has a unique advantage over 2D face recognition: the information captured is the shape of the face, not its color.
3D face images are commonly acquired using structured light sensors and/or stereo vision techniques [13]. The Iterative Closest Point (ICP) [1] algorithm has proven to be a workhorse solution for a host of geometric alignment problems, including the alignment of two 3D samplings of the face obtained from a 3D sensor. ICP aligns a data point cloud to a model point cloud by minimizing the average error between nearest neighbors. The main disadvantage of this technique is the need for an expensive nearest-point search at each iteration (this search can be accelerated through the use of hierarchical data structures). In most 3D face recognition systems, the comparison of an enrolled gallery face to an unlabeled probe face yields a matching score typically based on the alignment error calculated by an ICP-based technique [2]. Thus, lower scores are better, and the best match identifies the matching gallery image and conveys an identity claim. Principal component analysis has also been used to match aligned 3D faces [4, 5, 7]. Faltemier et al. [2] used a simple ensemble of 38 face regions to significantly improve face recognition using ICP as a matching metric. In this paper, we find that the ICP face recognition approach can be improved in both processing time and recognition performance. The goal of this paper is to exploit both areas to improve 3D face recognition using three recognition metrics: the rank-one recognition rate, the True Accept Rate (TAR) at 0.1% False Accept Rate (FAR), and an examination of the Receiver Operating Characteristic (ROC). The paper is organized as follows. Section 2 examines prior work in the area. Section 3 presents a variety of methods we used to improve face recognition. Section 4 shows the results achieved from these improvements, and Section 5 provides a summary and outlines potential future areas of research.

II. Literature Review
Faltemier et al.
[2] conducted recognition experiments using the testing partition of the FRGC 2.0 3D face image dataset, which contains 3D face images of 466 unique subjects. They fused the match results of multiple face region alignments in order to improve performance over full-face performance. They achieved 97.2% rank-one recognition when the earliest image acquired was used as the gallery, and a TAR of 94.8% at 0.1% FAR with the first-semester images as gallery images and the second-semester images as probes. This result remains,
to our knowledge, one of the best published results on the FRGC 2.0 testing partition. Russ et al. [4] used a form of PCA for facial recognition by first aligning each face to a reference model using ICP. PCA was performed on the reference face points corresponding to each gallery face, and they experimented on the neutral-expression probes of FRGC 2.0, achieving 96.0% TAR at 0.1% FAR. Boehnen et al. [5] developed face signatures to compare 3D faces quickly, using the reference face technique for alignment. They compare these signatures using a combination of a nearest-neighbor technique and a normal vector search. They achieved 95.5% rank-one identification merging 8 facial regions on the FRGC 2.0 dataset, an improvement over the 92.9% reported for 8 regions by Faltemier et al. [2]. Al-Osaimi et al. [7] used PCA on a set of 2.5D face images in order to avoid some of the pitfalls of ICP, such as the possibility that ICP does not converge to a correct answer. They achieved 95.4% TAR at 0.1% FAR and 93.7% rank-one recognition for the neutral-expression images of the FRGC 2.0 dataset; Faltemier et al. [2] achieved 99.2% rank-one on the same neutral-expression subset. Colbry et al. [6] used region selection to narrow the search space for 3D recognition. They selected the region around the nose and eyes to reduce the number of feature points, but this caused a decrease in recognition performance. They achieved 98.2% rank-one recognition on a data set of 325 face scans. Chua et al. [8] aimed to find the regions of the face that do not deform (the nose, the eye sockets, and the forehead), which were identified using surface curvature. Their data set was only six face images, on which they achieved 100% rank-one recognition, but this is not a fair comparison to the size of the FRGC set. Mian et al. [14] used a 3D spherical face representation in a modified ICP algorithm for face recognition.
They used the nose and eye-forehead regions and fused these two region scores together. On the FRGC 2.0 dataset, they achieved 98.5% TAR at 0.1% FAR.

III. Methods
Previous research on face recognition has focused on finding new algorithms to recognize faces better, but the main research in using ICP for face recognition has not looked at improving the ICP strategy beyond employing multiple regions, as Faltemier et al. [2] did. We examined a variety of improvements based on observations made during our ICP-based 3D face image matching experiments. We conclude that some of these improvements should always be implemented, some require further development, and some experiments suggested additional possible improvements. Among the modifications we considered are score fusion, probe normalization, gallery normalization, improved region selection, and region weighting.

3.1 The Basic ICP Method
To compute a match score for two face image meshes (or two face image regions segmented from the face mesh), we use the Iterative Closest Point (ICP) algorithm [1] to align the meshes, using the alignment error E² as the match score (lower is better). ICP employs two geometric sets: a fixed model shape M and a data shape D to which a transformation T is applied to perform the alignment. In the execution of ICP, D is aligned iteratively to M; the initial transformation T_0 is obtained through some heuristic procedure. At each subsequent iteration i, the nearest-neighbor point for each point in T_{i-1}D is found by searching a previously constructed k-d tree [12]. Using these correspondences, an updated transformation T_i, composed of a 3D rotation and a 3D translation, is calculated.
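As an illustration only (not the authors' implementation), the iterative alignment just described can be sketched in Python, assuming Nx3 NumPy point arrays, SciPy's k-d tree for the nearest-point search, and the SVD-based (Kabsch) solution for the rigid transform T_i; all names here are our own:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(model, data, n_iters=50, tol=1e-6):
    """Minimal point-to-point ICP: align `data` (Nx3) to `model` (Mx3).
    Returns the final mean-squared alignment error."""
    tree = cKDTree(model)          # k-d tree over the fixed model shape M
    R, t = np.eye(3), np.zeros(3)  # current rotation / translation (T_i)
    prev_err = np.inf
    err = np.inf
    for _ in range(n_iters):
        moved = data @ R.T + t            # apply current transform to D
        dist, idx = tree.query(moved)     # nearest model point per data point
        err = np.mean(dist ** 2)          # squared alignment error E^2_i
        if abs(prev_err - err) < tol:     # stop when E^2 stabilizes
            break
        prev_err = err
        # Kabsch: best rigid transform mapping `data` onto the matched model points
        corr = model[idx]
        mu_d, mu_m = data.mean(axis=0), corr.mean(axis=0)
        H = (data - mu_d).T @ (corr - mu_m)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                # rotation (reflection corrected)
        t = mu_m - mu_d @ R.T             # translation
    return err
```

This sketch re-estimates the full transform from the original data points at each iteration; an incremental update is equally common.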
The RMS alignment error E²_i for the iteration is then calculated using Equation 1, in which d_p is the data shape point closest to p when transformed by T_i:

E²_i = (1/|M|) Σ_{p∈M} ||p − T_i d_p||²   (1)

Iteration continues until E²_i does not change significantly between iterations or until some other stopping criterion is met. The final value of E²_i is reported as the matching score. A value of zero would imply a perfect alignment of identically sampled shapes. For notational convenience, we will denote the matching error obtained via Equation 1 from an ICP invocation with a data shape D and a model shape M as ICP(M,D).

3.2 Dual ICP Invocation and Score Fusion Techniques
In face recognition, it is not always obvious whether the probe face p or the gallery face g should serve as the model shape M and the data shape D in the ICP formulation. Ideally, the data shape should be a subset of the model shape in terms of geometric coverage. In some cases, this assumption is not true with either assignment of g or p to M or D. Thus, we invoke ICP twice, obtaining two matching scores ICP(p,g) and ICP(g,p), as implemented by McKeon and Flynn [3]. These two scores are fused using the minimum and product rules, yielding two matching scores used henceforth:
E²_min(p,g) = min(ICP(p,g), ICP(g,p))   (2)
E²_prod(p,g) = ICP(p,g) · ICP(g,p)   (3)

In Section 4.2, we demonstrate improvements in performance due to the use of E²_min and E²_prod.

3.3 Normalization Techniques
In order to fuse matching scores from multiple regions effectively, we found it necessary to perform normalization, since the distributions of the match scores obtained from different facial image regions differ significantly. We examined two techniques to normalize the vector V_p = [E²_*(p,g_1), ..., E²_*(p,g_k)] of k gallery matches to a single probe; in these vectors, the * indicates the score fusion technique used.

Min-max normalization (E²_minmax) transforms all the values in a vector to lie in the range [0, 1]. For a vector V_p, the normalization yields an output vector V_p,minmax:

V_p,minmax = [(E²_*(p,g_1) − min(V_p)) / (max(V_p) − min(V_p)), ..., (E²_*(p,g_k) − min(V_p)) / (max(V_p) − min(V_p))]   (4)

Z-score normalization (E²_z) transforms all the values in a vector to have zero mean and unit variance:

V_p,z = [(E²_*(p,g_1) − mean(V_p)) / std(V_p), ..., (E²_*(p,g_k) − mean(V_p)) / std(V_p)]   (5)

We also examined the application of these normalization techniques to vectors V_g = [E²_*(p_1,g), ..., E²_*(p_K,g)] of probe matches to a single gallery image. The same normalization techniques may be applied to the vector V_g, yielding output vectors V_g,m and V_g,z, but these did not improve performance. Instead, we normalized each V_g so that its average equals one: each E²_pk (Equation 6) is divided by the mean of the gallery-to-gallery errors E²_jk (Equation 7), as shown in Equation 8, yielding the normalized vector V_gn for a probe p and a gallery image G_k. E²_kk is excluded from the calculation of the mean because it equals 0, being an exact self-match.

E²_pk = ICP(P, G_k)   (6)

where P is a probe image and G_k is a gallery image.

Ê²_pk = E²_pk / [ (1/(L−1)) Σ_{j≠k} E²_jk ], where L is the total number of images in the gallery   (8)
E²_jk = ICP(G_j, G_k)   (7)

where G_j is the j-th gallery image and G_k is a gallery image.

3.4 Nose-Centered Regions vs. Centroid-Centered Regions
3D face recognition performance can be improved through ensemble matching strategies that use local face region matches instead of whole-face matches. In such methods, the initial selection of regions for matching depends on a registration step, which is itself often performed using the results of a whole-face ICP alignment to a canonical face model. We find the nose tip using the curvature-based technique described by Faltemier et al. [2], and then form a series of regions centered either at the nose tip or at the centroid of the face data, cropped from the face data using a set of cropping spheres of various radii combined with offsets in the X and Y axes. The face regions cropped using the nose tip as a cropping origin are denoted R_n,i, where n denotes the nose-centered crop and i refers to the region number. Similarly, the face regions cropped around the data centroid are denoted R_c,i. In our experience, the data centroid is typically offset from the nose tip; both cropping strategies are depicted graphically in Figure 3.1.

Figure 3.1: (a) A face with the nose tip and the face centroid identified. The vector b connects the centroid and the nose tip. (b) A family R_n,i of nose-centered cropped regions. (c) A family R_c,i of centroid-centered cropped regions.

3.5 Matching Using Weighted Ensemble Scores
Previous researchers have found that facial region ensembles outperform full-face recognition. Faltemier et al. [2] fused match score results from multiple nose-centered cropped regions to form a final match score. Boehnen et al. [5] cropped regions with a variety of shapes.
We used the same face regions as Faltemier et al. [2], with a Sequential Forward Search (SFS) [11] to find the region ensemble with the best performance. We formed the ensemble using SFS as employed by Liwicki and Bunke [10]. An ensemble S_k with k elements is selected from the initial set of all regions S_0 = {R_1, R_2, ..., R_n} as follows:
1. Find the best-performing region R* in S_0 based on TAR at 0.1% FAR, remove R* from S_0, and set S_1 to be the singleton {R*}.
2. For k = 2, ..., n−1 do:
   a. Find the region R* in S_0 that maximizes TAR at 0.1% FAR when added to S_{k−1}.
   b. Set S_k = S_{k−1} ∪ {R*}.
   c. Set S_0 = S_0 − {R*}.
   d. Form the ensemble match score E²_k,SFS as the sum of the per-region scores over S_k:
      E²_k,SFS = Σ_{R_j ∈ S_k} E²(R_j)

Previous region-merging schemes such as Faltemier et al. [2] also used SFS. However, in plain SFS a region can only be added once, so every selected region contributes equally to the match score with respect to the other regions. We instead weighted each region's match score by an integer weight found using a modified SFS process. The weighted SFS (SFSW) starts with S = {(w_1,R_1), (w_2,R_2), ..., (w_n,R_n)}, containing a weight w_i for each region R_i, all initialized to 0 and selectively incremented at each iteration as follows:
1. Find the region R* whose addition to the initial set S_1 maximizes TAR at 0.1% FAR, and set the weight corresponding to R* to 1.
2. For k = 2, ..., n−1 do:
   a. Find the region R* which, if its weight is incremented, maximizes TAR at 0.1% FAR.
   b. Increment the weight corresponding to R*.
3. Form E²_k,SFSW as a weighted sum of per-region match scores:
   E²_k,SFSW = w_1 E²(R_1) + w_2 E²(R_2) + ... + w_n E²(R_n)

IV. Experimental Results
We used the training partition of the FRGC v2 3D face database [9] for our recognition experiments. This partition contains 3D face images; we excluded the subjects that have only one image in the data set, yielding a total of 3950 images. For the rank-one recognition experiments, this set was divided into 466 gallery images and 3484 probe images. The gallery images were acquired prior to the probe images.
For the 0.1% FAR and ROC curve experiments, this set was divided into the first semester of acquisition (1628 gallery images) and the second semester of acquisition (1727 probe images); again, the gallery images were acquired prior to the probe images. Both probe sets contain both neutral and non-neutral expressions. The 38 regions used by Faltemier et al. [2] were used for both R_n and R_c and formed into an ensemble using either SFS or SFSW.

Table 4.1: Experimental Variables
  Experimental variables                       Number of variables
  { E², E²_min, E²_prod }                      3
  { E²_minmax, E²_z }                          2
  { E², gallery-normalized E² }                2
  { R_n, R_c, R_n ∪ R_c }                      3
  { SFS, SFSW }                                2
  Total number of experiments: 72

Table 4.2: Experiment Descriptions
  1. Baseline: E², SFS
  2. Minimum ICP score: E²_min
  3. Product ICP score: E²_prod
  4. Probe min-max normalization: E²_minmax
  5. Probe z-score normalization: E²_z
  6. Gallery normalization: gallery-normalized E²
  7. Nose-centered vs. centroid-centered regions: R_n vs. R_c and R_n vs. R_n ∪ R_c; selects regions using the nose or the centroid as the offset
  8. Weighted region ensembles: SFS vs. SFSW; allows regions to be weighted differently
  9. Optimal number of regions: N/A; explores when adding more regions to an ensemble degrades performance

As a baseline, we employ the performance results of an implementation that uses ICP(g,p), or E², as the matching score, employing none of the optimizations described above. We found that the use of E²_min or E²_prod as the matching score improves performance, as do both probe score normalization and gallery score normalization.
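For concreteness, the fusion rules of Section 3.2 and the normalization schemes of Section 3.3 can be summarized in a short sketch (illustrative only; function names and array layouts are our own, not the paper's code):

```python
import numpy as np

def fuse_min(icp_pg, icp_gp):
    """E^2_min (Eq. 2): minimum of the two ICP invocations."""
    return np.minimum(icp_pg, icp_gp)

def fuse_prod(icp_pg, icp_gp):
    """E^2_prod (Eq. 3): product of the two ICP invocations."""
    return icp_pg * icp_gp

def minmax_norm(v):
    """Min-max probe normalization (Eq. 4): map a probe's
    gallery-match vector V_p into [0, 1]."""
    return (v - v.min()) / (v.max() - v.min())

def z_norm(v):
    """Z-score probe normalization (Eq. 5): zero mean, unit variance."""
    return (v - v.mean()) / v.std()

def gallery_norm(e_pk, e_jk, k):
    """Gallery normalization (Eq. 8): divide E^2_pk by the mean
    gallery-to-gallery error for G_k, excluding the exact
    self-match E^2_kk (which is zero)."""
    others = np.delete(e_jk, k)
    return e_pk / others.mean()
```

Each helper operates on one probe's (or one gallery image's) vector of match errors; region fusion then sums the normalized scores.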
Weighting region match scores also improves performance. The use of centroid-centered cropping regions did not by itself improve performance, but performance was improved when regions obtained by centroid-centered cropping and nose-centered cropping were combined. We also found that some regions cannot improve an ensemble. Table 4.1 lists the experimental variables, and Table 4.2 gives a short description of each experiment. Our baseline matcher differed slightly from Faltemier et al. [2] in rank-one recognition performance (some match ranks were different), but we were unable to identify which part of our process deviated from their stated process. The 0.1% FAR values were also different because we used a summation scheme to merge regions, whereas they used a confidence voting scheme.

4.1 Biometric Evaluation Metrics
The two typical biometric tasks are identification and verification. Both tasks compare an incoming probe template to an existing database of gallery templates. Identification compares the probe against every template of interest in the gallery and returns the identity of the best match. Verification compares the probe only against templates that match the identity claim presented with the probe. In our experiments, recognition performance is quantified using the rank-one recognition rate, and verification performance is quantified by the True Accept Rate (TAR) at 0.1% False Accept Rate (FAR), or graphically by the Receiver Operating Characteristic (ROC) curve. For rank-one recognition, match ranks were computed from ensembles of regions using a modified Borda Count [2]. The original Borda Count (BC) is the sum of the match ranks for the regions in the ensemble; the gallery match g for a probe with the largest Borda Count is the rank-one match. The modified Borda Count (MBC), first proposed by Faltemier et al.
[2], adds a quadratic weight to the first N_b matches and forms a weighted sum. As before, the match with the largest modified Borda Count is considered correct. We found the ROC curve and the TAR at 0.1% FAR by summing the region match scores (E², E²_min, or E²_prod) to give a final match score from which the ROC was calculated. We used SFS [10, 11] to find the region ensemble with the best performance except where specified.

4.2 E²_min and E²_prod
We found that both E²_min and E²_prod were superior to E² in both identification and verification experiments. Table 4.3 shows the rank-one performance and the TAR at 0.1% FAR, and Figure 4.1 presents the ROC curves for experiments using E², E²_min, and E²_prod with SFS for region selection. Using SFS, the best-performing ensembles contained between 10 and 20 regions.

Table 4.3: E², E²_min, and E²_prod performance
                                E²      E²_min   E²_prod
  Rank-one correct match rate   96.4%   97.4%    97.4%
  TAR at 0.1% FAR               90.5%   93.2%    92.6%

To determine the statistical significance of these results, we ran 30 experiments using 500 random probes for each system and used a one-tailed t-test of the null hypothesis that the mean recognition performance (rank-one recognition or TAR at 0.1% FAR) of the alternative technique (E²_min, etc.) equals that of the default, against the alternative that it is better. A p-value below 0.05 indicates rejection of the null hypothesis. Compared to E², E²_min and E²_prod have p-values of 1.67 × 10⁻² and 1.67 × 10⁻² respectively for rank-one recognition rates, and p-values of 8.89 × 10⁻ and 4.69 × 10⁻ respectively for TAR at 0.1% FAR. Thus, we rejected the null hypothesis that E²_min is equivalent to E², and likewise for E²_prod. However, we accepted the null hypothesis that E²_prod and E²_min exhibit the same performance for rank-one recognition (p-value of 0.50), but rejected it for TAR at 0.1% FAR (p-value of 8.16 × 10⁻⁴).
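The verification metric used throughout, TAR at a fixed FAR, can be computed directly from the genuine (same-subject) and impostor match errors; a minimal sketch follows (our own helper, with lower scores meaning better matches, as in this paper):

```python
import numpy as np

def tar_at_far(genuine, impostor, far=0.001):
    """TAR at a fixed FAR for match *errors* (lower is better).
    The acceptance threshold is chosen so that a fraction `far` of
    impostor scores fall at or below it; TAR is the fraction of
    genuine scores accepted at that threshold."""
    thresh = np.quantile(impostor, far)
    return np.mean(genuine <= thresh)
```

Sweeping `far` over a range of values traces out the ROC curve.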
Figure 4.1: The ROC curves for E², E²_min, and E²_prod.

4.3 Normalization
Both probe normalization (E²_minmax, E²_z) and gallery
normalization improved recognition. For probe score normalization, E²_minmax and E²_z both improved recognition, but E²_minmax outperformed E²_z. Rank-one recognition rates were not affected by any probe score normalization, because the ranking of a region does not change under probe score normalization. We also found that using gallery score normalization followed by probe score normalization results in a significant performance gain. Table 4.4 summarizes the TAR at 0.1% FAR results for E²_minmax, E²_z, and gallery normalization. Figure 4.2 presents the ROC curves of E² compared to E²_min with min-max probe score normalization and gallery score normalization. E²_minmax and E²_z have p-values, as compared to E², of 1.85 × 10⁻ and 1.32 × 10⁻ for TAR at 0.1% FAR, so both reject the null hypothesis of equal average performance. Table 4.5 shows the p-values for the probe normalization and gallery normalization experiments; we also rejected the null hypothesis that the gallery-normalized score was statistically equal to E².

Table 4.4: TAR at 0.1% FAR for probe and gallery normalization
  Probe normalization     E²      E²_min   E²_prod
  Without gallery normalization:
  none (E²)               90.5%   93.2%    92.6%
  E²_minmax               94.2%   96.2%    95.9%
  E²_z                    75.5%   74.3%    75.8%
  With gallery normalization:
  none (E²)               …       94.1%    93.9%
  E²_minmax               96.9%   97.8%    97.7%
  E²_z                    81.0%   78.1%    79.0%

Table 4.5: p-values for TAR at 0.1% FAR results of probe and gallery normalization (bold indicated statistical significance, p-value < 0.05)
  Normalization vs. E²     E²         E²_min   E²_prod
  E²_minmax                1.85E−…    …        …E−34
  E²_z                     1.32E−…    …        …E−36
  gallery normalization    …          …        …E−26
  gallery + E²_minmax      2.11E−…    …        …E−35
  gallery + E²_z           6.48E−…    …        …E−34

Figure 4.2: ROC curves of the original E² and of E²_min with gallery score normalization and min-max probe score normalization.

4.4 Centroid-Centered vs. Nose-Centered Regions
Rank-one recognition results for ensembles built from R_n, R_c, and R_n ∪ R_c with E², E²_min, and E²_prod are shown in Table 4.6. R_c does improve the raw scores for rank-one recognition.
However, building an ensemble using R_n ∪ R_c does improve performance, as seen in Table 4.6.

Table 4.6: Rank-one recognition for R_n, R_c, and R_n ∪ R_c
  Region selection   E²      E²_min   E²_prod
  R_c                97.8%   97.7%    97.7%
  R_n                96.4%   97.4%    97.4%
  R_n ∪ R_c          98.2%   98.1%    98.1%
  With gallery normalization:
  R_n                96.6%   97.9%    97.8%
  R_c                97.5%   97.2%    97.2%
  R_n ∪ R_c          98.4%   98.2%    98.3%

R_c does not perform as well when looking at the TAR at 0.1% FAR, as seen in Table 4.7. Figure 4.3 shows the ROC curves for a few of these methods. From the ROC curves, one can see that R_n ∪ R_c performs best, and that more performance is gained using min-max probe score normalization and gallery score normalization. Table 4.8 shows the results of hypothesis testing on this set of experiments. Overall, the R_n ∪ R_c combinations exhibited significantly better performance.

4.5 Weighted Regions
We formed region ensembles using SFSW as opposed to
SFS. This resulted in a significant improvement, as seen in Table 4.9 for rank-one recognition; Table 4.10 shows the results for TAR at 0.1% FAR, and Figure 4.4 shows the ROC curves for a few selections. To properly build any ensemble using SFS or SFSW, training data is required. These experiments show that ensembles could be built using either SFS or SFSW to achieve the best performance, but SFSW improves the lower end of the ROC curve slightly. The p-values for the weighted experiments are shown in Table 4.11; the SFSW results all rejected the null hypothesis of equal average performance to the SFS results.

Table 4.7: TAR at 0.1% FAR for R_n, R_c, and R_n ∪ R_c
  Region selection   E²      E²_min   E²_prod
  R_c                85.5%   87.8%    87.3%
  R_n                90.5%   93.2%    92.6%
  R_n ∪ R_c          91.5%   93.1%    92.7%
  With E²_minmax and gallery normalization:
  R_n                96.9%   97.8%    97.7%
  R_c                92.6%   93.7%    93.1%
  R_n ∪ R_c          97.8%   98.1%    97.9%

Table 4.8: p-values for 0.1% FAR results comparing R_n, R_c, and R_n ∪ R_c
  Comparison                                                          E²         E²_min   E²_prod
  R_n vs. R_c                                                         7.78E−…    …        …
  R_n ∪ R_c vs. R_n                                                   …          …        …
  R_n vs. R_n with E²_minmax and gallery normalization                …          …        …
  R_c vs. R_c with E²_minmax and gallery normalization                …          …        …
  R_n ∪ R_c vs. R_n ∪ R_c with E²_minmax and gallery normalization    …          …        …

Figure 4.3: The ROC curves for a few normalization experiments.
Table 4.9: Rank-one recognition on weighted region ensembles
  Region selection    E²      E²_min   E²_prod
  R_n, SFS            96.4%   97.4%    97.4%
  R_n ∪ R_c, SFS      98.2%   98.1%    98.1%
  R_n, SFSW           96.7%   97.6%    97.6%
  R_n ∪ R_c, SFSW     98.3%   98.4%    98.4%
  With gallery normalization:
  R_n, SFS            96.6%   97.9%    97.8%
  R_n ∪ R_c, SFS      98.4%   98.2%    98.3%
  R_n, SFSW           97.2%   98.0%    98.1%
  R_n ∪ R_c, SFSW     98.6%   98.4%    98.5%

Table 4.10: TAR at 0.1% FAR on weighted region ensembles
  Region selection    E²      E²_min   E²_prod
  R_n, SFS            90.5%   93.2%    92.6%
  R_n ∪ R_c, SFS      91.5%   93.1%    92.7%
  R_n, SFSW           91.3%   93.6%    93.4%
  R_n ∪ R_c, SFSW     92.8%   94.2%    93.7%
  With E²_minmax and gallery normalization:
  R_n, SFS            96.9%   97.8%    97.7%
  R_n ∪ R_c, SFS      97.8%   98.1%    97.9%
  R_n, SFSW           97.2%   98.1%    97.9%
  R_n ∪ R_c, SFSW     98.1%   98.5%    98.2%

Table 4.11: p-values for 0.1% FAR results using weighted region ensembles
  Comparison                           E²         E²_min   E²_prod
  R_n SFS vs. R_n SFSW                 4.18E−…    …        …E−06
  R_n ∪ R_c SFS vs. R_n ∪ R_c SFSW     2.49E−…    …        …

Figure 4.4: The ROC curves for a few weighted region experiments.
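For illustration, the SFSW procedure of Section 3.5 can be sketched as a greedy search over integer region weights, scoring candidate ensembles by TAR at 0.1% FAR (a simplified sketch with our own array layout, not the authors' code):

```python
import numpy as np

def tar_at_far(genuine, impostor, far=0.001):
    """TAR at a fixed FAR for match errors (lower is better)."""
    thresh = np.quantile(impostor, far)
    return np.mean(genuine <= thresh)

def sfsw(region_gen, region_imp, n_steps):
    """SFSW sketch: integer weights per region, all starting at 0; at
    each step, increment the single weight whose increment maximizes
    TAR at 0.1% FAR.  `region_gen`/`region_imp` are (n_regions,
    n_scores) arrays of genuine/impostor match errors per region."""
    n = len(region_gen)
    w = np.zeros(n, dtype=int)
    for _ in range(n_steps):
        best_r, best_tar = None, -1.0
        for r in range(n):
            trial = w.copy()
            trial[r] += 1
            gen = trial @ region_gen   # weighted-sum ensemble scores (E^2_k,SFSW)
            imp = trial @ region_imp
            tar = tar_at_far(gen, imp)
            if tar > best_tar:
                best_tar, best_r = tar, r
        w[best_r] += 1
    return w
```

Plain SFS corresponds to restricting each weight to at most 1 and removing selected regions from the candidate pool.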
4.6 Number of Regions in the Ensemble
Using SFS, we found that adding more regions does not necessarily improve results. In most cases, there is a peak in performance, after which performance decreases. This is true except in the case of SFSW, because only a few regions receive a weight greater than 0. Determining the weight for each region requires training; as did other researchers [2, 5], we used all the test data to train the SFS for the ensemble. Figure 4.5 shows rank-one recognition as a function of the number of regions added to the R_n ensemble using SFS and SFSW.

Figure 4.5: Rank-one recognition as a function of the number of regions added to the ensemble.

V. Conclusion
Through our experiments, we have shown that 3D face recognition can be improved through region selection, ensemble creation, normalization, and post-processing of the raw ICP scores. We found that E²_min and E²_prod are very helpful in post-processing the raw ICP scores, and their success has expanded our understanding of the inner workings of ICP. Probe and gallery normalization were very successful, especially when combined. Forming region ensembles using SFSW was not significantly better than SFS. We found that more regions in an ensemble do not always provide improved recognition. We also found that picking regions based on the face centroid can improve an ensemble, particularly when the region ensemble is drawn from a pool of both nose-centered and centroid-centered regions. Future work will aim to improve the ICP match score itself by cropping out outlier points, and will improve the ICP running time by using a reference face and a reduced number of points in each face region.

VI. References
[1] P. J. Besl and N. D. McKay, "A Method for Registration of 3-D Shapes," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 2, pp. 239–256, 1992.
[2] "…Profile Signatures for Robust 3D Feature Detection," IEEE International Conference on Automatic Face and Gesture Recognition, Amsterdam, The Netherlands, September 2008.
[3] Timothy Faltemier, Kevin W. Bowyer, and Patrick J. Flynn, IEEE Transactions on Information Forensics and Security, vol.
3, iss. 1, pp. 62–73, 2008.
[4] "…Dimensional Facial Imaging Using a Static Light Screen and a Dynamic…," Proceedings of 3D Data Processing, Visualization, and Transmission (3DPVT).
[5] Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, New York, 2006.
[6] C. Boehnen, T. Peters, and P. J. Flynn, "3D Signatures for Fast 3D Face Recognition," Proceedings of the International Conference on Biometrics 2009, Alghero, Italy.
[7] Proceedings of the SPIE Conference on Defense & Security.
[8] F. R. Al-Osaimi, …, "…local and global geometrical cues for 3D face recognition," Pattern Recognition Letters.
[9] C. S. Chua, F. Han, …, Fourth IEEE International Conference on Automatic Face and Gesture Recognition, p. 233, 2000.
[10] P. J. Phillips, P. J. Flynn, T. Scruggs, and K. W. Bowyer, "Overview of the Face Recognition Grand Challenge," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), vol. 1, 2005.
[11] …, 15(11).
[12] Proceedings of the Third International Conference on Pattern Recognition, pp. 71–75, Coronado, CA.
[13] J. L. Bentley, "Multidimensional Binary Search Trees Used for Associative Searching," Communications of the ACM, vol. 18, pp. 509–517, 1975.
[14] Fifth IEEE International Conference on 3-D Digital Imaging and Modeling (3DIM).
[15] "…Multimodal 2D-3D Hybrid Approach to Automatic Face Recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 11, November 2007.
More informationThe Novel Approach for 3D Face Recognition Using Simple Preprocessing Method
The Novel Approach for 3D Face Recognition Using Simple Preprocessing Method Parvin Aminnejad 1, Ahmad Ayatollahi 2, Siamak Aminnejad 3, Reihaneh Asghari Abstract In this work, we presented a novel approach
More informationCombining Statistics of Geometrical and Correlative Features for 3D Face Recognition
1 Combining Statistics of Geometrical and Correlative Features for 3D Face Recognition Yonggang Huang 1, Yunhong Wang 2, Tieniu Tan 1 1 National Laboratory of Pattern Recognition Institute of Automation,
More informationDETC D FACE RECOGNITION UNDER ISOMETRIC EXPRESSION DEFORMATIONS
Proceedings of the ASME 2014 International Design Engineering Technical Conferences & Computers and Information in Engineering Conference IDETC/CIE 2014 August 17-20, 2014, Buffalo, New York, USA DETC2014-34449
More informationLandmark Detection on 3D Face Scans by Facial Model Registration
Landmark Detection on 3D Face Scans by Facial Model Registration Tristan Whitmarsh 1, Remco C. Veltkamp 2, Michela Spagnuolo 1 Simone Marini 1, Frank ter Haar 2 1 IMATI-CNR, Genoa, Italy 2 Dept. Computer
More informationOverview of the Face Recognition Grand Challenge
To appear: IEEE Conference on Computer Vision and Pattern Recognition 2005. Overview of the Face Recognition Grand Challenge P. Jonathon Phillips 1, Patrick J. Flynn 2, Todd Scruggs 3, Kevin W. Bowyer
More informationEXPLOITING 3D FACES IN BIOMETRIC FORENSIC RECOGNITION
18th European Signal Processing Conference (EUSIPCO-2010) Aalborg, Denmark, August 23-27, 2010 EXPLOITING 3D FACES IN BIOMETRIC FORENSIC RECOGNITION Marinella Cadoni, Andrea Lagorio, Enrico Grosso Massimo
More informationNIST. Support Vector Machines. Applied to Face Recognition U56 QC 100 NO A OS S. P. Jonathon Phillips. Gaithersburg, MD 20899
^ A 1 1 1 OS 5 1. 4 0 S Support Vector Machines Applied to Face Recognition P. Jonathon Phillips U.S. DEPARTMENT OF COMMERCE Technology Administration National Institute of Standards and Technology Information
More informationPattern Recognition 42 (2009) Contents lists available at ScienceDirect. Pattern Recognition
Pattern Recognition 42 (2009) 1895 -- 1905 Contents lists available at ScienceDirect Pattern Recognition journal homepage: www.elsevier.com/locate/pr Automatic 3D face recognition from depth and intensity
More informationGurmeet Kaur 1, Parikshit 2, Dr. Chander Kant 3 1 M.tech Scholar, Assistant Professor 2, 3
Volume 8 Issue 2 March 2017 - Sept 2017 pp. 72-80 available online at www.csjournals.com A Novel Approach to Improve the Biometric Security using Liveness Detection Gurmeet Kaur 1, Parikshit 2, Dr. Chander
More informationBetter than best: matching score based face registration
Better than best: based face registration Luuk Spreeuwers University of Twente Fac. EEMCS, Signals and Systems Group Hogekamp Building, 7522 NB Enschede The Netherlands l.j.spreeuwers@ewi.utwente.nl Bas
More informationClustering. Robert M. Haralick. Computer Science, Graduate Center City University of New York
Clustering Robert M. Haralick Computer Science, Graduate Center City University of New York Outline K-means 1 K-means 2 3 4 5 Clustering K-means The purpose of clustering is to determine the similarity
More informationFlexible and Robust 3D Face Recognition. A Dissertation. Submitted to the Graduate School. of the University of Notre Dame
Flexible and Robust 3D Face Recognition A Dissertation Submitted to the Graduate School of the University of Notre Dame in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy
More information2654 IEEE TRANSACTIONS ON CYBERNETICS, VOL. 45, NO. 12, DECEMBER 2015
2654 IEEE TRANSACTIONS ON CYBERNETICS, VOL. 45, NO. 12, DECEMBER 2015 Can We Do Better in Unimodal Biometric Systems? A Rank-Based Score Normalization Framework Panagiotis Moutafis, Student Member, IEEE,
More informationLinear Discriminant Analysis for 3D Face Recognition System
Linear Discriminant Analysis for 3D Face Recognition System 3.1 Introduction Face recognition and verification have been at the top of the research agenda of the computer vision community in recent times.
More informationA GENERIC FACE REPRESENTATION APPROACH FOR LOCAL APPEARANCE BASED FACE VERIFICATION
A GENERIC FACE REPRESENTATION APPROACH FOR LOCAL APPEARANCE BASED FACE VERIFICATION Hazim Kemal Ekenel, Rainer Stiefelhagen Interactive Systems Labs, Universität Karlsruhe (TH) 76131 Karlsruhe, Germany
More informationComputationally Efficient Serial Combination of Rotation-invariant and Rotation Compensating Iris Recognition Algorithms
Computationally Efficient Serial Combination of Rotation-invariant and Rotation Compensating Iris Recognition Algorithms Andreas Uhl Department of Computer Sciences University of Salzburg, Austria uhl@cosy.sbg.ac.at
More informationUR3D-C: Linear Dimensionality Reduction for Efficient 3D Face Recognition
UR3D-C: Linear Dimensionality Reduction for Efficient 3D Face Recognition Omar Ocegueda 1, Georgios Passalis 1,2, Theoharis Theoharis 1,2, Shishir K. Shah 1, Ioannis A. Kakadiaris 1 Abstract We present
More informationOnline and Offline Fingerprint Template Update Using Minutiae: An Experimental Comparison
Online and Offline Fingerprint Template Update Using Minutiae: An Experimental Comparison Biagio Freni, Gian Luca Marcialis, and Fabio Roli University of Cagliari Department of Electrical and Electronic
More informationEI3D: Expression-Invariant 3D Face Recognition based on Feature and Shape Matching
1 Pattern Recognition Letters journal homepage: www.elsevier.com EI3D: Expression-Invariant 3D Face Recognition based on Feature and Shape Matching Yulan Guo a,, Yinjie Lei b, Li Liu c, Yan Wang d, Mohammed
More informationImproving Latent Fingerprint Matching Performance by Orientation Field Estimation using Localized Dictionaries
Available Online at www.ijcsmc.com International Journal of Computer Science and Mobile Computing A Monthly Journal of Computer Science and Information Technology IJCSMC, Vol. 3, Issue. 11, November 2014,
More informationBIOMET: A Multimodal Biometric Authentication System for Person Identification and Verification using Fingerprint and Face Recognition
BIOMET: A Multimodal Biometric Authentication System for Person Identification and Verification using Fingerprint and Face Recognition Hiren D. Joshi Phd, Dept. of Computer Science Rollwala Computer Centre
More informationStructured Light II. Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov
Structured Light II Johannes Köhler Johannes.koehler@dfki.de Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov Introduction Previous lecture: Structured Light I Active Scanning Camera/emitter
More informationState of The Art In 3D Face Recognition
State of The Art In 3D Face Recognition Index 1 FROM 2D TO 3D 3 2 SHORT BACKGROUND 4 2.1 THE MOST INTERESTING 3D RECOGNITION SYSTEMS 4 2.1.1 FACE RECOGNITION USING RANGE IMAGES [1] 4 2.1.2 FACE RECOGNITION
More informationFigure 1. Example sample for fabric mask. In the second column, the mask is worn on the face. The picture is taken from [5].
ON THE VULNERABILITY OF FACE RECOGNITION SYSTEMS TO SPOOFING MASK ATTACKS Neslihan Kose, Jean-Luc Dugelay Multimedia Department, EURECOM, Sophia-Antipolis, France {neslihan.kose, jean-luc.dugelay}@eurecom.fr
More informationSpatial Frequency Domain Methods for Face and Iris Recognition
Spatial Frequency Domain Methods for Face and Iris Recognition Dept. of Electrical and Computer Engineering Carnegie Mellon University Pittsburgh, PA 15213 e-mail: Kumar@ece.cmu.edu Tel.: (412) 268-3026
More informationClassification. Vladimir Curic. Centre for Image Analysis Swedish University of Agricultural Sciences Uppsala University
Classification Vladimir Curic Centre for Image Analysis Swedish University of Agricultural Sciences Uppsala University Outline An overview on classification Basics of classification How to choose appropriate
More informationNew Experiments on ICP-Based 3D Face Recognition and Authentication
New Experiments on ICP-Based 3D Face Recognition and Authentication Boulbaba Ben Amor Boulbaba.Ben-Amor@ec-lyon.fr Liming Chen Liming.Chen@ec-lyon.fr Mohsen Ardabilian Mohsen.Ardabilian@ec-lyon.fr Abstract
More information6. Multimodal Biometrics
6. Multimodal Biometrics Multimodal biometrics is based on combination of more than one type of biometric modalities or traits. The most compelling reason to combine different modalities is to improve
More informationShape Model-Based 3D Ear Detection from Side Face Range Images
Shape Model-Based 3D Ear Detection from Side Face Range Images Hui Chen and Bir Bhanu Center for Research in Intelligent Systems University of California, Riverside, California 92521, USA fhchen, bhanug@vislab.ucr.edu
More informationRobust Face Recognition via Sparse Representation Authors: John Wright, Allen Y. Yang, Arvind Ganesh, S. Shankar Sastry, and Yi Ma
Robust Face Recognition via Sparse Representation Authors: John Wright, Allen Y. Yang, Arvind Ganesh, S. Shankar Sastry, and Yi Ma Presented by Hu Han Jan. 30 2014 For CSE 902 by Prof. Anil K. Jain: Selected
More informationBiometrics Technology: Multi-modal (Part 2)
Biometrics Technology: Multi-modal (Part 2) References: At the Level: [M7] U. Dieckmann, P. Plankensteiner and T. Wagner, "SESAM: A biometric person identification system using sensor fusion ", Pattern
More informationFingerprint Indexing using Minutiae and Pore Features
Fingerprint Indexing using Minutiae and Pore Features R. Singh 1, M. Vatsa 1, and A. Noore 2 1 IIIT Delhi, India, {rsingh, mayank}iiitd.ac.in 2 West Virginia University, Morgantown, USA, afzel.noore@mail.wvu.edu
More informationATINER's Conference Paper Series COM
Athens Institute for Education and Research ATINER ATINER's Conference Paper Series COM2012-0049 A Multi-Level Hierarchical Biometric Fusion Model for Medical Applications Security Sorin Soviany, Senior
More informationLeveraging Set Relations in Exact Set Similarity Join
Leveraging Set Relations in Exact Set Similarity Join Xubo Wang, Lu Qin, Xuemin Lin, Ying Zhang, and Lijun Chang University of New South Wales, Australia University of Technology Sydney, Australia {xwang,lxue,ljchang}@cse.unsw.edu.au,
More informationFace Recognition using Eigenfaces SMAI Course Project
Face Recognition using Eigenfaces SMAI Course Project Satarupa Guha IIIT Hyderabad 201307566 satarupa.guha@research.iiit.ac.in Ayushi Dalmia IIIT Hyderabad 201307565 ayushi.dalmia@research.iiit.ac.in Abstract
More informationIntegrating Range and Texture Information for 3D Face Recognition
Integrating Range and Texture Information for 3D Face Recognition Xiaoguang Lu and Anil K. Jain Dept. of Computer Science & Engineering Michigan State University East Lansing, MI 48824 {Lvxiaogu, jain}@cse.msu.edu
More informationAn Unsupervised Approach for Combining Scores of Outlier Detection Techniques, Based on Similarity Measures
An Unsupervised Approach for Combining Scores of Outlier Detection Techniques, Based on Similarity Measures José Ramón Pasillas-Díaz, Sylvie Ratté Presenter: Christoforos Leventis 1 Basic concepts Outlier
More informationCHAPTER 4 FACE RECOGNITION DESIGN AND ANALYSIS
CHAPTER 4 FACE RECOGNITION DESIGN AND ANALYSIS As explained previously in the scope, this thesis will also create a prototype about face recognition system. The face recognition system itself has several
More informationOn Modeling Variations for Face Authentication
On Modeling Variations for Face Authentication Xiaoming Liu Tsuhan Chen B.V.K. Vijaya Kumar Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA 15213 xiaoming@andrew.cmu.edu
More informationAutomatic 3D Face Detection, Normalization and Recognition
Automatic 3D Face Detection, Normalization and Recognition Ajmal Mian, Mohammed Bennamoun and Robyn Owens School of Computer Science and Software Engineering The University of Western Australia 35 Stirling
More informationFast and Accurate 3D Face Recognition
Int J Comput Vis (2011) 93: 389 414 DOI 10.1007/s11263-011-0426-2 Fast and Accurate 3D Face Recognition Using Registration to an Intrinsic Coordinate System and Fusion of Multiple Region Classifiers Luuk
More informationThree-Dimensional Face Recognition: A Fishersurface Approach
Three-Dimensional Face Recognition: A Fishersurface Approach Thomas Heseltine, Nick Pears, Jim Austin Department of Computer Science, The University of York, United Kingdom Abstract. Previous work has
More informationCorrespondence. CS 468 Geometry Processing Algorithms. Maks Ovsjanikov
Shape Matching & Correspondence CS 468 Geometry Processing Algorithms Maks Ovsjanikov Wednesday, October 27 th 2010 Overall Goal Given two shapes, find correspondences between them. Overall Goal Given
More informationLandmark Localisation in 3D Face Data
2009 Advanced Video and Signal Based Surveillance Landmark Localisation in 3D Face Data Marcelo Romero and Nick Pears Department of Computer Science The University of York York, UK {mromero, nep}@cs.york.ac.uk
More informationModel-based segmentation and recognition from range data
Model-based segmentation and recognition from range data Jan Boehm Institute for Photogrammetry Universität Stuttgart Germany Keywords: range image, segmentation, object recognition, CAD ABSTRACT This
More informationCase-Based Reasoning. CS 188: Artificial Intelligence Fall Nearest-Neighbor Classification. Parametric / Non-parametric.
CS 188: Artificial Intelligence Fall 2008 Lecture 25: Kernels and Clustering 12/2/2008 Dan Klein UC Berkeley Case-Based Reasoning Similarity for classification Case-based reasoning Predict an instance
More informationCS 188: Artificial Intelligence Fall 2008
CS 188: Artificial Intelligence Fall 2008 Lecture 25: Kernels and Clustering 12/2/2008 Dan Klein UC Berkeley 1 1 Case-Based Reasoning Similarity for classification Case-based reasoning Predict an instance
More informationShifting Score Fusion: On Exploiting Shifting Variation in Iris Recognition
Preprocessing c 211 ACM This is the author s version of the work It is posted here by permission of ACM for your personal use Not for redistribution The definitive version was published in: C Rathgeb,
More informationImproving Personal Identification Accuracy Using Multisensor Fusion for Building Access Control Applications
Improving Personal Identification Accuracy Using Multisensor Fusion for Building Access Control Applications Lisa Osadciw, Pramod Varshney, and Kalyan Veeramachaneni laosadci,varshney,kveerama@syr.edu
More informationSelection of Location, Frequency and Orientation Parameters of 2D Gabor Wavelets for Face Recognition
Selection of Location, Frequency and Orientation Parameters of 2D Gabor Wavelets for Face Recognition Berk Gökberk, M.O. İrfanoğlu, Lale Akarun, and Ethem Alpaydın Boğaziçi University, Department of Computer
More informationFeature Selection. CE-725: Statistical Pattern Recognition Sharif University of Technology Spring Soleymani
Feature Selection CE-725: Statistical Pattern Recognition Sharif University of Technology Spring 2013 Soleymani Outline Dimensionality reduction Feature selection vs. feature extraction Filter univariate
More informationPoint-Pair Descriptors for 3D Facial Landmark Localisation
Point-Pair Descriptors for 3D Facial Landmark Localisation Marcelo Romero and Nick Pears Department of Computer Science The University of York York, UK {mromero, nep}@cs.york.ac.uk Abstract Our pose-invariant
More informationNonrigid Surface Modelling. and Fast Recovery. Department of Computer Science and Engineering. Committee: Prof. Leo J. Jia and Prof. K. H.
Nonrigid Surface Modelling and Fast Recovery Zhu Jianke Supervisor: Prof. Michael R. Lyu Committee: Prof. Leo J. Jia and Prof. K. H. Wong Department of Computer Science and Engineering May 11, 2007 1 2
More informationSYDE Winter 2011 Introduction to Pattern Recognition. Clustering
SYDE 372 - Winter 2011 Introduction to Pattern Recognition Clustering Alexander Wong Department of Systems Design Engineering University of Waterloo Outline 1 2 3 4 5 All the approaches we have learned
More informationPose Normalization for Robust Face Recognition Based on Statistical Affine Transformation
Pose Normalization for Robust Face Recognition Based on Statistical Affine Transformation Xiujuan Chai 1, 2, Shiguang Shan 2, Wen Gao 1, 2 1 Vilab, Computer College, Harbin Institute of Technology, Harbin,
More informationExpression Detection in Video. Abstract Expression detection is useful as a non-invasive method of lie detection and
Wes Miller 5/11/2011 Comp Sci 534 Expression Detection in Video Abstract Expression detection is useful as a non-invasive method of lie detection and behavior prediction, as many facial expressions are
More informationDe-identifying Facial Images using k-anonymity
De-identifying Facial Images using k-anonymity Ori Brostovski March 2, 2008 Outline Introduction General notions Our Presentation Basic terminology Exploring popular de-identification algorithms Examples
More informationColor Local Texture Features Based Face Recognition
Color Local Texture Features Based Face Recognition Priyanka V. Bankar Department of Electronics and Communication Engineering SKN Sinhgad College of Engineering, Korti, Pandharpur, Maharashtra, India
More informationFacial Range Image Matching Using the Complex Wavelet Structural Similarity Metric
Facial Range Image Matching Using the Complex Wavelet Structural Similarity Metric Shalini Gupta, Mehul P. Sampat, Mia K. Markey, Alan C. Bovik The University of Texas at Austin, Austin, TX 78712, USA
More information3D Face Recognition Using Spherical Vector Norms Map *
JOURNAL OF INFORMATION SCIENCE AND ENGINEERING 3, XXXX-XXXX (016) 3D Face Recognition Using Spherical Vector Norms Map * XUE-QIAO WANG ab, JIA-ZHENG YUAN ac AND QING LI ab a Beijing Key Laboratory of Information
More informationA Dissertation. Submitted to the Graduate School. of the University of Notre Dame. in Partial Fulfillment of the Requirements.
NEW MULTI-BIOMETRIC APPROACHES FOR IMPROVED PERSON IDENTIFICATION A Dissertation Submitted to the Graduate School of the University of Notre Dame in Partial Fulfillment of the Requirements for the Degree
More informationRobot localization method based on visual features and their geometric relationship
, pp.46-50 http://dx.doi.org/10.14257/astl.2015.85.11 Robot localization method based on visual features and their geometric relationship Sangyun Lee 1, Changkyung Eem 2, and Hyunki Hong 3 1 Department
More informationPeg-Free Hand Geometry Verification System
Peg-Free Hand Geometry Verification System Pavan K Rudravaram Venu Govindaraju Center for Unified Biometrics and Sensors (CUBS), University at Buffalo,New York,USA. {pkr, govind} @cedar.buffalo.edu http://www.cubs.buffalo.edu
More informationUnsupervised Learning
Outline Unsupervised Learning Basic concepts K-means algorithm Representation of clusters Hierarchical clustering Distance functions Which clustering algorithm to use? NN Supervised learning vs. unsupervised
More informationExploring Facial Expression Effects in 3D Face Recognition Using Partial ICP
Exploring Facial Expression Effects in 3D Face Recognition Using Partial ICP Yueming Wang 1, Gang Pan 1,, Zhaohui Wu 1, and Yigang Wang 2 1 Dept. of Computer Science, Zhejiang University, Hangzhou, 310027,
More informationA 2D+3D FACE IDENTIFICATION SYSTEM FOR SURVEILLANCE APPLICATIONS
A 2D+3D FACE IDENTIFICATION SYSTEM FOR SURVEILLANCE APPLICATIONS Filareti Tsalakanidou, Sotiris Malassiotis and Michael G. Strintzis Informatics and Telematics Institute Centre for Research and Technology
More informationAn Efficient Secure Multimodal Biometric Fusion Using Palmprint and Face Image
International Journal of Computer Science Issues, Vol. 2, 2009 ISSN (Online): 694-0784 ISSN (Print): 694-084 49 An Efficient Secure Multimodal Biometric Fusion Using Palmprint and Face Image Nageshkumar.M,
More informationImproving Hand-Based Verification Through Online Finger Template Update Based on Fused Confidences
Improving Hand-Based Verification Through Online Finger Template Update Based on Fused Confidences Gholamreza Amayeh, George Bebis and Mircea Nicolescu {amayeh, bebis, mircea}@cse.unr.edu Computer Vision
More informationGlobal Shape Matching
Global Shape Matching Section 3.2: Extrinsic Key Point Detection and Feature Descriptors 1 The story so far Problem statement Given pair of shapes/scans, find correspondences between the shapes Local shape
More informationK-Means Clustering Using Localized Histogram Analysis
K-Means Clustering Using Localized Histogram Analysis Michael Bryson University of South Carolina, Department of Computer Science Columbia, SC brysonm@cse.sc.edu Abstract. The first step required for many
More informationClustering. Chapter 10 in Introduction to statistical learning
Clustering Chapter 10 in Introduction to statistical learning 16 14 12 10 8 6 4 2 0 2 4 6 8 10 12 14 1 Clustering ² Clustering is the art of finding groups in data (Kaufman and Rousseeuw, 1990). ² What
More informationUsing Support Vector Machines to Eliminate False Minutiae Matches during Fingerprint Verification
Using Support Vector Machines to Eliminate False Minutiae Matches during Fingerprint Verification Abstract Praveer Mansukhani, Sergey Tulyakov, Venu Govindaraju Center for Unified Biometrics and Sensors
More informationLearning to Recognize Faces in Realistic Conditions
000 001 002 003 004 005 006 007 008 009 010 011 012 013 014 015 016 017 018 019 020 021 022 023 024 025 026 027 028 029 030 031 032 033 034 035 036 037 038 039 040 041 042 043 044 045 046 047 048 049 050
More informationLocal Correlation-based Fingerprint Matching
Local Correlation-based Fingerprint Matching Karthik Nandakumar Department of Computer Science and Engineering Michigan State University, MI 48824, U.S.A. nandakum@cse.msu.edu Anil K. Jain Department of
More informationSurface Registration. Gianpaolo Palma
Surface Registration Gianpaolo Palma The problem 3D scanning generates multiple range images Each contain 3D points for different parts of the model in the local coordinates of the scanner Find a rigid
More informationA coarse-to-fine curvature analysis-based rotation invariant 3D face landmarking
A coarse-to-fine curvature analysis-based rotation invariant 3D face landmarking Przemyslaw Szeptycki, Mohsen Ardabilian and Liming Chen Abstract Automatic 2.5D face landmarking aims at locating facial
More informationFeature-level Fusion for Effective Palmprint Authentication
Feature-level Fusion for Effective Palmprint Authentication Adams Wai-Kin Kong 1, 2 and David Zhang 1 1 Biometric Research Center, Department of Computing The Hong Kong Polytechnic University, Kowloon,
More informationCluster Analysis. Ying Shen, SSE, Tongji University
Cluster Analysis Ying Shen, SSE, Tongji University Cluster analysis Cluster analysis groups data objects based only on the attributes in the data. The main objective is that The objects within a group
More informationIntensity Augmented ICP for Registration of Laser Scanner Point Clouds
Intensity Augmented ICP for Registration of Laser Scanner Point Clouds Bharat Lohani* and Sandeep Sashidharan *Department of Civil Engineering, IIT Kanpur Email: blohani@iitk.ac.in. Abstract While using
More informationCOSC160: Detection and Classification. Jeremy Bolton, PhD Assistant Teaching Professor
COSC160: Detection and Classification Jeremy Bolton, PhD Assistant Teaching Professor Outline I. Problem I. Strategies II. Features for training III. Using spatial information? IV. Reducing dimensionality
More informationExploratory Data Analysis using Self-Organizing Maps. Madhumanti Ray
Exploratory Data Analysis using Self-Organizing Maps Madhumanti Ray Content Introduction Data Analysis methods Self-Organizing Maps Conclusion Visualization of high-dimensional data items Exploratory data
More informationMULTI-FINGER PENETRATION RATE AND ROC VARIABILITY FOR AUTOMATIC FINGERPRINT IDENTIFICATION SYSTEMS
MULTI-FINGER PENETRATION RATE AND ROC VARIABILITY FOR AUTOMATIC FINGERPRINT IDENTIFICATION SYSTEMS I. Introduction James L. Wayman, Director U.S. National Biometric Test Center College of Engineering San
More informationAAM Based Facial Feature Tracking with Kinect
BULGARIAN ACADEMY OF SCIENCES CYBERNETICS AND INFORMATION TECHNOLOGIES Volume 15, No 3 Sofia 2015 Print ISSN: 1311-9702; Online ISSN: 1314-4081 DOI: 10.1515/cait-2015-0046 AAM Based Facial Feature Tracking
More informationRepresentation Plurality and Decision Level Fusion for 3D Face Recognition
1 Representation Plurality and Decision Level Fusion for 3D Face Recognition Berk Gökberk, Helin Dutağacı, Lale Akarun, Bülent Sankur B. Gökberk and L. Akarun are with Boğaziçi University, Computer Engineering
More information