New Experiments on ICP-Based 3D Face Recognition and Authentication

Boulbaba Ben Amor (Boulbaba.Ben-Amor@ec-lyon.fr), Liming Chen (Liming.Chen@ec-lyon.fr), Mohsen Ardabilian (Mohsen.Ardabilian@ec-lyon.fr)

Abstract

In this paper, we discuss new experiments on face recognition and authentication based on three-dimensional surface matching. While most existing methods use facial intensity images, newer ones introduce depth information to overcome classical face recognition problems such as pose, illumination, and facial expression variations. The presented matching algorithm is based on ICP (Iterative Closest Point), which accurately recovers the posture of the presented probe. The similarity metric is given by the spatial deviation between the overlapping parts of the matched surfaces. The general paradigm consists in building a full 3D face gallery using a laser-based scanner (the off-line phase). In the on-line phase, identification or verification, a single captured 2.5D face model is matched against the whole set of 3D faces from the gallery, or compared to the 3D face model of the claimed identity, respectively. This probe model can be acquired from an arbitrary viewpoint, with arbitrary facial expressions, and under arbitrary lighting conditions. A new multi-view, registered 3D face database including these variations was collected within the BioSecure Workshop 2005 in order to perform significant experiments.

1. Introduction and motivation

Over the past few decades, biometrics, and particularly face recognition and authentication, have been widely applied in applications such as recognition inside video surveillance systems and authentication within access control devices. However, as described in the Face Recognition Vendor Test report [13] and in other reports, most commercial face recognition technologies suffer from two kinds of problems. The first concerns inter-class similarity, as between twins or between fathers and sons.
Here, people have similar appearances, which makes discriminating between them difficult. The second, and more important, difficulty is due to intra-class variations caused by changes in lighting conditions, pose (i.e., three-dimensional head orientation), and facial expression. On the one hand, lighting conditions change the face appearance dramatically; consequently, approaches based only on intensity images are insufficient. On the other hand, pose variations also present a considerable handicap for recognition when comparing frontal face images to images taken from other viewpoints. In addition, compensating for facial expressions is a difficult task for 2D-based approaches, because expression significantly changes the appearance of the face in the texture image. The current state of the art in face recognition contains many works that aim at resolving these problems. The majority of these works use intensity face images for recognition or authentication; they are called 2D model-based techniques. A second family of recent works, known as 3D model-based, exploits the three-dimensional face shape in order to mitigate some of these variations. Some of them apply subspace-based methods, while others perform shape matching. As described in [12, 8, 11], classical linear and non-linear dimensionality reduction techniques such as PCA and LDA are applied to range images from a data collection in order to build a projection subspace. The comparison metric then computes distances between the obtained projections.

Shape matching-based approaches instead use classical 3D surface alignment algorithms that compute the residual error between the probe surface and the 3D images from the gallery, as proposed in [4] and [10]. In [5], the authors present a new proposal which considers the facial surface (frontal view) as an isometric surface (length preserving). Using a global transformation based on geodesics, the obtained forms are invariant to facial expressions. After the transformation, they perform classical rigid surface matching and PCA for subspace building and face matching. Good reviews and comparative studies of some of these techniques (both 2D and 3D) are given in [6] and [14]. Another interesting study, which compares ICP- and PCA-based 3D approaches, is presented in [7]. There, the authors establish a baseline comparison between these approaches and conclude that the ICP-based method performs better than the PCA-based one. Their challenge is expression changes, particularly "eyes open/closed" and "mouth open/closed". In the present paper, we experiment with the ICP-based algorithm within a particular paradigm for both authentication and recognition tasks. The presented discussions and conclusions are useful for future work. A new multi-view, registered 3D face database, which includes full 3D faces and probe images with all these variations, was collected for significant experiments.

2. The proposed paradigm

Our identification/authentication approach is based on three-dimensional surfaces of faces. First, we build the full 3D face database with neutral expressions (Figure 4). The models it contains include both shape and texture channels, saved in the same VRML file: this is the off-line phase. Second, a single partial model is captured and only the face region is kept. The given 2.5D model is matched against all full 3D faces in the gallery, or compared to the genuine model.
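The off-line/on-line paradigm just described can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `match` function stands in for the full ICP alignment and deviation metric of Sections 3 and 4 and is replaced here by a plain nearest-neighbour deviation; the function names and the 2.0 mm acceptance threshold are illustrative assumptions.

```python
import numpy as np

def match(probe_pts, model_pts):
    # Stand-in for the ICP alignment + spatial-deviation metric of
    # Sections 3-4: mean nearest-neighbour distance between the probe
    # point set and a gallery model's point set.
    d = np.linalg.norm(probe_pts[:, None, :] - model_pts[None, :, :], axis=2)
    return d.min(axis=1).mean()

def identify(probe, gallery):
    # Identification: match the 2.5D probe against every full 3D face
    # in the gallery; the smallest deviation designates the identity.
    scores = {name: match(probe, model) for name, model in gallery.items()}
    return min(scores, key=scores.get)

def verify(probe, genuine_model, threshold=2.0):
    # Verification: compare the probe only to the claimed identity's
    # full 3D model; accept if the mean deviation stays under a
    # threshold (the 2.0 default is an illustrative value).
    return match(probe, genuine_model) < threshold
```
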
The core of our recognition and authentication scheme consists of the alignment and matching of the obtained surfaces. In the first step, approximating the rigid transformation between the presented probe and the full 3D face, a coarse alignment step and then a fine alignment step via the ICP (Iterative Closest Point) algorithm [3] are applied. This algorithm is an iterative procedure minimizing the MSE (mean square error) between points in the partial model and the closest points in the full 3D model. The final result is two matched sets of points in the 2.5D probe model and the 3D face model from the gallery. In the second step, similarity score production, a global spatial deviation between each pair of matched points is computed. The recognition and authentication process is based on the obtained distribution of distances between the surfaces.

Figure 1. The coarse alignment step (initialisation for ICP): the 3D face model (from the gallery), the probe (2.5D) model, and the coarse alignment result.

3. ICP-based 2.5D/3D face matching

The main contribution of our approach is the use of the three-dimensional information, which is lost by projection onto two-dimensional images, in order to study and mitigate some of the limitations of the face recognition challenge. One interesting way of performing verification/identification is 3D shape matching. Many solutions have been developed for this task, especially for range image registration and 3D object recognition. The basic algorithm is the Iterative Closest Point algorithm, developed by Besl and McKay and published in [3]. In our approach we first apply a coarse alignment step, which approximates the rigid transformation between the models and brings them closer together. Then we run a variant of the ICP algorithm, which minimizes the distance and converges to a minimum starting from this initial solution.

3.1. Coarse alignment

We are currently working on a fully automatic coarse alignment procedure based on 3D facial features located by curvature analysis.
For these experiments, however, we use a manual pre-annotation step in which the user selects at least three corresponding 3D points in the probe model and in the 3D face model from the gallery. A rigid transformation (R_init, t_init), comprising a rotation R_init and a translation t_init, is computed from the selected points. This procedure provides a good initialization before the final fine alignment stage using ICP. There are at least two advantages to the coarse alignment: (a) a good initialization, which favours convergence to the global minimum, and (b) a reduction of the time needed to compute the optimal solution. Figure 1 illustrates this process and shows the result of the rigid transformation applied to the probe model.
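The rigid transformation (R_init, t_init) can be estimated in closed form from the selected point pairs. The paper does not name the estimator, so the sketch below assumes the standard SVD-based (Kabsch) least-squares solution; the function name is illustrative.

```python
import numpy as np

def coarse_align(probe_landmarks, gallery_landmarks):
    # Estimate the rigid transform (R_init, t_init) mapping probe
    # landmarks onto the corresponding gallery landmarks, from >= 3
    # corresponding 3D points. SVD-based Kabsch solution (an assumed
    # choice; the paper does not specify the estimator).
    P = np.asarray(probe_landmarks, float)
    X = np.asarray(gallery_landmarks, float)
    cp, cx = P.mean(axis=0), X.mean(axis=0)
    H = (P - cp).T @ (X - cx)                  # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Force a proper rotation (det = +1, no reflection).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cx - R @ cp
    return R, t
```

With a good initialization from three or more landmarks, ICP then only has to refine a small residual transformation, which is what makes convergence to the global minimum plausible.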

3.2. Fine alignment

Our fine alignment algorithm is based on the well-known Iterative Closest Point (ICP) algorithm [3]. It is an iterative procedure minimizing the mean square error (MSE) between points in one view and the closest points in the other. At each iteration of the algorithm, the geometric transformation that best aligns the probe model and the 3D model from the gallery is computed. Intuitively, starting from the two sets of points P = {p_i}, as reference data, and X = {y_i}, as test data, the goal is to find the rigid transformation (R, t) which minimizes the distance between these two sets of points. ICP determines, for each point p_i of the reference set P, the nearest point in the second set X in the sense of the Euclidean distance. The rigid transformation minimizing the least squares criterion (1) is then computed and applied to each point of P:

e(R, t) = \frac{1}{N} \sum_{i=1}^{N} \| (R p_i + t) - y_i \|^2    (1)

This procedure is alternated and iterated until convergence (i.e., stability of the minimal error). The total transformation (R, t) is updated incrementally: at each iteration k of the algorithm, R = R_k \cdot R and t = t + t_k. The criterion to be minimized at iteration k becomes (2):

e(R_k, t_k) = \frac{1}{N} \sum_{i=1}^{N} \| (R_k (R p_i + t) + t_k) - y_i \|^2    (2)

The ICP algorithm presented above always converges monotonically to a local minimum [3]. However, we can hope for convergence to the global minimum if the initialization is good. For this reason, we perform the coarse alignment procedure before the fine one. Figure 2 shows zooms on some regions of the aligned models; here the 3D model is a neutral 3D model from the gallery, whereas the 2.5D model is a scan of the same subject with facial expressions. It is clear that the fine alignment contributes to minimizing the distance between the points. In our implementation of ICP, we use a set of points selected according to a tolerance level T on the spatial deviation. This allows a rapid convergence of the algorithm, which processes only these points and discards points whose spatial deviation exceeds T. Correspondence, in contrast, concerns all points of the overlapping surfaces.

Figure 2. Iterative Closest Point for fine alignment.

Figure 3. Examples of spatial deviations and colormaps between the 2.5D requests and the corresponding 3D faces from the gallery.

4. The 2.5D/3D similarity metric

The ICP-based algorithm provides the alignment of the probe and the 3D face surfaces. The face matcher, which computes the similarity score between faces, is based on the distribution of the spatial deviations between matched points in the probe model and the 3D face from the gallery. A histogram of distances is computed for each pair of surfaces to be compared. Figure 3 shows two such histograms and the associated colormaps, which encode the distance values as colors on the requested (probe) model. The first example (genuine: frontal view with changed facial expression) shows that the upper parts of the models are matched better than the lower parts, because the upper parts are more static. The distribution presents a mean value of 2.37 mm and a standard deviation of 1.67 mm, due to the deformation of the face caused by the change in facial expression. In the second example, where the request is a left profile of the genuine subject with a neutral expression, the surface is matched almost perfectly to the 3D face model, and the distribution presents a mean value of 1.75 mm and a standard deviation of 1.01 mm.

(A) Poses: (r) right profile, (f) frontal view, (l) left profile.
Figure 4. Full 3D face model from the 3D gallery.

5. Experiments and discussions

Until now, there has been no public multi-view, registered 3D face database for testing the proposed recognition and authentication scheme. We have therefore acquired new gallery and probe datasets in order to run significant evaluation experiments. The database consists of a gallery dataset with full 3D faces, and test scans with variations in facial expression, posture, and lighting conditions. The provided experiments pertain to both the identification and the authentication problems.

5.1. Database collection

As described in the section on the proposed paradigm, the proposed system requires full 3D faces in order to perform recognition and authentication from any viewpoint. For each subject, the full 3D face is obtained by integrating three partial models. First, we acquire three scans of the subject from different viewpoints (frontal view, left and right profile views); then we register them; and finally we merge the 3D shapes and texture images. All reconstructed full 3D face models have neutral expressions. They have the same number of vertices in their mesh (7000) and about 13000 triangles, obtained by adaptively sub-sampling the initial mesh (based on curvature). For the present experiments, our database includes 50 full 3D faces in the gallery and 400 2.5D test models as probes. For each subject, the test dataset contains 8 probe models (1 frontal, 2 profiles, 1 with controlled illumination, and 4 with different expressions), as illustrated by Figure 5.
The probe models also have about 7000 vertices and 13000 triangles, which is a detail-preserving resolution and allows the matching algorithm to converge in a short time (about 5 s in a Matlab implementation).

Figure 5. Variations in the probe images for the recognition and authentication experiments. (B) Expressions: (f) neutral, (e) closed eyes, (s) surprised, (d) disgusted, (h) happy. (C) Illuminations: (i) controlled, (f) uncontrolled.

A set of meta-data containing 3D landmarks and pose, illumination, and expression values is stored in XML files for each face model (2.5D and 3D). The main goal of this database collection is to evaluate the robustness of the developed ICP-based technique, and of others, to significant illumination, pose, and facial expression variations, and to define new research directions. This database was acquired at the BioSecure residential workshop held in Paris in August 2005.

5.2. Experimental results

The following experiments present the results obtained for the verification and recognition tasks. Eight experiments were run, one for each variation. In these experiments, similarity matrices are produced, rank-one recognition rates are computed, and ROC/DET evaluation curves are plotted. Figure 6 summarizes the rank-one recognition rates (identity found at the first rank) for each variation, as well as the overall recognition rate, which is 97.25% for the whole set of experiments. For the authentication experiments, three diagrams are presented: the FAR/FRR as a function of the threshold, the ROC curve, and the error trade-off curve, which summarizes the system performance under the different variations. Figure 7 shows the obtained error trade-off curves for each experiment. Both the authentication and the identification experiments, summarized by probe rank distributions and error trade-off

curves, show the invariance of the presented paradigm to posture and illumination changes.

Figure 6. Rank-one recognition rates for all experiments (overall: 97.25%).

However, the system accuracy decreases as the deformation of the 3D face shape increases. We can summarize these curves by dividing the results into three classes. The first presents the highest recognition and authentication rates, obtained with neutral expressions from arbitrary viewpoints ((f), (l), and (r)). The second class of results ((e), (i), and (h)) presents somewhat lower rates, due to small changes in the shape of the probe images (changes in illumination affect the laser-based acquisition and consequently the captured shape of the face). The last class of results shows the worst rates; here the probes present dramatic changes in shape ((d) and (s)). We conclude that the performance of the presented approach depends on the shape deformation: the smaller the deformation, the more invariant the ICP-based approach is. In future work we plan to combine the ICP-based matching algorithm with a region-based metric in order to decrease the influence of the mimic regions and give more weight to the static parts. We also plan to integrate the geodesic concept for computing distance maps; as described in [5], such distance maps are not affected by facial expressions on isometric surfaces. We are also working on facial feature extraction in order to automate the coarse alignment stage and to compute geodesic distances for higher recognition and authentication rates. The database collection will also be continued within the framework of the IV2 project [1].
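The fine-alignment loop of Section 3.2 (criterion (1)) and the deviation-based score of Section 4 can be sketched together as follows. This is a minimal illustration under simplifying assumptions: brute-force nearest-neighbour search, no tolerance-based point rejection (the tolerance T of Section 3.2 is omitted), and illustrative function names.

```python
import numpy as np

def best_rigid(P, X):
    # Least-squares rigid transform (R, t) mapping points P onto X
    # (the same SVD/Kabsch step as in the coarse alignment).
    cp, cx = P.mean(axis=0), X.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (X - cx))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cx - R @ cp

def icp(probe, model, n_iter=30, tol=1e-6):
    # Minimal ICP sketch for criterion (1): alternate nearest-neighbour
    # correspondence and rigid update until the MSE stabilises.
    # Returns the aligned probe and the per-point deviation statistics
    # used as the similarity score in Section 4.
    P = np.asarray(probe, float).copy()
    model = np.asarray(model, float)
    prev = np.inf
    for _ in range(n_iter):
        # Closest model point for each probe point (brute force).
        d = np.linalg.norm(P[:, None, :] - model[None, :, :], axis=2)
        nn = d.argmin(axis=1)
        R, t = best_rigid(P, model[nn])
        P = P @ R.T + t
        mse = ((P - model[nn]) ** 2).sum(axis=1).mean()   # criterion (1)
        if prev - mse < tol:
            break
        prev = mse
    dev = np.linalg.norm(P - model[nn], axis=1)   # spatial deviations
    return P, dev.mean(), dev.std()
```

In practice a k-d tree would replace the brute-force search, and the loop would start from the coarse transformation (R_init, t_init) rather than from the raw probe, which is what keeps the iteration near the global minimum.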
Figure 7. Error trade-off curves for all experiments.

References

[1] http://lsc.univ-evry.fr/techno/iv2/.
[2] B. Ben Amor, K. Ouji, M. Ardabilian, and L. Chen. 3D face recognition by ICP-based shape matching. In Proc. IEEE International Conference on Machine Intelligence, Tozeur, Tunisia, November 2005.
[3] P. J. Besl and N. D. McKay. A method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Mach. Intell., 14(2):239-256, 1992.
[4] C. Beumier and M. Acheroy. Automatic 3D face authentication. Image and Vision Computing, 18(4):315-321, 2000.
[5] A. M. Bronstein, M. M. Bronstein, and R. Kimmel. Three-dimensional face recognition. International Journal of Computer Vision (IJCV), 64(1):5-30, August 2005.
[6] K. I. Chang, K. W. Bowyer, and P. J. Flynn. An evaluation of multi-modal 2D+3D face biometrics. IEEE Transactions on PAMI, 27(4):619-624, 2005.
[7] K. J. Chang, K. W. Bowyer, and P. J. Flynn. Effects of facial expression in 3D face recognition. In Proceedings of the SPIE, volume 5779, pages 132-143, March 2005.
[8] T. Heseltine, N. Pears, and J. Austin. Three-dimensional face recognition: An eigensurface approach. In Proc. IEEE International Conference on Image Processing, pages 1421-1424, Singapore, 2004.
[9] M. W. Lee and S. Ranganath. Pose-invariant face recognition using a 3D deformable model. Pattern Recognition, 36(8):1835-1846, 2003.
[10] X. Lu and A. K. Jain. Integrating range and texture information for 3D face recognition. In Proc. 7th IEEE Workshop on Applications of Computer Vision, pages 156-163, 2005.
[11] S. Malassiotis and M. G. Strintzis. Pose and illumination compensation for 3D face recognition. In Proc. IEEE International Conference on Image Processing, pages 91-94, Singapore, 2004.
[12] G. Pan, Z. Wu, and Y. Pan. Automatic 3D face verification from range data. In Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing, volume 3, pages 193-196, Hong Kong, 2003.
[13] P. J. Phillips, P. Grother, R. Micheals, D. Blackburn, E. Tabassi, and J. Bone. FRVT 2002: Evaluation report. Technical report, NIST, March 2003.
[14] C. Xu, Y. Wang, T. Tan, and L. Quan. Depth vs. intensity: Which is more important for face recognition? In Proc. 17th International Conference on Pattern Recognition, Cambridge, UK, 2004.