IDENTITY VERIFICATION VIA THE 3DID FACE ALIGNMENT SYSTEM

Dirk Colbry and George Stockman
Department of Computer Science and Engineering, Michigan State University, East Lansing, Michigan {colbrydi,

ABSTRACT

The 3DID system has been developed to verify the identity of a person by matching a sensed 3D surface of the face to a face model stored during an enrollment process. Anchor point detection is performed based on shape index; then a rigid alignment is determined between the observed and model face anchor points. The alignment is refined using a modified ICP algorithm that allows for trimming of 10% noise. The trimmed RMS error for the same person is almost always smaller than 1.3mm, whereas for different persons it is almost always larger. Performance analysis shows that the 3DID system is fast enough (< 2 sec on a 3.2 GHz P4), reliable enough (1% EER with a 1.5% reject rate), and user friendly enough (it handles 30 degrees of yaw and 10 degrees of roll and pitch) to be practical in several applications. The current system provides several displays of value to human agents in either online or delayed analysis mode. An inexpensive scanner is needed for widespread use.

1. INTRODUCTION

The human face is the most common biometric used for person identification. Our friends and coworkers recognize us primarily by face. ID cards, such as a passport or driver's license, bind our name, address, etc., to a photograph. Thus, even an unfamiliar person might identify us by this picture ID. Given this social background, the human face is a prime candidate for biometric identification by machine. There have been many experiments reported on matching 2D portraits, and recently good performance has been achieved [9, 5]. However, automatic face matching via 2D images has difficulty with changes in pose, lighting, and scale. The advent of 3D scanners presents the opportunity to extract biometric information that is much less dependent on lighting and pose. Different methods are available for scanning and representing the human face as a collection of 6D points [x, y, z, R, G, B], where the triple [x, y, z] lies on the face surface f(x, y, z) = 0 and the triple [R, G, B] gives the color observed at point [x, y, z]. Studies from FRVT 2000 have shown that when variations in lighting and pose are introduced into a data set, the performance of 2D face recognition systems degrades significantly [8]. Much recent research has focused on the development of 3D face recognition technology. Some investigators have used the 3D processing component to normalize the pose and lighting of the input image to match the pose and lighting variation in the 2D gallery images. These mappings include 3D rotations, as well as more advanced morphing models that include variations in expression [2]. The rest of this paper summarizes our experience in developing our 3DID system and the performance that it has achieved. Section 2 describes the system goals and the design of the system to meet those goals. Section 3 describes the face-matching algorithm and Section 4 gives the results of many tests of the algorithm. Discussion of the system and the performance tests given in Section 5 supports the conclusion that 3DID will be effective in practice provided that scanner costs are significantly decreased.

2. 3DID FACE VERIFICATION SYSTEM DESIGN

The 3DID system was designed to control access to secure locations or assets, such as airports or bank accounts.
A key assumption is that a person using the 3DID system wants to gain access to the secure resources, and thus will be cooperative in the identification process. For example, while the 3DID system can handle minor changes in pose and expression, it does not accept the extreme variations in expression that are generated by an uncooperative subject.

Performance goals

The system was designed with the following goals:

1. The verification process should take no longer than 5 seconds.
2. A reject rate of 5%, with immediate retry, is tolerable.
3. A false dismissal rate of 1% is tolerable in environments where personnel are also present (e.g. airports), but not when no personnel are present (e.g. at an ATM).
4. A very low false accept rate, perhaps less than 0.1%.

Considering the airport example, it may take one minute or more to complete the gate entry process at the metal detectors; there is typically a 5-second window to perform 3D face scanning in a particular place.

The retry rate at the metal detector may be more than 5%, so a similar rate at the face scanner should be acceptable. Similarly, a false dismissal rate of 1% would mean that only 1 of 100 persons would need to be interrogated by a security agent because face verification failed. The false accept rate will be very low because an imposter would have to gain access to the person's ID card and have a face that is very similar in 3D shape and texture to the stored face.

Operating assumptions

For a successful verification process, we make the following assumptions:

- We assume that a 3D scanner samples the face at 0.5 mm resolution, or better, in x, y, and z, and also samples the face color R, G, B at each point. (Most of our algorithm tests have used only shape and no color.)
- We assume that the person faces the sensor with a neutral expression and with no more than 5 degrees of yaw, roll, or pitch. (3DID proved robust to much larger rotations, as shown below.)
- We assume that the face is unoccluded vertically from just below the nose tip to an inch above the eyebrows and horizontally between the ears, and is free of hair, jewelry, glasses, hands, etc. (3DID will succeed even with small occlusions, such as a nose piercing, due to the sparse surface sampling and 10% trimming.)
- We assume that the face is stationary for the full time required for scanning. (This is about 2 and 4 seconds for low and high resolution, respectively, with our current scanner.)

If the subject's behavior violates the assumptions, 3DID is allowed to reject the scan and ask for a retry. This rejection will not be regarded as a negative on the performance evaluation. In some cases of rejection, 3DID is able to give feedback to the subject in order to improve the repeat scan.

3. MATCHING SYSTEM

3DID uses recognition-by-alignment: the method for comparing two face scans is based on optimally aligning the surfaces and calculating the distance between them. We have developed a two-step, rigid alignment process, shown in Figure 1. The rightmost display in the figure shows the two face surfaces in their optimal alignment with different shading to emphasize their interpenetration; this output has high value to a human agent (and has been entertaining to 3DID subjects and spectators). The system has been shown to be equally effective in our laboratory and in four other locations.

Fig. 1. 3DID Matching system.

The first step in our process is a coarse alignment using anchor points [4]. These anchor points include the nose tip, inside corners of the eyes, the mouth, chin, etc., and are detected by using a model of the structure of the face and the curvature (second derivative) at various points. Figure 6b displays the shape index, a single value computed from the minimum and maximum curvatures. Using the shape index, the nose tip is evident as a local maximum, the inner eye corners as local minima, and the nose bridge as a saddle point. The shape index array is computed by fitting a bicubic polynomial z = f(x, y) to the 9 x 9 neighborhood of each surface point and using the curvatures from that polynomial patch. Once the anchor points are detected in both the probe and gallery scans, corresponding anchor points are used to estimate a coarse, rigid transformation between the two scans.
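To make the shape index computation concrete, the following is a minimal sketch of the idea just described: fit a bicubic patch z = f(x, y) to a small neighborhood by least squares, take the principal curvatures of the patch at the center, and map them to a single value. The function name, the NumPy least-squares fit, and the particular shape-index convention are illustrative assumptions, not the 3DID implementation.

```python
import numpy as np

def shape_index_at(patch_xyz):
    """Shape index of the central point of a small neighborhood of 3D points.

    patch_xyz: (N, 3) array of [x, y, z] samples (e.g. a 9 x 9 window), expressed
    relative to the point of interest, which sits at x = y = 0. A bicubic patch
    z = f(x, y) is fit by least squares and its principal curvatures at (0, 0)
    are converted to a single shape-index value."""
    x, y, z = patch_xyz[:, 0], patch_xyz[:, 1], patch_xyz[:, 2]
    # Design matrix for a bicubic patch: terms x^i * y^j with i, j in 0..3.
    cols = [(x ** i) * (y ** j) for i in range(4) for j in range(4)]
    coef, *_ = np.linalg.lstsq(np.column_stack(cols), z, rcond=None)
    c = coef.reshape(4, 4)                       # c[i, j] multiplies x^i * y^j

    # Partial derivatives of f at (0, 0).
    fx, fy = c[1, 0], c[0, 1]
    fxx, fyy, fxy = 2.0 * c[2, 0], 2.0 * c[0, 2], c[1, 1]

    # Mean (H) and Gaussian (K) curvature of the Monge patch z = f(x, y).
    E, F, G = 1.0 + fx * fx, fx * fy, 1.0 + fy * fy
    W = np.sqrt(1.0 + fx * fx + fy * fy)
    L, M, N = fxx / W, fxy / W, fyy / W
    H = (E * N - 2.0 * F * M + G * L) / (2.0 * (E * G - F * F))
    K = (L * N - M * M) / (E * G - F * F)

    # Principal curvatures and shape index in (-1, 1): values near +/-1 are
    # locally spherical (cap or cup, depending on the z sign convention) and
    # values near 0 are saddle-like, e.g. the nose bridge.
    root = np.sqrt(max(H * H - K, 0.0))
    k1, k2 = H + root, H - root
    s, d = k1 + k2, k1 - k2
    if abs(d) < 1e-12:                           # umbilic or flat point
        return 0.0 if abs(s) < 1e-12 else float(np.sign(s))
    return (2.0 / np.pi) * float(np.arctan(s / d))
```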
(Experimental results show that (1) detection of some anchor points is reliable, although not highly accurate; (2) detection of one good set of 3 anchor points is highly reliable; and (3) the subsequent ICP process is highly likely to converge correctly even with an inaccurate initial alignment.) The second alignment step uses the Iterative Closest Point (ICP) algorithm to finely align the scans [1]. ICP samples a set of control points on one scan and finds the nearest points on the second scan, then calculates a transformation that reduces the error between these point pairs. This algorithm terminates when the change in error is below a threshold or when the iteration limit is reached. A grid of 100 control points surrounding the eyes and the nose is chosen; these are areas of the face that do not vary much with changes in expression. (Our results show that this two-step alignment process is highly reliable, even when aligning the face surfaces of two different persons.) Trimming the points of comparison to the best 90 of 100 is critical because of frequent errors in reading the laser in the eyes, and because of steep steps in z near the bottom and sides of the nose. After the current and model scans are finely aligned, different metrics can be used to determine how well the scans match and whether the two face scans derive from the same person. For the current study, we are interested in the real-world accuracy of the surface matching, so we consider a simple alignment measurement (SAM): the final RMS matching error produced by the ICP algorithm. All SAM values are reported in millimeters and reflect the average over the 90 control points after trimming. For more information on our matching system, see [6] and [3].
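The following is a rough sketch of one way to implement the trimmed ICP refinement and the SAM described above. It is an illustration under stated assumptions (a brute-force nearest-neighbor search, a fixed best-90-of-100 trimming rule, and an SVD-based rigid fit), not the 3DID code.

```python
import numpy as np

def fit_rigid(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst (Kabsch/SVD)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    return R, dst_c - R @ src_c

def trimmed_icp_sam(control_pts, model_pts, keep=90, iters=50, tol=1e-4):
    """Align ~100 control points to a model point cloud and return a trimmed RMS
    error in the spirit of the SAM described above (all coordinates in mm)."""
    pts = control_pts.copy()
    keep = min(keep, len(pts))
    prev_err, err = np.inf, np.inf
    for _ in range(iters):
        # Closest model point for every control point (brute force, for clarity).
        d2 = ((pts[:, None, :] - model_pts[None, :, :]) ** 2).sum(axis=2)
        nn = d2.argmin(axis=1)
        dist = np.sqrt(d2[np.arange(len(pts)), nn])
        # Trim: keep only the best correspondences (drops laser noise in the eyes
        # and the steep steps near the nose).
        kept = np.argsort(dist)[:keep]
        err = np.sqrt((dist[kept] ** 2).mean())     # trimmed RMS error of the current alignment
        if abs(prev_err - err) < tol:
            break
        prev_err = err
        R, t = fit_rigid(pts[kept], model_pts[nn[kept]])
        pts = pts @ R.T + t
    return err
```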

4. EXPERIMENTS

Our experiments evaluated the performance of 3DID using environmental parameters similar to what would be expected in the airport application. We examined the entire system end-to-end, including the scanning and the matching processes. The major questions we explored were:

1. How fast are the scanning and matching processes?
2. How accurate is the ID verification process as a function of change in pose, relative to a frontal position?
3. How accurate is the ID verification process overall?

4.1. Face Scanning

Our face matching system uses input from a commercial structured light scanner, the Minolta Vivid 910 [11]. The Minolta Vivid 910 scanner is commonly used in the face recognition community, and is the primary scanner used to gather 3D face data for the FRGC [7]. This system contains a standard color camera to obtain [R, G, B] reflections from the object surface. It also sweeps a horizontal plane of laser light across the object; the camera detects the laser line and triangulates the depth of the illuminated surface points. The Vivid 910 produces depth values accurate to better than ±0.5mm in low resolution mode, and to better than ±0.10mm in high resolution mode. The two different resolutions were used in the experiments described here. For face data obtained in our own lab at MSU, the Minolta Vivid 910 was set up to record a low resolution, 320 x 240 pixel image in less than 2 seconds. For every pixel, the scanner outputs the Cartesian coordinates [x, y, z], the color [R, G, B], and a flag value indicating whether or not the depth value could be computed. The FRGC face data provided by Notre Dame were collected at high resolution (640 x 480 pixels). Although the higher resolution potentially delivers a more accurate representation, it is also more prone to noise, since the human subject tends to move during the almost 4-second scan time. Movement during the scanning process causes distortion of the image and also misalignment between the depth and color images. We have used the high resolution FRGC data from Notre Dame for testing and for making comparisons with our own low resolution data processing.

Data Sets

Three data sets were used for our experiments:

1. The first data set contained scans of a rigid mannequin head (Marie). The mannequin was used for its unchanging 3D surface, which we assumed would produce best-case performance for the combination of hardware and software that is 3DID.
2. The second data set contained artificially rotated virtual scans generated from a set of over 300 real face scans of 111 human subjects. The human scans varied some in expression, but little in pose. The virtual rotation process rotates the 3D points of a face scan in 3D space and then re-projects them onto an orthogonal plane that is parallel to the xy plane. The orthogonal projection produces a virtual scan that includes a depth map, flag map and color image similar to the data produced by the Minolta scanner (a rough sketch of this re-projection is given after this list). Precise pose angles can be produced and control can be maintained over the sampling rate.
3. The third data set was the FRGC1 data set obtained from Notre Dame, containing 948 approximately frontal neutral scans from 275 different people.
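A rough sketch of the virtual-scan generation used for the second data set is given below. The grid spacing, the rotation-axis convention, and the rule for resolving several points that land in the same grid cell are assumptions made for illustration; the color channel is omitted for brevity.

```python
import numpy as np

def virtual_scan(points, yaw_deg=0.0, pitch_deg=0.0, roll_deg=0.0, step=1.0):
    """Rotate a cloud of face points and orthographically re-project them onto a
    grid parallel to the xy plane, yielding a depth map and a validity flag map
    (a rough stand-in for the virtual-scan process described above).

    points: (N, 3) array of [x, y, z] in mm; step: grid spacing in mm (assumed).
    Yaw is taken about the y axis, pitch about x, roll about z (assumed convention)."""
    yaw, pitch, roll = np.radians([yaw_deg, pitch_deg, roll_deg])
    Rz = np.array([[np.cos(roll), -np.sin(roll), 0], [np.sin(roll), np.cos(roll), 0], [0, 0, 1]])
    Ry = np.array([[np.cos(yaw), 0, np.sin(yaw)], [0, 1, 0], [-np.sin(yaw), 0, np.cos(yaw)]])
    Rx = np.array([[1, 0, 0], [0, np.cos(pitch), -np.sin(pitch)], [0, np.sin(pitch), np.cos(pitch)]])
    rotated = points @ (Rz @ Ry @ Rx).T

    # Quantize x and y to grid cells; keep the sample with the largest z per cell
    # (i.e. the sample assumed closest to the sensor).
    ij = np.floor((rotated[:, :2] - rotated[:, :2].min(axis=0)) / step).astype(int)
    h, w = ij[:, 1].max() + 1, ij[:, 0].max() + 1
    depth = np.full((h, w), -np.inf)
    np.maximum.at(depth, (ij[:, 1], ij[:, 0]), rotated[:, 2])
    flags = np.isfinite(depth)               # cells that received no sample are invalid
    depth[~flags] = 0.0
    return depth, flags
```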
Results of Experiments

Experiments were conducted to evaluate the performance of the surface matching system under changes in stand-off distance, head pose, and lighting conditions.

Optimal Conditions Baseline SAM - A total of 20 scans (190 matching scores) were made of Marie in a frontal pose with constant fluorescent lighting. The resulting SAM values had a mean of 0.37mm ± 0.15mm. These results indicate the range of expected variation due to sensor noise, and suggest that the 3D matching system can assume that two scans matching within this range indicate a proper alignment of surfaces from the same face. We also learned that a SAM of 0.5mm is a very good matching value for two scans from a real human's face. A histogram of these SAM values for Marie is shown in Figure 2. The dotted line represents the Gaussian approximation to the histogram, with a mean at 0.37mm and a standard deviation of 0.15mm. As a comparison, two more Gaussian distributions are shown next to Marie's, one for intra-class (Genuine) SAM values and the other for inter-class (Imposters) SAM values. The Genuine curve is an estimate from scans taken from 111 people; in half of these scans the people were smiling. Notice that the mean of the Genuine curve is slightly higher than that for Marie due to changes in expression (0.7mm ± 0.15mm). The Imposters curve was taken from the same 111 subjects. Notice that the SAM values for impostor match scores are much higher (1.5mm ± 0.3mm). The overlap between the Genuine and Imposters curves represents the error region for 3DID when using only the SAM value to make an accept/deny decision.

Fig. 2. Gaussian approximation of SAM distributions.

Color and Lighting - Although skin color can be a source of identification or disguise, we have found that it rarely affects the ability to match two surfaces using the SAM criterion. Changes in color, however, can affect the Minolta scanner's ability to detect the laser, which leads to failure of the triangulation scheme and to missing depth values. We found that washout from severe lighting inhibits the camera's ability to detect the laser beam, as do some dark colors. These types of changes, however, usually do not prohibit our matching system from generating an acceptable SAM value.
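The accept/deny trade-off implied by the overlap of the Genuine and Imposters curves can be explored by sweeping a SAM threshold over lists of genuine and impostor scores, as in the generic sketch below. The function name and the sample sizes in the example are assumptions; this is not the FRGC evaluation protocol.

```python
import numpy as np

def equal_error_rate(genuine_sam, impostor_sam):
    """Estimate the EER from lists of genuine and impostor SAM values (mm).
    Lower SAM means a better match, so a scan is accepted when SAM <= threshold.
    FAR = fraction of impostor scores accepted; FRR = fraction of genuine scores
    rejected. The EER is read off where the two rates are closest to equal."""
    genuine = np.asarray(genuine_sam, dtype=float)
    impostor = np.asarray(impostor_sam, dtype=float)
    thresholds = np.unique(np.concatenate([genuine, impostor]))
    far = np.array([(impostor <= t).mean() for t in thresholds])
    frr = np.array([(genuine > t).mean() for t in thresholds])
    i = np.argmin(np.abs(far - frr))
    return 0.5 * (far[i] + frr[i]), thresholds[i]

# Example using the Gaussian approximations reported above (0.7 +/- 0.15 mm genuine,
# 1.5 +/- 0.3 mm impostor); the sample size is arbitrary.
rng = np.random.default_rng(0)
eer, thr = equal_error_rate(rng.normal(0.7, 0.15, 5000), rng.normal(1.5, 0.3, 5000))
print(f"EER ~ {eer:.1%} at SAM threshold ~ {thr:.2f} mm")
```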

Change in Stand Off Distance - The Minolta Vivid 910's operating stand-off for the medium distance lens is reported to be between 0.6 and 2.5 meters. We tested the system with the distance between the camera and the subject varying from 0.35 to 3.05 meters. The system found no valid points on the face when the distance was below 0.5 meters or above 2.9 meters. The closest viable distance from Marie to the scanner was found to be 0.65 meters. At this distance the matching SAM was measured to be 0.41mm. As the distance increased, the SAM increased at an almost linear rate, reaching 1.1mm at the largest offset.

Pose Variation - Another experiment tested the effects of changing the relative pose (yaw, pitch, and roll) angle between the face and the camera. Marie was mounted on a tripod, and the origin of this coordinate system was at the base of her neck. Typical results of the pose experiments are shown in Figure 3. Data points for Marie represent an average of 5 scans, while our virtual rotation results are averages of 300 scans for each rotation angle. These experiments show that our face matching system can easily tolerate rotations (yaw) of up to ±30 degrees from frontal. Changes in pitch and roll were not as well tolerated (±10 degrees).

Fig. 3. Change in Yaw vs. SAM.

Human Frontal Data - In the final experiment, our algorithm was evaluated on the FRGC 1.0 database. A baseline 3D matching performance based on PCA (similar to [10]) is also available for this data set from the University of Notre Dame. In this baseline algorithm, both the color and depth components of the face scans are normalized to a standard width and height using manually selected points at the centers of the eyes. Once the images are normalized, the PCA algorithm is applied to both data channels independently and the matching results are reported. Figure 4 shows the ROC curves produced by the baseline algorithm and our 3DID algorithm. The Notre Dame baseline algorithm has an EER of 4.6%, and our algorithm has a baseline EER of 3.8% on the same database. Our matching algorithm performs much better for low values of the false positive rate. Many of the errors at the top end of the ROC curve are due to incorrect detection of the anchor points. In fact, the 3.8% EER includes only 23 False Reject errors. One of these scans is missing its depth data, and the other 22 errors are due to bad anchor points in the same 15 scans. These error rates demonstrate some of the difficulty with this testing method: a single error in anchor point detection on a single scan can propagate and cause misleading results. In a real application, it would be better to automatically identify and reject some of the bad scans and have the subject rescanned.

Fig. 4. ROC curves for the FRGC 1.0 Data.

Automatic Rejection Option

3DID contains a rejection algorithm based on the symmetry of the face; it uses ICP to align the current scan with a mirror image of itself. If the anchor points are not found correctly, then the SAM between a scan and its mirror image will likely be high. However, if the anchor points are found correctly, then the SAM score will be quite low. Using this assumption, we designed a rejection criterion that rejected 1.5% of the 948 scans. Only 1 good scan was incorrectly rejected and 2 bad scans were not rejected. After implementing the reject option, the ROC was recalculated and an EER of 1.2% was achieved.
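A rough sketch of the symmetry-based reject option follows; it reuses the hypothetical trimmed_icp_sam helper from the earlier ICP sketch, and the mirroring plane and the rejection threshold are illustrative assumptions rather than the values used by 3DID.

```python
import numpy as np

# Reuses trimmed_icp_sam() from the ICP sketch above (hypothetical helper).

def symmetry_reject(control_pts, scan_pts, threshold_mm=1.0):
    """Flag a scan whose detected anchor points are suspect: align the scan's
    control points against a mirror image of the same scan and test the trimmed
    RMS error. A well-landmarked, roughly frontal face should match its own
    mirror closely; a badly landmarked one usually will not. The 1.0 mm threshold
    and mirroring about the x = 0 plane are illustrative assumptions."""
    mirrored = scan_pts * np.array([-1.0, 1.0, 1.0])   # reflect across the yz plane
    sam = trimmed_icp_sam(control_pts, mirrored)
    return sam > threshold_mm, sam                     # (reject?, symmetry SAM)
```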

5. 3DID PROTOTYPE INTERFACE

The prototype program is a specific aspect of the 3D Research Platform that runs on Windows. The prototype program is designed to gather data using the VIVID 910 and to demonstrate the viability of using 3D face matching in a practical system. The three main features of the prototype are:

1. Data are gathered using the VIVID 910 in fast mode.
2. The practicality of the matching algorithm is demonstrated by using the SAM matching score and the symmetry reject option in real time.
3. Different visualization methods are provided to demonstrate how the data are processed and what the algorithm is doing.

A subject stands in front of the camera and must hold still for about 2 seconds to capture a scan. (Often the user is instructed to provide a neutral, or "poker," face or a slight smile.) Figure 5 is a flow chart of the operation of the prototype system. Many additional features have been incorporated into the prototype to increase functionality, such as keyboard shortcuts, batch processing capabilities and the ability to turn different aspects of the matching algorithm on or off.

Fig. 5. Flow chart representing the operation of the prototype 3D face verification system.

Matching Algorithm

The matching algorithm takes two scans (a model and a query) and aligns them as described in Section 3. Figure 7 shows the four windows that appear when executing the matching algorithm. Figure 7b shows the model scan with the 100 control points selected from the query scan. The green control points represent the untrimmed points, and the blue circle around each control point represents the current alignment error for that point (1 pixel in radius is approximately one millimeter between the surfaces). Another window (see Figure 7c) shows a histogram of these errors (all distances above 3mm are put into the 3mm bin). Figure 7d is a bar graph representing the current SAM score. The vertical line indicates the threshold value. A matching score to the right of the vertical line indicates an impostor (the bar is colored red), while a matching score to the left of the vertical line indicates a match (the bar is colored green). The matching algorithm compares the model to the query and the query to the model and reports the lower of the two scores; if the first match is very low, the second match will be bypassed. A final display will appear, either accepting or rejecting the subject.

Data Visualization

Once a scan has been taken or loaded from a file, it is displayed on the main prototype window (see Figure 7a). The main window has a tool bar for executing the main system commands and setup parameters. By default, the image is displayed with the automatically detected anchor points, which can be toggled on and off. The flag values can also be toggled on to indicate where the scanner did not pick up any valid data. In addition to the color image, the depth map, shape index and surface normals can also be displayed. Any of the main window visualization modes can be exported to an image file or exported directly into a VRML file, which will be displayed immediately using an external VRML viewer, as shown in Figure 6. This gives the system the ability to output models that can be rotated in 3D.

Fig. 6. 3D VRML viewer with color, depth, shape and normal data representations: (a) depth map; (b) shape index; (c) color space (blue to red).
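The bidirectional comparison described in the Matching Algorithm subsection can be sketched as follows, again reusing the hypothetical trimmed_icp_sam helper from the ICP sketch. The accept and early-exit thresholds are illustrative: the 1.3mm figure echoes the abstract, and the 0.5mm early exit is an assumption based on the baseline results.

```python
# Reuses trimmed_icp_sam() from the ICP sketch above (hypothetical helper).

def match_score(model, query, accept_mm=1.3, early_exit_mm=0.5):
    """Compare model-to-query and query-to-model, report the lower SAM, and skip
    the second alignment when the first is already very low. `model` and `query`
    are (control_points, surface_points) pairs; the thresholds are illustrative."""
    sam = trimmed_icp_sam(model[0], query[1])        # model control points onto query surface
    if sam > early_exit_mm:                          # first match not conclusive: try the reverse
        sam = min(sam, trimmed_icp_sam(query[0], model[1]))
    return sam <= accept_mm, sam                     # (accept?, reported SAM)
```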

Fig. 7. Matching algorithm visualization: (a) main window with flag values showing; (b) model scan with control points; (c) point error histogram; (d) current SAM.

6. CONCLUSIONS

In this paper, we have described a commercial system prototype, its hardware and software, and the results of using it. In our 3DID project, we have performed many experiments in person verification and have developed several algorithms and software tools. To date we have ourselves taken several hundred scans from about 300 subjects, and we have used hundreds of scans from the Notre Dame test set. We have tested the system in several buildings, outside, and in a tent. Specific results of testing the performance of the 3DID prototype are as follows.

1. Most color and lighting changes did not adversely affect the 3D face matching system.
2. Under optimally controlled baseline operating conditions, the SAM values are 0.37mm ± 0.15mm.
3. Our face matching system can tolerate rotations (yaw) of up to ±30 degrees from frontal. Changes in pitch and roll are not as well tolerated (±10 degrees).
4. Using the SAM as a matching metric can produce reasonable results (3.8% EER) on a large data set.
5. Using a rejection option to remove poor quality scans improves the EER to 1.2% (at a rejection rate of 1.5%).

Our testing demonstrates that the SAM can be used as a matching score to achieve equal error rates of 1.2% for frontal, neutral expression scans. This error rate and the speed of our system are acceptable for some biometric applications, and in particular meet the target requirements for the airline application. We argue that the 3DID system displays are of great value to human agents, either for immediate identification purposes or for delayed analysis offline. Current 3D scanners are still too expensive (about $50k) for widespread application. We hope to take part in the development of a combination structured light and stereo system that could produce a faster scanner in the $5k range. We are also working on improvements to all aspects of our algorithms: better anchor point detection, better reject/rescan algorithms, and more advanced matching techniques (including the use of color), in order to achieve even better performance.

7. REFERENCES

[1] P. J. Besl and N. D. McKay. A method for registration of 3-D shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14(2).
[2] V. Blanz and T. Vetter. Face recognition based on fitting a 3D morphable model. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(9).
[3] D. Colbry. Human Face Verification by Robust 3D Surface Alignment. PhD thesis, Michigan State University.
[4] D. Colbry, G. Stockman, and A. K. Jain. Detection of anchor points for 3D face verification. In Proceedings of the Workshop on Advanced 3D Imaging for Safety and Security, San Diego, California.
[5] S. Z. Li and A. K. Jain, editors. Handbook of Face Recognition. Springer-Verlag.
[6] X. Lu, A. K. Jain, and D. Colbry. Matching 2.5D face scans to 3D models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(1):31-43.
[7] P. J. Phillips, P. J. Flynn, T. Scruggs, K. W. Bowyer, J. Chang, K. Hoffman, J. Marques, J. Min, and W. Worek. Overview of the Face Recognition Grand Challenge. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
[8] P. J. Phillips, P. Grother, R. J. Micheals, D. M. Blackburn, E. Tabassi, and M. Bone. Face Recognition Vendor Test (FRVT): overview and summary. Technical report, National Institute of Standards and Technology.
[9] M. Savvides, B. V. K. V. Kumar, and P. K. Khosla. Eigenphases vs. eigenfaces. In Proceedings of the International Conference on Pattern Recognition, volume 3, Los Alamitos, CA, USA.
[10] M. Turk and A. Pentland. Eigenfaces for recognition. Journal of Cognitive Neuroscience, 3(1):71-86.
[11] Vivid910. Minolta Vivid 910 non-contact 3D laser scanner.
