Multimodal Biometric Recognition using Iris, Knuckleprint and Palmprint


Multimodal Biometric Recognition using Iris, Knuckleprint and Palmprint

A Thesis Submitted in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

by

ADITYA NIGAM

to the

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING
INDIAN INSTITUTE OF TECHNOLOGY KANPUR, INDIA

August 2014


Synopsis

Personal authentication is one of the basic requirements of our modern day society. Almost all financial as well as personal security related applications expect an automated, efficient, near real time and highly accurate access control mechanism. Traditional methods of authentication are based on tokens and/or knowledge. Biometrics is an alternative to these traditional methods because it is harder to circumvent. Biometric based authentication systems use an individual's characteristics, which are either behavioral (voice, signature, gait, etc.) or physiological (face, iris, palmprint, fingerprint, knuckleprint, ear, etc.). These characteristics are hard to circumvent because they cannot be lost or forgotten like tokens or knowledge. Some of the well known identification systems make use of fingerprint, face, iris, palmprint, finger-knuckleprint, ear, gait, etc. But each biometric trait has its own set of challenges and trait specific issues. Hence, none of the traits can be considered the best one, because the choice depends on the type of application where it is to be deployed. The performance of any unimodal biometric system depends on factors such as the environment, atmosphere and sensor precision. Also, there are several trait specific challenges, such as pose, expression and aging for face recognition, occlusion and acquisition related issues for iris, and poor quality and social acceptance related issues for fingerprint.

Hence, fusing more than one biometric sample, trait or algorithm is an alternative way to achieve better performance, and is termed multi-biometrics or multimodal biometrics in the literature. In this thesis, we have considered three different biometric traits, viz. iris, knuckleprint and palmprint. The iris can be considered the best biometric in terms of performance, and it is fused with two other biometric traits to obtain an efficient multi-modal biometric system. The iris is a thin circular diaphragm of tissue between the cornea and the lens that controls the light entering the eye. It has an abundance of micro-texture such as crypts, furrows, ridges, coronas, freckles and pigment spots. These textures are randomly distributed and hence they are believed to be unique. On the other hand, knuckleprint and palmprint do not have such a rich anatomical structure. But they possess rich line-like patterns (i.e., knuckle-lines, palm-lines and wrinkles) in vertical, horizontal as well as diagonal directions, which can be very useful when used in conjunction with iris samples because the iris has mostly radial features. It has been observed experimentally that such a fusion of orthogonal biometric modalities enables the system to reject imposters confidently. Utilizing multi-modality information can help achieve high performance while working over a large database. The spoof vulnerability is much lower than that of a unimodal system; hence it is well suited for unattended operation in uncontrolled outdoor environments. However, not much work has been done in this area, mainly due to the non-availability of multimodal datasets. Any biometric based personal authentication system consists of several steps, such as sample acquisition, ROI extraction, sample normalization, preprocessing, feature extraction and template matching.

Each step affects the overall performance of the system; segmentation is one of the most critical steps, since wrong segmentation renders the subsequent steps meaningless. This thesis proposes an efficient multimodal authentication system which fuses iris, knuckleprint and palmprint images. To the best of our knowledge, this is the first effort in which iris, knuckleprint and palmprint samples are fused. In this work we have proposed new iris, knuckleprint and palmprint extraction algorithms. Several quality parameters for the iris and knuckleprint modalities are proposed. Each biometric trait is enhanced and transformed using the proposed local enhancement and LGBP transformation. A tracking based matching algorithm is proposed to perform biometric sample identification/recognition that uses corner features and the CIOF dissimilarity measure. There are eight chapters in this thesis. The introduction to a general biometric system, the stages involved, modes of operation, traits and their properties are discussed in Chapter 1, along with the motivation, the performance parameters and the key contributions. In the next chapter, a detailed literature review is presented for each of the three traits considered in this thesis. Several image processing as well as computer vision based techniques are used in designing the systems; they are discussed in Chapter 3. In Chapter 4, an iris based recognition system is proposed. The iris segmentation is done efficiently using an improved circular Hough transform for inner iris boundary (i.e., pupil) detection. A robust integro-differential operator that makes use of the pupil location is used to detect the outer iris boundary. The quality of the acquired iris sample is estimated using the proposed quality assessment parameters, and if it is less than a predefined threshold the sample is recaptured. This early quality assessment is very crucial in order to handle poor quality and non-ideal imagery.

The segmented iris is normalized to polar coordinates (i.e., rectangular strips) and is preprocessed using the proposed LGBP (Local Gradient Binary Pattern) to obtain robust features. The KLT based corners are extracted and matched using the proposed dissimilarity measure CIOF (Corners having Inconsistent Optical Flow). The proposed system has been tested over the publicly available CASIA 4.0 Interval and Lamp iris databases consisting of 2,639 and 16,212 images respectively. It is found that the CRR (Rank-1 accuracy) of the proposed system is 100% and 99.87% for the Interval and Lamp databases respectively. Further, its EER for Interval and Lamp is 0.109% and 1.3% respectively. In Chapter 5, a knuckleprint based recognition system is proposed. The knuckleprint segmentation is done by estimating the central knuckle-line using a new modified version of the Gabor filter, called the curvature Gabor filter. The quality of the acquired knuckleprint sample is estimated using the proposed quality assessment parameters. The segmented knuckleprint ROI is preprocessed using the proposed LGBP (Local Gradient Binary Pattern) to obtain robust features. The KLT based corners are extracted and matched using the proposed dissimilarity measure CIOF (Corners having Inconsistent Optical Flow). The proposed system has been tested over the publicly available PolyU knuckleprint database consisting of 7,920 images. It is found that the CRR of the proposed system is 99.79% with an EER of 0.93% over the PolyU knuckleprint database. In Chapter 6, a palmprint based recognition system is presented. The palmprint segmentation is done by obtaining two valley points and then clipping a square shaped ROI using them. The segmented palmprint is preprocessed using the proposed LGBP (Local Gradient Binary Pattern), and the proposed dissimilarity measure CIOF (Corners having Inconsistent Optical Flow) is used for matching.

The system has been tested over the publicly available CASIA and PolyU palmprint databases consisting of 4,528 and 7,720 images respectively. It is found that the CRR (Rank-1 accuracy) of the proposed system is 100% and 99.95% for the CASIA and PolyU palmprint databases respectively. Further, its EER for CASIA and PolyU is 0.15% and 0.41% respectively. In Chapter 7, the details of the multi-modal recognition system and all stages involved in it are discussed. The major focus is on the creation of chimeric multimodal databases and their experimental analysis. Three multimodal systems, viz. iris and knuckleprint, knuckleprint and palmprint, and finally iris, knuckleprint and palmprint, have been discussed. Several testing strategies, such as intersession matching, one training and one testing, and multiple training and multiple testing, have been considered. Two publicly available iris databases (CASIA Interval and Lamp) are fused with two public palmprint databases (CASIA, PolyU), while for knuckleprint the largest publicly available PolyU database is used. Hence, four tri-modal databases are generated for testing. It is observed that the performance of the trimodal system is almost perfect (i.e., CRR = 100% and EER = 0%) under various testing strategies. The last chapter concludes the work carried out in the thesis. It is shown through experimental analysis that orthogonal feature based fusion of iris with other biometrics like knuckleprint and palmprint can be very useful while working on large databases, and that very high accuracy can be achieved. The proposed system can perform a single verification of an individual within a second, which is fast enough for any real-time application.


Acknowledgements

It gives me immense pleasure in expressing my profound sense of gratitude to my thesis supervisor Prof. Phalguni Gupta. I am highly indebted and deeply grateful to him for his inspiring guidance, constant encouragement, valuable suggestions and ingenious ideas. His enormous knowledge and understanding have greatly helped me gain a better understanding of various aspects of biometrics. His expertise in the area of biometrics and his enthusiasm for research have been of great value to me.

Aditya Nigam


Dedicated to
My Beloved Parents, All My Respected Teachers,
My Wife Mrs. Jyoti Nigam and Son Master Bhavya Nigam


Contents

List of Tables
List of Figures
List of Algorithms
List of Abbreviations

1 Introduction
    Biometric Traits
        Physiological based Traits
        Behavioral
    Modes of Biometric System
        Enrollment
        Verification
        Identification
    Multi-biometric System
        Types of Multi-biometric System
        Fusion Levels
    Motivation of Thesis
        Iris
        Knuckleprint
        Palmprint
        Multi-biometrics
    Performance Analysis Metrics
        Verification Performance Metrics
        Identification Performance Metrics
    Thesis Contribution
        ROI Extraction
        Quality Estimation
        Biometric Feature Enhancement and Transformation
        Biometric Feature Extraction and Matching
        Multimodal Biometric System
    Thesis Organization

2 Literature Review
    Iris based Biometric System
    Knuckleprint based Biometric System
    Palmprint based Biometric System
    Multi-modal Biometric System

3 Preliminaries
    Local Binary Pattern (LBP)
    Corner Point Detection
    Sparse Point Tracking (KLT)
    Phase Only Correlation (POC)

4 Iris Recognition System
    Iris ROI Extraction
        Iris Inner Boundary Localization
        Iris Outer Boundary Localization
        Proposed Segmentation Algorithm
        Iris Normalization
    Iris Quality Estimation
        Focus (F) based Quality Attribute
        Motion Blur (MB) based Quality Attribute
        Contrast and Illumination (CI) based Quality Attribute
        Dilation (D) based Quality Attribute
        Specular Reflection (SR) based Quality Attribute
        Occlusion (O) based Quality Attribute
        Quality Class Determination
    Iris Image Preprocessing
        Iris Image Enhancement
        Iris Image Transformation
    Iris Feature Extraction and Matching
        Corners having Inconsistent Optical Flow (CIOF) Matching Algorithm
    Databases
        Databases Used
        Testing Strategy
    Experimental Results
        Iris Segmentation Analysis
        Threshold Selection
        Impact of Enhancement on System Performance
        Comparative Analysis

5 Knuckleprint Recognition System
    Knuckleprint ROI Extraction
        Knuckle Area Detection
        Central Knuckle-Line Detection
        Central Knuckle-Point Detection
    Knuckleprint Image Quality Extraction
        Focus (F)
        Clutter (C)
        Uniformity based Quality Attribute (S)
        Entropy based Quality Attribute (E)
        Reflection based Quality Attribute (Re)
        Contrast based Quality Attribute (Con)
        Quality Class Determination
    Knuckleprint Image Preprocessing
    Knuckleprint Feature Extraction and Matching
        Corners having Inconsistent Optical Flow (CIOF)
    Database Specifications
    Testing Strategy
    Performance Analysis
        Knuckleprint Segmentation
        Threshold Selection
        Comparative Analysis

6 Palmprint Recognition System
    Palmprint ROI Extraction
    Palmprint Preprocessing
    Palmprint Feature Extraction and Matching
        Corners having Inconsistent Optical Flow (CIOF)
    Database
    Testing Strategy
    Performance Analysis

7 Multimodal Based Recognition System
    Knuckleprint and Palmprint Fusion
        Multi-modal Databases Creation
        Testing Strategy
        Performance Analysis
    Iris and Knuckleprint Fusion
        Multimodal Database Creation
        Testing Strategy
        Performance Analysis
    Iris, Knuckleprint and Palmprint Fusion
        Multimodal Database Creation
        Testing Strategy
        Performance Analysis

8 Conclusions

References

Publications


List of Tables

Biometric Properties (L = Low; M = Medium; H = High)
Quality Parameters and Quality Class for Images in Fig.
Iris Database Specifications
Database and Testing Strategy Specifications
Values of Various Parameters used in Algorithm 4.1 and Variation in Threshold needed for Correct Segmentation
Required Parametric Variations (symbols indicate no change, increasing, or decreasing the parameter value)
Segmentation Error (Rows Below Images: Errors Occurred in Interval, Lamp Databases)
Threshold Selection for CASIA V4 Interval and Lamp Iris Database using only vcode Matching over Validation Set of First 100 Subjects
Enhancement based Performance Boost-up over Iris Interval Databases (fusion refers to vcode+hcode)
Performance Analysis of Proposed System
Comparative Performance Analysis over CASIA V4 Interval and Lamp Iris Database
Database Specifications
Parameterized Performance Analysis of the Proposed System on PolyU Knuckleprint Database (only vcode matching)
Comparative Performance Analysis over PolyU Knuckleprint Database (Results as reported in [27])
Database Specifications
Palmprint Results (L = LEFT Hand, R = RIGHT Hand, ALL = L + R)
Comparative Performance with Other Systems
Database Specifications (L = LEFT Hand, R = RIGHT, ALL = L + R)
Testing Strategy Specifications (L = LEFT, R = RIGHT, ALL = L + R)
Consolidated Results (L = LEFT Hand, R = RIGHT Hand, ALL = L + R)
Database Specifications
Performance Analysis of the Proposed System over Iris Interval and Lamp Databases along with Knuckleprint PolyU Database (fusion refers to vcode+hcode)
Database Specifications of Four Self-Created Tri-Modal Databases using Iris, Knuckleprint and Palmprint
Database Specifications for Interval-CASIA (db1), containing images from 349 Subjects in (3 + 4) = 7 different poses; the first 3 images are used for training and the last 4 for testing
Database Specifications for Interval-PolyU (db2), containing images from 349 Subjects in (3 + 4) = 7 different poses; the first 3 images are used for training and the last 4 for testing
Database Specifications for Lamp-CASIA (db3), containing images from 566 Subjects in (4 + 4) = 8 different poses; the first 4 images are used for training and the last 4 for testing
Database Specifications for Lamp-PolyU (db4), containing images from 386 Subjects in (6 + 6) = 12 different poses; the first 6 images are used for training and the last 6 for testing

List of Figures

Some Biometric Traits
Flow Diagram of Enrollment Process
Flow Diagram of Authentication Process
Flow Diagram of Identification Process
Fingerprint Sensors
Two Fingerprint Matchers (Multi-algorithm)
Samples of Same Fingerprint (Multi-instance)
Five Different Biometric Traits (Multi-Modal)
Iris, Knuckle and Palmprint Anatomy
Graphical Representation of FAR and FRR
Graphical Representation of EER and Accuracy
Graph Showing Genuine and Imposter Score Distribution
Graph Showing Genuine vs Imposter Best Scores
Images taken from [19, 18]
Mapping of radial rays into a polar representation; pupil and iris boundaries become vertical edges [13]
Bit-planes from the most significant (7) down to the least significant (0) [11]
Iris Deformable Model [50]
Iris Image Preprocessing [50]: (a) Original Image, (b) Detected Inner Boundary, (c) Detected Outer Boundary, (d) Lower Half of Iris, (e) Normalized, (f) Eyelid Masking and (g) Enhanced Image
Overview of Non-ideal Iris Recognition System [22]
Off-Angle Iris Segmentation [2]
Two Most Popular Iris Systems
Steps Involved for Feature Vector Extraction [51]
Ordinal Measures for Iris Recognition [66]
Knuckleprint Acquisition System [77]: (a) Device, (b) Raw Sample, (c) ROI Localized, (d) Cropped
Ensemble of Local and Global Features [80]
Improvement over Gabor based CompCode: (a) Original Image, (b) Orientation Code (ImCompCode), (c) Magnitude Code (MagCode) (Images taken from [79])
Local Feature Extraction
Partitioning for Zernike based Palmprint System [7]
Partitioning for Phase based Palmprint System [9]
Partitioning for Stockwell based Palmprint System [6]
Partitioning for DCT based Palmprint System [8]
LBP Computation
Application of LBP in Face Recognition
Corner Features shown in Red
Hand Tracking Application
Phase Only Correlation (POC) between two translated images; image (c) shows the corresponding POC values for each spatial location
Overall Architecture of the Proposed Iris Recognition System
Iris Segmentation
Iris Pupil Segmentation (Thresholding)
Iris Pupil Segmentation (Gradient based Thresholding using Sobel)
For any radius r, any point (x, y) having two pupil centers (cx1, cy1) and (cx2, cy2)
Segmented Iris Pupil
All Steps in Iris Pupil Segmentation
Application of Integro-differential Operator
Iris Localization (1st Row: Original Iris, 2nd Row: Segmented Iris)
Normalization of the Cropped Iris
Focus Filter Response
JNB Edge Width
Occlusion in Normalized Iris
Application of Region-Growing
Eyelid Regions: Arrows Denote the Direction for Region-Growing
Determination of Eyelash Region
Reflection Detection by Thresholding
Overall Occlusion Mask
Quality Estimation
Iris Texture Enhancement
LGBP Transformation (Red: negative gradient; Green: positive gradient; Blue: zero gradient)
Original and Transformed (vcode, hcode) Iris ROIs
Iris Images
Segmentation Error Hierarchy (the number given for each subcategory is the number of images of that category for the Interval database)
Parameterized ROC Curve of the Proposed System for Iris Interval Database (only vcode matching)
Parameterized ROC Analysis of the Proposed System for Iris Lamp Database (only vcode matching)
Enhancement based Performance Boost-up in terms of Receiver Operating Characteristic Curves for the Proposed System
Comparative Analysis of Proposed System over Interval Database
Comparative Analysis of Proposed System over Lamp Database
Overall Architecture of the Proposed Knuckleprint Recognition System
Knuckleprint ROI Annotation
Knuckle Area Detection
Conventional Gabor Filter
Curvature Knuckle Filter
Knuckle Filter
Knuckleprint ROI Detection: (a) knuckle filter response superimposed over the knuckle area in blue, (b) fully annotated and (c) FKP (FKP_ROI)
Some Segmented Knuckleprints
Poor Quality Knuckleprint ROI Samples
Defocus based Quality Attributes F and C
Uniformity based Quality Attribute (S)
Entropy based Quality Attribute E
Reflection based Quality Attribute Re
Knuckleprint Quality Estimation (top, middle and bottom images for each proposed quality parameter)
Enhanced Knuckleprint
Original and Transformed (vcode, hcode) Knuckleprint ROIs
Illumination Invariance of the LGBP Transformation (first row: the same image under five different illumination conditions; second row: the corresponding vcodes)
Knuckleprint Recognition
Some Images from the Knuckleprint Database [56] used in this work
Failed Knuckle ROI Detection
Parameterized ROC Analysis of the Proposed System for PolyU Knuckleprint Database (left index images of the first 50 subjects; only vcode matching)
Performance of the Proposed System over Knuckleprint PolyU Database
Overall Architecture of the Proposed Palmprint Recognition System
Palmprint ROI Extraction (Images are taken from [9])
Palmprint Image Enhancement
Original and Transformed (vcode, hcode) Palmprint ROI
Steps of the Palmprint Recognition System
Sample Palmprint Images from CASIA and PolyU Databases
Vertical and Horizontal Information Fusion based Performance Boost-up for CASIA Palmprint Databases (X axis in log scale)
Vertical and Horizontal Information Fusion based Performance Boost-up for PolyU Palmprint Databases (X axis in log scale)
Sample Images from all Databases (Five Distinct Users)
Genuine vs Imposter Best Matching Score Separation Graphs
ROC Characteristics Comparison of the Unimodal vs Multimodal Databases for the Proposed System

List of Algorithms

Pupil Segmentation
Iris Segmentation
Eyelid Detection
Eyelash Detection
Occlusion Mask Determination
CIOF (Iris_a, Iris_b)
Knuckleprint ROI Detection
Uniformity based Quality Attribute (S)
CIOF (Knuckle_a, Knuckle_b)
Palmprint ROI Extraction
CIOF (Palm_a, Palm_b)

List of Abbreviations

CLAHE: Contrast Limited Adaptive Histogram Equalization
AUC: Area Under ROC Curve
CMC: Cumulative Matching Characteristics
CRR: Correct Recognition Rate
EER: Equal Error Rate
EUC: Error Under ROC Curve
FAR: False Acceptance Rate
FMR: False Match Rate
FNMR: False Non-Match Rate
FRR: False Rejection Rate
GAR: Genuine Acceptance Rate
ICA: Independent Component Analysis
IITK: Indian Institute of Technology Kanpur
PCA: Principal Component Analysis
ROC: Receiver Operating Characteristics
ROI: Region of Interest
SIFT: Scale Invariant Feature Transform
SURF: Speeded Up Robust Features
LDA: Linear Discriminant Analysis
DCT: Discrete Cosine Transform
POC: Phase Only Correlation
BLPOC: Band Limited Phase Only Correlation
LGBP: Local Gradient Binary Pattern
LOG: Laplacian of Gaussian
DI: Discriminative Index
GvI Graph: Genuine vs Imposter Graph
FTE: Failure to Enroll Rate
LBP: Local Binary Pattern
EBGM: Elastic Bunch Graph Matching
NIR: Near Infra Red
SVM: Support Vector Machine
DFT: Discrete Fourier Transform
IDFT: Inverse Discrete Fourier Transform
HMM: Hidden Markov Model
CCD: Charged Coupled Device
SOTA: State of the Art
vcode: Vertical code
hcode: Horizontal code
CG: Curvature Gabor filter
JNB: Just Noticeable Blur
CIOF: Corners having Inconsistent Optical Flow

Chapter 1

Introduction

Personal authentication plays an important role in society. It requires at least some level of security to assure identity. Security can be realized through one of three levels.

1. Level 1 [Possession]: The user possesses something which is required to be produced at the time of authentication, for example the key of a car or a room.

2. Level 2 [Knowledge]: The user knows something which is used for authentication, for example a PIN (personal identification number), a password, or a credit card CVV (card verification value).

3. Level 3 [Biometrics]: The user owns certain unique physiological and behavioral characteristics, known as biometrics, which are used for authentication, for example face, iris, fingerprint, signature and gait.

In some cases more than one level of security is used to enhance the accuracy of an authentication system. However, there are drawbacks in Level 1 and Level 2 security. For example, keys or smart-cards may be lost or mishandled, while passwords or PINs may be forgotten or guessed. Since both possession and knowledge are not intrinsic user properties, they are difficult for the user to manage.

But this is not the case with Level 3 security, which is based on biometrics: the science of personal authentication using the physiological (e.g. fingerprint, face, iris) and behavioral (e.g. signature, gait, voice) characteristics of human beings. Examples of some well known biometric traits are shown in Fig. 1.1.

Figure 1.1: Some Biometric Traits: (a) Physiological, (b) Behavioral

Any biometrics based authentication system is better than the traditional possession or knowledge based systems because of the following reasons.

- Biometric traits are intrinsically related to the user.
- They cannot be lost, forgotten or misplaced; hence they are easy to manage.
- The physical presence of the trait is needed for authentication.
- The characteristics are unique.

Some of the vital properties that any biometric trait should possess are given below.

1. Uniqueness: Characteristics associated with the biometric trait should be different for everyone.

2. Universality: The biometric trait should be owned by everyone and should not be lost.

3. Circumvention: The biometric trait should not be easy to spoof or forge.

4. Collectability: The biometric trait should be acquirable by some digital sensor.

5. Permanence: Characteristics associated with the biometric trait should be time invariant (temporally stable).

6. Acceptability: The biometric trait should be accepted by society without any objection.

A biometric based personal authentication is a multi-staged process. In the initial stage, the raw image is captured using an acquisition sensor. This is a very critical stage because the accuracy of any biometric system is highly dependent on the quality of the images. In the second stage, the desired part of the image, termed the region of interest (ROI), is extracted from the acquired image. The third stage estimates the quality of the ROI; if the quality is poor, one may go for re-acquisition of the image. In the next stage, the ROI is preprocessed using some enhancement technique, and some transformations are also performed to obtain a robust ROI. Discriminative features are then extracted from the enhanced ROI. Finally, the features of a query image are matched against those of image(s) in the database to authenticate the claim.
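To make the flow of these stages concrete, the following minimal Python sketch strings them together. It is illustrative only: the stage implementations are trivial stand-ins (a crude mean-intensity quality measure, flattened pixels as features, a mean-absolute-difference similarity), not the algorithms proposed in this thesis.

```python
# Illustrative sketch of the generic multi-stage pipeline described above.
# All stage implementations are simple stand-ins, not the thesis algorithms.

def extract_roi(raw):
    # Stage 2: crop a central region of interest from the raw sample
    h, w = len(raw), len(raw[0])
    return [row[w // 4: 3 * w // 4] for row in raw[h // 4: 3 * h // 4]]

def estimate_quality(roi):
    # Stage 3: crude quality score (mean intensity); real systems use focus,
    # blur, occlusion and similar attributes
    pixels = [p for row in roi for p in row]
    return sum(pixels) / len(pixels)

def extract_features(roi):
    # Stages 4-5: enhancement and feature extraction collapsed into one step
    return [p / 255.0 for row in roi for p in row]

def similarity(f1, f2):
    # Stage 6: similarity in [0, 1] (1 = identical)
    return 1.0 - sum(abs(a - b) for a, b in zip(f1, f2)) / len(f1)

def verify(query_image, enrolled_features, q_thresh=40, s_thresh=0.9):
    roi = extract_roi(query_image)
    if estimate_quality(roi) < q_thresh:
        return "re-capture"                     # poor quality sample
    score = similarity(extract_features(roi), enrolled_features)
    return "genuine" if score >= s_thresh else "imposter"
```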

1.1 Biometric Traits

There does not exist any biometric trait which satisfies all the desired properties strictly. For example, facial features are not permanent throughout the life span, and the fingerprints of people doing hard manual work are often worn and of poor quality. However, there exist several well known biometric traits which satisfy more or less all of the biometric properties. Biometric traits can be divided into those based on physiological characteristics and those based on behavioral characteristics.

Physiological based Traits

This subsection discusses some of the well known biometric traits which are based on physiological characteristics. These traits are face, fingerprint, ear, iris, palmprint and knuckleprint.

Face: It is one of the most common and well known biometric traits. Images can be captured from a distance and can even be extracted from video frames. In a typical face recognition system, the face is first detected from a test image and features are extracted. These features are matched against those stored in the database, and an appropriate matching algorithm is used to obtain the matching score. Most commonly, geometric distances between facial key-points such as the eyes, nose and mouth are used. There are several real time applications, such as surveillance, criminal identification and access control management, where face recognition is being used. Face recognition is non-intrusive, but due to several challenges like pose, illumination, occlusion, aging and expression, its performance is found to be restricted.

Advantages: Face is the most common and non-intrusive trait and hence can be captured easily using cheap sensors. It is widely accepted in society.

Challenges: The face pose varies with the viewing angle along with illumination. Facial expressions can deform the face significantly, while partial occlusion may hide some facial regions. Aging is a big challenge to deal with because facial features do not vary in a specific pattern.

Fingerprint: For quite a long time, fingerprints have been used for personal authentication. A typical fingerprint is made up of ridges. A huge amount of discriminative textures and patterns, such as loops, arches and whorls, exists over a fingerprint. Ridge endings and ridge bifurcations are known as minutia features. These features are assumed to be unique and stable. Several minutia based fingerprint matching algorithms have been proposed; these algorithms use the coordinates of each minutia point along with its orientation. Fingerprint sensors are easily available, but obtaining good quality fingerprints is a major challenge. The poor quality of fingerprints may be due to a poor quality sensor or external factors such as dirt, oil and sweat. It is also observed that laborers possess poor quality fingerprints due to the nature of their work.

Advantages: Fingerprints are unique. Less user cooperation is required for their acquisition, and they can be captured through cheap sensors.

Challenges: The major challenge is to get good quality fingerprints.

Ear: Like other biometric traits, it contains robust, unique and discriminative line based features. In an ear recognition system, the ear is segmented from the raw profile face image. Features obtained from the ear are matched against those stored in the database. The major disadvantage of the ear is occlusion, which occurs due to hair or any other foreign body such as ear rings, caps or ear phones.

Advantages: Similar to the face, the ear can also be acquired non-intrusively using cheap sensors. It is universal and has a robust shape that does not vary too much. The social acceptance of the ear is high.

Challenges: Ear recognition performance suffers under varying illumination, pose and translation. Scale and occlusion by external bodies such as hair and ear rings are other challenges to deal with.

Iris: The iris is considered one of the best known biometric traits. It is basically a donut shaped annular region bounded by the sclera and the pupil. Discriminative textures exist within the iris in the form of furrows, ridges and crypts. It is difficult to capture the iris in visible light as it is sensitive to light. Hence, iris images are acquired in NIR (Near Infra-Red) light at high resolution. The iris is segmented and normalized into a fixed size rectangular strip. There exist several texture based techniques to extract binary features, which are matched using the Hamming distance for authentication. Iris based recognition systems are usually found to be very accurate and robust. But the iris can be spoofed through contact lenses. The major problem with the iris is that it requires stringent user cooperation during acquisition. Also, the iris is very sensitive to any external stimulus and it is very hard to control it.

Advantages: The iris possesses a highly discriminative, unique texture that is naturally well protected. It is difficult to alter and fast to process. Iris images can be acquired touchlessly.

Challenges: Accurate iris segmentation under varying illumination is very challenging. Occlusion by the eyelids and eyelashes, along with motion blur and specular reflection, is another issue. Off-angle iris recognition is also a very challenging problem.

Palmprint: The inner part of the palm is considered as the palmprint. It includes ridges, minutiae, principal lines, delta points and rich palmprint texture in abundance. These features are assumed to be stable and unique. Binary features are extracted using several texture based as well as statistical and structural properties. The stability of palmprint features has not yet been critically studied.

Advantages: The palmprint can be captured using low cost sensors in a touch-less manner. The extracted palm ROI is large and contains discriminative and unique features.

Challenges: Variation of illumination, rotation and translation, as well as handling the problem of occlusion, are the major challenges.

Knuckleprint: The outer part of a finger is considered as the finger knuckle. It can be acquired through any sensor. Line based structural features can be extracted from it and are assumed to be discriminative. Several Gabor based texture and orientation based techniques are used to extract binary features.

Advantages: It is naturally well protected and can be acquired using low cost sensors. Unique and discriminative features are available over the knuckleprint ROI.

Challenges: The major challenges are handling illumination variation as well as rotation and translation.

Behavioral

Some of the most popular behavioral biometric characteristics are discussed below.

Gait: It is one of the behavioral biometric traits. It considers the characteristics of the way a person walks. Gait data can be acquired using moving light displays or video streams.

Further, there are sensors that can record various crucial parameters, such as pressure and step patterns, that can be used for identification.

Advantages: It can be acquired non-intrusively and from a distance; hence its user acceptance is good.

Challenges: The major challenges are handling variations due to background, clothing or walking surface.

Signature: The hand-written signature of a person is an instance of personal verification. Generally it is used for verification of the owner of bank cheques or other off-line documents. From any signature, orientation and coordinate based features in the X and Y directions are extracted. They are matched with the features that are already available in the existing database for the claimed identity. General features such as writing angle, breakpoints and curvatures are termed static, while pen speed, writing time and applied pressure are termed dynamic features. There are two modes in which such a system works: on-line and off-line. The image of an off-line signature is obtained by scanning a handwritten signature, while a digital signature is acquired through a digital signature pad or tablet. The signature is not universal, as illiterate persons do not know how to sign. It is not permanent, as it can vary with time and can be spoofed/forged easily.

Advantages: It is well accepted socially.

Challenges: A signature can vary due to aging, emotion or the writing surface, posing a big challenge to automated signature recognition.

Voice: It considers how a person makes sounds while speaking. It is believed that every person has his or her own natural texture and tonal quality that depends on nasal tone, cadence and inflection.

Several voice based characteristics are extracted and used for voice based authentication. The features are not permanent and can be imitated/spoofed easily. It is not universal, as persons who cannot speak are excluded.

Advantages: It is well accepted by society.

Challenges: Voice suffers severely from aging, emotion and other environmental variations.

1.2 Modes of Biometric System

There are three different modes in which any biometric system can be operated: enrollment, verification and identification. Every user of the system needs to be enrolled in the system by providing images of the biometric trait. Features are extracted from the images and are stored in the database. In the case of verification or identification, the features of the query image are matched against those of the enrolled users.

Enrollment

In this step, a user enrolls or registers into the existing database. A biometric sensor is used to acquire the image from the user and its quality is evaluated. If the quality is above a threshold set a priori, features are extracted and a unique identification number is assigned. Otherwise, the image is re-captured. This process of capturing is repeated until an image of the desired quality is obtained. The enrollment subsystem is shown in Fig. 1.2.

Figure 1.2: Flow Diagram of Enrollment Process

Verification

It is also known as One-to-One (i.e., 1:1) matching. In this mode, the user claims an identity and the system verifies the correctness of the claim. Features from the biometric trait provided by the user are matched with the features of the claimed identity stored in the existing database. If the similarity matching score is more than a pre-computed threshold, then the claim is verified and the user is considered a genuine user. Otherwise, the claim is rejected and the user is considered an imposter. The flow diagram of the verification subsystem is shown in Fig. 1.3. Suppose F1 and F2 are the feature vectors of a biometric trait of two subjects, s1 and s2. By the term matching between s1 and s2, we mean the similarity or dissimilarity between their feature vectors F1 and F2. One can make use of any distance measure to obtain the similarity or the dissimilarity score between the two feature vectors. Without any loss of generality, let us assume that the distance measure provides a similarity score. If the score is greater than a predefined threshold then we conclude that the two images are matched (genuine); otherwise they are not matched (imposter).

Figure 1.3: Flow Diagram of Authentication Process

Identification

It is also known as One-to-Many (i.e., 1:N) matching. In this mode, the system may not have any information other than the presented biometric trait. It attempts to determine the correct identity of that user; hence it is termed identification. The ROI of the biometric image is preprocessed and features are extracted. The feature vector is matched with the feature vectors of all users in the database and the top best matches are obtained. More clearly, let F1 be the probe features of a biometric trait. The matching scores may be computed using a variety of distance metrics to obtain the similarity or the dissimilarity scores between F1 and all feature vectors that are stored in the database. Out of all these scores, the best N matching images are reported as the N probable matches. The identification subsystem is shown in Fig. 1.4.

Figure 1.4: Flow Diagram of Identification Process
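The three modes can be contrasted in a few lines of code. The sketch below is illustrative only: the gallery dictionary, the cosine-style similarity and the threshold value are assumptions made for the example, not the data structures or matcher used in this thesis.

```python
# Illustrative sketch of the three operating modes. Feature vectors are plain
# lists; the similarity measure and threshold are stand-ins, not thesis code.

def similarity(f1, f2):
    # toy cosine similarity: higher means more similar
    num = sum(a * b for a, b in zip(f1, f2))
    den = (sum(a * a for a in f1) * sum(b * b for b in f2)) ** 0.5
    return num / den if den else 0.0

gallery = {}                                    # enrolled templates: id -> features

def enroll(user_id, features):
    # Enrollment: store the template under a unique identification number
    gallery[user_id] = features

def verify(claimed_id, probe, threshold=0.9):
    # Verification (1:1): match only against the claimed identity
    return similarity(probe, gallery[claimed_id]) >= threshold

def identify(probe, top_n=5):
    # Identification (1:N): match against every enrolled user, report the top N
    scores = sorted(((similarity(probe, f), uid) for uid, f in gallery.items()),
                    reverse=True)
    return scores[:top_n]
```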

1.3 Multi-biometric System

The performance of any unimodal biometric system is often restricted by variations in uncontrolled environmental conditions, sensor precision and reliability, as well as several trait specific challenges such as pose, expression and aging for the face. Moreover, it relies on a single set of features to take the matching decision, and hence it is difficult to improve its accuracy. One can therefore explore the possibility of fusing more than one biometric sample, trait or algorithm. This is termed multi-biometrics [16]. There exist different types of multi-biometric systems; some of them are discussed below.

Types of Multi-biometric System

Any biometric based authentication system consists of several stages and has many challenges and limitations. The aim of any multi-biometric system is to improve the performance of the system.

Multi-sensor System: It considers images of the same biometric trait captured with the help of multiple sensors. Fig. 1.5 shows three types of fingerprint scanners which can be used to build a multi-sensor biometric system. These three sensors use different technologies to acquire data, and hence the quality as well as the discriminative features of their samples are significantly different.

Figure 1.5: Fingerprint Sensors

Multi-algorithm System: It considers multiple matching algorithms to improve the performance of the system. Images of the selected trait are captured using a single sensor. As shown in Fig. 1.6, different algorithms can be applied over the same image. One algorithm may use some global texture, such as orientation field features, while the other may use minutia based local features. Fusion of these matchers is expected to perform better than either of the two algorithms alone.

Figure 1.6: Two Fingerprint Matchers (Multi-algorithm): (a) Orientation Field Based Algorithm, (b) Minutia Based Algorithm

Multi-instance System: It considers more than one image of the same trait per user, so multiple samples are collected. In Fig. 1.7, three samples of the same finger collected under a controlled environment are shown. This redundant information is useful to address issues related to local as well as environmental condition variations.

Figure 1.7: Samples of Same Fingerprint (Multi-instance)

Multi-modal System: It considers multiple biometric traits for authentication. Uncorrelated traits are considered to achieve better performance [34]. It also makes the system more robust against spoofing attacks, as it becomes more and more difficult to imitate all the selected traits at once. But print-attacks and spoof-attacks may still circumvent these systems [33]. Hence, careful trait selection along with better anti-spoofing algorithms is desirable. In Fig. 1.8, a multi-modal system is shown in which face, fingerprint, ear, iris and palmprint samples are used.

Figure 1.8: Five Different Biometric Traits (Multi-Modal): (a) Face, (b) Fingerprint, (c) Ear, (d) Iris, (e) Palmprint

Fusion Levels

There are several ways to fuse the various characteristics in a multi-biometric system. Fusion can be done at various levels, which are discussed below.

Score Level: Samples of different traits are matched against their corresponding trait individually, and scores are obtained. The scores are normalized to the same scale and converted into either dissimilarity or similarity values. These scores are then combined to obtain the fused score. There exist several score level fusion techniques, such as max, min, average or weighted average. For example, let a face similarity score be 91 on a scale of [0, 100] and an iris dissimilarity score be 0.27 on a scale of [0, 1]. Before fusion, either the face score should be scaled to [0, 1] and converted into a dissimilarity (i.e., 1 - 0.91 = 0.09), or the iris score should be scaled to [0, 100] and converted into a similarity (i.e., (1 - 0.27) × 100 = 73). Hence, for the fusion of face and iris, one can use a weighted average of either 91 and 73 or of 0.09 and 0.27.

Feature Level: All features from each trait are first extracted individually for every subject. These features should be of the same type and are concatenated into one single multi-biometric template which is used for authentication. For example, LBP based histogram features extracted from face and iris images can be fused by concatenating both histograms one after the other to obtain a single histogram. The χ² dissimilarity measure can then be used to obtain the fused multi-modal matching score.

Decision Level: All samples of different traits are matched against their corresponding trait to obtain individual scores. These scores are thresholded to obtain an individual decision for each trait. The final decision is taken by fusing them using OR, AND or other rules. For example, for the face, iris and palmprint traits let the individual decisions be Face = Accept, Iris = Reject and Palmprint = Accept. A simple rule like "at least 2 Accepts" can be used to make the final decision.
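The score level arithmetic above, together with the decision level rule, can be written out directly. The sketch below is only an illustration of that arithmetic; the equal 0.5/0.5 weights are arbitrary example values, not weights taken from this thesis.

```python
# Illustrative score level fusion following the face/iris example above:
# face similarity 91 on [0, 100], iris dissimilarity 0.27 on [0, 1].
# The 0.5/0.5 weights are arbitrary example values.

def fuse_scores(face_sim_0_100, iris_dis_0_1, w_face=0.5, w_iris=0.5):
    face_dis = 1.0 - face_sim_0_100 / 100.0      # 91 -> 0.09 (same scale, same sense)
    return w_face * face_dis + w_iris * iris_dis_0_1

print(fuse_scores(91, 0.27))                     # 0.5 * 0.09 + 0.5 * 0.27 = 0.18

# Illustrative decision level fusion with the "at least 2 Accepts" rule
decisions = {"face": True, "iris": False, "palmprint": True}
accept = sum(decisions.values()) >= 2            # True: the claim is accepted
```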

1.4 Motivation of Thesis

There exist several potentially viable physiological and behavioral biometric traits. Each biometric trait has its own unique and complex anatomical structure, and the dynamics of this structure accounts for its discriminative power as well as its stability.

Iris

The iris is a ring made up of tissues. The detailed iris related anatomical features are described in [12] and are shown in Fig. 1.9(a). The thin circular diaphragm between the cornea and the lens is called the iris, and it has an abundance of micro-textures such as crypts, furrows, ridges, coronas, freckles and pigment spots. These textures are randomly distributed; hence they are believed to be unique [32]. The iris texture is very fine and most of its details are developed during embryonic development. The textures of two subjects are believed to be unique, and even the right eye of a subject differs from the left eye [21]. The iris is a naturally well-protected biometric as compared to the other traits and is also assumed to be invariant to age.

Issues such as iris occlusion due to eyelids and eyelashes and specular reflection remain to be addressed. Accurate iris localization is also a challenge. A huge amount of effort is required to improve the accuracy and reliability of any iris based system. Estimating the iris quality is another important issue that has to be resolved. In this thesis, we have addressed some of these issues.

Figure 1.9: Iris, Knuckle and Palmprint Anatomy: (a) Iris Anatomy, (b) Knuckle Anatomy, (c) Palmprint Anatomy

Knuckleprint

The anatomical structure of the knuckleprint is shown in Fig. 1.9(b). Rich line-like pattern structures (i.e., knuckle lines) in vertical as well as horizontal directions exist over the knuckleprint. These horizontal and vertical pattern formations are believed to be very discriminative [80]. The knuckleprint texture is developed very early and lasts very long, because it occurs on the outer side of the hand and is rarely used in any work (except by boxers). Negligible wear and tear, as well as negligible print quality degradation with time and age, is observed. Its failure to enroll rate is also expected to be very low as compared to the fingerprint, and it does not require much user cooperation. Since it is a very new trait, there exist several challenges that have to be addressed.

The central knuckle line extraction is a major issue that is required to register knuckleprints. There is a need to design reliable feature selection as well as knuckleprint matching algorithms. Estimating the quality of a knuckleprint is a challenge that requires significant attention. This thesis has addressed some of these issues.

Palmprint

The inner part of the hand is called the palm, and the extracted region of interest between the fingers and the wrist is termed the palmprint, which is shown in Fig. 1.9(c). Even monozygotic twins are found to have different palmprint patterns [37]. Pattern formation within this region is supposed to be stable as well as unique [9]. A huge amount of texture in the form of palm-lines, ridges, wrinkles, etc. is available over the palmprint, as shown in Fig. 1.9(c). A prime advantage of the palmprint over the fingerprint is its higher social acceptance, because it has never been associated with criminals. It has a larger ROI area as compared to fingerprint images, which ensures an abundance of structural features including principal lines, wrinkles, creases and texture patterns. Due to the larger ROI, even low resolution palmprint images can be used, which enhances the system's speed and reduces its cost. There are several challenges that are to be addressed. Accurate palmprint localization is one of the key issues required to extract registered palmprints. One needs to design reliable feature selection as well as palmprint matching algorithms. Computing the quality of a palmprint is a challenge that requires significant attention. This thesis has dealt with some of these issues.

Multi-biometrics

A comparative study between iris, knuckleprint and palmprint is presented in Table 1.1. It has been observed experimentally that such a fusion of two or more biometric modalities facilitates the system in rejecting imposters much more confidently, and hence boosts the overall system performance significantly.

Table 1.1: Biometric Properties (L = Low; M = Medium; H = High)

Property         Meaning                                                  Iris   Palmprint   Knuckleprint
Universality     Every individual must possess it                          M        M            M
Uniqueness       Features should be distinct across individuals            H        M            M
Permanence       Features should be constant over a long period of time    H        H            H
Collectability   Trait can be easily acquired                              L        M            H
Performance      Possesses high performance                                H        M            M
Acceptability    Acceptable to a large percentage of the population        L        M            H
Circumvention    Difficult to mask or manipulate                           M        M            H

Most of the state-of-the-art uni-modal biometric based authentication systems perform block by block and pair-wise matching [6], [7], [9], [36], [65], [76]; hence they require pre-registered images. But they cannot produce highly accurate systems. A multi-modal system makes use of several biometric traits to enhance the system's performance. Multimodal systems are more relevant when the number of enrolled users is very large. The false acceptance rate grows rapidly with the increase in the size of the database [5]; hence multiple traits are used to achieve better performance. Multi-modal systems also enable us to deal with a missing trait. The spoof vulnerability is much lower than that of any unimodal system; hence it is ideal for outdoor and non-controlled environments. To the best of our knowledge, not much work has been done in this area, mainly due to the non-availability of multi-modal datasets. Iris, knuckleprint and palmprint have uncorrelated features; hence this thesis has considered these traits for fusion to enhance the system performance. Further, these traits have never been used together to design a multimodal system in the past. All of them are well protected and their textures cannot be altered or deformed easily.

1.5 Performance Analysis Metrics

It is necessary to analyze the performance of any biometric system. The inferences drawn from performance analysis are used in judging its suitability for an application. There exist several performance measures to analyze a verification or identification system.

Verification Performance Metrics

Like any pattern recognition system, there are two types of errors, viz. the False Acceptance Rate (FAR) and the False Rejection Rate (FRR). When two feature vectors are matched, a matching score is generated. This score is either a dissimilarity or a similarity score. For a dissimilarity (similarity) score, if it is less (greater) than a predefined threshold, we assume that the two feature vectors are matched. FAR is the probability of wrongly accepting an imposter as a genuine user. More clearly, if we perform N distinct imposter matchings and M of them are wrongly accepted as genuine matchings, then FAR is given by:

FAR = (M / N) × 100%   (1.1)

Similarly, FRR is defined as the probability of wrongly rejecting a genuine user. That means, if we perform N distinct genuine matchings and M of them are wrongly rejected, then FRR is given by:

FRR = (M / N) × 100%   (1.2)

If we plot FAR or FRR for various thresholds, we get a curve which is known as the FAR or FRR curve. Sample FAR and FRR curves are shown in Fig. 1.10.

Figure 1.10: Graphical Representation of FAR and FRR: (a) Hypothetical FAR Curve, (b) Hypothetical FRR Curve

[a] EER: The Equal Error Rate (EER) is the value of FAR for which FAR and FRR are equal. That means, EER is the point of intersection of the FAR and FRR curves. Also, if we draw a curve of FAR vs FRR for all thresholds and draw a line at 45 degrees from the origin, then the EER is the point of intersection of that line with the FAR vs FRR curve. It is shown in Fig. 1.11(a).

[b] Accuracy: If T is the threshold for which (FAR + FRR)/2 is minimum over all thresholds, then we define the accuracy at T as

Accuracy = 100 - (FAR_T + FRR_T) / 2 %   (1.3)

where FAR_T and FRR_T are the FAR and FRR at threshold T. The threshold T at which the combination of FAR and FRR gives the highest accuracy is considered the optimum threshold. It can be observed that the accuracy may not be maximum at the threshold corresponding to the EER.
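Both quantities can be computed directly from lists of genuine and imposter similarity scores by sweeping the threshold. The following sketch is illustrative only and is not the evaluation code used in this thesis; it assumes higher scores mean more similar.

```python
# Illustrative computation of FAR, FRR (Eqs. 1.1-1.2), EER and accuracy (Eq. 1.3)
# from genuine and imposter similarity scores (higher score = more similar).

def far_frr(genuine, imposter, threshold):
    far = 100.0 * sum(s >= threshold for s in imposter) / len(imposter)   # Eq. (1.1)
    frr = 100.0 * sum(s < threshold for s in genuine) / len(genuine)      # Eq. (1.2)
    return far, frr

def eer_and_accuracy(genuine, imposter, steps=1000):
    lo, hi = min(genuine + imposter), max(genuine + imposter)
    eer, best_gap, best_acc = 0.0, float("inf"), 0.0
    for i in range(steps + 1):
        t = lo + (hi - lo) * i / steps
        far, frr = far_frr(genuine, imposter, t)
        if abs(far - frr) < best_gap:                         # EER: FAR and FRR equal
            best_gap, eer = abs(far - frr), (far + frr) / 2.0
        best_acc = max(best_acc, 100.0 - (far + frr) / 2.0)   # Eq. (1.3)
    return eer, best_acc
```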

Figure 1.11: Graphical Representation of EER and Accuracy: (a) Hypothetical EER Curve, (b) Hypothetical Accuracy Curve

[c] Receiver Operating Characteristics (ROC) Curve: It is a graph plotting FAR against various FRRs. It helps to analyze the behavior of FAR against FRR, as shown in Fig. 1.11(a). It quantifies the discriminative power of the system between the genuine and imposter scores. An ideal ROC curve would include a point at FRR = 0, FAR = 0, which signifies EER = 0. The curve provides a good way to compare the performance of two biometric systems. The lower the ROC curve (towards both coordinate axes), the better the system, since the area under the curve (i.e., the error) is smaller.

[d] Error under ROC Curve (EUC): It is a scalar quantity defined as the area under the ROC curve. It estimates the amount of error incurred while making decisions on genuine and imposter matchings.

[e] Decidability Index: It measures the separability between the imposter and genuine matching scores and is defined by:

d' = |µ_G - µ_I| / sqrt((σ_G² + σ_I²) / 2)   (1.4)

where µ_G and µ_I are the means and σ_G and σ_I are the standard deviations of the genuine and imposter scores, respectively.
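Eq. (1.4) transcribes directly into code; this small sketch is illustrative only.

```python
# Illustrative computation of the decidability index d' of Eq. (1.4).
from statistics import mean, pstdev

def decidability(genuine, imposter):
    mu_g, mu_i = mean(genuine), mean(imposter)
    sd_g, sd_i = pstdev(genuine), pstdev(imposter)
    return abs(mu_g - mu_i) / (((sd_g ** 2 + sd_i ** 2) / 2.0) ** 0.5)
```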

Figure 1.12: Genuine and Imposter Score Distributions

In Fig. 1.12, the genuine and the imposter score distributions are shown. The higher the value of d', the better the separation between the two distributions, and hence the smaller the error.

Identification Performance Metrics

For identification, performance analysis is done using the correct recognition rate (CRR) and the genuine vs imposter best match graph.

[a] CRR: The correct recognition rate (CRR) is also known as the Rank-1 accuracy. It is defined as the ratio of the number of correct (non-false) top best matches to the total number of matchings performed over the query set. More clearly, if we have N images in the test set and, out of these, M images have obtained a non-false top best match, then the CRR is given by:

CRR = (M / N) × 100%   (1.5)

[b] Genuine vs Imposter Best Match Graph (GvI Graph): This graph shows the separation of the genuine and imposter best matching scores by plotting the best genuine and the best imposter scores for all probe images. From the graph, the separation between the best genuine and the best imposter matching scores can be analyzed visually. In Fig. 1.13, one such plot is shown, from which one can observe that the genuine matching scores are well separated from the imposter scores; the overlapping scores correspond to errors.

Figure 1.13: Genuine vs Imposter Best Matching Score Graph
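The following is a sketch of Eq. (1.5), together with the per-probe best genuine and best imposter scores that a GvI graph plots. It is illustrative only: `score` stands for any similarity measure, and every probe identity is assumed to be present in the gallery.

```python
# Illustrative rank-1 CRR (Eq. 1.5) plus the per-probe best genuine and best
# imposter scores plotted in a GvI graph. `score(p, g)` is any similarity
# measure; each probe identity is assumed to be enrolled in the gallery.

def crr_and_gvi(probes, probe_ids, gallery, gallery_ids, score):
    correct, gvi = 0, []
    for p, true_id in zip(probes, probe_ids):
        scored = [(score(p, g), gid) for g, gid in zip(gallery, gallery_ids)]
        best_score, best_id = max(scored)
        correct += (best_id == true_id)                  # non-false top best match
        best_gen = max(s for s, gid in scored if gid == true_id)
        best_imp = max(s for s, gid in scored if gid != true_id)
        gvi.append((best_gen, best_imp))
    return 100.0 * correct / len(probes), gvi            # CRR in %, GvI points
```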

1.6 Thesis Contribution

This thesis deals with the problem of designing efficient biometric systems. It has considered three biometric traits, viz. iris, knuckleprint and palmprint. Finally, it has proposed an efficient multimodal biometric system which makes use of all three traits. Score level fusion is performed to obtain the matching score of the multimodal system. The contributions of this thesis span all stages involved in the development of a biometric system.

ROI Extraction

The data acquisition system captures the biometric samples, and the region of interest (ROI) must be extracted from the acquired samples. We have proposed efficient algorithms to extract the ROI from iris, knuckle and palm samples.

1. Iris ROI Extraction: It requires the exact localization of the inner as well as the outer iris boundary. Improved Hough and integro-differential transformations are used in such a way that they complement each other to extract the inner and outer boundaries respectively. The iris ROI is extracted efficiently using the modified Hough transform and the sector restricted integro-differential transform.

2. Knuckleprint ROI Extraction: The knuckleprint contains a distinct type of texture and structure. The Gabor filter is modified into the Curvature Gabor (CG) filter to model the knuckleprint ROI. It is used as a template to localize the central phalangeal joint and to segment the knuckleprint ROI.

3. Palmprint ROI Extraction: The palmprint has a very peculiar and well defined structure. Some landmark key-points, such as valley and hill points, are used to segment the palmprint ROI.

Quality Estimation

The quality of the extracted biometric ROI plays a significant role in the overall performance of any system. Hence, several general as well as trait specific quality parameters are proposed to estimate the quality of any iris, knuckleprint or palmprint sample.

1. Iris Quality Estimation: The quality of an iris sample is modeled as a function of six attributes, namely focus, motion blur, occlusion, contrast and illumination, dilation and specular reflection.

2. Knuckleprint Quality Estimation : The knuckleprint image quality is obtained by computing the amount of well-focused edges, the amount of clutter, the distribution of focused edges, the block-wise entropy of focused edges, the reflection caused by the light source or camera flash and the amount of contrast.

3. Palmprint Quality Estimation : The palmprint image quality is obtained by estimating the amount of three primary features, namely palm principal lines, ridges and wrinkles.

Biometric Feature Enhancement and Transformation

An enhancement algorithm which can be used to obtain robust iris, knuckle and palmprint ROIs has been proposed. ROIs are transformed using the proposed local gradient based binary pattern to obtain a highly discriminative as well as robust texture representation.

Biometric Feature Extraction and Matching

A feature extraction and matching algorithm is proposed to perform biometric sample identification/recognition. The discriminative corner features are used for matching. The matching algorithm uses the concept of sparse point tracking under three constraints, viz. vicinity, correlation and patch-wise error bounds. The proposed matching algorithm is parameterized and is fine-tuned to make the matching efficient.

Multimodal Biometric System

We have proposed an efficient multimodal biometric system using three traits, viz. iris, knuckleprint and palmprint. Since the same matching algorithm is used to match iris, knuckle and palm, simple score level fusion is done. To analyze the performance
of the system, we have created several chimeric multi-modal biometric databases. The system is optimized to perform efficiently so that it can be considered as a real-time application. In this thesis, we have shown that uncorrelated features of different modalities can be fused to achieve better performance.

1.7 Thesis Organization

This thesis contains eight chapters. A general biometric system with its properties has been discussed in the first chapter, along with the motivation of the thesis. The literature review on each of the three traits, viz. iris, knuckleprint and palmprint, is presented in Chapter 2, together with some well known work on various multi-modal biometric systems. Some image processing and computer vision based techniques which are used to design our systems are presented in the next chapter.

In Chapter 4, an efficient iris based recognition system has been proposed. Each iris is efficiently segmented using an improved circular Hough transform for inner iris boundary (i.e. pupil) detection. The robust integro-differential operator, which makes use of the pupil location, is used to detect the outer iris boundary. If the quality of the acquired iris sample is less than a predefined threshold then the image is recaptured. This early quality assessment is very crucial to handle the problem of poor quality and non-ideal imagery. The segmented iris is normalized to polar coordinates (i.e. a rectangular strip) and the LGBP (Local Gradient Binary Pattern) algorithm is proposed to obtain robust features. The corner features are extracted and matched using the proposed dissimilarity measure CIOF (Corners having Inconsistent Optical Flow). The system has been tested over the publicly available CASIA 4.0 Interval and Lamp iris databases which consist of 2,639 and 16,212 images respectively. In the next chapter, a knuckleprint based recognition system has been proposed.

The segmentation of the knuckleprint is done by estimating the central knuckle-line. It uses a new modified version of the Gabor filter to extract the knuckleprint. We have proposed a technique to estimate the quality of the acquired knuckleprint sample; if it is less than a predefined threshold then the sample is recaptured. The segmented knuckleprint ROI is preprocessed using the proposed LGBP (Local Gradient Binary Pattern) to obtain robust features. The corner features are extracted and matched using the proposed dissimilarity measure, CIOF (Corners having Inconsistent Optical Flow). The proposed system has been tested over the publicly available PolyU knuckleprint database consisting of 7,920 images.

An effective palmprint based recognition system has been proposed in Chapter 6. The palmprint segmentation is done by obtaining the two valley points and then a square shaped ROI is extracted. The quality of the acquired palmprint sample is estimated using the proposed quality assessment parameters and, if the quality is less than a predefined threshold, it is recaptured. The segmented palmprint is preprocessed using the proposed LGBP (Local Gradient Binary Pattern) to obtain robust features. The KLT based corner features are extracted and matched using the proposed dissimilarity measure CIOF (Corners having Inconsistent Optical Flow). The proposed system has been tested over the publicly available CASIA and PolyU palmprint databases which consist of 4,528 and 7,720 images respectively.

In Chapter 7, an efficient multi-modal recognition system has been proposed. Four different fused multimodal systems, viz. iris and knuckleprint, knuckleprint and palmprint, palmprint and iris, and finally iris, knuckleprint and palmprint together, have been proposed and these systems are analyzed on the chimeric databases created by us. Conclusions along with the future scope of work have been presented in the last chapter.

Chapter 2

Literature Review

This chapter presents the literature survey on the work carried out on these three traits.

2.1 Iris based Biometric System

The iris ROI should be accurately localized, which is challenging due to the large variation in illumination and other extrinsic factors. The iris sample suffers from dimensional inconsistency among eye images, which is caused by pupil dilation, head movement, eye movement etc. Hence, the ROI is normalized to a rectangular strip of fixed size.

The pioneering work on iris recognition systems is done by Daugman [19]. The integro-differential operator is used to find circular boundaries by detecting a heavy jump or drop in the summation of pixel intensities over the circle. The multi-scale quadrature 2D Gabor wavelet coefficients are used. A binary feature vector (i.e. iriscode) of 256 bytes is generated, as shown in Fig. 2.1(b). Feature vectors obtained from two iris images are matched using the Hamming distance. Wildes [69] has used the circular Hough transform which finds the circle that fits max-

64 30 CHAPTER 2. LITERATURE REVIEW (a) Localized Iris (b) iriscode Figure 2.1: Images are taken from [19, 18] imum edge points. Huang et.al [31] have used rescaled image to find iris boundaries and have used it to guide the search on the original image. It has improved the approach presented in [69]. The algorithm proposed by Liu et al. [42] segments the iris image by ignoring high intensity edge pixels around specular reflections. Lili and Mei [41] have found a coarse location of iris. It is based on the assumption that image histogram has three main peaks and they are due to pupil, iris and sclera. He and Shi [29] have proposed an approach in which image is binarized to locate the pupil. It uses edge detection and circular Hough transform to find the limbic boundary. Feng et al. [25] have used coarse-to-fine strategy to find iris boundary and have suggested an improvement to use lower part of the pupil because it is stable even under occlusion. Xu et al. [72] have divided the image into grids and have used the minimum mean intensity across the grids as a threshold. It has considered to locate pupil and subsequently the limbic boundary. Zaim et al. [74] have used a split and merge algorithm to detect pupil as the connected region of uniform intensity. Camus and Wildes [13] have presented an algorithm which is based on Daugman s integro-differential operator that searches in cubic space of (x, y, r). Instead of applying the operator directly taking all points as candidate center points, they have used local minimas of intensity as seed points, making the process faster by

times as compared to Daugman's algorithm [18]. The actual iris center is examined by measuring the image patch gradient information. Also, uniformity is measured across the eight rays passing from the assumed potential center at a π/4 angle difference, as shown in Fig. 2.2. It gives an accuracy of 99.5% in the case of no glasses and of 66.6% with glasses.

Figure 2.2: Mapping of radial rays into a polar representation; pupil and iris boundaries become vertical edges [13]: (a) Incorrect Center, (b) Correct Center

Bonney et al. [11] have extracted the pupil by using the least significant bit-plane along with erosion and dilation operations. Using the pupil area, the standard deviation in the horizontal and vertical directions is computed to search for the limbic boundary; both boundaries are modeled as ellipses, as shown in Fig. 2.3. This method does not rely on circular edge detection; hence it is not restricted to circular irises.

Figure 2.3: (a) Most Significant bit-plane 7, (b) Bit-plane 6, (c) Bit-plane 5, (d) Bit-plane 4, (e) Bit-plane 3, (f) Bit-plane 2, (g) Bit-plane 1, (h) Least Significant bit-plane 0 [11].

66 32 CHAPTER 2. LITERATURE REVIEW In [30], He et al. have proposed a Viola and Jones style cascade of classifiers [67] for detecting the presence of pupil region and for optimizing the boundary of pupil. Miyazawa and Ito [50] have thresholded iris image and have used the center of gravity of binary image to find out pupil center followed by ellipse fitting to determine pupil boundary. The iris boundary is modeled as an ellipse and is found using the integro-differential operator. Pupil center is detected using the center of (a) Changing Iris (b) 10 parameter based deformable iris model Figure 2.4: Iris Deformable Model [50] gravity of the binarised image. The two ellipses, as shown in Fig. 2.4 are extracted by computing a parameter based deformable iris model as shown in Fig. 2.4(b). Segmented iris is used to extract features and matching is performed using band limited phase only correlation, (BLP OC). Preprocessing of an iris image is shown in Fig The recent trend in iris segmentation deals with the off-angle images i.e. person looking at some angle rather than straight. Dorairaj et al. [22] have proposed the global ICA encoding for non-ideal iris based recognition. It has used an initial available estimate of angle of rotation and then the integro-differential operator for ellipse has been used for detection and refinement of result. The off-angle image is projected to front-view image. The overview of the proposed scheme is shown in Fig. 2.6(b).

67 2.1. IRIS BASED BIOMETRIC SYSTEM 33 Figure 2.5: Iris Image Preprocessing [50] : (a) Original Image, (b) Detected Inner Boundary, (c) Detected Outer Boundary, (d) Lower Half of Iris, (e) Normalized, (f) Eyelid Masking and (g) Enhanced Image (a) Non-Ideal Iris Samples (b) Overview Figure 2.6: Overview of Non-ideal Iris Recognition System [22] Li et al. [40] have fitted an ellipse to pupil boundary and have used rotation and scaling to transform it to circular boundary. It is shown that this calibration improves the intra-class and inter-class separation that is achieved by Daugman-like recognition algorithm [19]. One of the major factors that reduces the iris quality significantly is off-angle.

68 34 CHAPTER 2. LITERATURE REVIEW It is almost impossible to give iris data without any off-angle. Abhyankar et al. [2] have proposed an approach involving bi-orthogonal wavelet networks for nonideal imagery. Non-ideal and poor quality iris imaging may cause irrecoverable loss of information as iris segmentation starts to behave erroneous. The circle fitting algorithm performs very poorly when one attempts to segment off-angle images, as shown in Fig Some projective geometry and affine transformations are used Figure 2.7: Off Angle Iris Segmentation [2] to distribute the radial resolution uniformly in all quadrants. Synthetic iris samples are generated by rotating them at an angle of 10 and 20. Active shape model (ASM) [3] is used to find elliptical iris boundaries from off-angle images. In [19], gabor wavelet responses are quantized to generate feature vector and matching is done using hamming distance. The phase demodulation process is used to encode iris patterns. Local regions of an iris are projected onto quadrature 2D Gabor wavelets. The angle of each phase is quantized to one of the four quadrants by setting two bits of phase information as shown in Fig. 2.8(a). In [69], hough transform is used for iris localization and Laplacian of Gaussian (LOG) is used for matching. In [45], vanishing and appearing of important image structures are considered as key local variations and dyadic wavelets are used to transform 2D image signals into 1D signals to obtain unique features. They have located positions of key variations accurately by easily computable dyadic wavelets. The overview of the system and the dyadic wavelet transformation of any 2D signal

69 2.1. IRIS BASED BIOMETRIC SYSTEM 35 (a) Phasor [19] (b) Overview [45] Figure 2.8: Two Most Popular Iris Systems (S(X)) are shown in Fig. 2.8(b). The response is converted into a 1D binary sequence and dissimilarity score is computed using hamming distance for matching. In [24], gabor wavelet with elastic graph matching is used for iris recognition. In [51], Discrete Cosine Transform (DCT ) coefficients are extracted from non-overlapping rectangular blocks at various angles and of variable sizes and are quantized. The binary feature for each rectangular block is computed. Average is done across the width to get a 1D vector that is windowed using the hanning window to suppress the spectral leakage. The DCT is applied over it and the DCT coefficients are used as features. The differences between the DCT coefficients of adjacent patch is binarised to obtain a binary code using zero crossings. Steps involved in feature extraction are given in Fig The hamming distance based matching is performed for score computation. In [50], Phase Only Correlation (P OC) and Band Limited Phase Only Correlation (BLP OC) are used for accurate iris recognition. In [60], variational model is applied to localize iris while modified contribution selection algorithm (M CSA)

is used for iris feature ranking.

Figure 2.9: Steps Involved in Feature Vector Extraction [51]

In [66], compact and highly efficient ordinal measures are applied for iris recognition. The relationship between iris patches selected over the complete normalized iris is considered to be robust, as shown in Fig. 2.10(a). Hence, such ordinal relationships can be used as a key signature of the iris patch, and this ordinal relationship is encoded as a binary feature. These ordinal relationships between iris patches are efficiently computed using multi-lobe differential filters (MLDF filters), as shown in Fig. 2.10(b). Also, there are some significant contributions like the application of Principal Component Analysis (PCA) and Independent Component Analysis (ICA) in iris recognition [17], [31]. A comprehensive survey on iris biometrics is available in [12].

2.2 Knuckleprint based Biometric System

The finger knuckleprint is a relatively new trait and only a limited amount of work has been done on it. In [77], Zhang et al. have extracted the knuckleprint ROI using convex direction coding, as shown in Fig. 2.11. The correlation between two knuckleprints is used for identification. It is calcu-

71 2.2. KNUCKLEPRINT BASED BIOMETRIC SYSTEM 37 (a) Ordinal Relationship (b) Multi-lobe Differential Filters (MLDF Filters) Figure 2.10: Ordinal Measures for Iris Recognition [66] Figure 2.11: Knuckleprint Acquisition System [77], (a) Device, (b) Raw Sample, (c) ROI Localized, (d) Cropped. lated using band limited phase only correlation (BLP OC). The knuckleprints are assumed to possess small range of frequencies and hence, the Fourier based frequency band is restricted and phase only correlation (P OC) is calculated. The P OC value between two images can be used as similarity score and is obtained using the crossspread spectrum of the Fourier transform of both images. In [52], knuckleprints are enhanced using CLAHE to address non-uniform reflection because they can be affected due to noise and improper illumination. There

72 38 CHAPTER 2. LITERATURE REVIEW Figure 2.12: Ensemble of Local and Global Features [80] can be some variation becasue of scale and rotation; hence, scale invariant feature transform (SIF T ) key-points can be used for matching. In [71], a knuckleprint based recognition system that extracts features using local gabor binary patterns has been proposed. Gabor filter is applied over a pixel and its neighborhood. A discriminating local pattern is extracted to represent that pixel. In [80], local and global features are fused to achieve better performance. Global features are extracted by band limited phase only correlation (BLP OC) [77] while local features are found using gabor filter [35] which is a Gaussian envelop modulated

by a sinusoidal plane wave. It ensures that the major contribution in the convolution response comes from the small local patch. The local orientation of a pixel is estimated by applying six Gabor filters at angular intervals of π/6. The maximally responsive Gabor filter is used to code that pixel. Encoding is done into a three-bit code for each pixel. Hamming distance is used for matching. Both local and global scores are fused to get a better result.

In [79], a bank of six Gabor filters at angular intervals of π/6 is applied to extract features for those pixels that have varying Gabor responses. All pixels do not contribute to discrimination equally, as some of them may lie on a similar background patch; hence, they may not possess any dominant orientation. Such pixels are ignored and the orientation code is computed as shown in Fig. 2.13(b). Similarly, the magnitude code is obtained from the real and imaginary parts of the Gabor filters, as shown in Fig. 2.13(c). The orientation and magnitude information are fused to achieve better results.

Figure 2.13: Improvement over Gabor based CompCode: (a) Original Image, (b) Orientation Code (ImCompCode), (c) Magnitude Code (MagCode) (Images taken from [79])

In [78], three local features, viz. phase congruency, local orientation and local phase, as shown in Fig. 2.14, are extracted. All of them are computed using quadrature pair filters and are fused at the score level. Finally, all are fused along with the Gabor based local features and the BLPOC based global features [80] to achieve better performance.

Figure 2.14: Local Feature Extraction

2.3 Palmprint based Biometric System

Palmprint recognition systems are broadly based on structural and statistical features. In [28], line-like structural features are extracted by applying morphological operations over edge-maps. In [23], structural features such as points on the principal lines and some isolated points are utilized for palmprint authentication. Gabor based directional filtering is used to develop several palmprint based recognition systems such as [35], [36], [65], [76]. In [76], a single fixed-orientation Gabor filter is applied over the palmprint and the resulting Gabor phase is binarized using zero crossings. In [36], a bank of elliptical Gabor filters with different orientations is employed to extract the phase information of the palmprint image, which is merged according to a fusion rule to produce the feature vector. In [35], the palmprint is processed using a bank of Gabor filters with different orientations and the highest filter response is preserved as the feature. Further, to improve the performance, a modified algorithm using fuzzy C-means is used to cluster the orientations of the Gabor filters in [73]. In [65], the palmprint is processed using a bank of orthogonal Gabor filters and their differences are considered as palmprint features. All these systems use the Hamming distance for matching two palmprints. Statistical techniques such as Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Independent Component Analysis (ICA) and their combinations are also applied to palmprints to achieve better performance [43], [62], [70]. Several other techniques such as Stockwell [6], Zernike moments [7], Discrete Cosine Transform (DCT) [8] and Fourier [9] transforms are also applied to achieve

75 2.3. PALMPRINT BASED BIOMETRIC SYSTEM 41 good performance. The palmprint samples are partitioned into several different ways to compliment the transform or the method that is finally used to extract the features. Figure 2.15: Partitioning for Zernike based Palmprint System [7] In [7], each palmprint is partitioned into non-overlapping blocks as shown in Fig From each block, Zernike moment based features are computed. The order of the moment signifies the degree of the detailed information. Higher the order, more is the information. Lower order moments are used to compute features as they can coarsely represent the image information. Blocks are also weighed based on the entropy of the block and the binarised feature vector is matched using the hamming distance. In [9], palmprint samples are divided into overlapping square blocks as shown in Fig The 2D block is converted into two 1D signal by performing averaging operation in horizontal and vertical directions. The phase difference between horizontal and vertical signals is binarised using zero-crossing to obtain the binary vector. The hamming distance is used for matching. In [6], the palmprint is divided into circular strips. The radius of the strip is a variable and is optimized for performance as shown in Fig Similar to Fourier transform, Stockwell transform can also be used for spectral decomposition. But the

76 42 CHAPTER 2. LITERATURE REVIEW Figure 2.16: Partitioning for Phase based Palmprint System [9] Figure 2.17: Partitioning for Stockwell based Palmprint System [6] instantaneous phase used by Stockwell transform produces feature vector that encodes both phase and time which is more useful than only phase or magnitude as we can get from Fourier transform. Hence, features using Stockwell transform are computed from this annular ring as shown in Fig The 2D annular ring is converted into 1D average vector over which Stockwell transform is applied. These features are binarised and hamming distance is used for matching. In [8], the palmprint samples are divided into oriented rectangular boxes of variable size and orientation as shown in Fig The 2D DCT can perform the image compression and hence, is used to save energy of the image more efficiently.

Figure 2.18: Partitioning for the DCT based Palmprint System [8]

The DCT based feature for each rectangular block is computed. For each rectangle, the average in the vertical direction is computed to get a 1D vector that is windowed using the Hanning window to suppress spectral leakage. On each windowed signal, the DCT is applied and the coefficients are used as features. Features of adjacent blocks are subtracted and binarised to obtain the binary feature vector. The Hamming distance is used to determine the matching score.

2.4 Multi-modal Biometric System

Not much work has been reported in this area, largely because of the non-availability of multi-modal biometric databases. In [64], 2D discrete wavelets have been used to extract low dimensional features from iris and face. A reduced joint feature vector set is obtained using Direct Linear Discriminant Analysis (DLDA), which is finally used for classification. In [58], face and iris (both left and right) are fused using SIFT feature vectors. The classification is done using nearest neighborhood ratio matching. In [81], iris and face are considered; PCA coefficients and Daugman's Gabor filter approach are used for the face and iris images respectively. Scores are fused after min-max normalization.
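For concreteness, the following minimal Python sketch shows the generic pattern of min-max normalization followed by a (weighted) sum rule, which is one common way such score level fusion is realized; it is our own illustration and not the exact scheme of any cited work. The function names are ours, and the sketch assumes all modalities produce scores with the same polarity.

    import numpy as np

    def min_max_normalize(scores):
        s = np.asarray(scores, dtype=float)
        return (s - s.min()) / (s.max() - s.min() + 1e-12)

    def sum_rule_fusion(score_lists, weights=None):
        # score_lists: one array of match scores per modality (same ordering of comparisons)
        norm = [min_max_normalize(s) for s in score_lists]
        w = np.ones(len(norm)) / len(norm) if weights is None else np.asarray(weights, float)
        return sum(wi * si for wi, si in zip(w, norm))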

In [59], scores obtained by eigenfinger and eigenpalm are fused, while in [39] hand shape and palm features are fused. In [38], finger geometry and dorsal finger surface information are fused to improve the performance as compared to the unimodal version. In [49], features which are automatically detected by tracking are encoded using efficient directional coding, while the Ridgelet transform is used for feature matching, and these scores are fused using an SVM. In [48], 1D Gabor filters are used to extract features from knuckleprint and palmprint. In [82], fusion of knuckleprint and palmprint information is done at the score level: sharp edge based knuckleprint features are denoised using wavelets, corner features with their local descriptors are considered for palmprint images, and finally matching is done by a cosine similarity function and a hierarchical hand metric. In [53], Radon and Haar transforms are used for feature extraction from knuckleprints of multiple fingers and a nonlinear Fisher transformation is done for dimensionality reduction; matching is done using Parzen window classification. In [47], score level fusion is performed on palmprints and knuckleprints using the phase only correlation (POC) function.

Chapter 3

Preliminaries

There exist several techniques for extracting features from iris, knuckle and palmprint. These techniques can broadly be divided into two categories: global and local. A global feature extraction technique considers the whole image to obtain features, while local techniques consider only a few neighboring pixels or fixed-size local patches. Some of the well known global extraction techniques are designed with the help of Gabor filters, local binary patterns, Radon based transforms, Gaussian based ordinal measures, wavelets, the Stockwell transform, Zernike moments and the discrete cosine transform (DCT). Similarly, there are several local feature extraction techniques; some of them are PCA, LDA, Gabor PCA, LBP PCA, DFT etc. The advantage of global features is that they can be computed at once for the whole image; hence they are easy to compute and fast. But the full-image approximation may create the problem of under-estimation. Local features are extracted for each pixel, block or key point; hence, they are computationally intensive. But performance-wise, they are found to be better because of the effective use of a smaller neighborhood and similar patch approximation.

Generally, features of iris, knuckle and palmprint are converted into binary vectors
and Hamming distance based measures are used to compute matching scores between two feature vectors. One of the disadvantages of such a matching strategy is that if the images are not well registered, the performance of the system gets severely affected. This is because it never attempts to compute a pixel or patch level correspondence.

This chapter discusses some well known feature extraction techniques which are used to design biometric systems based on iris, knuckleprint and palmprint. These techniques are the local binary pattern (LBP) [54], Phase Only Correlation (POC) [80], KLT corner extraction [63] and the LK-tracking algorithm [44].

3.1 Local Binary Pattern (LBP)

The LBP is a well known texture operator [54] which is simple, computationally efficient and effective. It assumes that any image texture has some pattern with an associated strength. It is largely invariant to illumination, and the LBP code takes values in the range 0 to 255; hence, the LBP response can itself be realized as a gray-scale image. It is based on the assumption that a pixel's relative gray value with respect to its 8-neighborhood pixels can be more stable than its own intensity value. An 8-bit local binary pattern is computed by thresholding the 8 neighbors with respect to the center pixel and representing the ordinal relation as shown in Fig. 3.1. The LBP features of each image can be computed in the form of LBP histograms. These histograms are saved in the form of 1-D vectors and are used to compute distances. Let S and M be the histograms of the probe image and the gallery image respectively. To discriminate histogram features, there exist several possible dissimilarity measures as discussed below.

Figure 3.1: LBP Computation

Histogram intersection:

D(S, M) = Σ_i min(S_i, M_i)   (3.1)

Log-likelihood statistic:

L(S, M) = − Σ_i S_i log M_i   (3.2)

Chi square statistic (χ²):

χ²(S, M) = Σ_i (S_i − M_i)² / (S_i + M_i)   (3.3)

where S_i and M_i are the i-th bins of histograms S and M respectively.

The LBP based histogram features can be used for face recognition. Let us assume that we have two facial images, probe and gallery, between which we have to find the dissimilarity score. Regions like the left eye, right eye, nose and lips of each facial image are divided into 8 × 8 = 64 blocks. For each block, a histogram is
calculated using the decimal values of the binary patterns as labels. The concatenation of the histograms of all blocks in all regions acts as the feature vector of that image (say H_probe and H_gallery for the probe and gallery images respectively). This feature vector is used for the calculation of dissimilarity between the probe and the gallery images. The Chi square (χ²) measure can be used to obtain the dissimilarity between the probe and gallery images as

χ²(H_probe, H_gallery) = Σ_{i=0}^{255} (H_probe^i − H_gallery^i)² / (H_probe^i + H_gallery^i)   (3.4)

where H_probe^i and H_gallery^i are the values of the i-th label of the probe and gallery histograms respectively.

Figure 3.2: Application of LBP in Face Recognition: (a) Original, (b) Key Region, (c) LBP Histogram; (d) Original, (e) Key Region, (f) LBP Histogram
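The following minimal Python sketch illustrates the basic LBP code, its 256-bin histogram and the χ² dissimilarity of Eq. (3.3)/(3.4). It is illustrative only: the bit-ordering of the 8 neighbours is an arbitrary convention of ours, and the function names do not come from the thesis.

    import numpy as np

    def lbp_image(img):
        # 8-bit LBP: threshold each pixel's 8 neighbours against the centre pixel
        img = np.asarray(img, dtype=float)
        c = img[1:-1, 1:-1]
        shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
        code = np.zeros(c.shape, dtype=int)
        for bit, (dy, dx) in enumerate(shifts):
            nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
            code += (nb >= c).astype(int) << bit        # set bit when neighbour >= centre
        return code.astype(np.uint8)

    def lbp_histogram(code):
        h, _ = np.histogram(code, bins=256, range=(0, 256))
        return h.astype(float)

    def chi_square(S, M, eps=1e-12):
        # Eq. (3.3): chi-square dissimilarity between two histograms
        return np.sum((S - M) ** 2 / (S + M + eps))

In practice the histograms would be computed per block and the per-block χ² values summed, as Eq. (3.5) below describes.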

The overall dissimilarity score for the probe and gallery images can be expressed as

D(probe, gallery) = Σ_{Region Blocks} χ²(H_probe, H_gallery)   (3.5)

It can be noted that since D provides the dissimilarity information between two samples, the lower the value of D, the better is the match.

3.2 Corner Point Detection

Some key points that are visually significant lie on edges or sharp discontinuities. But all points on the edges of an image cannot be considered as key feature points because they all look similar along the edge. Corners have strong derivatives in two orthogonal directions and can provide robust information for tracking. We have considered corner points as features because of their repeatability and discrimination. The autocorrelation matrix M [63] can be used to detect corner points that have strong orthogonal derivatives. The matrix M can be defined for the pixel at the i-th row and the j-th column of an image as:

M(i, j) = [ A  B ]
          [ C  D ]   (3.6)

where
A = Σ_{−K≤a,b≤K} w(a, b) · I_x²(i + a, j + b)
B = Σ_{−K≤a,b≤K} w(a, b) · I_x(i + a, j + b) · I_y(i + a, j + b)
C = Σ_{−K≤a,b≤K} w(a, b) · I_y(i + a, j + b) · I_x(i + a, j + b)
D = Σ_{−K≤a,b≤K} w(a, b) · I_y²(i + a, j + b)   (3.7)

and w(a, b) is the weight given to the neighborhood; I_x(i + a, j + b) and I_y(i + a, j + b) are the partial derivatives sampled within a patch of size (2K + 1) × (2K + 1) centered at the pixel (i, j). However, all neighbors may not have the same weight. The matrix M has two eigenvalues λ_1 and λ_2 such that λ_1 ≥ λ_2, with e_1 and e_2 as the corresponding eigenvectors. All pixels having λ_2 ≥ T (smaller eigenvalue greater than a threshold) are considered as corner points. An example is shown in Fig. 3.3.

Figure 3.3: Corner Features shown in Red: (a) Original, (b) Extracted Corners
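A minimal Python sketch of this minimum-eigenvalue corner test is given below. It assumes uniform weights w(a, b) = 1, uses numpy's gradient as the derivative approximation, and the threshold value is purely illustrative; it is a slow reference implementation of the idea rather than the thesis's actual code.

    import numpy as np

    def corner_points(img, K=2, thresh=1e3):
        # Keep pixels whose smaller eigenvalue of M = [[A, B], [C, D]] exceeds a threshold
        I = np.asarray(img, dtype=float)
        Ix = np.gradient(I, axis=1)
        Iy = np.gradient(I, axis=0)
        H, W = I.shape
        corners = []
        for i in range(K, H - K):
            for j in range(K, W - K):
                px = Ix[i - K:i + K + 1, j - K:j + K + 1]
                py = Iy[i - K:i + K + 1, j - K:j + K + 1]
                A, D = np.sum(px * px), np.sum(py * py)
                B = np.sum(px * py)          # equals C for this symmetric matrix
                # smaller eigenvalue of the symmetric 2x2 matrix [[A, B], [B, D]]
                lam2 = 0.5 * ((A + D) - np.sqrt((A - D) ** 2 + 4.0 * B * B))
                if lam2 > thresh:
                    corners.append((i, j, lam2))
        return corners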

3.3 Sparse Point Tracking (KLT)

The feature point correspondence problem is the key issue in authentication. We have used corner points as the feature points; the corner point correspondence issue can be tackled with the help of a suitable tracking algorithm. The KL-tracking algorithm [44] localizes the set of feature points of an image in another image when the two images are taken of the same object but at different times. This tracking algorithm estimates the optical flow vector for each sparsely populated feature point and uses that flow vector to estimate the final position of the feature in the subsequent image.

Let there be a feature at location (x, y) at time instant t with intensity I(x, y, t), and let this feature move to the location (x + δx, y + δy) at time instant t + δt. Three basic assumptions that are used by KL tracking to perform tracking successfully are:

Brightness Consistency: Features in a frame do not change much for a small change in the value of δt, i.e.

I(x, y, t) ≈ I(x + δx, y + δy, t + δt)   (3.8)

Temporal Persistence: Features in a frame move only within a small neighborhood. It is assumed that features do not move much for a small change in the value of δt. Using the Taylor series and neglecting the higher order terms, one can expand I(x + δx, y + δy, t + δt) to obtain

(δI/δx) δx + (δI/δy) δy + (δI/δt) δt = 0   (3.9)

Dividing both sides of Eq. 3.9 by δt, one gets

I_x V_x + I_y V_y = −I_t   (3.10)

where V_x, V_y are the respective components of the optical flow velocity for
the pixel I(x, y, t), and I_x, I_y and I_t are the derivatives in the corresponding directions.

Spatial Coherency: In Eq. 3.10, there are two unknown variables V_x and V_y for every feature point. Hence, finding a unique V_x and V_y for every feature point is an ill-posed problem (there are two variables to evaluate using only one equation). The spatial coherency assumption is used to solve this problem. It assumes that a local mask of pixels moves coherently. Therefore, one can estimate the motion of the central pixel by assuming a locally constant flow. The KL tracking uses a non-iterative method by considering the flow vector (V_x, V_y) as constant within a 5 × 5 neighborhood (i.e. 25 neighboring pixels P_1, P_2, ..., P_25) around the current feature point (the center pixel) to estimate its optical flow. This assumption is fair, as all pixels within a 5 × 5 mask can be expected to move coherently. Hence, one obtains an over-determined system of 25 linear equations, which can be represented by

[ I_x(P_1)    I_y(P_1)  ]               [ I_t(P_1)  ]
[   ...         ...     ]  [ V_x ]  = − [   ...     ]
[ I_x(P_25)   I_y(P_25) ]  [ V_y ]      [ I_t(P_25) ]
          C                   V              D           (3.11)

where the rows of the matrix C are the derivatives of the image I in the x and y directions and those of D are the temporal derivatives at the 25 neighboring pixels. This system of linear equations is used to compute the estimated flow V of the current feature point using the least squares method. The 2 × 1 matrix V is given by

V = (CᵀC)⁻¹ Cᵀ (−D)   (3.12)
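A minimal sketch of this local least-squares step is shown below, assuming the spatial derivatives Ix, Iy and the temporal difference It between the two frames have already been computed; the window size corresponds to the 5 × 5 neighborhood above, and the helper name is ours.

    import numpy as np

    def lk_flow_at(Ix, Iy, It, y, x, half=2):
        # Solve the over-determined system C.V = -D over a (2*half+1)^2 window
        # by least squares, i.e. V = (C^T C)^{-1} C^T (-D) of Eq. (3.12)
        win = np.s_[y - half:y + half + 1, x - half:x + half + 1]
        C = np.stack([Ix[win].ravel(), Iy[win].ravel()], axis=1)   # 25 x 2
        D = It[win].ravel()                                        # 25 x 1
        V, *_ = np.linalg.lstsq(C, -D, rcond=None)                 # (V_x, V_y)
        return V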

The final location F of any feature point is estimated with the help of its initial position vector Î and the estimated flow vector V as

F = Î + V   (3.13)

Figure 3.4: Hand Tracking Application: (a) Hand Image, (b) Hand Tracking (Feature Correspondence)

A hand tracking system can make use of the tracking algorithm, as shown in Fig. 3.4. It can estimate the sparse optical flow using the KL-tracking algorithm. The optical flow, which is nothing but the direction of feature motion (i.e. the motion vectors), is computed for each feature over all video frames. Hence, feature correspondence can be used to track a hand in a video while it is moving freely.

3.4 Phase Only Correlation (POC)

The POC can play an important role in registering two images. Let f and g be two M × N images. Assume M = 2M_0 + 1, N = 2N_0 + 1 and that the image center is at location (0, 0). If F and G are the Discrete Fourier Transforms (DFT) of the f and g
images respectively, then

F(u, v) = Σ_{m=−M_0}^{M_0} Σ_{n=−N_0}^{N_0} f(m, n) e^{−j2π(mu/M + nv/N)} = A_F(u, v) e^{jθ_F(u, v)}   (3.14)-(3.15)

G(u, v) = Σ_{m=−M_0}^{M_0} Σ_{n=−N_0}^{N_0} g(m, n) e^{−j2π(mu/M + nv/N)} = A_G(u, v) e^{jθ_G(u, v)}   (3.16)-(3.17)

The cross phase spectrum between G and F is given by:

R_GF(u, v) = G(u, v) F*(u, v) / |G(u, v) F*(u, v)| = e^{j(θ_G(u, v) − θ_F(u, v))}   (3.18)

Figure 3.5: Phase Only Correlation (POC) between two translated images: (a) Original Face, (b) Translated Face, (c) Sharp Peak. Image (c) represents the corresponding POC values at each spatial location.

The Phase Only Correlation (POC) is the IDFT of R_GF(u, v) and is given by:

P_gf(m, n) = (1 / MN) Σ_{u=−M_0}^{M_0} Σ_{v=−N_0}^{N_0} R_GF(u, v) e^{j2π(mu/M + nv/N)}   (3.19)

From [50], we can state the following. When the two images are the same, the POC function P_gf(m, n) becomes the Kronecker delta function δ(m, n). If the two images are similar, the POC function gives a distinct sharp peak; otherwise, the peak value
drops significantly. Hence, the peak height of P_gf(m, n) can be considered as a similarity measure. The image in Figure 3.5(b) is a translated version of the image in Figure 3.5(a). One can clearly observe from Fig. 3.5(c) that there exists a sharp distinct peak which points to the location (t_x, t_y) at which the translation is most likely. In other words, the image in Fig. 3.5(b) can be registered by translating the image in Fig. 3.5(a) by t_x units in the x-direction and t_y units in the y-direction.
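A minimal Python sketch of POC using numpy's FFT is given below. It is only an illustration of Eqs. (3.18)-(3.19): numpy places the image origin at the top-left rather than at the center, so the peak location gives the translation modulo the image size; the function name is ours.

    import numpy as np

    def phase_only_correlation(f, g):
        # Cross phase spectrum R_GF = G.conj(F) / |G.conj(F)|, then its inverse DFT
        F = np.fft.fft2(f)
        G = np.fft.fft2(g)
        R = G * np.conj(F)
        R = R / (np.abs(R) + 1e-12)
        poc = np.real(np.fft.ifft2(R))
        peak = poc.max()                                        # similarity measure
        ty, tx = np.unravel_index(np.argmax(poc), poc.shape)    # translation estimate (wraps around)
        return peak, (ty, tx), poc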


Chapter 4

Iris Recognition System

This chapter deals with the problem of designing an efficient iris based recognition system. Two state-of-the-art techniques (the integro-differential operator and the Hough transform) are applied in designing the system so that they complement each other for efficient iris segmentation. The iris contains a good amount of texture, but it needs to be enhanced. Identification/authentication is performed by tracking corner features. Like any other biometric system, the iris based recognition system consists of five major tasks, viz. ROI extraction, quality estimation, ROI preprocessing, feature extraction and matching. The overall architecture of the proposed iris based recognition system is shown in Fig. 4.1.

4.1 Iris ROI Extraction

This section proposes an efficient iris segmentation technique. It approximates the pupil and limbic boundaries by circles. It extracts the iris ROI by applying the Hough and integro-differential transforms. A modified Hough transform is used to locate the inner boundary and then a sector based integro-differential operator is applied to localize the outer iris boundary. The detected iris is projected into a rectangular strip which
is shown in Fig. 4.2.

Figure 4.1: Overall Architecture of the Proposed Iris Recognition System

Figure 4.2: Iris Segmentation: (a) Original Iris, (b) Donut Shaped, (c) Cropped Iris, (d) Normalized Iris

Iris Inner Boundary Localization

The pupil of an eye can be modeled as a dark circular region within the iris. Each eye image is scaled down for faster processing and thresholded based on image brightness to filter out pixels of the pupil and to get a binary image I_t. This reduces
the computation for the pupil boundary (i.e. the iris inner boundary) to only the dark pixels in the image. However, there may be some eyelashes, eyebrows or shadow points which can act as noise in inner boundary detection. Also, specular reflection on the pupil may cause incorrect detection of the boundary. Hence, the morphological operation of flood-filling with four neighbors is applied to the binary image I_t. Pixels which cannot be reached by flood-filling from the background form holes and are removed. Thus, specular reflection inside the pupil region and other dark spots caused by eyelashes are removed, as shown in Figure 4.3(b), represented as I_tf.

Figure 4.3: Iris Pupil Segmentation (Thresholding): (a) I1 (Original), (b) I1_tf (Thresholded)

The Sobel filters in both the horizontal and vertical directions are applied on the iris image to obtain the horizontal and vertical gradient images. The Sobel derivative is an approximation to the image intensity gradient in a given direction. The gradient magnitude image I_g is obtained by

I_g(x, y) = √(I_gh²(x, y) + I_gv²(x, y))   (4.1)

where I_gh and I_gv are the horizontal and vertical gradient images. The gradient magnitude image I_g is thresholded to generate a binary image I_gb, as shown in Figure 4.4(b), representing only the strong edges. Edge orientation at any
pixel (x, y) in the generated binary image is obtained by:

θ(x, y) = tan⁻¹( I_gv(x, y) / I_gh(x, y) )   (4.2)

Figure 4.4: Iris Pupil Segmentation (Gradient based Thresholding using Sobel): (a) I1 (Original), (b) I1_gb (Sobel)

An improved circular Hough transform [10] is applied over I_tf to detect the pupil boundary. The top left-most corner is considered as the origin. The original Hough transform for circle detection has three parameters; hence the parametric space is 3D, viz. the x and y coordinates of the center and the radius r. In order to reduce the 3D parametric search space, the Hough transform is modified so that the search can be performed over a 1D space (i.e. only over the radius r).

Modified Hough Transform : Suppose an edge pixel (x, y) in the generated binary image lies on a circle. For any fixed radius, there can only be two potential pupil centers (c_x1, c_y1) and (c_x2, c_y2). They must be on opposite sides of the tangent line to the circle at that point and must lie on the normal line, as shown in Fig. 4.5. These potential centers are calculated for different radii r using the normal to the orientation angle θ(x, y) by the following equations:

c_x1 = x + r sin(θ(x, y) − π/2)   (4.3)
c_y1 = y − r cos(θ(x, y) − π/2)   (4.4)
c_x2 = x − r sin(θ(x, y) − π/2)   (4.5)
c_y2 = y + r cos(θ(x, y) − π/2)   (4.6)

Figure 4.5: For any radius r, an edge point (x, y) has two candidate pupil centers (c_x1, c_y1) and (c_x2, c_y2)

Figure 4.6: Segmented Iris Pupil: (a) I1 (Original), (b) I1_ps (Pupil)

A voting scheme is devised in which every edge pixel casts a vote for its likely-to-be circle. A 3-D array A is created for storing the votes for every possible parameter set, where A(i, j, r) represents the number of edge pixels casting a vote for the circle having center at pixel (i, j) and radius r. For a fixed radius r, each edge pixel can vote for only two circles that have their centers at a distance of r from it on either side, as shown in Fig. 4.5. The values A(c_x1, c_y1, r) and A(c_x2, c_y2, r) are incremented by 1 for each edge point (x, y), provided the candidate points (c_x1, c_y1) and (c_x2, c_y2) lie within the image boundary of the generated binary image. The parameter set with the maximum votes gives the coordinates of the center and the radius of the iris inner boundary, as shown in Figure 4.6(b).
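A minimal Python sketch of this radius-restricted voting is given below. It follows Eqs. (4.3)-(4.6) directly; the (x, y) = (row, column) indexing of the image and the orientation map is an assumption of ours, the accumulator is kept deliberately simple, and the helper name is hypothetical.

    import numpy as np

    def hough_pupil(edge_points, theta, shape, r_min, r_max):
        # Each edge pixel votes, per radius, for the two candidate centres on the
        # normal to its local edge orientation (Eqs. 4.3-4.6).
        H, W = shape
        A = np.zeros((H, W, r_max + 1), dtype=np.int32)
        for (x, y) in edge_points:
            t = theta[x, y]
            for r in range(r_min, r_max + 1):
                for sign in (1, -1):
                    cx = int(round(x + sign * r * np.sin(t - np.pi / 2)))
                    cy = int(round(y - sign * r * np.cos(t - np.pi / 2)))
                    if 0 <= cx < H and 0 <= cy < W:
                        A[cx, cy, r] += 1                 # vote casting
        cx, cy, r = np.unravel_index(np.argmax(A), A.shape)
        return cx, cy, r                                  # pupil centre and radius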

The algorithm for inner boundary localization is described in Algorithm 4.1. Results obtained at each step of the algorithm are shown in Figure 4.7.

Figure 4.7: All Steps in Iris Pupil Segmentation: (a) I2 (Original), (b) I2_tf (Thresholded), (c) I2_gb (Sobel), (d) I2_ps (Pupil)

Iris Outer Boundary Localization

Figure 4.8: Application of the Integro-differential Operator: (a) Integro-differential Operator, (b) Area of iris considered for outer boundary localization

The contrast across the outer iris boundary is lower than that across the inner boundary. Thus, any edge detector may fail to detect the true edges. To tackle this issue, the circular integro-differential operator [19], which uses raw derivative information, is used.

Algorithm 4.1 Pupil Segmentation
Require: Iris image I of dimension m × n; p_rmin: minimum pupil radius; p_rmax: maximum pupil radius; t: binary threshold.
Ensure: Pupil center c_p with coordinates (c_x^p, c_y^p); pupil radius p_r.
1: I_t ← threshold(I, t)  // generate binary image after thresholding
2: I_tf ← remove_specular_reflection(I_t)  // flood filling
3: I_g, I_gh, I_gv ← Sobel(I_tf)  // Sobel edge detection
4: I_gb ← Threshold_Gradient(I_g)  // choosing the best edge points
5: E ← white pixels in I_gb  // collect the edge points in I_gb
6: for all edge pixels (x, y) ∈ E do
7:     θ(x, y) ← tan⁻¹(I_gv(x, y) / I_gh(x, y))  // edge orientation at a point
8: end for
9: A(m, n, p_rmax) ← 0  // 3-D array initialization for voting
10: for all edge pixels (x, y) ∈ E do
11:     for r = p_rmin to p_rmax do
12:         Compute (c_x1, c_y1), (c_x2, c_y2) by putting (x, y) and θ(x, y) in Eqs. (4.3)-(4.6)
13:         if point (c_x1, c_y1) lies within the I_gb image then
14:             A(c_x1, c_y1, r) ← A(c_x1, c_y1, r) + 1  // vote casting
15:         end if
16:         if point (c_x2, c_y2) lies within the I_gb image then
17:             A(c_x2, c_y2, r) ← A(c_x2, c_y2, r) + 1  // vote casting
18:         end if
19:     end for
20: end for
21: c_x^p ← argmax_(i) A(i, j, k)
22: c_y^p ← argmax_(j) A(i, j, k)
23: p_r ← argmax_(k) A(i, j, k)

Integro-differential operator: For any circle with center (c_x, c_y) and radius r, the integro-differential operator adds the intensity values over this circle and calculates the difference of this summation with that over the concentric circle of higher radius
(r + δr). The circle having the maximum jump in this summation gives the outer iris boundary, as shown in Fig. 4.8(a). The integro-differential operator can be expressed as:

G_σ(r) ∗ Σ_θ [ I(c_x − (r + δr) sin θ, c_y + (r + δr) cos θ) − I(c_x − r sin θ, c_y + r cos θ) ] / (2πr)   (4.7)

where G_σ(r) is a radial Gaussian with standard deviation σ and ∗ is the convolution operator. The arguments (c_x, c_y) and r for the maximum response value of this operator over the image give the coordinates of the iris center and the iris radius respectively.

Proposed Segmentation Algorithm

The image is smoothened using a 2-D Gaussian filter to remove any stray noise. The iris center and pupil center may not be concentric, but are usually close to each other. Hence, the iris center is searched within a W × W window around the pupil center (c_x^p, c_y^p). For each point within this window, the intensity values over an arc of (−π/4, π/6) ∪ (5π/6, 5π/4) radian angular sectors (with respect to the horizontal, as shown in Figure 4.8(b)) are summed up over the perimeter for each particular radius. We have chosen only these sectors because occlusion is empirically found to be minimum in them as compared to other iris areas. The gradient of this sum is calculated over all radii. The parameters of the circle with the maximum gradient are considered as the coordinates of the iris center and the iris radius; a small illustrative sketch of this sector-restricted search is given below.
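The following Python sketch is our own rough illustration of the sector-restricted search just described, not the thesis implementation. It assumes the image is indexed as img[x, y] consistently with I(c_x − r sin α, c_y + r cos α) in Algorithm 4.2 below; the window size W = 11 and the 60 angular samples per sector are arbitrary illustrative choices.

    import numpy as np

    def sector_integro_diff(img, cx_p, cy_p, r_min, r_max, W=11,
                            sectors=((-np.pi / 4, np.pi / 6), (5 * np.pi / 6, 5 * np.pi / 4))):
        # For every candidate centre in a W x W window around the pupil centre, sum the
        # intensities over the occlusion-free sectors; the radius with the largest jump
        # in this sum marks the outer iris boundary.
        angles = np.concatenate([np.linspace(a, b, 60) for a, b in sectors])
        H, Wd = img.shape
        half = W // 2
        best_jump, best_params = 0.0, None
        for cx in range(cx_p - half, cx_p + half + 1):
            for cy in range(cy_p - half, cy_p + half + 1):
                prev = None
                for r in range(r_min, r_max + 1):
                    xs = np.clip(np.round(cx - r * np.sin(angles)).astype(int), 0, H - 1)
                    ys = np.clip(np.round(cy + r * np.cos(angles)).astype(int), 0, Wd - 1)
                    s = float(img[xs, ys].sum())
                    if prev is not None and abs(s - prev) > best_jump:
                        best_jump, best_params = abs(s - prev), (cx, cy, r)
                    prev = s
        return best_params    # (iris centre x, iris centre y, iris radius)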

The algorithm for outer iris boundary localization is explained in Algorithm 4.2. It uses:
(a) the coordinates of the pupil center (c_x^p, c_y^p) and the original image of dimension m × n,
(b) the iris radius range (i_rmin, i_rmax),
(c) the search window of size W × W around the pupil center within which the iris center is assumed to be located, and
(d) an occlusion-free range of sector angles α_range.

The original image is smoothened to remove noise and the circular integro-differential operator is applied over the occlusion-free sectors using the candidate iris center points around the pupil center and the radii in the given range. The circular summation of intensity is computed for all candidate centers and all radii. The center coordinates and radius for which the maximum difference is obtained are considered as the iris center and radius. Algorithm 4.2 finds the parameters of the outer iris boundary. Some typical eye images after iris localization are shown in Figure 4.9.

Figure 4.9: Iris Localization (1st Row: Original Iris, 2nd Row: Segmented Iris)

Iris Normalization

The iris samples suffer from dimensional inconsistencies among eye images mainly due to the following reasons:
- Iris stretching caused by pupil dilation due to varying illumination
- Varying imaging distance
- Rotation of the camera
- Head tilt
- Rotation of the eye within the eye socket
Algorithm 4.2 Iris Segmentation
Require: Iris image I of dimension m × n; i_rmin: minimum iris radius; i_rmax: maximum iris radius; (c_x^p, c_y^p): pupil center; p_r: pupil radius; W: search window; α_range: angular range defining the occlusion-free sectors.
Ensure: Iris center c_i = (c_x^i, c_y^i); iris radius i_r.
1: I_s ← GaussSmooth(I, σ = 0.5, k = 3)  // k: kernel size, Gaussian noise removal
2: max_diff ← 0  // the maximum change in contour summation
3: for all points (c_x, c_y) ∈ [W × W] window around (c_x^p, c_y^p) do
4:     prev_sum ← 0  // previous circular summation of intensity values
5:     start_flag ← True  // no circle has yet been summed up
6:     for r = i_rmin to i_rmax do
7:         c_sum ← 0
8:         for all α ∈ α_range do
9:             c_sum ← c_sum + I(c_x − r sin(α), c_y + r cos(α))  // sector-wise summation
10:        end for
11:        diff_sum ← |c_sum − prev_sum|  // calculation of difference of sums
12:        prev_sum ← c_sum
13:        if diff_sum > max_diff and start_flag ≠ True then
14:            max_diff ← diff_sum
15:            c_x^i ← c_x, c_y^i ← c_y, i_r ← r  // update the parameters
16:        end if
17:        start_flag ← False  // a circle has been summed up
18:    end for
19: end for

The iris normalization process produces iris regions which have a constant dimension. Hence, two images of the same iris under different environmental conditions
have their unique and random characteristic features at more or less the same spatial location. Therefore, the iris can be transformed to a fixed dimension rectangular strip, as shown in Fig. 4.10(b).

Figure 4.10: Normalization of the Cropped Iris: (a) Segmented Iris ROI, (b) Normalized Iris ROI

For the normalization of the segmented iris, the rubber sheet model [20], which assumes the iris area to be stretched like rubber, is used. It handles the above mentioned inconsistencies by mapping the iris ring points from Cartesian to polar coordinates using the following equations:

I(x(r, θ), y(r, θ)) → I(r, θ)   (4.8)
x(r, θ) = (1 − r) · x_i(θ) + r · x_o(θ)   (4.9)
y(r, θ) = (1 − r) · y_i(θ) + r · y_o(θ)   (4.10)

where I denotes the segmented iris image, and (x_i(θ), y_i(θ)) and (x_o(θ), y_o(θ)) are the coordinates of the inner and the outer boundary points at an angle θ (w.r.t. the horizontal direction) from the pupil and iris centers respectively. The radius r ranges within the interval [0, 1]. The circular segmented iris region is divided into m equi-spaced concentric circles and n sectors to obtain an m × n sized rectangular iris strip, as shown in Fig. 4.10(b). Each intersection point is mapped to polar coordinates using equations (4.8) - (4.10).
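A minimal Python sketch of this rubber-sheet mapping is given below. The strip size m = 64, n = 512, the nearest-neighbour sampling and the img[row, column] indexing are our own assumptions for illustration; it simply evaluates Eqs. (4.8)-(4.10) on a regular (r, θ) grid.

    import numpy as np

    def rubber_sheet(img, pupil, iris, m=64, n=512):
        # pupil = (xp, yp, rp), iris = (xi, yi, ri): centres and radii of the two circles
        (xp, yp, rp), (xi, yi, ri) = pupil, iris
        strip = np.zeros((m, n), dtype=img.dtype)
        for col, theta in enumerate(np.linspace(0.0, 2 * np.pi, n, endpoint=False)):
            x_in, y_in = xp + rp * np.cos(theta), yp + rp * np.sin(theta)    # inner boundary point
            x_out, y_out = xi + ri * np.cos(theta), yi + ri * np.sin(theta)  # outer boundary point
            for row, r in enumerate(np.linspace(0.0, 1.0, m)):
                x = (1 - r) * x_in + r * x_out       # Eq. (4.9)
                y = (1 - r) * y_in + r * y_out       # Eq. (4.10)
                yy = int(np.clip(round(y), 0, img.shape[0] - 1))
                xx = int(np.clip(round(x), 0, img.shape[1] - 1))
                strip[row, col] = img[yy, xx]        # Eq. (4.8): sample I at (x(r, θ), y(r, θ))
        return strip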

4.2 Iris Quality Estimation

The quality of an image plays an important role in any identification system. The higher the quality, the lower are the false accept and false reject rates. The quality of an iris image is affected by the presence of eyelashes and eyelids over the iris region, lack of focus over the iris region, improper lighting conditions, dilation of the iris etc. In this section, an algorithm has been proposed to classify iris images into quality classes between 1 and 5, where a higher class index indicates better quality. The quality of the iris image has been modeled as a function of the following six attributes: Focus (F), Motion Blur (MB), Occlusion (O), Contrast and Illumination (CI), Dilation (D) and Specular Reflection (SR). Hence, the overall quality of the iris image can be represented by

Quality = f(F, MB, O, CI, D, SR)   (4.11)

where f is a function which is learned from the training data using a Support Vector Machine (SVM). Quality attributes from a training set of iris images, along with their respective true quality labels, are used to train the SVM classifier. This creates a model based classifier which can be used to predict the quality of an iris image. The six quality attributes computed are:

1. Focus (F) : It refers to the amount of blur due to defocus in the image.

2. Motion Blur (MB) : It is the blurring effect caused by the movement of the camera relative to the subject, or vice versa, during image acquisition.

3. Occlusion (O) : It is one of the major hurdles in iris recognition. It occurs due to eyelids, eyelashes, specular reflection and shadows.

4. Contrast and Illumination (CI) : It is defined by the range of the intensity levels. Any high contrast image generally has a uniformly distributed histogram.

5. Dilation (D) : It is a natural process of expansion of the pupil area due to the lack of proper lighting in the surroundings, which reduces the amount of available iris area.

6. Specular Reflection (SR) : Specular reflections are caused by the light source that is used to illuminate the iris. The iris is very reflective, so the light reflection is visible in the acquired image.

Focus (F) based Quality Attribute

The focus of an image refers to the amount of blur due to defocus in the image. It is the blurring introduced in the image when the focal point is outside the depth of field of the object being captured. The greater the distance, the higher the amount of defocus blur. The 2D Fourier spectrum of a well focused image is uniform, but for a defocused image the spectrum is concentrated more towards the low frequencies. The focus of an image can therefore be estimated by calculating the high frequency components in the image spectrum. This is done by convolving the image with the proposed 6 × 6 kernel K given in Eq. (4.12). This kernel approximates a high frequency band pass filter in the 2D Fourier spectrum. The higher the response from the image, the more focused the image is. For the proposed 6 × 6 kernel, the corresponding filter responses are shown in Figure 4.11. One can observe that well focused images have higher responses as compared to the
defocused images.

Figure 4.11: Focus Filter Response: (a) Focused Image, (b) Response, (c) Defocused Image, (d) Response

Motion Blur (MB) based Quality Attribute

The iris acquisition often gets affected by motion blur because iris movement is very hard to control. The iris image becomes blurred whenever there is a relative movement between the acquisition sensor and the subject's eye. The amount of blur is directly proportional to the amount of motion. The edge width at any edge pixel is used to compute the motion blur. The edge pixels in an image are computed using Sobel operators. The edge width at any edge pixel is estimated as the number of pixels between the local extremities on either side of the edge pixel, as shown in Fig. 4.12. It is assumed that blurred edges have a larger width than sharper edges. In [26], the amount of motion blur is computed by defining the Just Noticeable Blur (JNB), which is the minimum amount of blur required to be perceived by humans. The edge width from which the JNB becomes noticeable is called the JNB width. It is determined by the intensity value range within a small neighborhood, as given by

JNB_width = 5 if Intensity > 50, and 3 if Intensity ≤ 50   (4.13)

Figure 4.12: JNB Edge Width

The motion blur is computed for an image by dividing it into blocks of fixed size. The blocks having more edge pixels than a pre-defined threshold are called edge blocks and only these blocks are considered for computation. For every edge pixel in an edge block, the probability of blur is defined by a psychometric function

P(e_ij) = 1 − e^(−(edgewidth(e_ij) / JNB_width(B))^β)   (4.14)

where B is an edge block, P(e_ij) is the probability of blur, edgewidth(e_ij) is the edge width at e_ij and JNB_width(B) is the just noticeable blur width for the block B. The parameter β is empirically selected as 3.6. As the edge width increases, the probability of blur increases. The motion blur becomes noticeable when edgewidth(e_ij) is greater than or equal to JNB_width(B) for that block. From Eq. (4.14), it is seen that the probability of blur rises to 63.42% when edgewidth(e_ij) becomes equal to JNB_width(B). Hence, any edge pixel with a blur probability less than 64% is considered as a pixel of a sharp edge; otherwise, the pixel is considered as a blurred pixel. The motion blur parameter (MB) for an image is defined as the ratio of the total number of pixels belonging to blurred edges to the total number of edge pixels.
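As a small illustration of Eq. (4.14) and the 64% rule, the following Python sketch (our own, with per-pixel edge widths and JNB widths assumed to be already measured) computes the blur probability and the resulting MB ratio.

    import numpy as np

    def blur_probability(edge_width, jnb_width, beta=3.6):
        # Psychometric function of Eq. (4.14)
        return 1.0 - np.exp(-(np.asarray(edge_width, float) / np.asarray(jnb_width, float)) ** beta)

    def motion_blur_metric(edge_widths, jnb_widths):
        # MB = fraction of edge pixels whose blur probability reaches ~64%
        p = blur_probability(edge_widths, jnb_widths)
        return float(np.mean(p >= 0.64))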

Contrast and Illumination (CI) based Quality Attribute

The contrast of an image is defined by the range of its intensity levels. A high contrast image generally obeys a uniform intensity distribution; otherwise, the image becomes either too dark or too light. This attribute predicts whether the image is uniformly illuminated or not. The contrast can be estimated after removing the extreme gray values, which are basically noise. The range of pixel intensities is divided into three groups: (0, 35), (36, 220) and (221, 255). The intensities in the region (36, 220) indicate moderate intensity levels. The ratio of the number of pixels in this region to the total number of pixels is used to quantify the amount of uniformity in the illumination of the image.

Dilation (D) based Quality Attribute

Dilation of the pupil is a natural process of expansion of the pupil area due to the lack of proper lighting in the surroundings, which reduces the amount of available iris area. The dilation metric is defined as the ratio of the available iris region to the total area of the iris outer boundary circle. The dilation severely affects the normalization of the iris image. The higher the dilation in the iris image, the more is the pixel value redundancy in the normalized image. This also causes the iris texture pattern to shift its spatial position within the normalized iris image. Thus, the dilation attribute can be estimated by

dilation = (Area of iris) / (Area of iris outer circle) = 1 − r_p² / r_i²   (4.15)

where r_i and r_p are the radii of the iris outer boundary and the pupil's outer boundary respectively.
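Both of these attributes reduce to very small computations; the following Python sketch (illustrative only, assuming an 8-bit gray-scale image as a numpy array) shows the moderate-intensity ratio and Eq. (4.15).

    import numpy as np

    def contrast_illumination(img):
        # Fraction of pixels in the moderate intensity band (36, 220)
        flat = np.asarray(img).ravel()
        return float(((flat >= 36) & (flat <= 220)).mean())

    def dilation_metric(r_pupil, r_iris):
        # Eq. (4.15): available iris area over the area of the outer boundary circle
        return 1.0 - (r_pupil ** 2) / (r_iris ** 2)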

107 4.2. IRIS QUALITY ESTIMATION Specular Reflection (SR) based Quality Attribute Specular reflections are caused due to the light sources used to acquire a well illuminated iris image. But iris is very reflective; so the light reflection is visible in the acquired image. Reflection causes white spots in the image which affect the matching score due to missing/wrong data at that spot. The specular reflection can be identified by using an adaptive thresholding. The acquired image is successively thresholded to identify the most obvious reflection points in the image using a high pixel intensity threshold. Some area around the reflection may not be captured. The threshold is lowered by some value to obtain new threshold and the number of pixels is counted again. If the number of pixels is slightly increased, then the area of specular reflection patch is also improved a little. The threshold is further reduced and the process is repeated until the amount of specular reflection is saturated to ensure detection of all pixels affected by reflection Occlusion (O) based Quality Attribute One of the major hurdles in iris recognition is occlusion (hiding of iris) which occurs due to eyelids, eyelashes, specular reflection and shadows as shown in Figure Occlusion hides the useful iris texture and introduces irrelevant parts like eyelids and eyelashes which are not even an integral part of any iris image. Occlusion cannot be ignored because it may pose a difficulty to match intra-class irides and also may introduce false matches with other irides. Hence, it is important to locate and to segregate occlusion from a normalized image. Occlusion is detected from the normalized image, instead of original iris image, to reduce the working area which detects occlusion efficiently. It is done in three steps: eyelid detection followed by eyelash and reflection detection. [A] Eyelid Detection :

108 74 CHAPTER 4. IRIS RECOGNITION SYSTEM Figure 4.13: Occlusion in Normalized Iris Major portion of the occluded area in an iris image is constituted by lower and upper eyelids. Statistically, upper eyelid can be found at the center of left half while lower eyelid at the center of the right half of the normalized image. One cannot discard some area of eyelids heuristically from the normalized image because their sizes are different due to the degree of occlusion; also their locations are not perfectly determined. Eyelids have almost uniform texture and a boundary flooded with eyelashes and shadows. Traditional techniques of parabola/ellipse fitting over eyelids may fail due to the noisy edge information and non-standard shape of eyelids. Eyelids may be extended up to the pupil as well. It means that its upper boundary may not be visible in a normalized image. These challenges have motivated us to use region-growing approach to determine the eyelids. It uses only the texture information to separate eyelid region from the rest. Region growing is a morphological flooding operation which helps to find objects of uniform texture. Since eyelids have uniform texture, this operation helps to find the eyelid region. In [4], a set S of seed points is selected from the image lying in the region S d, that is required to be detected. All pixels in 8-neighborhood of any pixel in the set S are checked for their intensity difference with the mean intensity of the set S. Pixels which are having this difference less than a certain threshold are added to S. This process is iterated until no pixel can be added further. Finally, S covers the desired region S d. Figure 4.14 shows the results of the region-growing algorithm on an image after a few iterations. Eyelid detection from the normalized iris strip of size r c requires two seed points for region-growing, one for each lower and upper eyelid. They are selected as (r, c) 4

109 4.2. IRIS QUALITY ESTIMATION 75 (a) Start of growing a region (b) Growing Process after few iterations Figure 4.14: Application of Region-Growing and (r, 3c ) for upper and lower eyelid respectively as shown in Fig These 4 two seed points are chosen because after normalization, upper and lower eyelids are centered mostly at ( ) π ( 2 and 3π ) 2 angles w.r.t. x-axis. Region-growing begins with these seeds using a low threshold and expands the region until a dissimilar region is encountered. This gives the expected lower and the expected upper eyelid regions. Detection eyelids are shown in Figure To obtain both lower and upper eyelid regions, Algorithm 4.3 can be used with different seed points. Region-growing overcomes the shape irregularity of eyelids and gives the exact area which is occluded by eyelids. It fails when eyelid boundary does not have good contrast and hence, it grows outside the eyelid region. To prevent this, region-growing is repeated with a lower threshold so that it does not grow outside the statistical bounds for eyelid regions. If region grows beyond a limit, it indicates that there is no eyelid. Finally, a binary mask is generated in which all eyelid pixels are set to 1. An example is shown in Figure (a) Normalized Image (b) Eyelid Mask Figure 4.15: Eyelid Regions: Arrows Denote the Direction for Region-Growing
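A compact sketch of this region-growing step is given below; Algorithm 4.3 that follows gives the corresponding pseudocode. The threshold value, the incremental mean update and the greedy best-neighbor selection over an 8-neighborhood are illustrative choices, not the exact thesis implementation.

```python
import numpy as np

def region_grow(img, seed, t=12):
    """Grow a region of near-uniform intensity from `seed` (row, col): at each
    step the unallocated 8-neighbor closest to the region mean is added while
    its absolute difference from the mean stays below the threshold t."""
    h, w = img.shape
    img = img.astype(np.float64)
    region = {seed}
    mean = img[seed]
    cand = set()

    def add_neighbors(p):
        i, j = p
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                q = (i + di, j + dj)
                if (di or dj) and 0 <= q[0] < h and 0 <= q[1] < w and q not in region:
                    cand.add(q)

    add_neighbors(seed)
    while cand:
        best = min(cand, key=lambda p: abs(img[p] - mean))
        if abs(img[best] - mean) > t:
            break                                       # no similar neighbor left
        cand.discard(best)
        region.add(best)
        mean += (img[best] - mean) / len(region)        # incremental mean update
        add_neighbors(best)

    mask = np.zeros((h, w), np.uint8)
    rows, cols = zip(*region)
    mask[rows, cols] = 1                                # 1 marks eyelid pixels
    return mask
```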

110 76 CHAPTER 4. IRIS RECOGNITION SYSTEM Algorithm 4.3 Eyelid Detection Require: Normalized Iris image NI of dimension m n, s : (s x, s y ): initial seed point in N I, t: region-growing threshold Ensure: Eyelid region LID, Eyelid Mask Mask eyelid 1: S s // Seed point is added to eyelid set S 2: M = NI(s x, s y ) // Mean of set S 3: while True do 4: min Infinity, minp oint φ 5: for all unallocated neighboring pixels (x, y) of S do 6: if NI(x, y) M < min then 7: min NI(x, y) M // absolute difference with region s mean 8: minp oint (x, y) // keep track of the best point 9: end if 10: end for 11: if min > t or size(s)==size(ni) then 12: break // stop region-growing 13: end if 14: S S minp oint // add the best point to the set 15: M mean(s) // update the mean value 16: end while 17: LID S 18: Mask eyelid NI(LID) // eyelid mask [B] Eyelash Detection : There are two types of eyelashes: separable and multiple. Separable eyelashes are like thin threads whereas multiple eyelashes constitute a shadow like region. Eyelashes have lower intensity compared to iris texture. But any predefined threshold which can be used to separate them from the rest of iris is difficult to be determined because it changes with illumination condition. Also, since the proportion of eyelashes in the image is not constant, histogram-based thresholding cannot be used. In this section, a new eyelash detection approach to detect both types of eyelashes has been proposed. Eyelashes have high contrast with their surrounding pixels, but having low in-

tensity. As a result, the standard deviation of gray values within a small region around separable eyelashes is high. The standard deviation for every pixel of the normalized image is computed over its 8-neighborhood; it is high in areas containing separable eyelashes. Multiple eyelashes may also have high standard deviation, but in addition they have dark intensity values. Hence the low gray value intensity is also given some weight. The computed standard deviation of each pixel is normalized using max-min normalization and stored in a 2D array SD. If SD is used alone to segregate eyelash regions, multiple eyelashes may not be detected, and iris texture that has large standard deviation at some points gets wrongly classified as eyelash. Hence, for each pixel a fused value F(i, j) is computed which considers both the standard deviation and the gray intensity of that pixel:

F(i, j) = 0.5 · SD(i, j) + 0.5 · (1 − N(i, j))        (4.16)

where N(i, j) is the normalized gray intensity value (0–1) and SD(i, j) is the standard deviation computed over the 8-neighborhood of pixel (i, j). This fused value F(i, j) widens the gap between the eyelash and non-eyelash parts. The image histogram F_H of F has two distinct clusters: a cluster of low values of F corresponding to iris pixels and a second cluster of high values of F representing eyelash pixels. To separate the two clusters, Otsu thresholding is applied to the histogram F_H of F to obtain the binary eyelash mask. Otsu's method considers all possible clusterings of the histogram and chooses the threshold that minimizes the intra-cluster variance, thereby separating the eyelash portion from the iris portion. The detected eyelashes of an iris image are shown in Figure 4.16. The algorithm for eyelash detection is presented in Algorithm 4.4.

[C] Reflection Detection :

Pixels which exceed a threshold value in the gray-scale image are declared as reflec-

112 78 CHAPTER 4. IRIS RECOGNITION SYSTEM Algorithm 4.4 Eyelash Detection Require: Normalized Iris image NI of dimension r c Ensure: Eyelash Mask M askeyelash 1: SD f ilter2d(n I, std(3, 3))// 2D filtering with 3x3 standard deviation filter 2: SD SD // normalize w.r.t. maximum value max(sd) 3: N = NI // Normalized intensity values(0-1) of N I 255 4: F = 0.5 SD (1 N) // fusion of std. deviation and intensity values. 5: F H imhist(f ) // image histogram 6: T hresh Otsu(F H ) // determine Otsu threshold 7: Mask eyelash threshold(f H, T hresh) // Otsu thresholding: Mask eyelash (x, y) is set only if (x, y) is an eyelash pixel (a) Normalized Image (b) Eyelash Mask Figure 4.16: Determination of Eyelash Region tions because reflections are very bright in every acquisition setting. Also, since occlusion due to reflection is not a major component, it is chosen not to do complex computation to remove reflection. Detected reflection from a sample image is shown in Figure A binary mask Mask reflection (reflection mask) is generated in which pixels affected by reflection are set to 1. (a) Normalized Image (b) Reflection Mask Figure 4.17: Reflection Detection by Thresholding Final occlusion mask is generated by addition (logical OR) of the binary masks of eyelid, eyelash and reflection. The algorithm to determine occlusion mask Mask occlusion from a normalized image is described in Algorithm 4.5. Detected occlusion of a sam-

113 4.2. IRIS QUALITY ESTIMATION 79 ple image is shown in Figure Algorithm 4.5 Occlusion Mask Determination Require: Normalized Iris image NI of dimension m n, Eyelid mask Mask eyelid, Eyelash Mask Mask eyelash, Specular Reflection Mask Mask reflection Ensure: Occlusion Mask (Mask occlusion ) 1: Mask occlusion Mask eyelid Mask eyelash Mask reflection // logical OR-ing 2: return (Mask occlusion ) (a) Normalized Image (b) Eyelid Mask (Mask eyelid ) (c) Eyelash Mask (Mask eyelash ) (d) Reflection Mask (Mask reflection ) (e) Occlusion Mask (Mask occlusion ) Figure 4.18: Overall Occlusion Mask Quality Class Determination The Support Vector Machine (SV M) is trained using a set of 1000 iris images from CASIA Lamp database. The actual quality classes (i.e ground truth) in the training set of images are assigned manually. This set of quality attributes along with the quality class label of the iris image is used to train the SV M to learn the function f as defined in Eq. (4.11). The trained classifier based model is used for future prediction of the quality. When a new iris image is given to the system, all quality parameters are normalized to the range (0,1) and are passed to the trained SV M classifier which provides the quality class corresponding to the iris image.
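As an illustration of this step, a minimal scikit-learn sketch is shown below. The feature ordering, the RBF kernel and the min-max scaling fitted on the training set are assumptions; the thesis only specifies that an SVM is trained on manually labelled quality classes of the six attributes.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

# X: one row per training iris image with the six quality attributes, e.g.
#    [Focus, MotionBlur, Contrast/Illumination, Dilation, Reflection, Occlusion]
# y: manually assigned quality class labels (ground truth)
def train_quality_model(X, y):
    scaler = MinMaxScaler()                 # normalize each attribute to (0, 1)
    clf = SVC(kernel='rbf')                 # kernel choice is an assumption
    clf.fit(scaler.fit_transform(X), y)
    return scaler, clf

def predict_quality(scaler, clf, attributes):
    a = scaler.transform(np.asarray(attributes, dtype=float).reshape(1, -1))
    return clf.predict(a)[0]                # predicted quality class
```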

114 80 CHAPTER 4. IRIS RECOGNITION SYSTEM (a) Image 1 (b) Image 2 (c) Image 3 (d) Image 4 (e) Image 5 Figure 4.19: Quality Estimation In Fig. 4.19, some images from the database are shown. Quality parameters of these images and the final quality score are given in Table 4.1. It shows that as the quality of image degrades, its overall quality score also reduces. A poor score is assigned to the image with higher occlusion. Image Focus Blur Occlusion Contrast Dilation Reflection Quality Image Image Image Image Image Table 4.1: Quality Parameters and Quality Class for Images in Fig Iris Image Preprocessing The extracted region of interest (ROI) of iris contains texture information but generally is of poor contrast. Suitable image enhancement technique is required to apply on the ROI. In order to obtain a robust representation that can tolerate small amount of illumination variation in iris images are transformed. In this section, the technique that we have used to enhance and to transform the normalized iris images is discussed.

115 4.3. IRIS IMAGE PREPROCESSING Iris Image Enhancement (a) Original Iris (b) Esti. Illumination (c) Uniform Illumination (d) Weiner Filtering Figure 4.20: Iris Texture Enhancement The iris texture is enhanced in such a way that it increases its richness as well as its discriminative power. The iris ROI is divided into blocks and the mean of each block is considered as the coarse illumination of that block. This mean is expanded to the original block size as shown in Fig. 4.20(b). Selection of block size plays an important role. It should be such that the mean of the block truly represents the illumination effect of the block. So, larger block may produce improper estimate. We have seen that a block size of 8 8 is the best choice for our experiment. The estimated illumination of each block is subtracted from the corresponding block of the original image to obtain the uniformly illuminated ROI as shown in Fig. 4.20(c). The contrast of the resultant ROI is enhanced using Contrast Limited Adaptive Histogram Equalization (CLAHE) [55]. It removes the artificially induced blocking effect using bilinear interpolation and enhances the contrast of image without introducing much external noise. Finally, Wiener filter [68] is applied to reduce constant power additive noise and the enhanced iris texture is obtained shown in Figure 4.20(d).
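The enhancement pipeline just described can be sketched as follows. The CLAHE clip limit, the Wiener window size and the nearest-neighbour expansion of the 8×8 block means are assumed values chosen for the example.

```python
import cv2
import numpy as np
from scipy.signal import wiener

def enhance_iris(roi, block=8):
    """Block-mean illumination removal + CLAHE + Wiener filtering."""
    roi = roi.astype(np.float64)
    h, w = roi.shape
    # coarse illumination: mean of each block, expanded back to full size
    small = cv2.resize(roi, (w // block, h // block), interpolation=cv2.INTER_AREA)
    illum = cv2.resize(small, (w, h), interpolation=cv2.INTER_NEAREST)
    uniform = roi - illum
    uniform = cv2.normalize(uniform, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # contrast enhancement with CLAHE (clip limit is an assumed value)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(uniform)
    # Wiener filter to suppress constant-power additive noise
    denoised = wiener(enhanced.astype(np.float64), (5, 5))
    return np.clip(denoised, 0, 255).astype(np.uint8)
```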

Iris Image Transformation

Figure 4.21: LGBP Transformation — (a) Original (b) Transformed (kernel=3) (c) Transformed (kernel=9) (Red: −ve gradient; Green: +ve gradient; Blue: zero gradient)

This step transforms the iris ROI into a representation that provides robust corner features. The gradient (approximated by a pixel difference) at an edge pixel is positive if the pixel lies on an edge created by a light-to-dark (i.e. high to low gray value) transition. Hence all edge pixels can be divided into three classes of positive, negative and zero gradient, as shown in Fig. 4.21. The Sobel kernel does not possess rotational symmetry; hence the more consistent Scharr kernels [61], obtained by minimizing the angular error, are applied:

Scharr_x = [ -3    0    3 ]        Scharr_y = [ -3  -10   -3 ]
           [ -10   0   10 ]                   [  0    0    0 ]
           [ -3    0    3 ]                   [  3   10    3 ]

The Scharr x-direction kernels of size 3 × 3 and 9 × 9 are applied to the image of Fig. 4.21(a); the resultant images are shown in Fig. 4.21(b) and 4.21(c). A bigger kernel produces coarser level features, as seen in Fig. 4.21(c). This gradient-augmented information at each edge pixel is more discriminative and robust. The transformation uses it to compute an 8-bit code for each pixel, using the x-direction and y-direction gradients of its 8 neighboring pixels to obtain the vcode and hcode respectively, as discussed below.

The vcode and hcode Generation : Let P_{i,j} be the (i, j)-th pixel of any biometric image P and Neigh[k], k = 1, 2, ..., 8 be the gradients of the 8 neighboring

pixels centered at pixel P_{i,j}, obtained by applying the x-direction or y-direction Scharr kernel for the vcode or hcode respectively. Then the k-th bit of the 8-bit code (termed the lgbp code) is given by

lgbp_code[k] = { 1, if Neigh[k] > 0
               { 0, otherwise                 (4.17)

In the vcode or hcode, every pixel is represented by its lgbp code, as shown in Fig. 4.22(d) and 4.22(g). The pattern of edges within a neighborhood is assumed to be robust; hence each pixel's lgbp code, which is simply an encoding of the edge pattern in its 8-neighborhood, is used. Moreover, the lgbp code of a pixel considers only the sign of the derivative within its neighborhood, which makes the proposed transformation robust to illumination variation.

Figure 4.22: Original Iris ROIs (a)-(c) and the Transformed vcode (d)-(f) and hcode (g)-(i)
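As an illustration, a minimal NumPy sketch of this encoding is given below. The neighbor ordering and the use of OpenCV's 3 × 3 Scharr operator are assumptions made for the example; the thesis also uses larger kernels.

```python
import cv2
import numpy as np

# Offsets of the 8 neighbors (clockwise from top-left); the ordering is an
# assumption — any fixed ordering gives a consistent 8-bit code.
NEIGH = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]

def lgbp_code(gray, direction='x'):
    """Per-pixel 8-bit code from the sign of the Scharr gradients of the
    8 neighbors (Eq. 4.17). direction='x' gives the vcode, 'y' the hcode."""
    dx, dy = (1, 0) if direction == 'x' else (0, 1)
    grad = cv2.Scharr(gray, cv2.CV_64F, dx, dy)
    sign = (grad > 0).astype(np.uint8)              # 1 where the gradient is positive
    h, w = gray.shape
    code = np.zeros((h, w), np.uint8)
    for k, (di, dj) in enumerate(NEIGH):
        shifted = np.zeros_like(sign)               # shifted[i, j] = sign[i+di, j+dj]
        shifted[max(0, -di):h - max(0, di), max(0, -dj):w - max(0, dj)] = \
            sign[max(0, di):h - max(0, -di), max(0, dj):w - max(0, -dj)]
        code |= (shifted << k)                      # the k-th neighbor sets bit k
    return code

# vcode, hcode = lgbp_code(roi, 'x'), lgbp_code(roi, 'y')
```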

118 84 CHAPTER 4. IRIS RECOGNITION SYSTEM 4.4 Iris Feature Extraction and Matching The corner features [63] are extracted from both vcode and hcode obtained from any iris ROI. The KL tracking [44] has been used to track the corner features in the corresponding images for matching two iris ROIs. The KL tracking makes use of three assumptions, namely brightness consistency, temporal persistence and spatial coherency. Hence its performance depends completely on how well these three assumptions are satisfied. It can be safely assumed that these three assumptions are more likely to be satisfied while tracking is performed between features of same subject (genuine matching) and degrades substantially for others (imposter matching). Therefore, one can infer that the performance of KL tracking algorithm is good in genuine matching as compared to the imposter ones Corners having Inconsistent Optical Flow (CIOF) The direction of pixel motion which termed as optical f low of that pixel, can be computed by KL-tracking algorithm. A dissimilarity measure CIOF (Corners having Inconsistent Optical Flow) has been proposed to estimate the KL-tracking performance. It evaluates two geometrical quantities for each potential matching feature pair given by KL-tracking algorithm. [a] Vicinity Constraints: Euclidean distance between any corner and its estimated tracked location should be less than or equal to an empirically selected threshold (T d ), which depends upon the amount of translation and rotation in the sample images. High threshold value signifies more translation and vise-versa. [b] Patch-wise Dissimilarity: Tracking error is defined as pixel-wise sum of absolute difference between a local patch centered at current corner and that of its estimated tracked location patch. This error should be less than or equal to an

119 4.4. IRIS FEATURE EXTRACTION AND MATCHING 85 empirically selected threshold (T e ), which ensures that the matching corners must have similar neighborhood patch around it. However, all tracked corner features may not be the true matches because of noise, local non-rigid distortions in the biometric samples and also less difference in inter class matching and more in intra class matching. Hence, the direction of pixel motion (i.e optical flow) for each pixel is used to prune out some of the false matching corners. Consistent Optical Flow: It can be noted that true matches have the optical flow which can be aligned with the actual affine transformation between two images that are being matched. The estimated optical flow angle is quantized into eight directions and the most consistent direction is the one which has the largest number of successfully tracked corner features. Any corner matching pair (i.e corner and its corresponding corner) having optical flow direction other than the most consistent direction is considered as false matching pair and has to be discarded Matching Algorithm Given vcode and hcode of two iris, Iris a to compute a dissimilarity score using CIOF measure. and Iris b, Algorithm 4.6 can be used The vcode of Iris a and Iris b are matched to obtain the vertical matching score while the respective hcode are matched to generate horizontal matching score. The corner features that are having their tracked position and local patch dissimilarity within the thresholds are considered as successfully tracked. Since both Iris a to Iris b and Iris b to Iris a matchings are considered, four sets of successfully tracked corners are computed viz. stc v AB, stch AB, stcv BA and stch BA. Out of these four corner sets, two correspond to vertical while other two correspond to horizontal matching. The optical flow which is the pixel motion direction is computed for each successfully tracked corner and is quantized into eight bins at an interval of π. Four histograms (of eight bins 8

120 86 CHAPTER 4. IRIS RECOGNITION SYSTEM each) are obtained, one for each set of successfully tracked corners represented by HAB v, Hh AB, Hv BA and Hh BA. The maximum value in each histogram represents the total number of corners having consistent optical flow and these are represented as cofab v, cof AB h, cof BA v and cof BA h. Finally they are normalized by the total number of corners and are converted into horizontal and vertical dissimilarity scores. The final score, CIOF (Iris a, Iris b ) is obtained by using sum rule of horizontal and vertical matching scores. Such a fusion can significantly boost-up the performance of the proposed system because some of the images are having more discrimination in vertical direction while others have it in horizontal direction. Algorithm 4.6 CIOF (Iris a, Iris b ) Require: (a) The vcode IA v,iv B of two biometric sample images Iris a, Iris b respectively (b) The hcode IA h,ih B of two biometric sample images Iris a, Iris b respectively. (c) Na v, Nb v,n a h and Nb h are the number of corners in Iv A, Iv B, Ih A, and Ih B respectively. Ensure: Return CIOF (Iris a, Iris b ). 1: Track all the corners of vcode IA v in vcode Iv B and that of hcode Ih A in hcode Ih B. 2: Obtain the set of corners successfully tracked in vcode tracking (i.e. stc v AB ) and hcode tracking (i.e. stc h AB ) that have their tracked position within T d and their local patch dissimilarity under T e. 3: Similarly compute successfully tracked corners of vcode IB v in vcode Iv A (i.e. stc v BA ) as well as hcode Ih B in hcode Ih A (i.e. stch BA ). 4: Quantize optical flow direction for each successfully tracked corners into eight directions (i.e. at an interval of π) and obtain 4 histograms 8 Hv AB, Hh AB, Hv BA and HBA h using these four corner set stcv AB, stch AB, stcv BA and stch BA respectively. 5: For each histogram, out of 8 bins the bin (i.e. direction) having the maximum number of corners is considered as the consistent optical flow direction. The maximum value obtained from each histogram is termed as corners having consistent optical flow represented as cofab v, cof AB h, cof BA v and cof BA h. 6: ciofab v = 1 cof AB v ; [Corners with Inconsis. Optical Flow (vcode)] Na v 7: ciofba v = 1 cof BA v ; [Corners with Inconsis. Optical Flow (vcode)] Nb v 8: ciofab h = 1 cof AB h ; [Corners with Inconsis. Optical Flow (hcode)] Na h 9: ciofba h = 1 cof BA h ; [Corners with Inconsis. Optical Flow (hcode)] Nb h 10: return CIOF (Iris a, Iris b ) = ciof AB v +ciof AB h +ciof BA v +ciof BA h ; 4
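To make the flow-consistency step of Algorithm 4.6 concrete, a small sketch of one directed matching is given below. It assumes the tracked corner pairs (with their displacements) have already been obtained with KL tracking and filtered by the T_d and T_e constraints; the eight-bin quantization follows the algorithm, while the array layout of the point lists is an assumption.

```python
import numpy as np

def inconsistent_flow_ratio(src_pts, dst_pts, n_corners):
    """Fraction of corners whose optical flow disagrees with the dominant
    direction, for one directed matching (e.g. A -> B) of Algorithm 4.6.
    src_pts/dst_pts: lists of (x, y) corner positions before and after tracking;
    n_corners: total number of corners detected in the source image."""
    if len(src_pts) == 0:
        return 1.0
    d = np.asarray(dst_pts, float) - np.asarray(src_pts, float)
    angles = np.arctan2(d[:, 1], d[:, 0]) % (2 * np.pi)
    bins = (angles // (np.pi / 4)).astype(int)       # quantize into eight directions
    consistent = np.bincount(bins, minlength=8).max()
    return 1.0 - consistent / n_corners              # ciof for this direction

# The final score then combines the four directed matchings (vcode/hcode, A->B, B->A),
# e.g. ciof = (ciof_v_ab + ciof_h_ab + ciof_v_ba + ciof_h_ba) / 4
```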

121 4.5. DATABASES Databases Two publicly available iris databases have been used to analyze the proposed iris system. All possible inter-session matchings are done to perform various experiments Databases Used CASIA V4 Interval Database [14]: It contains 2, 639 iris images which are collected from 249 subjects. It has 395 distinct irises and contains at-most 7 images of each iris. Left and right eye of the same individual are considered as two distinct iris since these iris patterns are distinct. Iris images are taken in two sessions under controlled environment. The time gap between two sessions is at least one month. The acquired images are of resolution and have very clear iris texture. (a) CASIA Interval Iris Database (b) CASIA Lamp Iris Database Figure 4.23: Iris Images CASIA V4 Lamp [14]: It consists of 16, 212 images acquired from 411 subjects. It contains 819 distinct irises and at most 20 images of each iris. These iris images are obtained in only one session under controlled environment with lamp

122 88 CHAPTER 4. IRIS RECOGNITION SYSTEM on/off. The image resolution is These images are found to be challenging for our study because there exists nonlinear deformation due to variation of visible illumination. Also, size of this database is huge and it is a challenge to get good results on any large database because number of false acceptances grows very fast with the increase in database size [5]. For both iris databases, specifications of images are given in Table 4.2 and sample images are shown in Fig Dataset Traits Subject Pose Total Imgs CASIA-Interval Left+Right Eye CASIA-Lamp Left+Right Eye Table 4.2: Iris Database Specifications Testing Strategy CASIA V4 Interval database : Images of the first session are taken as training while the remaining images are used for testing. Hence a total of 3, 657 genuine and 1, 272, 636 imposter matchings are performed for Interval database testing. CASIA V4 Lamp database : First 10 images are considered as training while the rest are used for testing. Hence there are 78, 300 genuine and 61, 230, 600 imposter matchings for Lamp database. Subject Pose Total Training Testing Genuine Matching Imposter Matching Casia V4 Interval (Iris) 249 (395 Iris) 7 2,639 First 3 Rest 3,657 1,272,636 Casia V4 Lamp (Iris) 411 (819 Iris) 20 16,212 First 10 Last 10 78,300 61,230,600 Table 4.3: Database and Testing Strategy Specifications

123 4.6. EXPERIMENTAL RESULTS 89 Default Parameters Db s t p rmin p rmax i rmin i rmax W α range(radians) Interval ( π/4, π/6) (5π/6, 5π/4) Lamp ( π/3, 0) (π, 4π/3) Table 4.4: Values of Various Parameters used in Algorithm 4.1 and 4.2 Each database along with the testing strategy specifications is given in Table 4.3. One can observe that a large number of genuine as well as imposter matchings are considered to evaluate the performance of the proposed system. 4.6 Experimental Results The experimental results of the proposed iris based authentication system are discussed in this section Iris Segmentation Analysis The proposed iris segmentation algorithm has been used to segment iris. It has been found that the segmentation accuracy is 94.5% and 94.63% for Interval and Lamp database respectively. The parameters used for different databases are given in Table 4.4. Errors on these databases can be classified broadly into three categories: 1. Occlusion : (a) Eyelid occlusion (affects outer boundary localisation) (b) Eyelash occlusion (affects inner and outer boundary localisation) 2. Noise : (a) Specular reflection (reflection on pupil boundary) (b) Pupil Boundary Noise (Non-circular pupil shape or noise points)

124 90 CHAPTER 4. IRIS RECOGNITION SYSTEM Figure 4.24: Segmentation Error Hierarchy (Numbers Mentioned for each Subcategory Specify the Number Images of that Category for Interval Database.) Threshold parameter (t) value Sub-Category Mean Minimum Maximum Standard Deviation Eyelid Eyelash Specular Reflection Pupil Boundary Noise Bright image Dark image Table 4.5: Variation in Threshold needed for Correct Segmentation 3. Illumination : (a) Bright image (excess illumination) (b) Dark image (lack of illumination) The hierarchy of errors is shown in Figure There are only two critical parameters viz. threshold (t) and angular range (α range ) that are required to be adjusted as suggested in Table 4.5 and Table 4.6 for accurate segmentation. The erroneous segmentations are critically analyzed and are corrected by adjusting these parameters. This adjustment helps to achieve an accuracy of more than 99.6% for both the databases. Some very critically occluded images are segmented manually (< 0.4%). Some of the images where the proposed algorithm has been failed are

125 4.6. EXPERIMENTAL RESULTS 91 Sub-Category t α range Eyelid - Eyelash - Specular Reflection - Pupil Boundary Noise - Bright image - Dark image - Table 4.6: Required Parametric Variations: - stands for no change, stands for increasing and for decreasing the parameter value Eyelid Occlusion Eyelash Occlusion Specular Reflection Interval=4,Lamp=300 Interval=15,Lamp=288 Interval=5,Lamp=68 Pupil Noise Bright Image Dark Image Interval=71,Lamp=160 Interval=11,Lamp=1 Interval=39,Lamp=53 Table 4.7: Segmentation Error. (Rows Below Images: Errors Occurred in Interval, Lamp Databases) shown in Table Threshold Selection Matching performance has been obtained by varying two thresholds used to compute CIOF dissimilarity measure. These values for any database are selected in such a way that the performance of the system is maximized over the validation set, using only vcode matching. The validation set contains images of first 100 subjects only

126 92 CHAPTER 4. IRIS RECOGNITION SYSTEM T d T e DI EER(%) Accuracy(%) EUC CRR(%) Casia V4 Interval Iris Database Casia V4 Lamp Iris Database Table 4.8: Threshold Selection for CASIA V4 Interval and Lamp Iris Database using only vcode Matching over Validation Set of First 100 Subjects. from that database. One threshold T e [used for patch-wise dissimilarity] depends upon the patch size while other one T d [used for vicinity constraints] depends upon the amount of affine transformation in between database images; hence it varies for different databases. The typical value of T d for the system with maximum CRR and minimum EER is 7 for Lamp and 10 for Interval database with T e = 600 having patch size of 5 5. The results are shown in Fig. 4.26, Fig along with Table 4.8. This analysis has inferred that Interval database has more affine transformation in iris images of

127 4.6. EXPERIMENTAL RESULTS 93 Figure 4.25: Parameterized ROC Curve of the Proposed System for Iris Interval Database (only vcode matching) Description DI EER(%) Accuracy(%) EUC CRR(%) vcode Enhanced vcode hcode Enhanced hcode f usion Enhanced f usion Table 4.9: Enhancement based Performance boost-up over Iris Interval Databases (In the above Table fusion referred as vcode+hcode) same subject than Lamp database Impact of Enhancement on System Performance This section analyses the impact of enhancement on system performance boostup over CASIA V4 Iris Interval [14] database to justify the use of enhancement

128 94 CHAPTER 4. IRIS RECOGNITION SYSTEM Figure 4.26: Parameterized ROC Analysis of the Proposed System for Iris Lamp Database (only vcode matching) algorithm. It is observed that the enhancement method significantly improves the iris texture as evident from the graphs shown in Fig and Table 4.9. From ROC curve, it is seen that for vcode, hcode or fusion, the performance of the system is significantly improved after enhancement. One can observe clearly how the proposed enhancement can boost-up the dissimilarity score of imposters which reduces the errors effectively. From Table 4.9, it can be seen that EER obtained using hcode only is reduced by a factor of 4 times (i.e from 2.761% to 0.684%) and similar improvement has been observed in the case of vcode and f usion approaches Comparative Analysis It is seen from Table 4.10 that CRR (Rank 1 accuracy) of the proposed system is 100% and 99.87% for Interval and Lamp databases respectively. Further, EER for Interval and Lamp are 0.109% and 1.3% respectively. It can also be observed

129 4.6. EXPERIMENTAL RESULTS 95 Figure 4.27: Enhancement based Performance boost-up in terms of Receiver Operating Characteristic Curves for the Proposed System Database Performance Parameters DI CRR% EER% Accuracy% EUC Interval Lamp Table 4.10: Performance Analysis of Proposed System that the genuine and the imposter scores are well separated as the decidability index is found to be 2.02 and 1.50 for Interval and Lamp databases respectively. Accuracy and error under the curve, EUC, for Interval and Lamp are observed as 99.91, and 98.85, 0.24 respectively that can be considered as good for any verification system. The proposed system has been compared with state-of-the-art iris recognition systems [19],[45],[46],[60]. We have implemented the systems proposed in [19],[46] while we have used the results stated in [60] of the systems in [45],[60]. The compar-

130 96 CHAPTER 4. IRIS RECOGNITION SYSTEM Systems Interval DI CRR% EER% Daugman [19] Li Ma [60] Masek [46] K. Roy [60] Proposed Lamp Daugman [19] Proposed Table 4.11: Comparative Performance Analysis over CASIA V4 Interval and Lamp Iris Database ison of the proposed system with other state-of-the-art systems is shown in Table The performance of the proposed system is found to be better than the reported systems. For both Interval and Lamp iris databases, Receiver Operating Characteristics (ROC) curves of the proposed system are shown in Fig and Fig respectively. ROC curves of the proposed system are compared with the open source Masek s system (Log-Gabor) [46] and Daugman s system (Gabor) [19]. It is observed that vcode results are always better than hcode results. Also fusion of vcode and hcode performs much better than vcode alone. This performance gain occurs because there are several images that are mis-classified as genuine or imposter by vcode features, but its corresponding hcode contains discrimination. Hence the ability of discrimination of fused score is enhanced. The Masek s system has not been tested on Lamp database as the optimal parameters are not known and with default set of parameters, it has performed very poorly.

131 4.6. EXPERIMENTAL RESULTS 97 Figure 4.28: Comparative Analysis of Proposed System over Interval Database Figure 4.29: Comparative Analysis of Proposed System over Lamp Database


133 Chapter 5 Knuckleprint Recognition System This chapter deals with the problem of designing an efficient knuckleprint based recognition system. The knuckleprint ROI is extracted by applying a modified version of gabor filter to estimate the central knuckle line and point as shown in Fig. 5.2(b). The central knuckle point is used to extract consistently the knuckleprint ROI from any image. The vertical and horizontal knuckle line based features may be of poor quality; hence they are required to be enhanced and transformed to achieve robustness against varying illumination. It consists of five major tasks, viz. ROI extraction, quality estimation, ROI preprocessing, feature extraction and matching. The overall architecture of the proposed knuckleprint based recognition system is shown in Fig The publicly available knuckleprint PolyU database [56] is used to test the proposed system. The PolyU database contains images that are acquired using normal web-cam of resolution using their indigenous capturing device. The device allows the user to place only one finger at a time and acquires images that are horizontally aligned images. Therefore, at the time of ROI extraction, raw knuckleprint images are assumed to be horizontal and contain single finger knuckle.

134 100 CHAPTER 5. KNUCKLEPRINT RECOGNITION SYSTEM Figure 5.1: Overall Architecture of the Proposed Knuckleprint Recognition System 5.1 Knuckleprint ROI Extraction This section proposes an efficient knuckleprint ROI extraction technique. The prime objective of any ROI extraction technique is to segment same region of interest consistently from all images. The central knuckle point as shown in Figure 5.2(b) can be used to segment any knuckleprint. Since knuckleprint is aligned horizontally, one can easily extract the central region of interest from any knuckleprint that contains rich and discriminative texture as shown in Fig. 5.2(c). The proposed ROI extraction algorithm performs in three steps; detection of knuckle area, central knuckle-line and central knuckle-point Knuckle Area Detection In this step, the whole knuckle area is segmented from the background to discard background region. The acquired knuckleprint may be of poor quality. Each such

135 5.1. KNUCKLEPRINT ROI EXTRACTION 101 (a) Raw knuckleprint (b) Annotated Knuckleprint (c) Knuckleprint ROI Figure 5.2: Knuckleprint ROI Annotation knuckleprint is enhanced using contrast limited adaptive histogram equalization [55] (CLAHE) to obtain better edge representation which helps to detect the knuckle area. CLAHE divides the whole image into blocks of size 8 8 and applies histogram equalization over each block. The enhanced image is binarized using Otsu thresholding that segments the image into two clusters (knuckle region and background region) based on their gray values. Such a binary image is shown in Fig. 5.3(b). It can be observed that the knuckle region may not be accurate because of sensor noise and background clutter. This can be obtained by using canny edge detection. A resultant image is shown in Fig. 5.3(c) and the largest connected component is considered as the required knuckle boundary. The detected boundary is eroded to smoothen it as well as to remove any discontinuity as shown in Fig. 5.3(d). Finally all pixels within the convex hull of the knuckle boundary are considered as the knuckle area. Figure 5.3(e) shows a knuckle area. Some top and bottom rows are assumed to be background and are discarded from the raw image. (a) Raw (b) Binary (c) Canny (d) Boundary (e) FKP Area Figure 5.3: Knuckle Area Detection
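A rough OpenCV sketch of this pipeline is shown below. The Canny thresholds are assumed values, the largest component is picked by contour length, and a morphological closing stands in for the boundary erosion/smoothing step described above.

```python
import cv2
import numpy as np

def knuckle_area_mask(raw):
    """Knuckle-area detection: CLAHE -> Otsu -> Canny -> largest connected
    boundary -> convex hull, returned as a binary mask of the knuckle area."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(raw)
    _, binary = cv2.threshold(enhanced, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    edges = cv2.Canny(binary, 50, 150)
    # close small gaps so the knuckle boundary forms one connected component
    edges = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    boundary = max(contours, key=len)               # largest connected component
    hull = cv2.convexHull(boundary)
    area = np.zeros_like(raw)
    cv2.fillConvexPoly(area, hull, 255)             # all pixels inside the hull
    return area
```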

Central Knuckle-Line Detection

The central knuckle line is defined as that column of the image with respect to which the knuckle can be considered symmetric, as can be observed in Figure 5.2(b). This line is used to extract the knuckleprint ROI. A very specific and symmetric texture is observed around the central knuckle line, and this texture is used for its detection. To capture such texture, a knuckle filter is created by modifying the Gabor filter as defined below.

Figure 5.4: Conventional Gabor Filter — (a) 2D Gaussian (b) 2D Sinusoid (c) 2D Gabor (d) 2D Gabor Filter

[A] Knuckle Filter : The conventional Gabor filter is obtained by multiplying a complex sinusoid with a Gaussian envelope, as defined in Eq. (5.1) and shown in Figure 5.4(c):

G(x, y; γ, θ, ψ, λ, σ) = exp( −(X² + γ²Y²) / (2σ²) ) · exp( i(2πX/λ + ψ) )        (5.1)

where the first factor is the Gaussian envelope and the second the complex sinusoid, x and y are the spatial coordinates of the filter, and X, Y are obtained by rotating x, y by an angle θ using the following equations:

X =  x cos(θ) + y sin(θ)        (5.2)
Y = −x sin(θ) + y cos(θ)        (5.3)
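A small sketch of Eqs. (5.1)-(5.3) is given below; it generates the conventional Gabor filter on a square grid. The grid size is an assumption, the parameter values in the usage line are the ones quoted later in this section, and the curvature term that turns this filter into the knuckle filter is introduced in the next paragraphs.

```python
import numpy as np

def gabor_filter(size, gamma, theta, psi, lam, sigma):
    """Conventional Gabor filter of Eqs. (5.1)-(5.3); returns the complex response."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(np.float64)
    X =  x * np.cos(theta) + y * np.sin(theta)        # Eq. (5.2)
    Y = -x * np.sin(theta) + y * np.cos(theta)        # Eq. (5.3)
    envelope = np.exp(-(X**2 + (gamma * Y)**2) / (2 * sigma**2))
    sinusoid = np.exp(1j * (2 * np.pi * X / lam + psi))
    return envelope * sinusoid                        # Eq. (5.1)

# Example, using the parameter values quoted in the text:
# g = gabor_filter(31, gamma=1, theta=np.pi, psi=1, lam=20, sigma=20).real
```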

Figure 5.5: Curvature Knuckle Filter — (a) Curved Knuckle Lines (b) Curvature Knuckle Filter

In order to model the curved, convex knuckle lines, a knuckle filter is obtained by introducing a curvature parameter into the conventional Gabor filter. The basic Gabor filter equation remains the same as in Eq. (5.1); only the X and Y coordinates are modified as follows:

X =  x cos(θ) + y sin(θ) + c · (−x sin(θ) + y cos(θ))²        (5.4)
Y = −x sin(θ) + y cos(θ)                                      (5.5)

The curvature of the Gabor filter is modulated by the curvature parameter c. This curved Gabor filter with parameters (γ = 1, θ = π, ψ = 1, λ = 20, σ = 20) is used for knuckle filter creation. The value of the curvature parameter is varied as shown in Fig. 5.6, and its optimal value for our database is selected heuristically. The proposed knuckle filter is obtained by concatenating two such curved Gabor filters (f_1, f_2) such that the distance between them is d. The first filter (f_1) is obtained using the above mentioned parameters, while the second filter (f_2 = f_1^flip) is the vertically flipped version of the first, because knuckleprints are vertically symmetric. In Figure 5.6, several knuckle filters with varying curvature and distance parameters are shown. One can observe that increasing the curvature parameter (c) introduces more and more curvature into the filter. Finally, c = 0.01 and d = 30 are used in the knuckle filter (F_kp^{0.01,30}).

[B] Knuckle Line Extraction : All pixels belonging to the knuckle area are con-

138 104 CHAPTER 5. KNUCKLEPRINT RECOGNITION SYSTEM (a) c=0.00,d=0 (b) c=0.00,d=30 (c) c=0.01,d=0 (d) c=0.01,d=30 (e) c=0.04,d=0 (f) c=0.04,d=30 Figure 5.6: Knuckle Filter volved with the knuckle filter F 0.01,30 kp. Pixels over the central knuckle line must be having the higher response as compared to others because of filter s shape and construction. The filter response for each pixel is binarized using threshold as f max where max is the maximum knuckle filter response and f 0 to 1 is a fractional value. The binarized filter response is shown in Fig. 5.7(a) in which it is super imposed over knuckle area with blue color. The column-wise sum of the filter response for each column is computed. The central knuckle line is considered as that column which is having the maximum knuckle filter response as shown in Fig. 5.7(b) Central Knuckle-Point Detection The central knuckle point is required to crop the knuckle-print ROI that must lie over central knuckle line. Hence the top and the bottom point over central knuckle line, belonging to the knuckle area, are computed and their mid-point is considered

139 5.1. KNUCKLEPRINT ROI EXTRACTION 105 (a) Knuckle Filter Response (b) Annotated (c) Knuckle ROI Figure 5.7: Knuckleprint ROI detection. (a) Knuckle filter response is super impose over the knuckle area with blue color, (b) Full Annotated and (c) FKP (F KP ROI ) as the central knuckle point. It is shown in Figure 5.7(b). The required knuckleprint ROI is extracted as a region of size (2 w+1) (2 h+1) considering central knuckle point as the center. It is shown in Fig. 5.7(c). Algorithm 5.1 Knuckleprint ROI Detection Require: Raw Knuckleprint image I of size m n. Ensure: The knuckleprint ROI F KP ROI, of size (2 w + 1) (2 h + 1). 1: Enhance the FKP image I to I e using CLAHE; 2: Binarize I e to I b using Otsu thresholding; 3: Apply Canny edge detection over I b to get I cedges ; 4: Extract the largest connected component in I cedges as FKP raw boundary, (F KPBound raw ); 5: Erode the detected boundary F KPBound raw to obtain continuous and smooth FKP boundary, F KP smooth Bound ; 6: Extract the knuckle area K a = All pixels in image I the ConvexHull(F KP smooth Bound ); 7: Apply the knuckle filter F 0.01,30 kp over all pixels K a ; 8: Binarize the filter response using f max as the threshold; 9: The central knuckle line (c kl ), is assigned as that column which is having the maximum knuckle filter response; 10: The mid-point of top and bottom boundary points over c kl K a, is defined as the central knuckle point (c kp ). 11: The knuckle ROI (F KP ROI ) is extracted as the region of size (2 w+1) (2 h+1) from raw knuckleprint image I, considering c kp as its center point. Given a raw knuckleprint image I, Algorithm 5.1 can be used to extract the knuckleprint ROI (F KP ROI ) from it. Some of the segmented PolyU knuckleprint

140 106 CHAPTER 5. KNUCKLEPRINT RECOGNITION SYSTEM (a) Subject A (b) Subject B Figure 5.8: Some Segmented Knuckleprints database images are shown in Fig One can observe that the proposed algorithm can extract consistently the ROI of images in the database. 5.2 Knuckleprint Image Quality Extraction The images acquired from the sensors have inevitably wide distribution of quality. Hence quality assessment should be done during early phases such as data acquisition so that one can discard the bad quality images and recapture the better one. Also quality parameters can revel the type of deficiency in the image which can be used to apply the suitable enhancement technique to reduce its effect. The image quality of knuckleprint is degraded mostly due to fewer or blurred line like features, defocus, poor contrast and high or low illumination and reflection produced by the camera flash as shown in Fig.5.9. In this section an algorithm has been proposed for knuckleprint quality assessment. Quality of the knuckleprint image has been modeled as a function of the following six attributes: Focus (F ), Clutter (C), Uniformity (S), Entropy (E), Reflection (Re), Contrast and Illumination (Con). Hence overall quality of the knuckleprint

141 5.2. KNUCKLEPRINT IMAGE QUALITY EXTRACTION 107 (a) Defocus (b) Less Features (c) High Reflection (d) Poor Uniformity Figure 5.9: Poor Quality Knuckleprint ROI Samples image can be represented by Quality = f(f, C, S, E, Re, Con) (5.6) where f is a function which is learned from the training data using a Support Vector Machine (SV M). Quality attributes from the training set of knuckleprints along with their respective true quality label are used to train the SV M classifier. This creates a model based classifier which can be used to predict the quality of a knuckleprint image using its quality attributes. The six quality attributes, are amount of well focus edges F, amount of clutter C, distribution of focused edges S, block-wise entropy of focused edges E, reflection caused by light source and camera flash Re and the amount of contrast Con. The most important and vital features in knuckleprint images are vertical long edges (vle) as shown in Fig. 5.10(a); hence most of the quality parameters tend to analyse initially vle for knuckleprint quality assessment. The set of pixels that corresponds to only vertical strong edge pixels is computed using sobel y-direction kernel from the input image (I) and the connected components are computed. Finally, out of all connected components, only long components (more than an empirically selected threshold t cc ) are retained that constitute the set of pixels which are long and vertical termed as vle.
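A sketch of the vle extraction is given below. The strong-edge threshold, the minimum component length t_cc, and the mapping of the "Sobel y-direction kernel" mentioned above onto OpenCV's derivative order are assumptions made for the example.

```python
import cv2
import numpy as np

def extract_vle(gray, edge_thresh=60, t_cc=20):
    """Long vertical edge pixels (vle): strong Sobel response -> connected
    components -> keep only components taller than t_cc pixels."""
    sob = cv2.Sobel(gray, cv2.CV_64F, 0, 1)        # 'y-direction' kernel, per the text
    strong = (np.abs(sob) > edge_thresh).astype(np.uint8)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(strong, connectivity=8)
    vle = np.zeros_like(strong)
    for lbl in range(1, n):                        # label 0 is the background
        if stats[lbl, cv2.CC_STAT_HEIGHT] >= t_cc: # keep long (tall) components
            vle[labels == lbl] = 1
    return vle
```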

142 108 CHAPTER 5. KNUCKLEPRINT RECOGNITION SYSTEM Focus (F ) The defocus blurring effect occurs when the focal point of the sensor s lens is not at the reference object during image acquisition as shown in Fig. 5.9(a). The frequency analysis of defocus images has reveled that its 2D Fourier spectrum is usually dense towards lower frequencies while well focused image posseses uniform spectrum. The number of well focused edge pixels is considered for quality assessment. It is computed by convolving the image by the proposed 6 6 kernel K as defined in Eq (5.7) that can well approximate the 2D Fourier spectrum s high frequency band pass filter. Then only those pixels are retained that are well focused (i.e having convolved value more than an empirically selected threshold t f ) constituting the set of pixels which are well focused, termed as wf, as shown in Fig. 5.10(b) K = (5.7) The set of pixels F map obtained by taking the set intersection of pixel sets vle and wf, as shown in Fig. 5.10(c) is later used as the most significant region. Finally, focus quality parameter F is defined as the number of well focused vertically aligned long edge pixels which is computed by counting the number of pixels in F map Clutter (C ) The short vertical edge pixels that are well focused can be considered as clutter as shown in Fig. 5.10(d) because that can degrade the quality of the image. They are usually present due to abrupt discontinuity in the edge structure due to several

143 5.2. KNUCKLEPRINT IMAGE QUALITY EXTRACTION 109 (a) vle (b) wf (c) F map (anding vle,wf ) Figure 5.10: Defocus based Quality Attribute F and C (d) Clutter (Short Edges) possible extrinsic factors. Clutter creates false features that can confuse any recognition algorithm. The quality parameter, clutter (C ), is defined as the ratio of short vertically aligned strong edge pixels to the longer ones. It is inversely proportional to the image quality Uniformity based Quality Attribute (S) In any good quality image, the texture based features should be distributed uniformly through out the whole image. There are several images in the database that are having well focused left or right half only. Some images are shown in Fig. 5.9(d). The focus parameter F may consider as good quality even though half of the image is of very poor quality. Hence uniformity in texture distribution should be given some importance. The quality attribute (S) is proposed which is directly proportional to the uniformity in texture distribution as shown in Fig The pixel set F map as defined in focus parameter is clustered using K-Means algorithm using K = 2, because knuckleprint images have some symmetry along Y - axis. Some statistical and geometrical parameters of the two clusters that are obtained as output of 2-mean clustering, are used to obtain the value of S and are described in Algorithm 5.2.

144 110 CHAPTER 5. KNUCKLEPRINT RECOGNITION SYSTEM Algorithm 5.2 Uniformity based Quality Attribute (S) Require: The vle and wf pixel set for the input image (I) of size m n. Ensure: Return the value S for the input image (I). 1. F map = and(wf, vle); [focus mask] m+n ) and Right half (, n ) of the input M 1, M 2 =Mid-point of Left half ( n, n 2 2 image (I); 3. Apply 2-Mean Clustering over pixel set F map ; 4. C 1, C 2, nc 1, nc 2, std 1, std 2 =Mean loc., Number of pixels and Standard dev. of Left and Right cluster respectively; 5. d 1, d 2 = Euclidean Distance between point C 1 and M 1 and that of between C 2 and M 2 respectively; 6. d = 0.7 max(d 1, d 2 ) min(d 1, d 2 ); 7.p r = max(nc 1,nc 2 ) min(nc 1,nc 2 ) 8.std r = max(std 1,std 2 ) ;[Cluster Point Ratio] min(std 1,std 2 ;[Cluster Standard Dev. Ratio] ) 9.comb r = 0.8 p r std r ; d 10.D std = 1 ; d 11.D nc = 1 std 2 1 +std2 2 nc 2 1 +nc2 2 ; 12.S = 0.5 d comb r D std D nc (a) Non Uniform Texture (0.221) (b) Uniform Texture (0.622) Figure 5.11: Uniformity based Quality Attribute (S) Entropy based Quality Attribute (E) The most common statistical measure that is used to quantify the amount of information in any gray scale image (I) is the entropy value defined by: 255 e = hist[i] log(2 hist[i]) (5.8) i=0

145 5.2. KNUCKLEPRINT IMAGE QUALITY EXTRACTION 111 where hist[i] is the i th element of the 256 valued gray level histogram, hist, of the input image I. The image is divided into blocks of size 5 5 and block-wise entropy is calculated using Eq. (5.8) (as shown in Figs. 5.12(b) and 5.12(e)). Since all blocks do not carry the same amount of importance, only blocks that are having well focused long vertically aligned edge pixels (using F map as defined in Section ) more than a predefined empirically selected threshold t fm are considered as significant blocks. Finally, the entropy based quality attribute (E) is obtained by summing up the entropy values of all significant blocks. They are shown in Figs. 5.12(c) and 5.12(f). (a) Low Informative Sample (b) Block-wise Entropy (c) Significant Blocks Entropy (d) High Informative Sample (e) Block-wise Entropy (f) Significant Blocks Entropy Figure 5.12: Entropy based Quality Attribute E Reflection based Quality Attribute (Re) High reflection can be caused due to light source or camera flash and creates a patch of very high intensity gray values. The unique line based information within this patch is completely ruined leading to severe image quality degradation. This reflection patch is identified by using adaptive thresholding and it can be ignored while matching. The sample knuckleprint is repeatedly thresholded to estimate the most accurate reflection patch intensity level, starting from a high gray level and

146 112 CHAPTER 5. KNUCKLEPRINT RECOGNITION SYSTEM gradually reducing it. After each thresholding step, number of pixels that are having gray value more than the threshold is calculated. This count keeps on changing significantly because some of the nearby area around the reflection patch may not be captured by the previous threshold. This thresholding procedure gets terminated when this count is saturated (i.e when the difference in the count before and after thresholding is less than an empirically selected value t r ). After termination, the full reflection patch is identified. It is shown in Fig. 5.13(b). The reflection based quality attribute (Re) is defined as the fraction of pixels belonging to the reflection patch; hence it is inversely proportional to the image quality. (a) Original Image (b) Computed Reflection patch Figure 5.13: Reflection based Quality Attribute Re Contrast based Quality Attribute (Con) Often quality of the knuckleprint image gets severely affected by very poor or rich lighting condition. Large illumination variation can reduce the discriminative line based features and hence can degrade the overall uniqueness of biometric images. The contrast of the input image (I) can give some information about the dynamic gray level range present in the image. Hence, it can be used to infer that image is either too dark or light. Basically, we can use it to estimate the uniformity in illumination through-out the image. The whole gray level range is divided into three groups (0, 75), (76, 235), (236, 255). The contrast based quality attribute (Con) is defined as the fraction of pixels belonging to the mid gray level range (i.e (76, 235)) because it indicates the moderated intensity range.
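The last two attributes can be sketched as follows; the starting threshold, the step size and the saturation value t_r used here are assumed values for illustration.

```python
import numpy as np

def reflection_attribute(gray, start=240, step=5, t_r=50):
    """Re attribute: adaptively lower the threshold until the reflection-patch
    pixel count saturates, then return the fraction of affected pixels."""
    thresh = start
    prev = int((gray >= thresh).sum())
    while thresh - step > 0:
        cur = int((gray >= thresh - step).sum())
        if cur - prev < t_r:              # count saturated: full patch identified
            break
        prev, thresh = cur, thresh - step
    return prev / gray.size

def contrast_attribute(gray):
    """Con attribute: fraction of pixels in the moderate band (76, 235)."""
    moderate = np.logical_and(gray >= 76, gray <= 235)
    return moderate.sum() / gray.size
```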

147 5.2. KNUCKLEPRINT IMAGE QUALITY EXTRACTION 113 (a) Focus (b) Clutter (c) Uniformity (d) Entropy (e) Specular Reflection (f) Contrast Figure 5.14: Knuckleprint Quality Estimation. Top, Middle, Bottom Images for each Proposed Quality Parameter Quality Class Determination Similar to the iris quality attribute fusion as discussed in Section 4.2.7, Support Vector Machine (SV M) is trained using images from first 100 subjects from the database. The actual quality classes (i.e ground truth) in the training set of images are assigned manually. This set of quality attributes along with the quality class label of the knuckleprint image is used to train the SV M to learn the function f as defined in Eq. (5.6). The trained classifier based model is used for the prediction of the quality. Whenever a new knuckleprint image is given to the algorithm, all quality parameters are normalized to the range (0,1) and are passed to the trained SV M classifier. The SV M classifier provides the quality class corresponding to the knuckleprint image. In Fig some top, middle and bottom quality images with respect to each proposed quality parameters are shown.

148 114 CHAPTER 5. KNUCKLEPRINT RECOGNITION SYSTEM 5.3 Knuckleprint Image Preprocessing The extracted region of interest (ROI) of knuckleprint is generally of poor contrast. Hence image enhancement algorithm as discussed in Section is applied over the ROI. The enhanced knuckleprint image posseses better quality texture. It is shown in Fig (a) Original (b) Bg Illum. (c) Uni. Illum. (d) Enhanced (e) Noise Removal Figure 5.15: Enhanced Knuckleprint In order to obtain robust representations (vcode and hcode) that can tolerate small amount of illumination variation, images are transformed using the proposed LGBP transformation as discussed in Section and is shown in Fig An original knuckle along with its vcode and hcode are shown in Fig (a) Orig. Knuckleprint (b) Knuckle vcode (c) Knuckle hcode Figure 5.16: Original and Transformed (vcode, hcode) for Knuckleprint ROI s In Fig. 5.17, one raw knuckleprint image is considered under varying illumination and is shown along with the corresponding vcode. One can observe that the original knuckleprint (as shown in Fig. 5.17(a)) has undergone very sever illumination variation (as shown in Figs. 5.17(b)-5.17(f)). But the corresponding vcodes may not be varying much (as shown in Figs. 5.17(g)-5.17(l)). This justifies the use of the proposed transformation and its robustness against varying illumination.

149 5.4. KNUCKLEPRINT FEATURE EXTRACTION AND MATCHING 115 (a) FKP Orig (b) FKP i1 (c) FKP i2 (d) FKP i3 (e) FKP i4 (f) FKP i5 (g) vcode Orig (h) vcode i1 (i) vcode i2 (j) vcode i3 (k) vcode i4 (l) vcode i5 Figure 5.17: Illumination Invariance of LGBP transformation. First row shows same image under five different illumination conditions. Second row shows their corresponding vcode s 5.4 Knuckleprint Feature Extraction and Matching The feature extraction and matching algorithm used in iris based recognition system can be used for knuckleprint as well. The corner based features [63] are extracted from both vcode and hcode obtained from any knuckleprint ROI. The KL tracking [44] has been used to track the corner features in the corresponding images for matching two knuckleprint ROI. The proposed CIOF dissimilarity measure is used to estimate the performance of KL tracking to differentiate between genuine and imposter matching. The performance of the KL tracking algorithm is good for genuine matching and bad for imposter matchings. All steps applied over any raw knuckleprint image for its recognition are shown in Fig Corners having Inconsistent Optical Flow (CIOF) The direction of pixel motion is termed as optical flow of that pixel. It can be computed using KL-tracking algorithm. A dissimilarity measure CIOF (Corners having Inconsistent Optical Flow) proposed in Section is used to estimate the KL-tracking performance. Apart from vicinity constraint and patch-wise dissimi-

150 116 CHAPTER 5. KNUCKLEPRINT RECOGNITION SYSTEM (a) Original (Raw) (b) Localized (c) Cropped (d) Uni. Illu. (e) Enhanced (f) vcode (g) hcode (h) Features Figure 5.18: Knuckleprint Recognition larity, one more statistical constraint viz. correlation bound is evaluated for each potential matching feature pair given by KL-tracking algorithm defined as follows: Correlation Bound : The phase only correlation (P OC) [47] (as defined in Section 3.4) between a local patch centered at any feature and that of its estimated tracked location patch should be at-least equal to an empirically selected threshold T cb. This bound is used to ensure that local patch around each potential matching feature pair is correlated. Given vcode and hcode of two images knuckle a and knucle b, Algorithm 5.3 can be used to compute a dissimilarity score using CIOF measure. The vcodes are matched to obtain the vertical matching score, while the respective hcodes are matched to generate the horizontal matching score. The corner features that are having their tracked position which satisfy the above mentioned three constraints (viz. vicinity constraint, patch-wise dissimilarity and correlation bound) are considered as successfully tracked. Consistent global corner optical flow is further used to prune out some of the false matching corners as mentioned in Section The estimated optical flow angle for each potential matching pair is quantized into eight directions and the most consistent direction is chosen. Any corner matching pair with different optical flow direction is discarded. Finally, the ratio of unsuccessfully

Given the vcode and hcode of two images knuckle_a and knuckle_b, Algorithm 5.3 can be used to compute a dissimilarity score using the CIOF measure. The vcodes are matched to obtain the vertical matching score, while the respective hcodes are matched to generate the horizontal matching score. The corner features whose tracked positions satisfy the above mentioned three constraints (viz. vicinity constraint, patch-wise dissimilarity and correlation bound) are considered as successfully tracked. Consistent global corner optical flow is further used to prune out some of the falsely matched corners, as mentioned earlier: the estimated optical flow angle of each potential matching pair is quantized into eight directions and the most consistent direction is chosen; any corner matching pair with a different optical flow direction is discarded. Finally, the ratio of unsuccessfully tracked corners to the total number of corners is considered as the dissimilarity measure, CIOF. Algorithm 5.3 illustrates all the steps. Some properties of the CIOF distance measure computed between two images A and B are shown below:

1. CIOF(A, B) = CIOF(B, A).
2. CIOF(A, A) = 0.
3. CIOF(A, B) always lies in the range [0, 1].
4. CIOF(A, B) will be high if A and B belong to different subjects.

5.5 Database Specifications

The largest publicly available knuckleprint database has been used to analyze the performance of the proposed knuckleprint system. Inter-session matchings are done to perform the various experiments.

Figure 5.19: Some Images from the Knuckleprint Database [56] used in this work

PolyU [56]: It is a large knuckleprint database consisting of 7,920 FKP sample images obtained from 165 subjects in two sessions. On average, the time interval between the sessions has been 25 days. In each session, 6 images from 4 fingers (the index and middle fingers of both hands) are collected. Hence, data from a total of 165 x 4 = 660 distinct knuckles is collected. Out of the 165 subjects, 143 belong to one age group while the remaining 22 belong to another.

Algorithm 5.3 CIOF(knuckle_a, knuckle_b)
Require: (a) The vcodes I^v_A, I^v_B of the two knuckleprint images knuckle_a and knuckle_b respectively. (b) The hcodes I^h_A, I^h_B of the two knuckleprint images. (c) N^v_a, N^v_b, N^h_a and N^h_b, the number of corners in I^v_A, I^v_B, I^h_A and I^h_B respectively.
Ensure: Return CIOF(knuckle_a, knuckle_b).
1: Track all the corners of vcode I^v_A in vcode I^v_B and those of hcode I^h_A in hcode I^h_B.
2: Obtain the sets of corners successfully tracked in the vcode tracking (stc^v_AB) and in the hcode tracking (stc^h_AB), i.e. those whose tracked position lies within T_d, whose local patch dissimilarity is under T_e and whose patch-wise correlation is at least T_cb.
3: Similarly compute the successfully tracked corners of vcode I^v_B in vcode I^v_A (stc^v_BA) as well as of hcode I^h_B in hcode I^h_A (stc^h_BA).
4: Quantize the optical flow direction of each successfully tracked corner into eight directions (i.e. at equal angular intervals) and obtain the 4 histograms H^v_AB, H^h_AB, H^v_BA and H^h_BA using the four corner sets stc^v_AB, stc^h_AB, stc^v_BA and stc^h_BA respectively.
5: For each histogram, out of the 8 bins, the bin (direction) having the maximum number of corners is considered as the consistent optical flow direction. The maximum value obtained from each histogram is termed as the count of corners having consistent optical flow, represented as cof^v_AB, cof^h_AB, cof^v_BA and cof^h_BA.
6: ciof^v_AB = 1 - cof^v_AB / N^v_a   [Corners with Inconsistent Optical Flow (vcode)]
7: ciof^v_BA = 1 - cof^v_BA / N^v_b   [Corners with Inconsistent Optical Flow (vcode)]
8: ciof^h_AB = 1 - cof^h_AB / N^h_a   [Corners with Inconsistent Optical Flow (hcode)]
9: ciof^h_BA = 1 - cof^h_BA / N^h_b   [Corners with Inconsistent Optical Flow (hcode)]
10: return CIOF(knuckle_a, knuckle_b) = (ciof^v_AB + ciof^h_AB + ciof^v_BA + ciof^h_BA) / 4
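For concreteness, a condensed Python sketch of Algorithm 5.3 follows, assuming OpenCV's Shi-Tomasi corner detector and pyramidal Lucas-Kanade tracker. The thresholds, patch size and the inlined phase-only-correlation helper are placeholders and may differ from the thesis' tuned, per-database settings.

    import cv2
    import numpy as np

    def _poc(pa, pb, eps=1e-8):
        """Inline phase-only-correlation peak (see the earlier POC sketch)."""
        c = np.fft.fft2(pa.astype(np.float32)) * np.conj(np.fft.fft2(pb.astype(np.float32)))
        return float(np.real(np.fft.ifft2(c / (np.abs(c) + eps))).max())

    def ciof_one_way(code_a, code_b, Td=22, Te=2000, Tcb=0.4, patch=11):
        """One directional term of Algorithm 5.3 (e.g. ciof^v_AB). Threshold
        and patch-size defaults are illustrative placeholders."""
        pts_a = cv2.goodFeaturesToTrack(code_a, 500, 0.01, 3)
        if pts_a is None:
            return 1.0
        pts_b, status, _ = cv2.calcOpticalFlowPyrLK(code_a, code_b, pts_a, None)
        half, flow_angles = patch // 2, []
        for p, q, ok in zip(pts_a.reshape(-1, 2), pts_b.reshape(-1, 2), status.ravel()):
            if ok != 1 or np.linalg.norm(q - p) > Td:                 # vicinity constraint
                continue
            ax, ay, bx, by = int(p[0]), int(p[1]), int(q[0]), int(q[1])
            pa = code_a[ay - half:ay + half + 1, ax - half:ax + half + 1]
            pb = code_b[by - half:by + half + 1, bx - half:bx + half + 1]
            if pa.shape != (patch, patch) or pb.shape != pa.shape:    # patch near the border
                continue
            if np.abs(pa.astype(int) - pb.astype(int)).sum() > Te:    # patch-wise dissimilarity
                continue
            if _poc(pa, pb) < Tcb:                                    # correlation bound
                continue
            flow_angles.append(np.arctan2(q[1] - p[1], q[0] - p[0]))
        if not flow_angles:
            return 1.0
        hist, _ = np.histogram(flow_angles, bins=8, range=(-np.pi, np.pi))
        return 1.0 - hist.max() / float(len(pts_a))                   # fraction with inconsistent flow

    def ciof(vcode_a, vcode_b, hcode_a, hcode_b):
        """Final CIOF dissimilarity: average of the four directional terms (step 10)."""
        return (ciof_one_way(vcode_a, vcode_b) + ciof_one_way(vcode_b, vcode_a) +
                ciof_one_way(hcode_a, hcode_b) + ciof_one_way(hcode_b, hcode_a)) / 4.0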

Testing Strategy

For testing, all 6 images of the first session are taken for training while the images of the second session are used for testing. Hence, a total of 23,760 genuine and 15,657,840 imposter matchings are considered. The database along with the specifications of the testing strategy is given in Table 5.1. One can observe that a large number of genuine as well as imposter matchings is needed to evaluate the performance of the proposed knuckleprint system.

Table 5.1: Database Specifications

Database             | Subject            | Pose | Total | Training | Testing | Genuine Matchings | Imposter Matchings
PolyU (Knuckleprint) | 165 (660 Knuckles) | 12   | 7,920 | First 6  | Last 6  | 23,760            | 15,657,840

5.6 Performance Analysis

Knuckleprint Segmentation

The PolyU knuckleprint database contains 12 images of each of the 660 distinct knuckles, out of which 7,484 images are correctly segmented. Therefore, the proposed segmentation algorithm achieves an accuracy of 94.49% over the PolyU [56] knuckleprint database. One can observe that the proposed algorithm extracts the ROI consistently. Some images for which the proposed algorithm fails to segment are shown in Fig. 5.20. The algorithm fails mainly due to poor image quality. Other reasons for segmentation failure include a lack of the assumed horizontal knuckle alignment and missing symmetric knuckle texture, which is captured using the knuckle filter. Also, multiple finger knuckles in an image lead to improper segmentation. Such images are segmented manually.

Threshold Selection

The matching performance of the proposed CIOF dissimilarity measure depends on three thresholds, viz. T_e, T_d and T_cb.

(a) Poor Quality (b) Multiple Knuckle (c) Not Horizontal (d) Improperly Placed
Figure 5.20: Failed Knuckle ROI Detection

The values of these parameters are selected to maximize the system performance over a validation set containing only the left index images of the first 50 subjects, using only vcode matchings. Several sets of thresholds are considered during testing and the best parametric set in terms of performance is selected; it is reported along with a few other details in Table 5.2.

Table 5.2: Parameterized Performance Analysis of the proposed system on the PolyU Knuckleprint Database (only vcode matching)

PolyU Knuckleprint Database | T_d | T_e | T_cb | DI | EER(%) | Accuracy(%) | EUC | CRR(%)

The threshold values for which the proposed knuckleprint system achieves the maximum CRR and minimum EER are T_e = 2000, T_d = 22 and T_cb = 0.4 (with a larger patch size than that used for iris) over the PolyU knuckleprint database. They are shown in Fig. 5.21 and in Table 5.2. A larger patch size along with a higher T_d value is used for knuckleprint images as compared to iris images, because knuckleprint images have long, line-like, coarser features; also, they are not normalized as is done in the case of iris.
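The threshold selection just described can be pictured as a small grid search over the validation set. The sketch below is illustrative only: the candidate grids, the crude EER estimator and the `matcher` interface are assumptions, not the procedure actually used in the thesis.

    import itertools
    import numpy as np

    def simple_eer(genuine, imposter):
        """Rough EER estimate: the operating point where FAR and FRR are
        closest (scores are dissimilarities, smaller means more similar)."""
        thresholds = np.sort(np.concatenate([genuine, imposter]))
        far = np.array([np.mean(imposter <= t) for t in thresholds])
        frr = np.array([np.mean(genuine > t) for t in thresholds])
        k = int(np.argmin(np.abs(far - frr)))
        return (far[k] + frr[k]) / 2.0

    def select_thresholds(matcher, genuine_pairs, imposter_pairs):
        """Hypothetical grid search over (T_d, T_e, T_cb) on a validation set.
        `matcher(a, b, Td, Te, Tcb)` is any CIOF-style dissimilarity, e.g. the
        ciof_one_way sketch given after Algorithm 5.3; the grids are placeholders."""
        best_eer, best_params = 1.0, None
        for Td, Te, Tcb in itertools.product([15, 22, 30],
                                             [1000, 2000, 3000],
                                             [0.3, 0.4, 0.5]):
            g = np.array([matcher(a, b, Td, Te, Tcb) for a, b in genuine_pairs])
            i = np.array([matcher(a, b, Td, Te, Tcb) for a, b in imposter_pairs])
            eer = simple_eer(g, i)
            if eer < best_eer:
                best_eer, best_params = eer, (Td, Te, Tcb)
        return best_params, best_eer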

Figure 5.21: Parameterized ROC Analysis of the Proposed System for the PolyU Knuckleprint Database. The left index images of the first 50 subjects are used (only vcode matchings).

Comparative Analysis

Table 5.3: Comparative Performance Analysis over PolyU Knuckleprint (results as reported in [27])

Algorithm                   | Equal Error Rate
CompCode [35]               |
BOCV [1]                    |
ImCompCode and MagCode [71] |
MoriCode [27]               |
MtexCode [27]               |
MoriCode and MtexCode [27]  |
vcode                       |
hcode                       |
vcode + hcode               |

The performance of the proposed system over the PolyU knuckleprint database is compared with that of some well known systems in Table 5.3. The EER values as reported in [27] are used for the comparison, as they have also adopted the same testing strategy. The ROC curve for the performance analysis of the proposed system over the PolyU database is shown in Fig. 5.22. One can observe from Table 5.3 and Fig. 5.22 that the individual hcode performance is not as impressive as that of vcode, but the fusion of both (vcode and hcode) significantly boosts the system performance. Such a fusion is very useful because some images have more discrimination in the vertical direction while others have it in the horizontal direction. Also, it is observed that the lowest EER is achieved with the proposed fusion, which is much better than the fusion of MoriCode and MtexCode [27].

Figure 5.22: Performance of the Proposed System over the PolyU Knuckleprint Database

Chapter 6
Palmprint Recognition System

This chapter deals with the problem of designing an efficient palmprint based recognition system. The palmprint ROI is extracted using the key-point based segmentation algorithm proposed in [9]. Features like palm principal lines, wrinkles and ridges are used for matching two palmprints. Like any other biometric system, the palmprint based recognition system consists of five major tasks, viz. ROI extraction, quality estimation, ROI preprocessing, feature extraction and matching. The overall architecture of the proposed system is shown in Fig. 6.1. Two publicly available palmprint databases, CASIA [15] and PolyU [57], are used to analyse the performance of the proposed system.

6.1 Palmprint ROI Extraction

The algorithm proposed in [9] for palmprint ROI extraction has been used. The hand images are thresholded to obtain a binarized image and the hand contour is extracted. Four key-points (X_1, X_2, V_1, V_2) on the hand contour are computed as shown in Fig. 6.2(b), where X_1, X_2 and V_1, V_2 are the left-most and right-most hill and valley points respectively. Two more key-points, C_1 and C_2, are also computed.

Figure 6.1: Overall Architecture of the Proposed Palmprint Recognition System

C_1 is the intersection point of the hand contour and the line passing through V_1 with a slope of 45°, and C_2 is the intersection point of the hand contour and the line passing through V_2 with a slope of 60°, as shown in Fig. 6.2(c). Finally, the midpoints of the line segments V_1C_1 and V_2C_2 are joined, and this segment is considered as one side of the square ROI. The final extracted palmprint ROI is shown in Fig. 6.2(d). The various steps involved in ROI extraction are shown in Fig. 6.2, and Algorithm 6.1 can be used to extract the ROI from any palmprint image.

(a) Original (b) Contour (c) Key Points (d) Palmprint ROI
Figure 6.2: Palmprint ROI Extraction (images are taken from [9])
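A rough geometric sketch of this construction is given below (Algorithm 6.1, which follows, formalizes it). The sketch assumes the valley key-points V1 and V2 have already been located on the hand contour; the ray-intersection test, axis conventions and OpenCV-based warping are assumptions of this illustration, not details taken from [9].

    import cv2
    import numpy as np

    def ray_contour_intersection(contour, origin, angle_deg, tol=2.0):
        """Contour point closest to a ray cast from `origin` at `angle_deg`
        (a simple stand-in for the exact line-contour intersection)."""
        d = np.array([np.cos(np.radians(angle_deg)), np.sin(np.radians(angle_deg))])
        pts = contour.reshape(-1, 2).astype(np.float32)
        rel = pts - origin
        along = rel @ d
        perp = np.abs(rel @ np.array([-d[1], d[0]]))
        cand = np.where((along > 0) & (perp < tol))[0]
        return None if len(cand) == 0 else pts[cand[np.argmin(along[cand])]]

    def palm_roi(gray, v1, v2):
        """Sketch of the square-ROI geometry described above, assuming the
        valley points V1, V2 are given; angles follow the text (45 and 60
        degrees), the inward-normal orientation is an assumption."""
        _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
        contour = max(contours, key=cv2.contourArea)
        c1 = ray_contour_intersection(contour, np.float32(v1), 45.0)
        c2 = ray_contour_intersection(contour, np.float32(v2), 60.0)
        assert c1 is not None and c2 is not None
        m1 = (np.float32(v1) + c1) / 2.0           # midpoint of V1C1
        m2 = (np.float32(v2) + c2) / 2.0           # midpoint of V2C2
        side = m2 - m1                             # one side of the square ROI
        normal = np.array([-side[1], side[0]])     # inward normal (assumed orientation)
        corners = np.float32([m1, m2, m2 + normal, m1 + normal])
        size = int(round(np.linalg.norm(side)))
        H = cv2.getPerspectiveTransform(corners, np.float32([[0, 0], [size, 0],
                                                             [size, size], [0, size]]))
        return cv2.warpPerspective(gray, H, (size, size))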

Algorithm 6.1 Palmprint ROI Extraction
Require: Full acquired palmprint image I of dimension m x n as shown in Fig. 6.2(a).
Ensure: The cropped palmprint ROI as shown in Fig. 6.2(d).
1: Threshold the palmprint image I to extract the hand contour C.
2: Over the hand contour C, find the coordinates of the four key points X_1, X_2, V_1, V_2 as shown in Fig. 6.2(b).
3: Compute C_1 as the intersection point of the hand contour and the line passing through V_1 with a slope of 45°.
4: Compute C_2 as the intersection point of the hand contour and the line passing through V_2 with a slope of 60°.
5: The segment joining the midpoints of the line segments V_1C_1 and V_2C_2 is considered as one side of the required square palmprint ROI.
6: Extract the required square palmprint ROI as shown in Fig. 6.2(d).

6.2 Palmprint Preprocessing

The extracted region of interest (ROI) of a palmprint is generally of poor contrast. The image enhancement algorithm discussed earlier is applied over the ROI, and the enhanced palmprint has better quality texture, as shown in Fig. 6.3. The algorithm uses the local block average as the background illumination, which is subtracted from the original ROI to obtain the uniformly illuminated ROI shown in Fig. 6.3(c). Finally, the image is enhanced using CLAHE [55] and noise is removed using Wiener filtering [68]. The enhanced palmprint ROI is shown in Fig. 6.3(e).

(a) Original (b) Bg Illum. (c) Uni. Illum. (d) Enhanced (e) Noise Removal
Figure 6.3: Palmprint Image Enhancement
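A minimal sketch of the enhancement stages named above (block-average background estimation, uniform illumination, CLAHE and Wiener denoising) follows, assuming OpenCV and SciPy. The block size, CLAHE clip limit and Wiener window are illustrative placeholders rather than the settings actually used in the thesis.

    import cv2
    import numpy as np
    from scipy.signal import wiener

    def enhance_roi(roi, block=16, clip=2.0):
        """Background removal + CLAHE + Wiener denoising on a grayscale ROI."""
        roi = roi.astype(np.float32)
        h, w = roi.shape
        # coarse background illumination: block-wise mean, resized back up
        bg = cv2.resize(cv2.resize(roi, (w // block, h // block),
                                   interpolation=cv2.INTER_AREA),
                        (w, h), interpolation=cv2.INTER_LINEAR)
        uniform = cv2.normalize(roi - bg, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        clahe = cv2.createCLAHE(clipLimit=clip, tileGridSize=(8, 8))
        enhanced = clahe.apply(uniform)
        denoised = wiener(enhanced.astype(np.float64), mysize=5)
        return np.clip(denoised, 0, 255).astype(np.uint8)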

In order to obtain robust representations (vcode and hcode) that can tolerate a small amount of illumination variation, the images are transformed using the LGBP transformation. An original palmprint along with its vcode and hcode is shown in Fig. 6.4. The transformation uses the signs of the local gradients (vertical and horizontal) around each pixel to obtain an 8-bit LGBP code, which is robust to small illumination variations.

(a) Orig. Palmprint (b) Palm vcode (c) Palm hcode
Figure 6.4: Original and Transformed (vcode, hcode) for Palmprint ROI

6.3 Palmprint Feature Extraction and Matching

The feature extraction and matching algorithm used for knuckleprints in Section 5.4 is also used for palmprints, since both possess similar types of features. In order to match two palmprint ROIs, corner features [63] are extracted from both the vcode and the hcode and are tracked in the corresponding images using the KL tracking [44] algorithm. The performance of the KL tracking algorithm is assumed to be good for genuine matchings and poor for imposter matchings. Hence, the dissimilarity measure CIOF (Corners having Inconsistent Optical Flow) can be used to estimate the performance of the KL tracking and thereby differentiate between genuine and imposter matchings. The steps applied over a raw palmprint image for its recognition are shown in Fig. 6.5.

(a) Original (Raw) (b) Localized (c) Cropped (d) Uni. Illu. (e) Enhanced (f) vcode (g) hcode (h) Features
Figure 6.5: Steps of the Palmprint Recognition System

Corners having Inconsistent Optical Flow (CIOF)

All corners are tracked using KL-tracking. The direction in which a corner has moved is termed as its optical flow direction. The dissimilarity measure CIOF (Corners having Inconsistent Optical Flow) has been proposed to estimate the KL-tracking performance. It is defined using three constraints, viz. vicinity, patch-wise dissimilarity and correlation bound, defined as:

[a] Vicinity Constraint: The Euclidean distance between any corner and its estimated tracked location should not be more than an empirically selected threshold T_d. The parameter T_d depends upon the amount of translation and rotation in the sample images; a high T_d signifies more translation and vice versa.

[b] Patch-wise Dissimilarity: The tracking error is defined as the pixel-wise sum of absolute differences between a local patch centered at the current corner and the patch at its estimated tracked location. This error should not be more than an empirically selected threshold T_e. The parameter T_e ensures that matching corners have similar neighborhood patches around them.

[c] Correlation Bound: The phase only correlation (POC) [47] between a local patch centered at any feature and the patch at its estimated tracked location should be at least equal to an empirically selected threshold T_cb.

This bound ensures that the local patches around each potential matching feature pair are correlated.

Given the vcode and hcode of palm_a and palm_b, Algorithm 6.2 is used to compute a dissimilarity score using the CIOF measure. The vcodes are matched to obtain the vertical matching score, while the respective hcodes are matched to generate the horizontal matching score. The corners whose tracked positions satisfy the above mentioned three constraints (viz. vicinity constraint, patch-wise dissimilarity and correlation bound) are considered as successfully tracked. Consistent global corner optical flow is further used to prune out some of the falsely matched corners, as discussed earlier: the estimated optical flow direction (i.e. the angle in degrees) of each potential matching pair is quantized into eight directions and the most consistent direction is chosen; any corner matching pair with a different optical flow direction is discarded. Finally, the ratio of unsuccessfully tracked corners to the total number of corners is considered as the dissimilarity measure between the two palmprints.

6.4 Database

The proposed system has been tested on two publicly available databases, viz. CASIA [15] and PolyU [57]. All possible inter-session matchings are performed to analyse the system.

CASIA [15]: The CASIA palmprint database has 5502 palmprints taken from 312 subjects (i.e. 624 distinct palms, counting the left and right palms separately). Around 8 images are collected from both hands of each subject. The acquisition device uses a CMOS based sensor and is peg free; hence anyone can place his hand easily. However, a few subjects have less than 8 images and they are discarded from the experiment.

Algorithm 6.2 CIOF(palm_a, palm_b)
Require: (a) The vcodes I^v_A, I^v_B of the two palmprint images palm_a and palm_b respectively. (b) The hcodes I^h_A, I^h_B of the two palmprint images. (c) N^v_a, N^v_b, N^h_a and N^h_b, the number of corners in I^v_A, I^v_B, I^h_A and I^h_B respectively.
Ensure: Return CIOF(palm_a, palm_b).
1: Track all the corners of vcode I^v_A in vcode I^v_B and those of hcode I^h_A in hcode I^h_B.
2: Obtain the sets of corners successfully tracked in the vcode tracking (stc^v_AB) and in the hcode tracking (stc^h_AB), i.e. those whose tracked position lies within T_d, whose local patch dissimilarity is under T_e and whose patch-wise correlation is at least T_cb.
3: Similarly compute the successfully tracked corners of vcode I^v_B in vcode I^v_A (stc^v_BA) as well as of hcode I^h_B in hcode I^h_A (stc^h_BA).
4: Quantize the optical flow direction of each successfully tracked corner into eight directions (i.e. at equal angular intervals) and obtain the 4 histograms H^v_AB, H^h_AB, H^v_BA and H^h_BA using the four corner sets stc^v_AB, stc^h_AB, stc^v_BA and stc^h_BA respectively.
5: For each histogram, out of the 8 bins, the bin (direction) having the maximum number of corners is considered as the consistent optical flow direction. The maximum value obtained from each histogram is termed as the count of corners having consistent optical flow, represented as cof^v_AB, cof^h_AB, cof^v_BA and cof^h_BA.
6: ciof^v_AB = 1 - cof^v_AB / N^v_a   [Corners with Inconsistent Optical Flow (vcode)]
7: ciof^v_BA = 1 - cof^v_BA / N^v_b   [Corners with Inconsistent Optical Flow (vcode)]
8: ciof^h_AB = 1 - cof^h_AB / N^h_a   [Corners with Inconsistent Optical Flow (hcode)]
9: ciof^h_BA = 1 - cof^h_BA / N^h_b   [Corners with Inconsistent Optical Flow (hcode)]
10: return CIOF(palm_a, palm_b) = (ciof^v_AB + ciof^h_AB + ciof^v_BA + ciof^h_BA) / 4

PolyU [57]: The PolyU palmprint database has 7752 palmprints of 193 subjects (i.e. 386 distinct palms, counting the left and right palms separately). From each subject, 20 images are collected from both hands in two sessions (10 images per session). The acquisition device uses a CCD based sensor with a spatial resolution of 75 dots per inch, but it contains pegs; hence the user has to place his hand accordingly.

There are some palms in both the CASIA and PolyU databases with incomplete or missing data. These palms are also discarded for this experiment. Some of the sample images taken from both databases are shown in Fig. 6.6 and the specifications of each database are given in Table 6.1.

(a) Casia - A (b) Casia - B (c) Casia - C (d) Casia - D (e) Casia - E
(f) PolyU - A (g) PolyU - B (h) PolyU - C (i) PolyU - D (j) PolyU - E
Figure 6.6: Sample Palmprint Images from the Casia and PolyU Databases

Table 6.1: Database Specifications

Database                             | Subject   | Pose | Total | Training | Testing | Genuine Matchings | Imposter Matchings
Casia (Palmprint, Left hand)         | 290 Palms | 8    | 2,320 | First 4  | Last 4  | 4,640             | 1,340,960
Casia (Palmprint, Right hand)        | 276 Palms | 8    | 2,208 | First 4  | Last 4  | 4,416             | 1,214,400
Casia (Palmprint, Left + Right hand) | 566 Palms | 8    | 4,528 | First 4  | Last 4  | 9,056             | 5,116,640
PolyU (Palmprint, Left hand)         | 193 Palms | 20   | 3,860 | First 10 | Last 10 | 19,300            | 3,705,600
PolyU (Palmprint, Right hand)        | 193 Palms | 20   | 3,860 | First 10 | Last 10 | 19,300            | 3,705,600
PolyU (Palmprint, Left + Right hand) | 386 Palms | 20   | 7,720 | First 10 | Last 10 | 38,600            | 14,861,000

Testing is done over the left and the right palm images of both databases to analyse the hand-wise system performance.

Testing Strategy

All inter-session matchings are done for performance analysis. The first four images of the CASIA [15] palmprint database are taken for training while the rest of them are kept for testing; hence there are 9,056 genuine and 5,116,640 imposter matchings. For the PolyU [57] palmprint database, the first ten images are used for training while the rest are taken for testing; hence a total of 38,600 genuine and 14,861,000 imposter matchings are performed. The databases along with the specifications of the testing strategy are given in Table 6.1. One can observe that a large number of genuine as well as imposter matchings are considered to evaluate the performance of the proposed system.

6.5 Performance Analysis

The palmprint ROIs are extracted using the algorithm proposed in [9]. It is observed that the segmentation accuracy depends upon the key point extraction; in several images of both databases, the algorithm fails to locate the key points automatically. For those images, the key points are marked manually and segmentation is then performed. The threshold values for which the proposed palmprint system achieves the maximum CRR and minimum EER are T_e = 750 with a patch size of 5 x 5, T_d = 18 and T_cb = 0.4 for both the CASIA and PolyU palmprint databases.

The graphs shown in Fig. 6.7 and Fig. 6.8 show the performance achieved by fusing vcode and hcode information as compared with only vcode matchings, for both palmprint databases.

CASIA Database: In Fig. 6.7, the ROC characteristics for all the different categories (i.e. Left, Right and All) of the CASIA palmprint database are shown. Table 6.1 shows the total number of genuine as well as imposter matchings. One can see that the proposed system performs equally well over all three categories of the CASIA database and shows a huge performance boost after the fusion of vcode and hcode.

Figure 6.7: Vertical and Horizontal Information Fusion based Performance Boost-up for the CASIA Palmprint Database (X axis in log scale)

The achieved EER values, along with a CRR of 100%, over the Left, Right and All categories of the CASIA palmprint database are given in Table 6.2.

Table 6.2: Palmprint Results (L = LEFT Hand, R = RIGHT Hand, ALL = L + R)

Hand Information                    | d' | CRR | EER
PALM (CASIA) LEFT HAND              |    |     |
PALM (CASIA) RIGHT HAND             |    |     |
PALM (CASIA) FULL DB (LEFT + RIGHT) |    |     |
PALM (POLYU) LEFT HAND              |    |     |
PALM (POLYU) RIGHT HAND             |    |     |
PALM (POLYU) FULL DB (LEFT + RIGHT) |    |     |
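Table 6.2 reports the decidability index d' alongside CRR and EER. The sketch below computes the standard decidability index between the genuine and imposter score distributions; it is assumed (not stated here) that the thesis follows this usual definition.

    import numpy as np

    def decidability_index(genuine, imposter):
        """Standard decidability index between two score distributions:
        d' = |mu_g - mu_i| / sqrt((var_g + var_i) / 2)."""
        mu_g, mu_i = np.mean(genuine), np.mean(imposter)
        var_g, var_i = np.var(genuine), np.var(imposter)
        return abs(mu_g - mu_i) / np.sqrt((var_g + var_i) / 2.0)

A larger d' means the genuine and imposter score distributions are better separated, which is why it is a useful summary when the EER alone is near zero.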

Figure 6.8: Vertical and Horizontal Information Fusion based Performance Boost-up for the PolyU Palmprint Database (X axis in log scale)

PolyU Database: In Fig. 6.8, the ROC characteristics for the various PolyU palmprint database categories are shown. One can infer that the overall system performance with respect to EER is very good, although it is poorer than that over the CASIA database. This is because the ROI cropping algorithm fails to segment accurately for some subjects. Also, the PolyU database is subject-wise much bigger than the CASIA database; hence a huge number of imposter and genuine matchings are performed for the PolyU database (as shown in Table 6.1). The achieved EER values, with a CRR of more than 99.89%, over the Left, Right and All categories of the PolyU palmprint database are given in Table 6.2. A huge performance boost after fusion has been observed. The performance of the system when the right palm is used is observed to be better, mainly because most of the subjects are right handed and hence are comfortable providing their right hand data.

The generalized experimental analysis reveals that vcode is more discriminative than hcode, mainly because palms have mostly vertical edges. But for the matchings where vcode fails to discriminate, the hcode information is used to enhance the system performance, as it can reduce the number of falsely accepted imposters.

Table 6.3: Comparative Performance with Other Systems

Approach         | Database     | CRR % | EER %
PalmCode [76]    | Palm (CASIA) |       |
PalmCode [76]    | Palm (PolyU) |       |
CompCode [35]    | Palm (CASIA) |       |
CompCode [35]    | Palm (PolyU) |       |
OrdinalCode [65] | Palm (CASIA) |       |
OrdinalCode [65] | Palm (PolyU) |       |
Palm-Zernike [7] | Palm (CASIA) |       |
Palm-Zernike [7] | Palm (PolyU) |       |
Proposed         | Palm (CASIA) |       |
Proposed         | Palm (PolyU) |       |

For both databases, one can observe a significant performance boost by fusing the vcode and hcode scores. The analysis suggests that the fusion effectively reduces the false acceptance rate and hence significantly enhances the system performance. The proposed palmprint recognition system is compared with some well known systems [7], [35], [65], [76] in Table 6.3. It can be seen that the EER of the proposed fusion (vcode and hcode) over the CASIA palmprint database is better than that of all of these systems. But for the PolyU database, the performance of the Ordinal code [65] is found to be better than that of all the other systems. This is due to the huge number of genuine/imposter matchings, as shown in Table 6.1; also, some PolyU images are cropped inaccurately due to translation and illumination variation. Hence, the overall performance of the proposed system is found to be better than or comparable with most of the known systems.

Chapter 7
Multimodal Based Recognition System

In this chapter, the details of the proposed multi-modal recognition systems are discussed. The major focus is on the creation of multimodal databases and the corresponding experimental results. Several multimodal systems have been proposed, viz. iris and knuckleprint, knuckleprint and palmprint, and finally iris, knuckleprint and palmprint. Testing strategies such as inter-session matching, one training and one testing, and multiple training and multiple testing are adopted. Two publicly available iris databases (CASIA Interval [15] and LAMP [14]) are fused with two public palmprint databases (CASIA [15], PolyU [57]), while for knuckleprint the publicly available PolyU [57] database is used.

Threshold Selection: In all the proposed multimodal systems, the iris, knuckleprint and palmprint matchings are performed using the proposed CIOF dissimilarity measure. The threshold values (such as T_d, T_e, T_cb) used for each database are obtained by optimizing the proposed system, in terms of performance, over a validation set of that database that considers only the first few subjects, as explained in Chapters 4, 5 and 6.
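The multimodal systems in this chapter combine the per-trait CIOF dissimilarities at the score level (Section 7.1). Since the exact fusion rule is not restated here, the sketch below shows a generic (optionally weighted) mean of trait scores; the weights and the plain-mean default are assumptions of this sketch, not the thesis' tuned rule.

    import numpy as np

    def fused_score(trait_scores, weights=None):
        """Hypothetical score-level fusion: each trait contributes one CIOF
        dissimilarity in [0, 1]; a (possibly weighted) mean is returned as
        the fused dissimilarity."""
        s = np.asarray(trait_scores, dtype=float)
        w = np.ones_like(s) if weights is None else np.asarray(weights, dtype=float)
        return float(np.sum(w * s) / np.sum(w))

    # example: palm and knuckle CIOF scores for one claimed identity;
    # the claim is accepted if the fused score falls below the operating threshold
    # fused_score([0.12, 0.20]) -> 0.16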

7.1 Knuckleprint and Palmprint Fusion

In this section, knuckleprint and palmprint images are considered for authentication by fusing them at the score level. The proposed system is tested over various combinations of two publicly available benchmark palmprint databases, CASIA [15] and PolyU [57], along with the largest publicly available PolyU knuckleprint database [75]. The CASIA palmprint database has 5502 palmprint images taken from 312 subjects (i.e. 624 distinct palms); eight images are collected from both hands of each subject. The PolyU palmprint database has 7752 palmprint images of 193 subjects (i.e. 386 distinct palms); from each subject, 20 images are collected from both hands in two sessions (10 images per session). The PolyU knuckleprint database consists of 7920 knuckleprint images taken from 165 subjects; each subject has given 6 knuckleprint images of the left index (LI), left middle (LM), right index (RI) and right middle (RM) fingers in two sessions (i.e. 660 distinct knuckles). There are some palms in both the CASIA and PolyU databases with incomplete or missing data; such palms are discarded for this experiment. Some of the sample images taken from each database are shown in Fig. 7.1 and the detailed database specifications are given in Table 7.1.

Multi-modal Databases Creation

Eight multi-modal databases (viz. A1 to A8) consisting of palm and knuckleprint images are constructed as defined below. Specifications of these databases are given in Table 7.1. In A1 and A2, only two modalities are fused, while in A4, A5, A7 and A8 three modalities are considered for fusion. In A3 and A6, six modalities are fused to analyze the performance boost-up.

(a) CASIA A (b) CASIA B (c) CASIA C (d) CASIA D (e) CASIA E
(f) PolyU A (g) PolyU B (h) PolyU C (i) PolyU D (j) PolyU E
(k) PolyU A (l) PolyU B (m) PolyU C (n) PolyU D (o) PolyU E
Figure 7.1: Sample Images from all the Databases (Five Distinct Users)

Databases - A1 and A2: The A1 dataset is created by considering the 566 palm subjects of the CASIA database along with the first 566 knuckle subjects (out of 660) from the PolyU knuckleprint database. The CASIA palmprint database consists of 8 images per subject while the knuckleprint database consists of 12 images per subject; hence, to make the A1 database consistent, the first and last 4 images per subject are considered so as to ensure inter-session matching. Therefore, the A1 dataset has 4528 palm and knuckleprint images collected from 566 distinct palms and knuckles, with 8 palm as well as 8 knuckle images per subject. The first 4 images per trait of every subject are considered as training images and the last 4 images are considered as testing images. In a similar way, the A2 dataset has been constructed by considering all 386 palms from the PolyU palmprint database and only the first 386 (out of 660) knuckles from the PolyU knuckleprint database. The PolyU palmprint database consists of 20 images per subject while the knuckleprint database consists of 12 images per subject; to make the A2 database consistent, the first and last 6 images per subject are considered so as to ensure inter-session matching. The first 6 images per trait of every subject are considered as training images and the last 6 images are considered as testing images.
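To make the construction of such chimeric (virtual) multimodal subjects concrete, the following is a small illustrative sketch of an A1-style pairing: the i-th palm subject is paired with the i-th knuckle subject, and the first/last images of each trait form the training/testing splits. The helper name, the data layout (ordered lists of image paths per subject, first session first) and the exact pairing policy are assumptions of this sketch.

    def build_virtual_subjects(palm_samples, knuckle_samples, per_trait=8):
        """Assemble chimeric multimodal subjects by pairing the i-th palm
        subject with the i-th knuckle subject (hypothetical layout).
        Knuckle subjects with 12 images contribute their first and last 4,
        matching the 8 palm images; first half = training, second half = testing."""
        subjects, half = [], per_trait // 2
        for i in range(min(len(palm_samples), len(knuckle_samples))):
            palm = palm_samples[i][:per_trait]
            knuckle = knuckle_samples[i][:half] + knuckle_samples[i][-half:]
            subjects.append({
                "train": {"palm": palm[:half], "knuckle": knuckle[:half]},
                "test":  {"palm": palm[half:], "knuckle": knuckle[half:]},
            })
        return subjects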

Databases - A3 and A6: The databases A3 and A6 are constructed by considering all 6 distinct traits per person, viz. the left and right palms (LP, RP) along with the left index, left middle, right index and right middle (LI, LM, RI, RM) knuckleprint images, where the palmprint images are taken from the CASIA and PolyU palmprint databases respectively. All four distinct knuckleprints collected from the 165 subjects of the knuckleprint database, along with only the first 165 subjects from the CASIA and PolyU palm databases, are used to obtain A3 and A6 respectively. The first and last 4 images are considered as training and testing images respectively for the A3 database, while for the A6 database the first and last 6 images are used for training and testing respectively.

Databases - A4 and A7: These databases are constructed by considering 3 distinct traits per subject, viz. the left and right palms (LP, RP) from the CASIA and PolyU palmprint databases along with the knuckleprints of the first 276 and 193 subjects (out of 660) respectively. The first and last 4 images are considered as training and testing images respectively for the A4 database, while for the A7 database the first and last 6 images are used for training and testing respectively.

Databases - A5 and A8: These databases also consider 3 distinct traits, which include all 330 knuckles from the left and right hands along with the palmprint data of the first 330 palms from the CASIA and PolyU palmprint databases respectively. The first and last 4 images are considered as training and testing images respectively for the A5 database, while for the A8 database the first and last 6 images are used for training and testing respectively.

Testing Strategy

The proposed fusion strategy is evaluated over the databases A1 - A8 using inter-session matchings, defined as follows. For each database, only inter-session matchings are performed, to ensure that the two images participating in any matching are temporally distant. The detailed testing specifications for all the databases are given in Table 7.2.

Table 7.1: Database Specifications (L = LEFT Hand, R = RIGHT Hand, ALL = L + R)

UNIMODAL
Dataset                                | Sub | Pos | Total
PALM (CASIA) LEFT HAND                 |     |     |
PALM (CASIA) RIGHT HAND                |     |     |
PALM (CASIA) FULL DB (LEFT + RIGHT)    |     |     |
PALM (POLYU) LEFT HAND                 |     |     |
PALM (POLYU) RIGHT HAND                |     |     |
PALM (POLYU) FULL DB (LEFT + RIGHT)    |     |     |
KNUCKLE (POLYU) LEFT HAND              |     |     |
KNUCKLE (POLYU) RIGHT HAND             |     |     |
KNUCKLE (POLYU) FULL DB (LEFT + RIGHT) |     |     |

MULTIMODAL
Dataset | Traits Fused | Fusion Specification                                                                                                        | Total
A1      | 2            | ALL.PALM(CASIA), ALL.KNUCKLE(POLYU)                                                                                         | 9,056
A2      | 2            | ALL.PALM(POLYU), ALL.KNUCKLE(POLYU)                                                                                         | 9,264
A3      | 6            | L.PALM(CASIA), R.PALM(CASIA), L.INDEXKNUCKLE(POLYU), L.MIDDLEKNUCKLE(POLYU), R.INDEXKNUCKLE(POLYU), R.MIDDLEKNUCKLE(POLYU)  | 7,920
A4      | 3            | L.PALM(CASIA), R.PALM(CASIA), ALL.KNUCKLE(POLYU)                                                                            |
A5      | 3            | ALL.PALM(CASIA), L.KNUCKLE(POLYU)[LI+LM], R.KNUCKLE(POLYU)[RI+RM]                                                           |
A6      | 6            | L.PALM(POLYU), R.PALM(POLYU), L.INDEXKNUCKLE(POLYU), L.MIDDLEKNUCKLE(POLYU), R.INDEXKNUCKLE(POLYU), R.MIDDLEKNUCKLE(POLYU)  |
A7      | 3            | L.PALM(POLYU), R.PALM(POLYU), ALL.KNUCKLE(POLYU)                                                                            |
A8      | 3            | ALL.PALM(POLYU), L.KNUCKLE(POLYU)[LI+LM], R.KNUCKLE(POLYU)[RI+RM]                                                           |

A matching is termed as a genuine matching if both of the constituting images are of the same subject; otherwise, it is termed as an imposter matching.

The number of imposter matchings considered for the performance analysis of the proposed system ranges from half a million to 16 million, along with genuine matchings ranging from 3 to 40 thousand. One can observe that a huge number of matchings is considered to compute the performance parameters of the system. This testing strategy is adopted because most of the state-of-the-art palmprint and knuckleprint systems use the same strategy for the performance analysis of their systems.

Performance Analysis

All the results obtained for the unimodal and multi-modal systems are presented in Table 7.3. It has been found that the proposed multimodal system performs much better than the unimodal systems, which clearly justifies the fusion strategy. The CRR of the proposed multi-modal system, together with an EER of less than 0.01% over all eight multi-modal databases, is much better than that of the corresponding unimodal systems. Also, the performance of the proposed system is observed to be better than that of other state-of-the-art multimodal biometric systems [64], [58], [81], mainly because they have fused face and iris; their performance is restricted primarily by several face-specific issues and challenges (such as pose, expression, aging, illumination etc.). In this work, palmprint and knuckleprint are considered for fusion as both of them have a huge amount of unique and discriminative texture information that complements each other. Also, both of them are hand based, which reduces the data acquisition time and the required user cooperation, and increases the user acceptance and data quality.

The EER of the proposed multi-modal system for all 8 multi-modal databases (A1 to A8) is found to be either zero or very low, as given in Table 7.3. Hence, all the ROC curves look very similar to each other. Generally, for such highly accurate systems, in order to draw any significant conclusion, the decidability index d' and the separation between the genuine and imposter score distributions are analyzed (Fig. 7.2).

Table 7.2: Testing Strategy Specifications (L = LEFT, R = RIGHT, ALL = L + R)

UNIMODAL
Dataset                                | Training Images | Testing Images | Genuine Matches | Imposter Matches
PALM (CASIA) LEFT HAND                 | 1,160           | 1,160          |                 |
PALM (CASIA) RIGHT HAND                | 1,104           | 1,104          |                 |
PALM (CASIA) FULL DB (LEFT + RIGHT)    | 2,264           | 2,264          |                 |
PALM (POLYU) LEFT HAND                 | 1,930           | 1,930          |                 |
PALM (POLYU) RIGHT HAND                | 1,930           | 1,930          |                 |
PALM (POLYU) FULL DB (LEFT + RIGHT)    | 3,860           | 3,860          |                 |
KNUCKLE (POLYU) LEFT HAND              | 1,980           | 1,980          |                 |
KNUCKLE (POLYU) RIGHT HAND             | 1,980           | 1,980          |                 |
KNUCKLE (POLYU) FULL DB (LEFT + RIGHT) | 3,960           | 3,960          |                 |

MULTIMODAL
Dataset | Traits Fused | Fusion Specification                                                                                                        | Training Images | Testing Images | Genuine Matches | Imposter Matches
A1      | 2            | ALL.PALM(CASIA), ALL.KNUCKLE(POLYU)                                                                                         | 2,264           | 2,264          |                 |
A2      | 2            | ALL.PALM(POLYU), ALL.KNUCKLE(POLYU)                                                                                         | 2,316           | 2,316          |                 |
A3      | 6            | L.PALM(CASIA), R.PALM(CASIA), L.INDEXKNUCKLE(POLYU), L.MIDDLEKNUCKLE(POLYU), R.INDEXKNUCKLE(POLYU), R.MIDDLEKNUCKLE(POLYU)  | 660             | 660            |                 |
A4      | 3            | L.PALM(CASIA), R.PALM(CASIA), ALL.KNUCKLE(POLYU)                                                                            |                 |                |                 |
A5      | 3            | ALL.PALM(CASIA), L.KNUCKLE(POLYU)[LI+LM], R.KNUCKLE(POLYU)[RI+RM]                                                           |                 |                |                 |
A6      | 6            | L.PALM(POLYU), R.PALM(POLYU), L.INDEXKNUCKLE(POLYU), L.MIDDLEKNUCKLE(POLYU), R.INDEXKNUCKLE(POLYU), R.MIDDLEKNUCKLE(POLYU)  |                 |                |                 |
A7      | 3            | L.PALM(POLYU), R.PALM(POLYU), ALL.KNUCKLE(POLYU)                                                                            |                 |                |                 |
A8      | 3            | ALL.PALM(POLYU), L.KNUCKLE(POLYU)[LI+LM], R.KNUCKLE(POLYU)[RI+RM]                                                           |                 |                |                 |

Table 7.3: Consolidated Results (L = LEFT Hand, R = RIGHT Hand, ALL = L + R)

UNIMODAL
Dataset                                | d' | CRR | EER
PALM (CASIA) LEFT HAND                 |    |     |
PALM (CASIA) RIGHT HAND                |    |     |
PALM (CASIA) FULL DB (LEFT + RIGHT)    |    |     |
PALM (POLYU) LEFT HAND                 |    |     |
PALM (POLYU) RIGHT HAND                |    |     |
PALM (POLYU) FULL DB (LEFT + RIGHT)    |    |     |
KNUCKLE (POLYU) LEFT HAND              |    |     |
KNUCKLE (POLYU) RIGHT HAND             |    |     |
KNUCKLE (POLYU) FULL DB (LEFT + RIGHT) |    |     |

MULTIMODAL
Dataset | Traits Fused | Fusion Specification                                                                                                        | d' | CRR | EER
A1      | 2            | ALL.PALM(CASIA), ALL.KNUCKLE(POLYU)                                                                                         |    |     |
A2      | 2            | ALL.PALM(POLYU), ALL.KNUCKLE(POLYU)                                                                                         |    |     |
A3      | 6            | L.PALM(CASIA), R.PALM(CASIA), L.INDEXKNUCKLE(POLYU), L.MIDDLEKNUCKLE(POLYU), R.INDEXKNUCKLE(POLYU), R.MIDDLEKNUCKLE(POLYU)  |    |     |
A4      | 3            | L.PALM(CASIA), R.PALM(CASIA), ALL.KNUCKLE(POLYU)                                                                            |    |     |
A5      | 3            | ALL.PALM(CASIA), L.KNUCKLE(POLYU)[LI+LM], R.KNUCKLE(POLYU)[RI+RM]                                                           |    |     |
A6      | 6            | L.PALM(POLYU), R.PALM(POLYU), L.INDEXKNUCKLE(POLYU), L.MIDDLEKNUCKLE(POLYU), R.INDEXKNUCKLE(POLYU), R.MIDDLEKNUCKLE(POLYU)  |    |     |
A7      | 3            | L.PALM(POLYU), R.PALM(POLYU), ALL.KNUCKLE(POLYU)                                                                            |    |     |
A8      | 3            | ALL.PALM(POLYU), L.KNUCKLE(POLYU)[LI+LM], R.KNUCKLE(POLYU)[RI+RM]                                                           |    |     |

(a) A1 (b) A2 (c) A3 (d) A4 (e) A5 (f) A6 (g) A7 (h) A8
Figure 7.2: Genuine vs Imposter Best Matching Score Separation Graphs


More information

Improved Iris Segmentation Algorithm without Normalization Phase

Improved Iris Segmentation Algorithm without Normalization Phase Improved Iris Segmentation Algorithm without Normalization Phase R. P. Ramkumar #1, Dr. S. Arumugam *2 # Assistant Professor, Mahendra Institute of Technology Namakkal District, Tamilnadu, India 1 rprkvishnu@gmail.com

More information

Haresh D. Chande #, Zankhana H. Shah *

Haresh D. Chande #, Zankhana H. Shah * Illumination Invariant Face Recognition System Haresh D. Chande #, Zankhana H. Shah * # Computer Engineering Department, Birla Vishvakarma Mahavidyalaya, Gujarat Technological University, India * Information

More information

Figure 1. Example sample for fabric mask. In the second column, the mask is worn on the face. The picture is taken from [5].

Figure 1. Example sample for fabric mask. In the second column, the mask is worn on the face. The picture is taken from [5]. ON THE VULNERABILITY OF FACE RECOGNITION SYSTEMS TO SPOOFING MASK ATTACKS Neslihan Kose, Jean-Luc Dugelay Multimedia Department, EURECOM, Sophia-Antipolis, France {neslihan.kose, jean-luc.dugelay}@eurecom.fr

More information

Performance Improvement of Phase-Based Correspondence Matching for Palmprint Recognition

Performance Improvement of Phase-Based Correspondence Matching for Palmprint Recognition Performance Improvement of Phase-Based Correspondence Matching for Palmprint Recognition Vincent Roux Institut Supérieur d électronique de Paris,, rue Notre Dame des Champs, 76 Paris, France vincent.roux@isep.fr

More information

ALGORITHM FOR BIOMETRIC DETECTION APPLICATION TO IRIS

ALGORITHM FOR BIOMETRIC DETECTION APPLICATION TO IRIS ALGORITHM FOR BIOMETRIC DETECTION APPLICATION TO IRIS Amulya Varshney 1, Dr. Asha Rani 2, Prof Vijander Singh 3 1 PG Scholar, Instrumentation and Control Engineering Division NSIT Sec-3, Dwarka, New Delhi,

More information

Multimodal Biometric Security using Evolutionary Computation

Multimodal Biometric Security using Evolutionary Computation Multimodal Biometric Security using Evolutionary Computation A thesis submitted to Department of Computer Science & Software Engineering for the Partial Fulfillment of the Requirement of MS (CS) Degree

More information

Graph Geometric Approach and Bow Region Based Finger Knuckle Biometric Identification System

Graph Geometric Approach and Bow Region Based Finger Knuckle Biometric Identification System _ Graph Geometric Approach and Bow Region Based Finger Knuckle Biometric Identification System K.Ramaraj 1, T.Ummal Sariba Begum 2 Research scholar, Assistant Professor in Computer Science, Thanthai Hans

More information

CHAPTER 5 PALMPRINT RECOGNITION WITH ENHANCEMENT

CHAPTER 5 PALMPRINT RECOGNITION WITH ENHANCEMENT 145 CHAPTER 5 PALMPRINT RECOGNITION WITH ENHANCEMENT 5.1 INTRODUCTION This chapter discusses the application of enhancement technique in palmprint recognition system. Section 5.2 describes image sharpening

More information

K-Nearest Neighbor Classification Approach for Face and Fingerprint at Feature Level Fusion

K-Nearest Neighbor Classification Approach for Face and Fingerprint at Feature Level Fusion K-Nearest Neighbor Classification Approach for Face and Fingerprint at Feature Level Fusion Dhriti PEC University of Technology Chandigarh India Manvjeet Kaur PEC University of Technology Chandigarh India

More information

Palmprint Recognition Using Transform Domain and Spatial Domain Techniques

Palmprint Recognition Using Transform Domain and Spatial Domain Techniques Palmprint Recognition Using Transform Domain and Spatial Domain Techniques Jayshri P. Patil 1, Chhaya Nayak 2 1# P. G. Student, M. Tech. Computer Science and Engineering, 2* HOD, M. Tech. Computer Science

More information

Computationally Efficient Serial Combination of Rotation-invariant and Rotation Compensating Iris Recognition Algorithms

Computationally Efficient Serial Combination of Rotation-invariant and Rotation Compensating Iris Recognition Algorithms Computationally Efficient Serial Combination of Rotation-invariant and Rotation Compensating Iris Recognition Algorithms Andreas Uhl Department of Computer Sciences University of Salzburg, Austria uhl@cosy.sbg.ac.at

More information

1.1 Performances of a Biometric System

1.1 Performances of a Biometric System Performance Analysis of Multimodal Biometric Based Authentication System Mrs.R.Manju, a Mr.A.Rajendran a,dr.a.shajin Narguna b, a Asst.Prof, Department Of EIE,Noorul Islam University, a Lecturer, Department

More information

EFFECTIVE METHODOLOGY FOR DETECTING AND PREVENTING FACE SPOOFING ATTACKS

EFFECTIVE METHODOLOGY FOR DETECTING AND PREVENTING FACE SPOOFING ATTACKS EFFECTIVE METHODOLOGY FOR DETECTING AND PREVENTING FACE SPOOFING ATTACKS 1 Mr. Kaustubh D.Vishnu, 2 Dr. R.D. Raut, 3 Dr. V. M. Thakare 1,2,3 SGBAU, Amravati,Maharashtra, (India) ABSTRACT Biometric system

More information

PALM PRINT RECOGNITION AND AUTHENTICATION USING DIGITAL IMAGE PROCESSSING TECHNIQUE

PALM PRINT RECOGNITION AND AUTHENTICATION USING DIGITAL IMAGE PROCESSSING TECHNIQUE PALM PRINT RECOGNITION AND AUTHENTICATION USING DIGITAL IMAGE PROCESSSING TECHNIQUE Prof.V.R.Raut 1, Prof.Ms.S.S.Kukde 2, Shraddha S. Pande 3 3 Student of M.E Department of Electronics and telecommunication,

More information

World Journal of Engineering Research and Technology WJERT

World Journal of Engineering Research and Technology WJERT wjert, 2018, Vol. 4, Issue 4, 160-167. Review Article ISSN 2454-695X Titilayo et al. WJERT www.wjert.org SJIF Impact Factor: 5.218 COMPARATIVE ANALYSIS OF FACE, IRIS AND FINGERPRINT RECOGNITION SYSTEMS

More information

Fingerprint Retrieval Based on Database Clustering

Fingerprint Retrieval Based on Database Clustering Fingerprint Retrieval Based on Database Clustering Liu Manhua School of Electrical & Electronic Engineering A thesis submitted to the Nanyang Technological University in fulfillment of the requirements

More information

Tutorial 1. Jun Xu, Teaching Asistant January 26, COMP4134 Biometrics Authentication

Tutorial 1. Jun Xu, Teaching Asistant January 26, COMP4134 Biometrics Authentication Tutorial 1 Jun Xu, Teaching Asistant csjunxu@comp.polyu.edu.hk COMP4134 Biometrics Authentication January 26, 2017 Table of Contents Problems Problem 1: Answer the following questions Problem 2: Biometric

More information

Periocular Biometrics: When Iris Recognition Fails

Periocular Biometrics: When Iris Recognition Fails Periocular Biometrics: When Iris Recognition Fails Samarth Bharadwaj, Himanshu S. Bhatt, Mayank Vatsa and Richa Singh Abstract The performance of iris recognition is affected if iris is captured at a distance.

More information

Dr. Enrique Cabello Pardos July

Dr. Enrique Cabello Pardos July Dr. Enrique Cabello Pardos July 20 2011 Dr. Enrique Cabello Pardos July 20 2011 ONCE UPON A TIME, AT THE LABORATORY Research Center Contract Make it possible. (as fast as possible) Use the best equipment.

More information

Fusion of Hand Geometry and Palmprint Biometrics

Fusion of Hand Geometry and Palmprint Biometrics (Working Paper, Dec. 2003) Fusion of Hand Geometry and Palmprint Biometrics D.C.M. Wong, C. Poon and H.C. Shen * Department of Computer Science, Hong Kong University of Science and Technology, Clear Water

More information

Touchless Fingerprint recognition using MATLAB

Touchless Fingerprint recognition using MATLAB International Journal of Innovation and Scientific Research ISSN 2351-814 Vol. 1 No. 2 Oct. 214, pp. 458-465 214 Innovative Space of Scientific Research Journals http://www.ijisr.issr-journals.org/ Touchless

More information

Multimodal Biometrics Information Fusion for Efficient Recognition using Weighted Method

Multimodal Biometrics Information Fusion for Efficient Recognition using Weighted Method Multimodal Biometrics Information Fusion for Efficient Recognition using Weighted Method Shalini Verma 1, Dr. R. K. Singh 2 1 M. Tech scholar, KNIT Sultanpur, Uttar Pradesh 2 Professor, Dept. of Electronics

More information

A Novel Approach to Improve the Biometric Security using Liveness Detection

A Novel Approach to Improve the Biometric Security using Liveness Detection Volume 8, No. 5, May June 2017 International Journal of Advanced Research in Computer Science RESEARCH PAPER Available Online at www.ijarcs.info ISSN No. 0976-5697 A Novel Approach to Improve the Biometric

More information

Biometrics Technology: Hand Geometry

Biometrics Technology: Hand Geometry Biometrics Technology: Hand Geometry References: [H1] Gonzeilez, S., Travieso, C.M., Alonso, J.B., and M.A. Ferrer, Automatic biometric identification system by hand geometry, Proceedings of IEEE the 37th

More information

The Impact of Diffuse Illumination on Iris Recognition

The Impact of Diffuse Illumination on Iris Recognition The Impact of Diffuse Illumination on Iris Recognition Amanda Sgroi, Kevin W. Bowyer, and Patrick J. Flynn University of Notre Dame asgroi kwb flynn @nd.edu Abstract Iris illumination typically causes

More information

CURRENT iris recognition systems claim to perform

CURRENT iris recognition systems claim to perform 1 Improving Iris Recognition Performance using Segmentation, Quality Enhancement, Match Score Fusion and Indexing Mayank Vatsa, Richa Singh, and Afzel Noore Abstract This paper proposes algorithms for

More information

Feature Extraction and Image Processing, 2 nd Edition. Contents. Preface

Feature Extraction and Image Processing, 2 nd Edition. Contents. Preface , 2 nd Edition Preface ix 1 Introduction 1 1.1 Overview 1 1.2 Human and Computer Vision 1 1.3 The Human Vision System 3 1.3.1 The Eye 4 1.3.2 The Neural System 7 1.3.3 Processing 7 1.4 Computer Vision

More information

Eyelid Position Detection Method for Mobile Iris Recognition. Gleb Odinokikh FRC CSC RAS, Moscow

Eyelid Position Detection Method for Mobile Iris Recognition. Gleb Odinokikh FRC CSC RAS, Moscow Eyelid Position Detection Method for Mobile Iris Recognition Gleb Odinokikh FRC CSC RAS, Moscow 1 Outline 1. Introduction Iris recognition with a mobile device 2. Problem statement Conventional eyelid

More information

Online and Offline Fingerprint Template Update Using Minutiae: An Experimental Comparison

Online and Offline Fingerprint Template Update Using Minutiae: An Experimental Comparison Online and Offline Fingerprint Template Update Using Minutiae: An Experimental Comparison Biagio Freni, Gian Luca Marcialis, and Fabio Roli University of Cagliari Department of Electrical and Electronic

More information

Face Recognition with Local Binary Patterns

Face Recognition with Local Binary Patterns Face Recognition with Local Binary Patterns Bachelor Assignment B.K. Julsing University of Twente Department of Electrical Engineering, Mathematics & Computer Science (EEMCS) Signals & Systems Group (SAS)

More information

An approach for Fingerprint Recognition based on Minutia Points

An approach for Fingerprint Recognition based on Minutia Points An approach for Fingerprint Recognition based on Minutia Points Vidita Patel 1, Kajal Thacker 2, Ass. Prof. Vatsal Shah 3 1 Information and Technology Department, BVM Engineering College, patelvidita05@gmail.com

More information

Towards Online Iris and Periocular Recognition under Relaxed Imaging Constraints

Towards Online Iris and Periocular Recognition under Relaxed Imaging Constraints IEEE Trans. Image Processing, 2013 Towards Online Iris and Periocular Recognition under Relaxed Imaging Constraints Chun-Wei Tan, Ajay Kumar Abstract: Online iris recognition using distantly acquired images

More information