A Comparative Evaluation of Iris and Ocular Recognition Methods on Challenging Ocular Images

Vishnu Naresh Boddeti, Carnegie Mellon University, Pittsburgh, PA 15213, naresh@cmu.edu
Jonathon M. Smereka, Carnegie Mellon University, Pittsburgh, PA 15213, jsmereka@andrew.cmu.edu
B.V.K. Vijaya Kumar, Carnegie Mellon University, Pittsburgh, PA 15213, kumar@ece.cmu.edu

Abstract

Iris recognition is believed to offer excellent recognition rates for iris images acquired under controlled conditions. However, recognition rates degrade considerably when images exhibit impairments such as off-axis gaze, partial occlusions, specular reflections and out-of-focus and motion-induced blur. In this paper, we use the recently-available face and ocular challenge set (FOCS) to investigate the comparative recognition performance gains of using ocular images (i.e., iris regions as well as the surrounding peri-ocular regions) instead of just the iris regions. A new method for ocular recognition is presented, and it is shown that the use of ocular regions leads to better recognition rates than iris recognition on the FOCS dataset. Another advantage of using ocular images for recognition is that it avoids the need for segmenting the iris images from their surrounding regions.

1. Introduction

Among the many biometric modalities, iris is of significant interest because of the excellent recognition rates shown when iris images are acquired in controlled conditions, e.g., when subjects are able to place their eyes close to the camera. But in some situations, subjects may be at a distance (e.g., 5 to 10 m) from the camera, may not be looking directly at the camera and may be moving. In such cases, the resulting iris images exhibit impairments such as blur induced by motion and defocus, specular reflections, partial occlusions due to eyelids and eye lashes and non-frontal

This work is sponsored under IARPA BAA 09-02 through the Army Research Laboratory and was accomplished under Cooperative Agreement Number W911NF-10-2-0013.
The views and conclusions contained in this document are those of the authors and should not be interpreted as representing official policies, either expressed or implied, of IARPA, the Army Research Laboratory, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.

gaze. The performance of conventional iris recognition algorithms degrades in the presence of such impairments for two main reasons: segmenting the iris from its surrounding regions (pupil and sclera) becomes harder, and the nonlinear deformations and occlusions between the probe and the gallery images make it difficult to match the two images even after segmentation. One way to improve the recognition performance in the presence of the above impairments is to use ocular images, i.e., to use iris regions as well as the surrounding regions such as eyebrows, skin texture, etc. Inclusion of the additional regions may help by providing more clues about an individual's identity, and some regions may be in better focus than others, providing more usable information. Another advantage of using ocular regions is that we may be able to avoid the step of segmenting the iris from its surrounding regions, which is often an error-prone and computationally demanding task. So, the goal of this paper is a comparative evaluation of iris recognition and ocular image recognition methods on a common database of challenging ocular images. We use the recently-available face and ocular challenge set (FOCS) [1], containing 9581 images from 136 subjects, as the common dataset to evaluate the algorithms. For both iris recognition and ocular recognition methods, it is important to center the probe and gallery images before they are matched. Toward this goal, we have developed a new correlation filter-based eye detector, which is applied to all images in the FOCS dataset to center them.
978-1-4577-1359-0/11/$26.00 ©2011 IEEE

For iris recognition, we adapt the Chan-Vese segmentation method to extract iris regions and derive a binary code for each iris image using an optimized Gabor wavelet set. The usual Hamming distance-based comparison is used to determine whether the probe and gallery images are from the same class or not. For ocular recognition, there is no need for segmentation, and a Bayesian graphical model is designed to obtain match scores that take into account the occlusions and nonlinear deformations between the probe and the gallery images. First, the images are divided into a number of non-overlapping patches, and patch-based correlation

outputs are used to estimate the similarity of the patches as well as the nonlinear deformation between the probe and the gallery ocular images. The information provided by the patch-based correlation outputs is input to a Bayesian graphical model, which is trained using the loopy belief propagation algorithm. The trained Bayesian graphical model is used to obtain a similarity score that reflects the estimated levels of occlusion and nonlinear deformation. We show that the recognition rates for the ocular recognition method on the FOCS data set are noticeably better than the recognition rates for iris recognition, indicating that the use of periocular regions is beneficial for recognition, particularly for challenging images.

The rest of this paper is organized as follows. Section 2 provides a brief summary of the FOCS data, and Section 3 introduces the new eye detector and reviews the pre-processing applied to the images. Iris segmentation, normalization, and binary code extraction and matching are discussed in Section 4. The new ocular matching algorithm based on Bayesian graphical models is discussed in Section 5, and recognition results on the FOCS dataset are compared in Section 6. Finally, Section 7 provides a summary.

2. FOCS Dataset

The ocular challenge data in the FOCS data set consist of still images of a single eye region. These regions were extracted from near infrared (NIR) video sequences collected with the Iris on the Move system [9] from moving subjects in an unconstrained environment. The database has 9581 images of 136 subjects, with between 2 and 236 images per subject. The database is characterized by impairments such as variations in illumination, out-of-focus blur, sensor noise, specular reflections, partially occluded iris and off-angle iris. Further, the iris region is very small (about 50 pixels wide) within an image of resolution 750 × 600. Fig. 1 shows examples of some good and poor quality images in the FOCS dataset.

3. Pre-processing

In this section, we describe the pre-processing applied to the images.
Due to the drastic variations in illumination observed in the database, we first perform illumination normalization via simple adaptive histogram equalization. Unlike iris images, where the eye covers most of the image, the eyes are only a small part of the ocular images. Estimating the center of the eye (the center of the pupil, to be precise) is very important for aligning the images to be compared. We use a template-based eye detector, more specifically a correlation filter-based eye detector. A correlation filter (CF) is a spatial-frequency array (equivalently, a template in the image domain) that is specifically designed from a set of training patterns that are representative of a particular class. This template is compared to a query image by computing the cross-correlation between the template and the query as a function of relative shift. For computational efficiency, the cross-correlation is computed in the frequency domain. Since the images and their 2D Fourier transforms (FTs) are discrete-indexed, FT here refers to the 2D discrete Fourier transform (DFT), which is implemented using the efficient fast Fourier transform (FFT) algorithm. Correlation filters are usually designed to give a sharp peak at the center of the correlation output plane c(x, y) for a centered authentic query pattern and no such peak for an impostor. While many different forms of correlation filters exist [5], we use unconstrained correlation filters for eye center detection because they are the simplest to design and they yield performance comparable to the other CF types. The general optimization formulation for unconstrained CF designs is

    max_f  |f^+ m|^2 / (f^+ T f)    (1)

where ^+ denotes conjugate transpose, m is the N-vector average of all vectorized 2D FTs of the training images, f is the vectorized 2D correlation filter (also an N-vector), and T is an N × N diagonal matrix. The numerator of the objective function is essentially the square of the mean peak value.
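As an illustrative aside (not from the original paper): with the diagonal T used for the UMACE filter introduced next, the closed-form solution f = T^{-1} m of eq. (2) reduces to an elementwise division in the frequency domain. The sketch below assumes NumPy; the regularization constant and image sizes are illustrative choices.

```python
import numpy as np

def train_umace(train_imgs):
    """Train a UMACE-style filter from centered training images (each H x W).

    f = T^{-1} m, where m is the mean of the training-image 2D FTs and T is
    diagonal with the average power spectrum; with diagonal T this is an
    elementwise division. The small epsilon guards near-empty frequency
    bins and is an illustrative choice.
    """
    F = np.stack([np.fft.fft2(im) for im in train_imgs])  # N x H x W spectra
    m = F.mean(axis=0)                                    # mean FT
    T = (np.abs(F) ** 2).mean(axis=0) + 1e-8              # avg power spectrum
    return m / T

def locate_peak(filt, query):
    """Cross-correlate the filter with a query in the frequency domain and
    return the (row, col) of the correlation peak, i.e. the detected shift."""
    corr = np.real(np.fft.ifft2(np.conj(filt) * np.fft.fft2(query)))
    return np.unravel_index(int(np.argmax(corr)), corr.shape)
```

Applied to an eye image, the peak location estimates the eye (pupil) center relative to the training alignment.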
Different forms of T lead to different CFs; e.g., using an identity matrix for T leads to a denominator that is the output noise variance when the input noise is white. The above optimization problem has a closed-form solution for the filter,

    f = T^{-1} m    (2)

We consider one particular form of T, namely T = (1/N) Σ_{i=1}^{N} X_i^+ X_i, where X_i is an N × N diagonal matrix whose entries are the vector version of the 2D FT of the i-th training image. The resulting filter is called the Unconstrained Minimum Average Correlation Energy (UMACE) filter. This choice of T leads to sharp correlation peaks [8]. We have trained a UMACE filter for eye center detection in ocular images using about 1000 manually labeled images. For about 95% of the 9581 images in the FOCS database, the eye center detector found a point inside the pupil (Fig. 2 shows examples of some eye center detections on the FOCS data). While we are not reporting the details here due to lack of space, this eye detector out-performed a state-of-the-art eye detector [2] on the FERET [10] data set, where ground truth about the eye centers was available.

4. Iris Recognition

Binary encoding [4][5] is one of the most popular and best performing algorithms for iris recognition under controlled conditions. However, recognition rates degrade considerably when images exhibit the kind of impairments seen

Figure. The top row shows images with goo quality ocular regions an the bottom row shows some poor quality images (poor illumination, out-of-focus, close eyes). Figure 2. The re ot shows the eye center etecte by UMACE. The top row shows example of successful eye center etections while the bottom row shows some failure cases. in the FOCS atabase. In this section, we escribe the binary coe-base iris matching on the FOCS ata set. 4.. Iris Segmentation The performance of iris recognition systems is greatly epenent on the ability to isolate the iris from the other parts of the eye such as eyelis an eyelashes. Commonly use iris segmentation techniques [6] use some variant of ege etection methos an since the blurring ue to motion an efocus smuges the ege information to the point of there not being a iscernible ege, iris segmentation be- comes a challenging task. To alleviate this problem, we use a region-base active contour segmentation base on the seminal work of Chan an Vese [3]. Our technique segments the iris base on the istribution of pixel intensities or features extracte from the eye image from a region both insie an outsie the contour rather than looking for sharp eges, making it more robust to blurring an illumination variations than an ege-base active contour. Our segmentation involves two steps, pupil segmentation followe by iris segmentation. We use the output of the eye center etector to initialize a contour for pupil segmentation. Once

the pupil is segmented, we initialize a contour just outside the pupil to segment the iris.

4.2. Normalization

Once the iris boundaries have been found, we map the iris pattern into the polar domain, as is popularly done. This has two effects: 1. It normalizes different irises to the same size, thus allowing for better matching. 2. Any rotation of the iris manifests as a cyclic shift (along the angular axis) in the polar domain, which can be handled easily via circular shifts of the binary iris code.

4.3. Feature Extraction and Matching

Gabor filters with carefully selected parameters have been shown to be the most discriminative bandpass filters for iris image feature extraction among a variety of wavelet candidates [4]. A Gabor filter is a modulated Gaussian envelope and is given by

    g(ρ, φ) = exp[ -(1/2) ( ρ²/σ_ρ² + φ²/σ_φ² ) ] exp[ -jρ(ω sin θ) - jφ(ω cos θ) ]    (3)

in the polar domain (ρ, φ), where the filter is applied to the iris pattern. Here θ denotes the wavelet orientation, σ_ρ and σ_φ denote the wavelet widths in the radial and angular directions, respectively, and ω denotes the modulation frequency of the wavelet. By varying these parameters, the filters can be tuned to extract features at different scales, rotations, frequencies and translations. We use a set of these differently localized Gabor filters as our feature extraction filter bank. The filter bank used in our experiments has 2 scales and 4 orientations for a total of 8 channels, and features are extracted at every point of the unwrapped iris. More details on the parameters of the Gabor filters used and how they were chosen can be found in [4]. The phases of the complex Gabor wavelet projections are quantized to 2 bits by mapping the phase to one of the four quadrants in the complex plane. All the bits obtained this way constitute a binary code. A pair of iris images are compared by matching their respective binary codes. The matching is done by computing the normalized Hamming distance between the two binary codes.
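The quantization and matching steps can be sketched as follows (NumPy assumed; the 2-bit quadrant convention, the angular axis, and the shift range are illustrative assumptions, and the validity masks are described below in the text):

```python
import numpy as np

def iris_code(responses):
    """Quantize complex Gabor responses to 2 bits per coefficient: one bit
    for the sign of the real part, one for the imaginary part, i.e. the
    quadrant of the phase (one common convention; the paper does not spell
    out its exact bit layout)."""
    responses = np.asarray(responses)
    bits = np.empty(responses.shape + (2,), dtype=np.uint8)
    bits[..., 0] = responses.real >= 0
    bits[..., 1] = responses.imag >= 0
    return bits

def masked_hamming(A, B, mA, mB):
    """Normalized Hamming distance: count disagreements only where both
    masks are valid and normalize by the number of jointly valid bits."""
    valid = (mA & mB).astype(bool)
    n = valid.sum()
    return float((A ^ B)[valid].sum()) / n if n else 1.0

def match_score(A, B, mA, mB, max_shift=8):
    """Compensate eye rotation by circularly shifting code B (and its mask)
    along the angular axis (assumed axis 1) and keeping the minimum."""
    return min(masked_hamming(A, np.roll(B, s, axis=1),
                              mA, np.roll(mB, s, axis=1))
               for s in range(-max_shift, max_shift + 1))
```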
There are also corresponding masks to identify which bits in the binary code to use for matching. The mask bits are set to either 1 or 0 depending on whether the corresponding code bits are used or not used (e.g., due to eyelid occlusions) for matching. When matching two binary codes A and B with respective masks m_A and m_B, the dissimilarity is defined as

    d = || (A ⊕ B) ∩ m_A ∩ m_B || / || m_A ∩ m_B ||    (4)

where ⊕ denotes an XOR operation and ||·|| denotes the weight (i.e., the number of nonzero elements) of the binary code. Any rotation of the eye is compensated for by matching the binary codes at different circular shifts along the angular axis and taking the minimum normalized Hamming distance value.

Figure 3. An example of a true class deformation when correlating an authentic class query image. The red boxes are centered on the highest peak in each region to display the shifts that occur, while the corresponding correlation plane and zoomed patch are displayed below.

5. Probabilistic Matching

We adapt the technique originally proposed by Thornton et al. [13] for iris recognition to ocular recognition. The overall process produces a similarity score between a template and query image that takes into account the relative deformation between the two. The input image is first segmented into non-overlapping patches, which are cross-correlated with the template using the fusion Optimal Trade-off Synthetic Discriminant Function (OTSDF) correlation filter, which is based on the standard OTSDF filter proposed by Refregier [11]. When comparing a patch from the template with a patch from the query image using the fusion OTSDF, the result will be either a sharp correlation peak at the location of the best match or noise with peaks close to zero. Deformation occurs when the correlation peak is shifted from the center of the image region,

as seen in Figure 3. To effectively learn and distinguish a true deformation from random movements, an approximation is performed by building a Bayesian graph through maximum a posteriori (MAP) estimation. Adapting this technique to ocular images presents its own challenges, as the original framework incorporates occlusion along with deformation as hidden states of the observed variable. In iris images, occlusion comes from eyelids, eyelashes, sclera, or parts of the pupil from poor segmentation, while occlusion in ocular images is much more subjective and undefined. For now we only address occlusion as the lost portion of the image from centering based on the output of the eye detector.

5.1. Fusion OTSDF Correlation Filter

When segmenting the ocular region into non-overlapping patches, it is difficult to design a robust CF to distinguish, for example, part of an eyebrow of one user from part of an eyebrow of another user. By designing several CFs per region there is an increased chance of obtaining higher similarity values for authentic class images. The fusion OTSDF filter is a powerful multichannel CF that uses many degrees of freedom to jointly satisfy design criteria, leading to robust discrimination for detecting similarities between members of the same pattern class. In contrast to the individual OTSDF filter design, the fusion CF design takes advantage of the joint properties of different feature channels to produce the optimal output plane. Each channel produces a similarity metric based on a relative transformation of the observed pattern, and the inner sum represents a spatial cross-correlation between the channels, giving an increased chance that the similarity metric will produce high peaks for true class images. Setting up the fusion OTSDF filter is much like setting up an individual OTSDF filter, with some minor adjustments to account for the additional channels.
Ultimately the goal of a CF is to obtain a pre-specified result when correlating an image with a pre-designed filter (note that unless specified otherwise, all operations are in the Fourier domain):

    x_i^+ h = u_i    (5)

where h is the filter, x_i^+ is the conjugate transpose of the FT of image i, and u_i is the pre-specified filter response to the vectorized 2D Fourier transform of image i (usually u_i = 1 for true-class and u_i = 0 for false-class images). By lexicographically arranging N training images of k feature channels into column vectors and concatenating them into a large training matrix X, the entire system of linear constraints becomes:

    x^(i) = [x_1^(i); ... ; x_k^(i)],   h = [h_1; ... ; h_k],   X = [x^(1) ... x^(N)]    (6)

    X^+ h = u    (7)

To produce sharp peaks in the correlation plane and to give the filter robustness to additive white noise, the CF needs to be designed to minimize the average correlation energy (ACE) and the output noise variance (ONV). It has been shown [7] that minimizing the ACE and ONV criteria, respectively, subject to the peak constraints results in criteria of the following form:

    L = h^+ Q h    (8)

in which Q represents the energy between the feature channels when minimizing ACE (L = E_avg, where E_avg is the average energy in the correlation plane), or the covariance of the noise n when minimizing ONV (L = var(h^+ n)). Refregier [11] showed that when minimizing two quadratic criteria (h^+ D h and h^+ P h), the set of optimal solutions may be derived by minimizing a single weighted combination of the two criteria:

    T = τ P + (1 − τ) D    (9)

    D: the k × k block matrix of cross-power spectra, with block (j, l) given by (1/N) Σ_{i=1}^{N} X_j^(i)+ X_l^(i)    (10)

where T is a weighted sum of the cross-power spectrum matrix (energy between the feature channels) D and the covariance matrix for the noise P. The parameter τ offers a trade-off between ACE and ONV. In contrast to traditional CF designs, fusion CF designs jointly optimize the performance of multiple channels.
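To make the trade-off concrete, here is a single-channel sketch (NumPy assumed, not the authors' implementation): minimizing h^+ T h subject to the peak constraints X^+ h = u has the standard closed-form solution h = T^{-1} X (X^+ T^{-1} X)^{-1} u (derived as eq. (11) in the text), with diagonal T = τP + (1 − τ)D, P = I for white noise, and D the average power spectrum.

```python
import numpy as np

def otsdf_filter(X, u, tau=1e-5):
    """Single-channel OTSDF sketch: h = T^{-1} X (X^+ T^{-1} X)^{-1} u.

    X is d x N with columns holding the vectorized 2D FTs of the N training
    images; u is the length-N vector of desired peak values. T is diagonal,
    a tau-weighted blend of the identity (white-noise covariance P = I) and
    the average power spectrum D. The tiny additive constant is an
    illustrative numerical guard, not part of the formulation.
    """
    D = (np.abs(X) ** 2).mean(axis=1)      # diagonal of D
    T = tau + (1.0 - tau) * D + 1e-12      # diagonal of T, with P = I
    Ti_X = X / T[:, None]                  # T^{-1} X
    G = X.conj().T @ Ti_X                  # X^+ T^{-1} X  (N x N)
    return Ti_X @ np.linalg.solve(G, u)
```

By construction, the resulting filter reproduces the peak constraints X^+ h = u exactly.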
After minimizing h^+ T h subject to X^+ h = u, a closed-form solution for the k-channel CF can be found as (a general form of the solution is derived in [7]):

    h = T^{-1} X ( X^+ T^{-1} X )^{-1} u    (11)

To apply the filter h to an image, it is necessary to reshape the vector to a 2-D image equal in size to the training image, i.e., h → h(m, n). The test image can be much larger than the training image and may contain multiple targets of interest. When correlating h(m, n) with a query image y(m, n), the result is the correlation plane g(m, n):

Figure 4. Fusion OTSDF correlation filter

    g(m, n) = Σ_k Σ_l y(m + k, n + l) h(k, l)    (12)
            = y(m, n) ⋆ h(m, n)    (13)

where ⋆ is the correlation operator.

5.2. MAP Estimation

The overall goal of this process is to authenticate a true class image I against a template T and reject a false class image. CFs can provide a reliable approach to obtaining a similarity measure between a template and a query image. However, in the presence of deformations, the matching performance of CFs deteriorates. After independently correlating non-overlapping regions of each image, a good measure of similarity needs to account for any deformations present. One method of doing this is to use MAP estimation to find the most likely parameter vector d describing the deformations, for some (possibly nonlinear) image transformation between I and T, assuming I is a true class image. Maximizing the posterior probability distribution on the latent deformation variables results in the following:

    d̂ = arg max_d p(d | T, I, true match)    (14)
      = arg max_d p(I | T, d) p(d | T)    (15)
      = arg max_d p(I | T, d) p(d)    (16)

Ignoring the normalization constant in Bayes rule and assuming that the deformation d and template T are statistically independent shows that the image undergoing a specific deformation can be matched to the template. As it is not necessary to learn all possible deformations, the model can be restricted to the values described by the parameter vector d. This restricts the prior distribution to these specific parameters, which is defined on a space with low dimensionality and is modeled as a multivariate normal distribution. Specifically, d is defined so that no deformation occurs at the zero vector, which is assumed to be the mean of the distribution, leaving only the covariance to be estimated. To determine the deformation parametrization, a coarse vector field model is used, in which the input image I is divided into a set of small regions with corresponding translation vectors {(dx_i, dy_i)} and the deformation parameter vector d = (dx_1, dy_1, ..., dx_N, dy_N)^t.
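The MAP search of eqs. (14)-(16) can be illustrated over a discrete set of candidate deformations, taking negative logarithms so the zero-mean Gaussian prior becomes a quadratic penalty. The candidate grid, the log-likelihoods, and Σ^{-1} below are illustrative inputs; in the paper they come from the correlation planes and matched training pairs.

```python
import numpy as np

def map_deformation(candidates, sim_loglik, Sigma_inv):
    """MAP estimate of the deformation over a discrete candidate set.

    Minimizes the negative log of p(I | T, d) p(d) from eqs. (14)-(16):
    -log-likelihood of the observed similarity plus the Gaussian-prior
    penalty 0.5 * d^t Sigma^{-1} d (zero-mean prior, as in the text).
    """
    candidates = np.asarray(candidates, dtype=float)
    costs = [-ll + 0.5 * d @ Sigma_inv @ d
             for d, ll in zip(candidates, sim_loglik)]
    return candidates[int(np.argmin(costs))]
```

A larger deformation is selected only when the similarity evidence outweighs the prior penalty, which is the intended trade-off.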
We can think of the prior term as being specific to the pattern-type, and thus Σ can be estimated directly from matching pairs of the pattern-type. Testing has shown that there are significant correlations between neighboring regions of the vector field and that there is little to no correlation between non-neighboring regions [13]. Since the generative probability is defined over a large dimensionality (the number of pixels in I), estimation can become a daunting task. Thus, the fusion OTSDF output S(I, T; d) is used as a similarity measure between the image I and the template T at relative deformation d, setting p(I | T, d) = p(S(I, T; d)). Rewriting the expression shows this as a minimization problem, where the final match score can be computed by taking the similarity function values at the specific estimated deformation:

    d̂ = arg max_d { p(S(I, T; d)) exp( −(1/2) d^t Σ^{-1} d ) }    (17)
      = arg min_d { −ln( p(S(I, T; d)) ) + (1/2) d^t Σ^{-1} d }    (18)

Essentially, a Bayesian graphical model is formed as a Markov random field (MRF) over the deformation variables, as they have spatial significance to neighboring regions. Creating a 2D lattice MRF where each deformation variable represents an image region, a potential function Ψ(d_i, d_j) can be formed to give some measure of likelihood over the joint values of two separate regions. Being non-negative and approximately Gaussian centered at zero local deformation, we want Ψ(d_i, d_j) to emphasize vector pairs that are more probable:

    Ψ(d_i, d_j) = exp[ −( α ||d_i||² + α ||d_j||² + β ||d_i − d_j||² ) ]    (19)

The α parameter specifies the penalty on the absolute magnitudes of the deformation vectors, and the β parameter specifies the penalty on the relative difference between the magnitudes of the deformation vectors. Learning α and β can be done using a generalized expectation-maximization (EM) algorithm; as directly learning the true deformation fields is too difficult, a gradient ascent approach can converge on a solution iteratively.
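A direct transcription of the potential in eq. (19), with the (α, β) values mentioned next in the text taken as illustrative defaults:

```python
import numpy as np

def psi(di, dj, alpha=0.05, beta=0.09):
    """Pairwise MRF potential of eq. (19) for neighboring deformation
    vectors: alpha penalizes large absolute deformations, beta penalizes
    disagreement between neighbors. Defaults follow the (0.05, 0.09)
    values reported in the text."""
    di, dj = np.asarray(di, float), np.asarray(dj, float)
    return float(np.exp(-(alpha * di @ di + alpha * dj @ dj
                          + beta * (di - dj) @ (di - dj))))
```

The potential is maximal (equal to 1) at zero deformation on both sides, and favors neighbors that move together over neighbors that move apart.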
Heuristics show the tuning parameters for the prior converge to approximately (α, β) ≈ (0.05, 0.09) regardless of initialization [12]. Thus the prior can be defined as:

    p(d) = (1/Z) Π_{(i,j) ∈ ε} Ψ(d_i, d_j)    (20)
         = (1/Z) exp( −(1/2) d^t Σ^{-1} d )    (21)

where ε contains all index pairs of nodes which are connected by edges in the MRF, and Z is the normalization constant, making the prior a proper distribution.

5.3. Score Calculation

The developed model is designed to assign a higher match score to discernible correlation peaks in similar patterns, and to give a lower match score to uncorrelated query images exhibiting seemingly random movements. However, the objective involves estimating the posterior distribution of deformation given the observed image, which can become computationally expensive given the number of values the deformation vector can take. Thus, a variant of Pearl's message passing algorithm is implemented to estimate the marginal posterior distributions at each patch, or node, in the Bayesian model. Assuming a sparse, acyclic graphical model, loopy belief propagation (LBP) is an iterative solution to estimating the marginal posterior distribution at each node. LBP operates on a set of beliefs about the deformation in the model based on messages passed between nodes on which there is a direct statistical dependence. These beliefs are a product of the incoming messages from neighboring nodes, giving the current estimate of the marginal posteriors at each node by maximizing the likelihood of the deformation values given the observed evidence from the correlation plane at a specific local region. The algorithm continues until the beliefs converge or a maximum number of iterations occurs (currently set at four). The resulting beliefs are used to estimate the expected match score:

    M = Σ_k S_k(d_k) P(d_k | O)    (22)

The final score is the summation of the similarity measures S_k(d_k) from correlation, multiplied by the marginal posterior distribution of deformation given the observed image, P(d_k | O), for each image region.

6. Score Fusion

Due to the challenging nature of the database under consideration, we consider a score level fusion of the two techniques under comparison, namely binary iris code and probabilistic matching.
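A minimal sketch of such score-level fusion (a single global weight is assumed, and the iris Hamming distances are flipped to similarities so both scores point the same way; both choices are illustrative, not values from the paper):

```python
def fuse_scores(iris_dists, pm_scores, w=0.5):
    """Weighted linear score-level fusion with one global weight w.

    Iris normalized Hamming distances (lower is better) are converted to
    similarities via 1 - distance so they align with the probabilistic
    matching scores (higher is better). The weight w = 0.5 and this
    normalization are illustrative assumptions.
    """
    return [w * (1.0 - d) + (1.0 - w) * s
            for d, s in zip(iris_dists, pm_scores)]
```

In practice the weight would be tuned on held-out data; the paper uses the same weight for all match pairs.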
In this paper we consider a simple weighted linear combination of the match scores, with the same weight used for all match pairs.

7. Experimental Results

We tested the two methods, binary iris code and probabilistic matching (PM), on the FOCS dataset described earlier, first pre-processing the images by centering them with the eye detector and applying histogram equalization.

Figure 5. (a) ROC curves for the different experiments (PML refers to probabilistic matching using the left graph and PMR to probabilistic matching using the right graph) and (b) Cumulative Match Characteristic curves. (Best viewed in color.)

For the binary code, we computed the full similarity matrix for all the image pairs in the FOCS database, achieving an Equal Error Rate (EER) of 30.8%. When we limited the comparison to left iris images versus left iris images we observed an EER of 35.2% and a Rank-1 identification accuracy of 87.9%, and for right versus right we observed an EER of 33.1% and a Rank-1 identification accuracy of 88.7%. The full FAR-FRR trade-off in the form of an ROC curve is shown in Fig. 5 along with the Cumulative Match Characteristic (CMC) curve. For the probabilistic matching algorithm we normalized the size of the images to 128x128 pixels and divided each image into 36 non-overlapping patches (6x6 configuration). The images were then randomly separated such that half of

the database, equally distributed between left and right ocular images, was used for training and the remainder for testing, with no distinction between the left and right ocular regions made during matching. A correlation filter was built for each image using the fusion OTSDF filter with τ = 10⁻⁵, and every test image was compared against every filter to create a 4791x4791 score matrix. With this trained graph, as we will call it, we obtained an EER of 40.1% and a Rank-1 identification accuracy of 53.2%. Breaking down the problem, we considered the scenario where a separate graph is built for the left and right ocular regions, again dividing the images into a 6x6 patch configuration. We used half of the left and right ocular images to learn the parameters of the two graphs respectively. Testing the respective graphs on all the left and right ocular images yielded an EER of 31.28% with a Rank-1 identification accuracy of 86.8% and an EER of 29.74% with a Rank-1 identification accuracy of 84.0%, respectively. For this configuration, score-level fusion yields an EER of 29.72% and 28.07% for the left and right ocular images respectively, suggesting that score-level fusion does help improve verification performance. These results also suggest that knowing whether a query image is of the left or right ocular region can benefit the accuracy of the system significantly. Finally, we considered another scenario where the ocular images are divided into a 4x4 configuration, i.e., each patch is now of a larger size, with everything else unchanged. The PM algorithm now yields an EER of 26.8% with a Rank-1 identification accuracy of 92.8% and an EER of 23.83% with a Rank-1 identification accuracy of 94.2% for the left and right ocular images respectively. However, score-level fusion for this configuration does not improve verification performance, yielding an EER of 26.80% and 23.8% for the left and right ocular images respectively, suggesting that the simple score-level fusion considered in this paper is not always helpful.
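The weighted fusion of Sec. 6 and the EER figures quoted above can be reproduced on any pair of score sets with a short routine. The min-max normalization step, the default weight, and the threshold sweep below are illustrative choices, since the paper does not specify its normalization or weight value.

```python
import numpy as np

def fuse_scores(scores_a, scores_b, w=0.5):
    """Score-level fusion: one global weight for all match pairs.
    Scores are min-max normalized so the two matchers share a scale
    (an assumed step; the paper does not state its normalization)."""
    def minmax(s):
        s = np.asarray(s, dtype=float)
        return (s - s.min()) / (s.max() - s.min())
    return w * minmax(scores_a) + (1.0 - w) * minmax(scores_b)

def equal_error_rate(genuine, impostor):
    """EER: operating point where the false-accept rate (impostor
    scores at or above threshold) equals the false-reject rate
    (genuine scores below threshold); higher score = better match."""
    genuine = np.asarray(genuine, dtype=float)
    impostor = np.asarray(impostor, dtype=float)
    best_gap, eer = np.inf, 1.0
    for t in np.sort(np.concatenate([genuine, impostor])):
        far = np.mean(impostor >= t)   # false accepts at threshold t
        frr = np.mean(genuine < t)     # false rejects at threshold t
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), 0.5 * (far + frr)
    return eer
```

For perfectly separated score sets the routine returns 0.0; in the experiments above it would be applied to the genuine and impostor entries of the full score matrix, per matcher and after fusion.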
8. Summary

In this paper, we compared two biometric recognition methods on the recently released FOCS data set of challenging single-eye images. In the first method, based on iris recognition, we used an eye detector to center the images, applied adaptive histogram equalization to deal with illumination variations, segmented the iris using an adaptation of the Chan-Vese segmentation algorithm, and used carefully selected Gabor wavelets to obtain a binary code representing each image. In the second approach, based on ocular recognition, the same eye detector was used to center the images, no segmentation was needed, and patch-based fusion correlation filter outputs were input to a Bayesian graphical model trained to produce match scores that take into account any nonlinear deformations between the two images. Equal error rates were 30.8% for binary code iris recognition versus 26.8% for left ocular regions and 23.83% for right ocular regions, indicating that use of ocular regions may be beneficial when dealing with challenging ocular images. We also observed that the proposed ocular recognition method benefits from the knowledge of whether an image shows a left eye or a right eye, whereas the binary iris code recognition method appears not to benefit.

References

[1] Face and ocular challenge series (FOCS). http://www.nist.gov/itl/iad/ig/focs.cfm.
[2] D. Bolme, B. Draper, and J. Beveridge. Average of synthetic exact filters. 2009.
[3] T. Chan and L. Vese. Active contours without edges. Image Processing, IEEE Transactions on, 10(2):266-277, 2001.
[4] J. Daugman. High confidence visual recognition of persons by a test of statistical independence. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 15(11):1148-1161, 1993.
[5] J. Daugman. How iris recognition works. Circuits and Systems for Video Technology, IEEE Transactions on, 14(1):21-30, 2004.
[6] J. Daugman. New methods in iris recognition. Systems, Man, and Cybernetics, Part B: Cybernetics, IEEE Transactions on, 37(5):1167-1175, 2007.
[7] A. Mahalanobis, B. V. K.
Vijaya Kumar, and D. Casasent. Minimum average correlation energy filters. Applied Optics, 26(17):3633-3640, 1987.
[8] A. Mahalanobis, B. V. K. Vijaya Kumar, S. Song, S. Sims, and J. Epperson. Unconstrained correlation filters. Applied Optics, 33(17):3751-3759, 1994.
[9] J. Matey, O. Naroditsky, K. Hanna, R. Kolczynski, D. LoIacono, S. Mangru, M. Tinker, T. Zappia, and W. Zhao. Iris on the move: Acquisition of images for iris recognition in less constrained environments. Proceedings of the IEEE, 94(11):1936-1947, 2006.
[10] P. Phillips, H. Wechsler, J. Huang, and P. Rauss. The FERET database and evaluation procedure for face-recognition algorithms. Image and Vision Computing, 16(5):295-306, 1998.
[11] P. Réfrégier. Filter design for optical pattern recognition: multicriteria optimization approach. Optics Letters, 15(15):854-856, 1990.
[12] J. Thornton. Matching deformed and occluded iris patterns: a probabilistic model based on discriminative cues. PhD thesis, Pittsburgh, PA, USA, 2007. AAI326282.
[13] J. Thornton, M. Savvides, and B. V. K. Vijaya Kumar. A Bayesian approach to deformed pattern matching of iris images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(4):596-606, 2007.
[14] J. Thornton, M. Savvides, and B. V. K. Vijaya Kumar. An evaluation of iris pattern representations. In Biometrics: Theory, Applications, and Systems, 2007. BTAS 2007. First IEEE International Conference on, pages 1-6. IEEE, 2007.
[15] B. V. K. Vijaya Kumar, A. Mahalanobis, and R. Juday. Correlation pattern recognition. Cambridge Univ. Press, 2005.