An Iris Recognition System Based on Angular Radial Partitioning and Statistical Texture Analysis with Sum & Difference Histogram

Abbas Memiş, Department of Computer Engineering, Yıldız Technical University, İstanbul, Turkey, abbasmemis@gmail.com
Songül Albayrak, Department of Computer Engineering, Yıldız Technical University, İstanbul, Turkey, songul@ce.yildiz.edu.tr
Elena Battini Sönmez, Department of Computer Engineering, İstanbul Bilgi University, İstanbul, Turkey, elena.sonmez@bilgi.edu.tr

Abstract - Iris based identification systems are considered among the most promising recognition systems due to the inner characteristics of the iris, such as uniqueness, stability and time invariance. This paper proposes a new texture based iris recognition system built on Angular Radial Partitioning (ARP) and the Sum & Difference Histogram (SDH). After the iris segmentation step, ARP is used to divide the iris texture into sectors; SDH produces, for every sector, a pair of probability vectors, which are then used to extract statistical features. Finally, classification is performed with the K-Nearest Neighbour algorithm. Experimental results on the Ubiris and Upol databases testify to the superior performance of the proposed approach, which can handle the presence of eyelids and eyelashes, as well as partially occluded irises and out-of-focus images. In all experiments the accuracy of our system is around 97%, even when the training set is made up of only two pictures per class, and the corresponding low percentage of FAR suggests that the proposed approach is a good prototype for biometric recognition systems run in identification mode.

Keywords: Iris recognition, Angular Radial Partitioning (ARP), texture analysis, Sum & Difference Histogram (SDH).

I. INTRODUCTION

The recent call for better security, together with the rapid progress in electronics and Internet use, has brought biometrics based personal identification systems into focus. Biometrics is the science of identifying a person by his/her inner characteristics, which include (but are not limited to) a person's fingerprint, palm print, face, iris, voice, gait or signature. Biometrics based systems are considered more reliable than traditional ones, based on identification cards, personal numbers and passwords, because inner attributes cannot be lost, forgotten or shared.

The human iris is the annular and colourful part of the eye between the black pupil and the white sclera. The iris pattern contains many distinctive features such as pigment spots, freckles, stripes, furrows, coronas etc., which are located randomly at gestation time. Together with this uniqueness, the iris has the advantages of being (1) an internal and well protected organ of the eye, (2) a planar object, insensitive to illumination effects, and (3) time invariant. However, current iris based recognition systems are severely limited by the low robustness, accuracy and speed of their algorithms when dealing with poor quality images, acquired in the presence of motion, partial cooperation or distance from the camera.

In this paper we propose a new texture based iris recognition system, which has the advantage of being robust to a large variety of disturbance elements, such as partial occlusion of the iris, presence of eyelashes on the iris texture, poor focus of the image, and the presence of reflections of the camera. Moreover, we detail our experiments so as to make them reproducible and to allow for comparison with other iris recognition systems.

The first iris based recognition system was proposed by Daugman [1], [2], [3], who segmented the iris region with an integro-differential operator and encoded the iris feature by 2D wavelet demodulation, which resulted in 2048 bits of phase information.
The percentage of mismatched bits is calculated with an XOR operator, and the Hamming distance gives the difference between any pair of iris codes. Wildes' system [4] generated the iris code using a Laplacian pyramid with four different resolution levels; the resulting features were down-sampled with Fisher's linear discriminant [5] and compared via normalized correlation. Boles and Boashash [6] extracted the iris features from the zero-crossing representation of concentric circles of an iris image using the one-dimensional wavelet transform at various resolution levels; classification is performed using two dissimilarity functions. Ma et al. [7] decomposed the iris texture into a set of one-dimensional (1-D) intensity signals; the obtained iris features are down-sampled with the Fisher linear discriminant [5] and classification is based on cosine similarity. Bowyer et al. [8] presented a comprehensive survey on iris biometrics, which summarizes the state of the art up to 2008. In 2009, Chen and Chu [9] extracted the iris feature using a Sobel operator and the 1-D wavelet transform and performed classification with a mixture of a probabilistic neural network (PNN) and particle swarm optimization (PSO). In 2011, Sibai et al. [10] performed iris recognition with neural networks, while Pillai et al. [11] used random projection and sparse representation. At the current time, iris based recognition systems are still considered among the most promising biometric identification techniques, and there is a big research effort to increase the robustness, accuracy and speed of these algorithms in the case of problematic pictures. Among others, Si et al. [12] proposed (1) a new eyelash detection algorithm, (2) the use of a 2-D filter for feature extraction and (3) a corner based iris identification method to speed up the 1:N search in big iris databases; while Rahulkar and Holambe [13] presented a shift, scale and rotation invariant technique for iris feature extraction.

In the proposed iris recognition system, (1) the iris region is segmented using a variation of Daugman's integro-differential operator, (2) each isolated iris pattern is then partitioned into sectors with the Angular Radial Partitioning (ARP) method [14], (3) the Sum & Difference Histogram (SDH) technique [15] is used to represent every sector with a pair of fixed-size vectors, (4) which are then converted into a set of statistical features. Finally, (5) classification is performed via the K-Nearest Neighbour method using the Manhattan distance. We worked with the Ubiris and Upol databases, which are commonly used databases with a wide variety of disturbance elements, such as occluded irises, poor focus of the images, illumination effects, and blurred edges. We compared our results with those of similar experiments run by Celebi [16] and Erbilek and Toygar [17].

To summarize, the main contributions of this work are (1) the introduction of a new, promising, texture based iris recognition system and (2) the detailed description of the experiments, so as to make them reproducible and to allow for comparison with other methods.

Section II describes the segmentation step, which includes the isolation of the iris from the rest of the eye and the subdivision of the iris texture into segments. Section III illustrates the feature extraction process, with the Sum & Difference Histograms technique followed by the extraction of statistical features from texture. Section IV details the experiments run on the Ubiris and Upol databases and compares our results against the ones present in the literature. Conclusions are drawn in Section V.

II. SEGMENTATION

A. Iris Segmentation

The first step in all iris recognition algorithms is to find both the inside (pupillary) and the outside (limbus) boundaries of the iris. Different methods have been applied, such as Daugman's integro-differential operator [1], [2], [3], and Canny's edge detection with the circular Hough Transform [18]. Equation (1) gives the formula of Daugman's integro-differential operator:

\max_{(r, x_0, y_0)} \left| G_\sigma(r) * \frac{\partial}{\partial r} \oint_{r, x_0, y_0} \frac{I(x, y)}{2 \pi r} \, ds \right|   (1)

where I(x, y) is the input image containing the eye, (x_0, y_0) is the centre of a circular arc ds of radius r, and G_\sigma(r) is a Gaussian smoothing function. The operator defined in (1) is a circular edge detector that must be applied two times to detect the pupillary and the limbus boundaries, since they are, generally, not concentric; that is, it is necessary to search for the three parameters (x_0, y_0, r) of the two circles separately. The final result of these operations is the isolation of the iris from the rest of the eye.

Most of the present iris recognition algorithms are sensitive to the output of the iris segmentation step, where the disturbances caused by an inaccurate detection of the inner and outer boundaries are generally removed by ignoring the border areas. Among others, this problem was faced by (1) Jang et al. in [19], who proposed a new solution to detect and localize eyelids, and (2) Tan et al. in [20], who focused on efficient and robust segmentation of iris images; but, in the recent work of Si et al. [12], it is pointed out that there is still no feasible solution for the presence of eyelashes.

We applied two different variations of Daugman's integro-differential operator, one for each of the two databases used. On the Ubiris [21] database, we decreased the effect of the presence of eyelids and eyelashes by restricting the original image domain and considering only some angles, (θ_1, θ_2), as illustrated in figure 1:

Fig. 1. Arc gradient model used in equation (1).
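To make equation (1) concrete, the following Python/NumPy fragment is a minimal sketch of the boundary search, not the paper's actual implementation: it assumes a greyscale image stored as a 2-D array, performs a coarse exhaustive search over candidate centres and radii, and exposes the angular range (theta1, theta2) used to limit the influence of eyelids and eyelashes; the arc sampling density and the Gaussian width are illustrative choices.

import numpy as np

def circular_mean(img, x0, y0, r, theta1, theta2, n=180):
    # Mean grey level along the arc of radius r centred at (x0, y0),
    # restricted to the angular range [theta1, theta2] (radians).
    thetas = np.linspace(theta1, theta2, n)
    xs = np.clip((x0 + r * np.cos(thetas)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip((y0 + r * np.sin(thetas)).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs].mean()

def daugman_boundary(img, centres, radii, theta1=0.0, theta2=2 * np.pi, sigma=2.0):
    # Return the (x0, y0, r) maximising the Gaussian-blurred radial derivative
    # of the normalised contour integral of equation (1).
    kernel = np.exp(-0.5 * (np.arange(-3, 4) / sigma) ** 2)
    kernel /= kernel.sum()                                   # 1-D Gaussian G_sigma
    best, best_val = None, -np.inf
    for (x0, y0) in centres:
        means = np.array([circular_mean(img, x0, y0, r, theta1, theta2)
                          for r in radii])
        deriv = np.abs(np.convolve(np.diff(means), kernel, mode='same'))
        idx = int(np.argmax(deriv))
        if deriv[idx] > best_val:
            best_val, best = deriv[idx], (x0, y0, radii[idx])
    return best

The same routine is run twice, with different radius ranges, once for the pupillary and once for the limbus boundary, since the two circles are generally not concentric.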
Notice that the two angles, (θ_1, θ_2), can be different from each other and their values can be changed, at the segmentation step, to detect the pupil and iris boundaries; we made the segmentation automatic by using a single pair of values for the Ubiris database and another one for the Upol database. The iris segmentation step in the Ubiris database is particularly challenging due to the presence of partially occluded irises, as shown in figure 2:

Fig. 2. Partially occluded iris in the Ubiris database.

That is, the Ubiris database stores eyes where the eyelid covers most of the iris; obviously, in such a case, the iris segmentation step fails and the identification process ends in misclassification. In our experiments we did not hand-pick the images to work with, and, therefore, we also used these partially (or totally) occluded irises, which, obviously, contributed to the error rate. That is, when all images of the Ubiris database are used, the minimum error rate must be set to the number of closed eyes over the total number of images.

The particularity of the Upol [22] database is that the eye is photographed through a black hole, which results in a black circle outside the sclera. To segment the Upol database we used the variation of equation (1) proposed by Hebashy [23]: while searching for the limbus by increasing the radius r, the detected outer circle is the first one having an integro-differential value bigger than a given threshold.

Figure 3 shows some irises of the Upol database: the 1st row stores the original pictures, the 2nd row zooms in on the corresponding problematic areas. While all irises have the black frame outside the sclera, some cases are more challenging because of (1) the presence of a white circle inside the pupil (due to the reflection of the camera) and/or (2) blurred boundaries.

Fig. 3. 1st row: challenging irises in the Upol database; 2nd row: zoom on the corresponding problematic areas.

To summarize, we point out that the segmentation of the Upol images was more challenging than that of the Ubiris database, mainly due to the circular black mask around the captured iris image. Figure 4 shows segmented iris images from the Ubiris and the Upol databases:

Fig. 4. Segmented iris samples from the Ubiris (2 pictures on the left) and the Upol (2 eyes on the right) databases.

B. Angular Radial Partitioning

The Angular Radial Partitioning (ARP) method [14] is a commonly used technique for edge image description; its main objective is to re-write the original image into a new structure, which must be invariant to scale and rotation, while being capable of supporting the measurement of similarity between images. In [16] Celebi used ARP for a texture based iris recognition system. We followed his example and used ARP to partition the segmented iris region into a number of sectors. In more detail:

1. The iris texture is converted from Cartesian to polar coordinate axes; that is, Iris(x, y) becomes I(ρ, θ), and we adopted the convention of indicating with I(ρ, θ) also the grey level of the pixel in position (ρ, θ);

2. The radius of the iris, R, is divided into M radial partitions and the circle angle, θ = 360°, is partitioned into N angular sub-angles, which results in M × N sectors. Figure 5 illustrates this concept in the case of a θ = 180° angle.

Fig. 5. Angular Radial Partitioning (ARP) with M radial partitions and N angular partitions.

After ARP segmentation, the iris image is divided into a set of sectors, {sector(k, j)}, for all k = 0, ..., M − 1 and j = 0, ..., N − 1, as in equation (2):

Iris = \{ sector(k, j) \}, \qquad sector(k, j) = \sum_{\rho = kR/M}^{(k+1)R/M} \; \sum_{\theta = j \, 2\pi/N}^{(j+1) \, 2\pi/N} I(\rho, \theta)   (2)

Very often the pupillary and limbus circles are not concentric, that is, the pupil centre is different from the iris one. ARP uses the pupil circle centre coordinates for the partitioning; if the iris coordinates were used, the partitions could slide, leading to worse classification results.

The proposed system does not use the first and last radial partitions, because they could contain data from the pupil's texture or from the sclera; this expedient reduces the total complexity of the system and also has the advantage of limiting the misclassification effect due to the presence of eyelashes and eyelids around the upper/lower limbus areas. As a result, the total number of acquired sectors is (M − 2) × N. Figure 6 shows the results of ARP segmentation on the Ubiris and Upol databases; the picture has been produced by working with the iris image I(ρ, θ) and increasing the values of ρ and θ so as to cover the whole iris; the outer and inner sectors will not be considered.

Fig. 6. Results of the Angular Radial Partitioning on Ubiris (a) and Upol (b) image samples.
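As a complement to equation (2), the sketch below groups the pixels of a segmented iris into ARP sectors. It is a simplified illustration under stated assumptions, not the paper's code: the pupil centre (cx, cy) and the iris radius R are taken as already known from the segmentation step, pixels are assigned to radial and angular partitions directly in the Cartesian image, and the first and last rings are discarded as described above.

import numpy as np

def arp_sectors(img, cx, cy, R, M, N):
    # Assign every pixel to an ARP sector (k, j): the iris radius R is split
    # into M radial partitions and the full angle into N angular partitions,
    # both measured from the pupil centre (cx, cy).  The innermost and
    # outermost rings are dropped, leaving (M - 2) * N sectors.
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    rho = np.hypot(xs - cx, ys - cy)
    theta = np.mod(np.arctan2(ys - cy, xs - cx), 2 * np.pi)
    k = (rho / R * M).astype(int)                  # radial partition index
    j = (theta / (2 * np.pi) * N).astype(int)      # angular partition index
    keep = (k >= 1) & (k <= M - 2)                 # drop first and last rings
    sectors = {}
    for kk, jj, val in zip(k[keep], j[keep], img[keep]):
        sectors.setdefault((int(kk), int(jj)), []).append(int(val))
    return sectors

Each entry of the returned dictionary holds the grey levels of one sector; in the actual system these pixels are visited in polar order, as explained in Section III-A, before the Sum & Difference Histograms are computed.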
III. FEATURE EXTRACTION

A. Sum & Difference Histograms (SDH)

The method used for texture analysis is the Sum and Difference Histogram (SDH) algorithm introduced by Unser in [15] and used for (marble) texture analysis by Alajarin et al. [24]. SDH is an alternative to the Grey-Level Co-occurrence Matrices (GLCM) method [25]; it has the big advantage of decreasing both memory storage and computational time while keeping similar performance.

Representing the texture of a greyscale image with the SDH technique requires the calculation of the sum and difference vectors, with the corresponding normalized sum and difference histograms. Let us consider a K × L b-bit grey-level picture having N_G = 2^b quantized grey levels, e.g. in the case of b = 8, N_G = 256, and the range of intensity grey-level values R_greylevel = [0, ..., N_G − 1]; finally, let us define as y_{k,l} the grey level of the pixel in position (k, l), for k = 1, ..., K and l = 1, ..., L. With SDH, out of its 8 neighbours, for every pixel we are interested only in the four pixels situated in non-opposite directions, that is D = {(1, 1), (1, 0), (1, −1), (0, 1)}.

Figure 7 shows the 8-neighbourhood of a texture pixel and names the four neighbour pixels used by the Sum and Difference Histogram (SDH): {V_1, V_2, V_3, V_4}; adopting the same convention as before, the position of a pixel also gives its grey-level value; that is, V_1 is the name of the top-left neighbour of the centre pixel y_{k,l}, and it is also its grey-level value.

Fig. 7. 8-neighbourhood of a texture pixel and the four neighbourhood directions used by the SDH algorithm.

Still considering the K × L b-bit grey-level picture, we calculate the sum and the difference vectors using equations (3) and (4), by moving the centre pixel y_{k,l}, starting from the bottom-left corner of the image and ignoring the top row and the rightmost column of the picture, that is, for all l = 1, ..., L − 1 and k = 1, ..., K − 1:

sum_{k,l} = y_{k,l} + (V_1 + V_2 + V_3 + V_4)/4   (3)

diff_{k,l} = y_{k,l} − (V_1 + V_2 + V_3 + V_4)/4   (4)

In words, the sum (difference) vector stores the sum (difference) of the grey level of pixel (k, l) with the average grey level of the four neighbours of y_{k,l}. The resulting size of both vectors, sum_{k,l} and diff_{k,l}, is therefore (K − 1) × (L − 1), and the range of their intensity grey levels is, respectively, R_sum = [0, ..., 2(N_G − 1)] and R_diff = [−(N_G − 1), ..., N_G − 1]. In the case of an 8-bit picture, R_greylevel = [0, ..., 255], the range of the sum vector is R_sum = [0, ..., 510] and the range of the difference vector is R_diff = [−255, ..., 255].

In this work, we applied the SDH technique sector-wise; that is, for every sector, we calculated the pair of vectors (sum_{k,l}, diff_{k,l}), starting from the smallest values of ρ and θ and increasing, by either one pixel or one degree, at every step. Notice that for a little circle, ρ = 1, an increase of θ by 1 can result in the same pixel in Cartesian coordinates; when this happens we increase the angle θ again. That is, because the shape of a sector is not square, we search for the next centre pixel y_{k,l} and for its neighbours {V_1, V_2, V_3, V_4} using polar coordinates, increasing either ρ or θ; the resulting pixel position is then mapped into Cartesian coordinates and duplicated pixels are ignored. The size of both vectors, sum_{k,l} and diff_{k,l}, changes depending on the sector's size, but the range of their intensity grey levels is fixed to R_sum = [0, ..., 2(N_G − 1)] and R_diff = [−(N_G − 1), ..., N_G − 1].

The histograms of the sum and the difference vectors are calculated using equations (5) and (6), sector-wise:

h_sum(i) = Card{ (k, l) ∈ sector : sum_{k,l} = i },  for all i = 0, ..., 2(N_G − 1)   (5)

h_diff(j) = Card{ (k, l) ∈ sector : diff_{k,l} = j },  for all j = −(N_G − 1), ..., N_G − 1   (6)

where Card counts the pixels of the considered sector having the corresponding sum (difference) value. Knowing SectPixel = Σ_i h_sum(i) = Σ_j h_diff(j), the total number of pixels in the actual sector, the normalized sum and difference histogram vectors are the corresponding probability vectors:

P_sum(i) = h_sum(i) / SectPixel   (7)

P_diff(j) = h_diff(j) / SectPixel   (8)

Notice that the size of all histograms and probability vectors is fixed to 2(N_G − 1); that is, their dimension is independent of the sector's size. When the input picture is a colour one, SDH calculates three sum and three difference vectors, one for each colour channel, and converts them into three pairs of probability vectors (P_sum, P_diff). It is interesting to point out that the memory requirement of SDH is always better than that of the GLCM method; in the case of an RGB image, GLCM makes texture analysis considering the 8 neighbours of the centre pixel y_{k,l} by building and processing 3 matrices of size N_G × N_G. For example, in the case of an 8-bit RGB image, the memory requirement of SDH is 3 × 2 × 2 × (N_G − 1) = 3060 elements (3 for the colour channels R, G, B; 2 for the sum and difference vectors; 2 × (N_G − 1) is the fixed size of every probability vector), while GLCM requires 3 × N_G × N_G = 196608 elements.
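The following sketch implements equations (3)-(8) for a rectangular grey-level patch; it is an illustration, not the paper's sector-wise implementation, which walks each (non-rectangular) ARP sector in polar order. The particular choice of the four pairwise non-opposite neighbours and the rounding of the averaged neighbourhood to the nearest integer bin are assumptions of this sketch.

import numpy as np

def sdh_probability_vectors(img, n_g=256):
    # Sum & Difference Histograms of a b-bit grey-level patch: every centre
    # pixel is combined with the mean of four pairwise non-opposite
    # neighbours (here: top-left, left, bottom-left, bottom), and the two
    # histograms are normalised into probability vectors of fixed length.
    img = img.astype(int)
    centre = img[1:-1, 1:-1]
    v_mean = (img[:-2, :-2] + img[1:-1, :-2] + img[2:, :-2] + img[2:, 1:-1]) / 4.0
    sums = np.rint(centre + v_mean).astype(int)        # values in [0, 2*(n_g - 1)]
    diffs = np.rint(centre - v_mean).astype(int)       # values in [-(n_g - 1), n_g - 1]
    n_bins = 2 * n_g - 1
    h_sum = np.bincount(sums.ravel(), minlength=n_bins)
    h_diff = np.bincount((diffs + n_g - 1).ravel(), minlength=n_bins)
    n_pix = sums.size                                  # SectPixel of equations (7)-(8)
    return h_sum / n_pix, h_diff / n_pix               # P_sum, P_diff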
B. Statistical Features from Texture

Seven statistical features are calculated out of the two SDH vectors of every ARP sector, namely: mean, variance, energy, correlation, entropy, contrast and homogeneity. Table I gives their formulas, where P_s and P_d stand for the normalized sum and difference histograms of equations (7) and (8).

TABLE I. FORMULAS TO CALCULATE THE STATISTICAL PARAMETERS

Parameter     Formula
Mean          \mu = (1/2) \sum_i i \, P_s(i)
Variance      (1/2) \left( \sum_i (i - 2\mu)^2 P_s(i) + \sum_j j^2 P_d(j) \right)
Energy        \sum_i P_s(i)^2 \cdot \sum_j P_d(j)^2
Correlation   (1/2) \left( \sum_i (i - 2\mu)^2 P_s(i) - \sum_j j^2 P_d(j) \right)
Entropy       - \sum_i P_s(i) \log P_s(i) - \sum_j P_d(j) \log P_d(j)
Contrast      \sum_j j^2 P_d(j)
Homogeneity   \sum_j \frac{1}{1 + j^2} P_d(j)

It is interesting to notice that ARP sectors at different radial distances from the centre have a variable number of pixels; out of every sector the SDH algorithm calculates two fixed-size vectors, (P_sum, P_diff), which are then converted into seven features. In other words, sectors of different sizes contribute fixed-size feature vectors. The length of the extracted feature vector depends on the type of the processed image, ranging from (M − 2) × N × 7 in the case of a greyscale picture to (M − 2) × N × 7 × 3 for RGB images.
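A minimal sketch of the seven features of Table I, computed from the two probability vectors returned by the SDH step; the indexing convention (sum indices from 0 to 2(N_G − 1), difference indices from −(N_G − 1) to N_G − 1) follows equations (5) and (6).

import numpy as np

def sdh_features(p_sum, p_diff, n_g=256):
    # Mean, variance, energy, correlation, entropy, contrast and homogeneity
    # of one ARP sector, following the formulas of Table I.
    i = np.arange(p_sum.size)                    # sum indices
    j = np.arange(p_diff.size) - (n_g - 1)       # difference indices
    mu = 0.5 * np.sum(i * p_sum)
    contrast = np.sum(j ** 2 * p_diff)
    spread = np.sum((i - 2 * mu) ** 2 * p_sum)
    variance = 0.5 * (spread + contrast)
    correlation = 0.5 * (spread - contrast)
    energy = np.sum(p_sum ** 2) * np.sum(p_diff ** 2)
    entropy = -(np.sum(p_sum[p_sum > 0] * np.log(p_sum[p_sum > 0]))
                + np.sum(p_diff[p_diff > 0] * np.log(p_diff[p_diff > 0])))
    homogeneity = np.sum(p_diff / (1.0 + j ** 2))
    return np.array([mu, variance, energy, correlation, entropy, contrast, homogeneity])

Concatenating the seven values of every retained sector (and, for RGB images, of every colour channel) yields the final feature vector.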

In the specific case of M = 6 and N = 18, with a 24-bit colour image, the resulting feature vector has size (6 − 2) × 18 × 7 × 3 = 1512. As the number of ARP partitions decreases, the feature vector length becomes shorter.

IV. EXPERIMENTS AND RESULTS

We evaluated the performance of the proposed texture based iris recognition system on two different databases; we chose the Ubiris and the Upol databases since they are both challenging databases and have different disturbance elements; that is, while in the Ubiris database the main issues are illumination and partial occlusion, in the Upol database problematic pictures have a white circle inside the pupil, due to the reflection of the camera, and/or a blurred limbus contour.

The Ubiris [21] database includes 1877 images captured from 241 people in two sessions using a Nikon E5700 camera; pictures captured in the first session have less noise compared with those of the second session. Images have a resolution of 150 × 200 pixels and are either 24-bit RGB or greyscale. Ubiris is known as a challenging iris database due to the poor focus of the images and the presence of reflections and partial occlusion of the iris; some statistics are shown in table II.

TABLE II. UBIRIS DATABASE CHARACTERISTICS WITH CLASSIFICATION PARAMETERS [26].

Percentages     Good       Average    Bad
Focus           73.83 %    17.53 %    8.63 %
Reflections     58.87 %    36.78 %    4.34 %
Visible Iris    36.73 %    47.83 %    15.44 %

Out of every subject we selected the first five pictures, with the only exception of one person who has only four iris images. That is, we did not choose good irises nor discard problematic images, and we used both greyscale and RGB irises; this selection process results in 5 × 241 − 1 = 1204 images. We pre-processed the images with the MATLAB library function imfill to clear the original samples from light reflections on the iris and pupil regions, as shown in figure 8:

Fig. 8. Original Ubiris images (1st row) versus pre-processed ones (2nd row).

The Upol [22] iris database stores 128 × 3 = 384 images captured from the right and left eyes of 64 people. Images are 24-bit, have a resolution of 576 × 768 pixels and were taken with a SONY DXC-950P 3CCD camera. The main challenges of the Upol database are (1) the black frame surrounding the sclera, due to the particular process used to take the images, (2) the presence of a white circle inside the pupil, due to the reflection of the camera, and (3) a blurred limbus, the outer circle of the iris. We worked with all images.

In all experiments we worked in a closed environment, where the input test sample belongs to one of the training subjects. To maximize the amount of training and testing data, we made the classification using the k-fold cross-validation technique: in the first round, the test set is made up of all first instances of every class, which are classified using all remaining samples as training images; the same process is repeated with the second, third, ..., and m-th instance. In the case of the Upol database, having 64 subjects and 64 × 2 = 128 classes, n = 128 and m = 3, while the Ubiris database has n = 241 classes and m = 5 samples per class; it follows that the k-fold cross-validation technique has k = 3 when working with the Upol database, and k = 5 when working with the Ubiris database.

We ran our experiments in identification mode (one-to-many matching) and we evaluated the performance of our system using the Correct Recognition Rate (CRR), which is the percentage of correctly classified irises out of the total number of test samples. We compared the classification performance of K-NN with the Euclidean and the Manhattan distances. Following the study of Celebi [16], we fixed N = 18 and ran different experiments ranging M in [6, 8, 10, 12, 20]; classification results are reported in Tables III to VI below.
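Before the result tables, the fragment below sketches the evaluation protocol just described, assuming, for simplicity, that every class has the same number m of images and that K = 1 for the K-NN classifier (the paper does not state K); neither assumption holds exactly for the Ubiris subject with only four images.

import numpy as np

def evaluate_crr(features):
    # features: array of shape (n_classes, m, dim).  In fold f the f-th
    # instance of every class is the test set and the remaining images are
    # the training set; classification is 1-NN with the Manhattan distance.
    n_classes, m, dim = features.shape
    correct, total = 0, 0
    for fold in range(m):
        test = features[:, fold, :]                      # one image per class
        train = np.delete(features, fold, axis=1).reshape(-1, dim)
        train_labels = np.repeat(np.arange(n_classes), m - 1)
        for true_class in range(n_classes):
            d = np.abs(train - test[true_class]).sum(axis=1)   # L1 distance
            correct += int(train_labels[np.argmin(d)] == true_class)
            total += 1
    return 100.0 * correct / total                       # CRR in percent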
TABLE III. CRR (%) ON UBIRIS GREY IMAGES FOR DIFFERENT VALUES OF M (PARAMETER OF ARP) AND DISTANCES.

M x N (ARP parameters)   6x18    8x18    10x18   12x18   20x18
K-NN (Euclidean)         93.52   94.85   94.68   94.93   94.60
K-NN (Manhattan)         95.68   95.76   95.93   96.0    95.84

TABLE IV. CRR (%) ON UBIRIS COLOR IMAGES FOR DIFFERENT VALUES OF M (PARAMETER OF ARP) AND DISTANCES.

M x N (ARP parameters)   6x18    8x18    10x18   12x18   20x18
K-NN (Euclidean)         94.85   95.0    95.26   95.68   94.68
K-NN (Manhattan)         95.84   96.09   96.7    96.5    95.68

TABLE V. CRR (%) ON UPOL GREY IMAGES FOR DIFFERENT VALUES OF M (PARAMETER OF ARP) AND DISTANCES.

M x N (ARP parameters)   6x18    8x18    10x18   12x18   20x18
K-NN (Euclidean)         94.27   94.53   95.3    95.83   95.83
K-NN (Manhattan)         96.09   95.83   96.35   96.87   96.87

TABLE VI. CRR (%) ON UPOL COLOR IMAGES FOR DIFFERENT VALUES OF M (PARAMETER OF ARP) AND DISTANCES.

M x N (ARP parameters)   6x18    8x18    10x18   12x18   20x18
K-NN (Euclidean)         95.3    95.05   95.3    96.35   95.57
K-NN (Manhattan)         96.35   97.39   96.87   97.39   96.87

The results of Tables III-VI show that the best performance is reached with M = 12 and K-NN using the Manhattan distance.

Having n classes, we created a Confusion Matrix, CM, of size n × n, where rows label the actual class and columns the predicted class. The initial values are CM(i, j) = 0, for all i = 1, ..., n and j = 1, ..., n, and CM(i, j) is increased by 1 whenever a sample of class i is assigned to class j; ideally, CM is a diagonal matrix with all off-diagonal elements equal to 0 and CM(i, i) is equal to the total number of samples belonging to class i. When classification errors occur, CM(i, j) is equal to the number of samples belonging to class i that were assigned to class j, with j ≠ i.
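A minimal sketch of the confusion matrix construction described above; class labels are assumed to be 0-based indices rather than the 1-based notation used in the text.

import numpy as np

def confusion_matrix(true_labels, predicted_labels, n_classes):
    # CM[i, j] counts the samples of actual class i assigned to class j;
    # a perfect classifier produces a diagonal matrix.
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(true_labels, predicted_labels):
        cm[t, p] += 1
    return cm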

Fig. 9. The Class Confusion Matrix.

The following equations define (9) the False Positives of class i, FP_i, equal to the sum of the off-diagonal elements of column i; (10) the False Acceptance Rate of class i, FAR_i; (11) the weight of class i, W_i; (12) the False Acceptance Rate, FAR, equal to the weighted sum of the FAR_i; (13) the False Negatives of class i, FN_i, equal to the sum of the off-diagonal elements of row i; (14) the False Rejection Rate of class i, FRR_i; and (15) the False Rejection Rate, FRR. In all equations, Num. Sample_i stands for the number of samples belonging to class i:

FP_i = number of wrongly classified samples assigned to class i   (9)

FAR_i = FP_i / (Total Num. Samples − Num. Sample_i)   (10)

W_i = Num. Sample_i / Total Num. Samples   (11)

FAR = \sum_{i=1}^{n} FAR_i \cdot W_i   (12)

FN_i = number of samples of class i wrongly assigned to another class   (13)

FRR_i = FN_i / Num. Sample_i   (14)

FRR = \sum_{i=1}^{n} FRR_i \cdot W_i   (15)

Notice that, in the case of the Upol and Ubiris databases, all weights are equal because all classes have the same number of samples. In the following table we report the values of FAR and FRR in the case of K-NN with the Manhattan distance and M × N = 12 × 18 ARP sectors:

TABLE VII. PERFORMANCE MEASUREMENTS (%) OF THE PROPOSED TEXTURE BASED IRIS IDENTIFICATION SYSTEM.

Database   Color Format   FAR     FRR
Ubiris     grey           0.02    3.99
           color          0.01    3.49
Upol       grey           0.03    3.2
           color          0.02    2.60

It is interesting to notice that the low value of FAR is due to the very good performance of the proposed system, but also to the characteristics of our experiments, having only one test sample per class and a high number of classes.

Figure 10 shows some of the misclassified irises belonging to the Ubiris database. In this study, we did not choose the pictures to work with, and figure 10 shows that misclassification occurs for occluded irises, which are practically impossible to recognize; that is, considering that 15.44% of the irises have bad visibility, our average CRR of 96.5% is virtually equivalent to the best possible performance.

Fig. 10. Misclassified irises in the Ubiris database.

With the aim of investigating the correlation between the segmentation and classification steps, we report in table VIII the average segmentation accuracy on both databases:

TABLE VIII. AVERAGE SEGMENTATION ACCURACY ON THE UBIRIS AND UPOL DATABASES.

Dataset   Accuracy (%)
Ubiris    97
Upol      83

Comparing these results with the CRR of table VII, we may say that, in the case of the Ubiris database, the majority of the error is due to mis-segmentation; that is, we segmented 97% of the eyes correctly and classified about 96% of them properly; the 1% difference is due to added classification error. On the contrary, in the case of the Upol database, only 83% of the images are segmented correctly, but still the average CRR is about 96%; that is, the subsequent classification step can recover from little shifts in the segmentation boundary, and this is due, mainly, to the high resolution of the irises. Figure 11 shows some of the mis-segmented irises belonging to the Upol database; only the last two were misclassified:

Fig. 11. Mis-segmented irises in the Upol database.
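Under the same 0-based conventions, a sketch of the weighted FAR and FRR of equations (9)-(15), computed directly from the confusion matrix:

import numpy as np

def far_frr(cm):
    # cm: confusion matrix with rows = actual class, columns = predicted class.
    cm = np.asarray(cm, dtype=float)
    total = cm.sum()
    per_class = cm.sum(axis=1)                    # Num. Sample_i
    fp = cm.sum(axis=0) - np.diag(cm)             # eq. (9): off-diagonal column sums
    fn = cm.sum(axis=1) - np.diag(cm)             # eq. (13): off-diagonal row sums
    far_i = fp / (total - per_class)              # eq. (10)
    frr_i = fn / per_class                        # eq. (14)
    w = per_class / total                         # eq. (11)
    return float(np.sum(far_i * w)), float(np.sum(frr_i * w))   # eqs. (12) and (15)

When all classes have the same number of samples, as in the Upol and Ubiris experiments, the weights are all equal to 1/n.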
One of the main problems encountered during this work was to find a benchmark paper with a clear description of the experiments run, so as to be able to reproduce them and compare the resulting performances. We chose the two papers of Celebi [16] and Erbilek et al. [17] because we were attracted by their algorithms and they give a partial description of their experiments. That is, like Celebi and Erbilek, we worked with the Ubiris and Upol databases in identification mode (one-to-many matching); unfortunately, in the case of the Ubiris database, [16] does not give any other information, not even the session of the used images, which remains undefined; while [17] used images of session 1, selected 80 subjects to work with (but we do not know which subjects), and manually cropped all irises.

mages of secton, they selected 80 subects to work wth (but we do not know whch subects), and they manually cropped all rses. Table IX stores a comparson of the results of these three systems: TABLE IX. UBIRIS DATABASE: CRR (%) COMPARISON OF DIFFERENT IRIS BASED RECOGNITION SYSTEMS. Celeb [6] Erblek [7] Proposed Sesson Not known Sesson Sesson CRR 94.44 95.83 96.5 In case of [6], not knowng the sesson of the used mages, t s possble to make only a mld comparson on the CRR. On the contrary, comparng the performance of the proposed system wth the one of Erblek et al., we pont out that we reached better results also by selectng the frst fve mages of all subects and by usng automatc segmentaton of the rses. That s, CRR values together wth the random selectons of the mages and the automatc segmentaton of the rses make our system superor to the one of [7]. Experments on the Upol database are too obscured to be compared. V. CONCLUSIONS In the proposed texture based rs recognton system, () we made automatc segmentaton of the rs usng a varaton of the Daugman s ntegro-dfferental operator followed by the ARP technque, whch s nvarant to scale and rotaton; (2) we dd not use the frst and last radal partton, because they could contan eyelds and eyelashes as well as data from the pupl s and sclera s texture; (3) we extracted seven dmensonal feature out of every sector by re-wrtng the sector s texture nto two probablty vectors. Our experments ndcate that ths new system has the advantage to be robust to a wde varety of dsturbance elements, such as partal occluson of the rs, poor focus of the mage, llumnaton effects and blurred contours. Moreover, the low percentage of FAR obtaned n out trals suggests that the proposed approach s a good prototype for bometrc recognton systems run n dentfcaton mode, where securty s a key ssue. Another mportant advantage of ths new approach s the lttle number of samples per class necessary to tran t; that s, whle n the Ubrs database we worked wth 24 classes and we used only 4 tranng samples per class, n the Upol database 2 tranng samples per class are enough to dentfy a person out of 28 classes, 97% of the tme. REFERENCES [] J. Daugman, Hgh confdence vsual recognton of persons by a test of statstcal ndependence, IEEE Transactons on Pattern Analyss and Machne Intellgence, vol. 5, no., pp. 48 6, Nov. 993. [2] J. Daugman, The mportance of beng random: statstcal prncples of rs recognton, Pattern Recognton, vol. 36, no. 2, pp. 279 29, Feb. 2003. [3] J. Daugman, How rs recognton works, IEEE Transactons on Crcuts and Systems for Vdeo Tech., vol. 4, no., pp. 2 30, Jan. 2004. [4] R. Wldes, J. Asmuth, G. Green, S. Hsu, R. Kolczynsk, and S. McBrde, A system for automated rs recognton, n Proceedngs of the Second IEEE Workshop on Applcatons of Computer Vson, Sarasota, FL, Dec. 5 7, 994, pp. 2 28. [5] P. Belhumeur, J. Hespanha, and D. Kregman, Egenfaces vs fsherfaces: recognton usng class specfc lnear proecton, IEEE Transactons on Pattern analyss and Machne Intellgence, vol. 9, no. 7, pp. 7 720, Jul. 997. [6] W. Boles and B. Boashash, A human dentfcaton technque usng mage of the rs and wavelet transform, IEEE Transactons on Sgnal Processng, vol. 46, no. 4, pp. 85 88, Apr. 998. [7] L. Ma, T. Tan, Y. Wang, and D. Zhang, Local ntensty varaton analyss for rs recognton, Pattern Recognton, vol. 37, no. 6, pp. 287 298, Jun. 2004. [8] K. Bowyer, K. Hollngsworth, and P. Flynn, Image understandng for rs bometrcs: a survey, Computer Vson and Image Understandng, vol. 0, no. 2, pp. 28 307, May 2008. [9] C. 
REFERENCES

[1] J. Daugman, "High confidence visual recognition of persons by a test of statistical independence," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, no. 11, pp. 1148-1161, Nov. 1993.
[2] J. Daugman, "The importance of being random: statistical principles of iris recognition," Pattern Recognition, vol. 36, no. 2, pp. 279-291, Feb. 2003.
[3] J. Daugman, "How iris recognition works," IEEE Transactions on Circuits and Systems for Video Technology, vol. 14, no. 1, pp. 21-30, Jan. 2004.
[4] R. Wildes, J. Asmuth, G. Green, S. Hsu, R. Kolczynski, and S. McBride, "A system for automated iris recognition," in Proceedings of the Second IEEE Workshop on Applications of Computer Vision, Sarasota, FL, Dec. 5-7, 1994, pp. 121-128.
[5] P. Belhumeur, J. Hespanha, and D. Kriegman, "Eigenfaces vs. Fisherfaces: recognition using class specific linear projection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 711-720, Jul. 1997.
[6] W. Boles and B. Boashash, "A human identification technique using images of the iris and wavelet transform," IEEE Transactions on Signal Processing, vol. 46, no. 4, pp. 1185-1188, Apr. 1998.
[7] L. Ma, T. Tan, Y. Wang, and D. Zhang, "Local intensity variation analysis for iris recognition," Pattern Recognition, vol. 37, no. 6, pp. 1287-1298, Jun. 2004.
[8] K. Bowyer, K. Hollingsworth, and P. Flynn, "Image understanding for iris biometrics: a survey," Computer Vision and Image Understanding, vol. 110, no. 2, pp. 281-307, May 2008.
[9] C. Chen and C. Chu, "High performance iris recognition based on 1-D circular feature extraction and PSO-PNN classifier," Expert Systems with Applications, vol. 36, no. 7, pp. 10351-10356, Sep. 2009.
[10] F. Sibai, H. Hosani, R. Naqbi, S. Dhanhani, and S. Shehhi, "Iris recognition using artificial neural networks," Expert Systems with Applications, vol. 38, no. 5, pp. 5940-5946, May 2011.
[11] J. Pillai, V. Patel, R. Chellappa, and N. Ratha, "Secure and robust iris recognition using random projections and sparse representations," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 9, pp. 1877-1893, Sep. 2011.
[12] Y. Si, J. Mei, and H. Gao, "Novel approaches to improve robustness, accuracy and rapidity of iris recognition systems," IEEE Transactions on Industrial Informatics, vol. 8, no. 1, pp. 110-117, Feb. 2012.
[13] A. Rahulkar and R. Holambe, "Half-iris feature extraction and recognition using a new class of biorthogonal triplet half-band filter bank and flexible k-out-of-n: A post-classifier," IEEE Transactions on Information Forensics and Security, vol. 7, no. 1, pp. 230-240, Feb. 2012.
[14] A. Chalechale, A. Mertins, and G. Naghdy, "Edge image description using angular radial partitioning," IEE Proceedings - Vision, Image and Signal Processing, vol. 151, no. 2, pp. 93-101, Apr. 2004.
[15] M. Unser, "Sum and difference histograms for texture classification," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 8, no. 1, pp. 118-125, Jan. 1986.
[16] A. Celebi, M. Gullu, and S. Erturk, "Low complexity iris recognition using one-bit transform and angular radial partitioning," in IEEE Signal Processing and Communications Applications Conference (SIU), Antalya, Apr. 2009, pp. 696-699.
[17] M. Erbilek and O. Toygar, "Recognizing partially occluded irises using sub-pattern-based approaches," in International Symposium on Computer and Information Sciences (ISCIS), Guzelyurt, Sep. 2009, pp. 606-610.
[18] J. Canny, "A computational approach to edge detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 8, no. 6, pp. 679-698, Nov. 1986.
[19] Y. Jang, B. Kang, and K. Park, "A study on eyelid localization considering image focus for iris recognition," Pattern Recognition Letters, vol. 29, no. 11, pp. 1698-1704, Aug. 2008.
[20] T. Tan, Z. He, and Z. Sun, "Efficient and robust segmentation of noisy iris images for non-cooperative iris recognition," Image and Vision Computing, vol. 28, no. 2, pp. 223-230, Feb. 2010.
[21] H. Proenca and L. Alexandre, Ubiris iris image database: http://iris.di.ubi.pt.
[22] M. Dobes and L. Machala, Upol iris image database: http://phoenix.inf.upol.cz/iris/.
[23] M. Hebashy, Poster: "Optimized Daugman's algorithm for iris localization," http://wscg.zcu.cz/wscg2008/papers_2008/poster/a full.pdf.
[24] J. Alajarin, J. Luis-Delgado, and L. Tomas-Balibrea, "Automatic system for quality-based classification of marble textures," IEEE Transactions on Systems, Man and Cybernetics - Part C: Applications and Reviews, vol. 35, no. 4, pp. 488-497, Nov. 2005.
[25] R. Gonzalez and R. Woods, Eds., Digital Image Processing, 3rd Pearson Int. Edition, 2008.
[26] Ubiris classification statistics, http://iris.di.ubi.pt/ubiris.html.
[27] C. Burges, "A tutorial on support vector machines for pattern recognition," Data Mining and Knowledge Discovery, vol. 2, no. 2, pp. 121-167, 1998.