
SINGULAR VALUE DECOMPOSITION AND 2D PRINCIPAL COMPONENT ANALYSIS OF IRIS-BIOMETRICS FOR AUTOMATIC HUMAN IDENTIFICATION

A thesis presented to the faculty of the Russ College of Engineering and Technology at Ohio University

In partial fulfillment of the requirements for the degree Master of Science

Michael J. Brown

June 2006

This thesis entitled SINGULAR VALUE DECOMPOSITION AND 2D PRINCIPAL COMPONENT ANALYSIS OF IRIS-BIOMETRICS FOR AUTOMATIC HUMAN IDENTIFICATION by MICHAEL J. BROWN has been approved for the School of Electrical Engineering and Computer Science and the Russ College of Engineering and Technology by

Mehmet Celenk, Associate Professor of Electrical Engineering and Computer Science

Dennis Irwin, Dean, Russ College of Engineering and Technology

Abstract

BROWN, MICHAEL J., M.S., June 2006, Electrical Engineering

SINGULAR VALUE DECOMPOSITION AND 2D PRINCIPAL COMPONENT ANALYSIS OF IRIS-BIOMETRICS FOR AUTOMATIC HUMAN IDENTIFICATION (95 pp.)

Director of Thesis: Mehmet Celenk

With the recent emphasis given to security, automatic human identification has received significant attention. In particular, iris-based subject recognition has become especially important because the iris's high level of complexity lends itself to high-confidence recognition. In addition, the eye is well protected and generally changes very little over extended periods of time. This thesis reviews several currently available methods. A wide sense stationary approximation of gray scale values is explored as a possible means of feature extraction. The singular value decomposition (SVD) is discussed as a low bit rate tool for iris discrimination, and the 2D principal component analysis (2DPCA) is explored as a method for feature extraction. It is determined experimentally that the SVD offers a novel way to significantly reduce the storage requirements for iris recognition (133 bits, compared to 2048 bits for other methods), although its recognition accuracy has not reached a desirable level. The 2DPCA, on the other hand, significantly improves recognition accuracy on the same dataset, but at the cost of greater storage requirements.

Acknowledgements

Thanks to Mehmet Celenk, Yi Luo, and Jason Kaufman.

Table of Contents

Abstract
List of Tables
List of Figures
Chapter 1. Introduction
1.1 Biometric Identification and Iris Recognition
1.2 Iris Physiology
1.3 Literature Review
1.4 Image Acquisition and Selection
Chapter 2. Fixed and Variable Size Window Sampling and Stochastic Analysis
2.1 Wide Sense Stationary Approximation
2.2 Feature Generation with Correlation Matrices
Chapter 3. Image Preprocessing
3.1 Histogram Analysis
3.2 Iris Extraction
3.3 Image Unwrapping
3.4 Image Enhancement
Chapter 4. Singular Value Decomposition of Iris Biometrics
4.1 The Singular Value Decomposition
4.2 Feature Extraction with the SVD
4.3 Classification Performance with the SVD

4.4 Future Improvements
Chapter 5. 2D Principal Component Analysis (2DPCA) of Iris Biometrics
5.1 The 2DPCA
5.2 Feature Extraction with the 2DPCA
5.3 Classification Performance with the 2DPCA
Chapter 6. Conclusions and Discussion
Bibliography
Appendix A. Additional Images for four subjects
Appendix B. Software Flow
Appendix C. Source code
Section C.1 - Recognize.m
Section C.2 - Reader.m
Section C.3 - Preproc.m
Section C.4 - Recognizetwodpca.m
Section C.5 - Twod_pca.m
Section C.6 - My_eig.m

List of Tables

Table 4.1. SVD Integer Storage Requirements

List of Figures

Figure 1.1. Anatomy of the human eye (from [8])
Figure 1.2. Iris boundaries and corresponding iris code (from [1])
Figure 1.3. Calculated Hamming distances (from [1])
Figure 1.4. Stages of iris unwrapping (from [3])
Figure 1.5. Sample zero crossing iris representation (from [6])
Figure 1.6. Image with excessive occlusion (from [7])
Figure 1.7. Image with minimal occlusion (from [7])
Figure 2.1. Cropped iris image (from [10])
Figure 2.2. Fixed size windows (from [10])
Figure 2.3. Difference matrix feature space (from [10])
Figure 2.4. Three dimensional feature space for fixed window size (from [10])
Figure 2.5. Three dimensional feature space for variable window size (from [10])
Figure 3.1. Sample iris image (from [7])
Figure 3.2. Typical grayscale distribution for iris image
Figure 3.3. Iris image with excessive eyelid glare (from [7])
Figure 3.4. Grayscale distribution of Figure 3.3
Figure 3.5. Grayscale distribution of Figure 3.3 compensated for glare
Figure 3.6. Typical iris extraction progression
Figure 3.7. Cropped localized iris image
Figure 3.8. Localized iris image with corresponding pupil boundary and centroid

Figure 3.9. Local iris image with inner and outer boundaries indicated
Figure 3.10. Unwrapped iris image
Figure 3.11. Sample illumination pattern of a normalized unwrapped image
Figure 3.12. Enhanced unwrapped image after illumination subtraction
Figure 4.1. Classification performance (η) with different radius multipliers
Figure 4.2. Between class distances with different radius multipliers
Figure 4.3. Within class distances using different radius multipliers
Figure 4.4. Classification performance (η) using different numbers of iris images
Figure 4.5. Average intraclass distance using different numbers of images
Figure 4.6. Average between class distance using different numbers of images
Figure 4.7. Performance with varying numbers of singular values
Figure 4.8. Average intraclass distance with varying numbers of singular values
Figure 4.9. Average interclass separation with different numbers of singular values
Figure 4.10. Incorrectly located pupil
Figure 4.11. Unwrapped image including too much conjunctiva
Figure 5.1. Classification performance with different radius multipliers
Figure 5.2. Average intraclass distance with different radius multipliers
Figure 5.3. Average interclass distances with different radius multipliers
Figure 5.4. Classification performance with different numbers of Eigenvectors
Figure 5.5. Average intraclass distance with different numbers of Eigenvectors
Figure 5.6. Average interclass distance with different numbers of Eigenvectors

Chapter 1. Introduction

1.1 Biometric Identification and Iris Recognition

As the need for secure identification methods grows, biometrics have moved to the forefront of modern research. Conventional systems are generally either token (key) or knowledge (password) based [24], which makes them susceptible to numerous problems. Obvious examples are losing a key or forgetting one's password. In addition, unauthorized individuals may still be able to use a key, and passwords that are easy for someone to remember are often easy to guess [25]. Clearly, a more robust and flexible system for identification would alleviate many of these and other associated problems.

For these reasons, considerable effort has been devoted to biometrics as an alternative to conventional methods. Biometric traits are inherent to a person's physical appearance, making them based on who you are [25]. In addition, in most cases they cannot be lost or easily duplicated. Some common biometric identifiers that have been researched include facial characteristics, hand geometry, retinal blood vessel patterns, fingerprints, speech and voice patterns, gait, and iris features [24].

When using handprints or fingerprints, a high degree of accuracy is achieved due to the complexity of the feature. However, more interaction is required since the subject must touch a sensing device [27]. Facial features and gait recognition do not require as much cooperation from the subject, but they are generally less accurate when making

identifications [27]. In addition, different facial expressions and styles of walking make these systems more susceptible to variations that could hinder identification [1].

One feature that is not limited by these factors is the iris. In general, the eye is well protected from danger by the eyelid. In addition, since vision is a sense that most individuals rely on heavily, care is usually taken to avoid damage to the eye. This makes the iris especially desirable since, in most cases, it does not change appreciably over time [1].

In the following pages, different aspects of this challenging technology are discussed. A variety of algorithms were explored both for image segmentation and feature extraction, and discussions of different approaches to both are included. In Chapter 1, a brief discussion of the physiology of the iris is presented, along with a review of the currently available literature and methods. Image acquisition and selection is also briefly discussed. Chapter 2 covers the initial research conducted using a wide sense stationary approximation of the iris. Chapter 3 discusses image preprocessing for use with the singular value decomposition as detailed in Chapter 4. Chapter 5 details a novel use of the 2D principal component analysis for feature extraction.

1.2 Iris Physiology

The iris is the concentric area located in the eye between the cornea and the lens (see Figure 1.1) which gives the eye its color. It is made up of tiny muscles that dilate and contract the pupil for varying lighting conditions [8]. Beginning in the third month of gestation [1], a process known as chaotic morphogenesis causes the iris to develop in a mostly random manner. Since this process is unrelated to an individual's genetics, even identical twins will have irises that are very different from one another [1].

The main function of the iris is to vary the size of the pupil in response to different lighting conditions. This is done involuntarily with the aid of muscles within the iris. As a result, this behavior can be used to verify that several sequential images are from an actual eye by looking for variations in pupil size [1].

Figure 1.1. Anatomy of the human eye (from [8])

1.3 Literature Review

The most prevalent method for iris identification currently in use was developed by Daugman [1,2]. A digital video camera is employed to capture several sequential images of an iris in the near infrared (NIR) frequency range, usually with the subject's cooperation to center the eye in the view of the camera via a feedback video loop. NIR is used because it provides more richly detailed images of the iris, so more useful features can be extracted [1]. It is also less intrusive to human subjects [1]. While the images are being acquired, image quality is assessed by measuring the spectral power in the mid to high frequency portion of the two dimensional Fourier transform. This quantity is then maximized by either adjusting the camera or indicating to the subject a need for adjustment [1].

During the processing stage, the boundaries of the iris are located using a coarse to fine strategy with an integrodifferential edge detection operator which searches for the maximum of the blurred partial derivative [1,2]. A similar operation is also applied to locate the upper and lower eyelid boundaries. Once the boundaries have been located, quadrature 2-D Gabor wavelets [2] are used to extract the phase information present in the picture. It is important to note that only phase information is used, as amplitude information was deemed by the author to be too susceptible to illumination and other environmental factors. The extracted phase information is converted and stored as a 2048 bit iris code, along with a 2048 bit masking code that determines what information will actually be used in the identification process. An example of the calculated boundaries and an iris code is shown in Figure 1.2.

Figure 1.2. Iris boundaries and corresponding iris code (from [1])

In order to classify an iris code, a test for statistical independence was implemented using the fractional Hamming distance (HD), where an HD of zero would indicate a perfect match [1]. Figure 1.3 shows a comparison of the HD for the same iris versus two different irises with 2.3 million comparisons under non-ideal circumstances.

Figure 1.3. Calculated Hamming distances (from [1])

An examination of the histogram in Figure 1.3 suggests that a suitable classifier could be implemented by placing the decision boundary at an HD of about 0.33 when using 1 million different iris patterns. This value could be raised or lowered depending on the desired security. In order to achieve this performance, the image I(x,y) was remapped to dimensionless polar coordinates, which allows the algorithm to correct for changes in the pupil's size. In addition, multiple iterations were computed over several different angles in order to achieve invariance to the image's orientation. Finally, a decision confidence level was calculated with a decidability index (denoted by d') [1] based on the separation of the distributions such as in Figure 1.3. The results obtained by this method would give a false match probability of about one in four million and would require 2048 iris code bits and 2048 masking bits.
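The Hamming-distance test above is straightforward to sketch in code. The thesis's own software is MATLAB (Appendix C); the following NumPy sketch is illustrative only, and the 16-bit toy codes stand in for real 2048-bit iris codes and masking codes:

```python
import numpy as np

def fractional_hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fraction of disagreeing bits, counted only where both masks are valid."""
    valid = mask_a & mask_b
    disagreements = (code_a ^ code_b) & valid
    return disagreements.sum() / valid.sum()

# Toy 16-bit codes (real codes are 2048 bits plus 2048 masking bits).
a = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1], dtype=np.uint8)
mask = np.ones(16, dtype=np.uint8)
b = a.copy()
b[:4] ^= 1  # flip 4 of the 16 bits

print(fractional_hamming_distance(a, a, mask, mask))   # 0.0
print(fractional_hamming_distance(a, b, mask, mask))   # 0.25
```

A decision boundary near HD = 0.33, as suggested by Figure 1.3, would then accept the second comparison as a match and reject pairs whose distance lies near the different-iris distribution.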

In the method presented in [3], the authors propose a different approach. Initially, the iris and pupil are assumed to be circular, and simple filtering and edge detection methods are used to find the iris in the picture. The image is then normalized and unwrapped [3]. These steps are shown in Figure 1.4. Figure 1.4(a) shows an initial iris image, and Figure 1.4(b) illustrates the iris after the boundaries are located. The unwrapped iris is presented in Figure 1.4(c), and the unwrapped iris with additional enhancement is depicted in Figure 1.4(d).

Figure 1.4. Stages of iris unwrapping (from [3])

After unwrapping, the iris was characterized using frequency and orientation information. This was done by using a bank of circular symmetric filters [3] to capture frequency information, which the authors contend is the most useful for iris identification. The filters used were circular Gabor filters, modified for each frequency to be captured. These filters do not capture orientation information. The iris is divided into several different regions of interest, and each is characterized using the

circular symmetric filter at a different frequency. Once the features are extracted from the iris, a nearest feature line (NFL) classifier [3] was used to identify the iris. The results obtained by the authors are less accurate than those obtained by the method presented by Daugman [1,2].

In [4] and [5], the authors use the same localization and unwrapping approaches outlined in [3]. However, the classifiers are generated in different manners. With the method presented in [4], features are generated by looking for key local variations in the unwrapped image. Dyadic wavelet transforms are used for this purpose. In [5], Gabor filters are chosen in the spatial domain in order to extract features.

In [6], yet another method for iris identification is presented. In this approach, the iris edges are detected using simple filtering and edge detection techniques. Then, the image is cropped to create a new picture. The center of the pupil is found by assuming a circular shape for both the iris and the pupil, and a scaling factor is chosen for the dyadic wavelet transform to make sure features are extracted from the same portions of the iris [6]. Concentric circles are followed around the iris to create gray level representations, and this process is repeated until a suitable number of 1-D signals are obtained. The dyadic wavelet transform is applied, and zero crossings from different wavelet resolution levels are captured to create an iris signature [6] as shown in Figure 1.5. In addition, since higher resolution levels are discarded, the authors contend that high frequency noise will have little effect on identification accuracy. Once these features are found, comparisons are made using the zero crossing iris signatures [6]. Identification was implemented by finding the iris in the database with the least amount of dissimilarity.

One problem with the presentation of this method is that the authors selected a significantly smaller database than the other approaches detailed here.

Figure 1.5. Sample zero crossing iris representation (from [6])

In comparison to the algorithms that are currently available, the approach presented in this thesis requires only 133 bits for the iris identification code to provide a maximum recognition rate of 83% using the singular value decomposition. This was achieved with 18 different subjects taken from the CASIA iris image database [7]. This method was also limited by the number of subjects that were tested, but it produces promising results with a comparatively small feature space. The recognition accuracy was then improved to 89% using the 2D principal component analysis (2DPCA), but the storage requirements for this method are greater.

1.4 Image Acquisition and Selection

Initially, images were selected from the CASIA iris image database [7]. This database is maintained by the National Laboratory of Pattern Recognition (NLPR), the Institute of Automation (IA), and the Chinese Academy of Sciences (CAS). The database consists of 108 classes (subjects). For each subject, there are a total of seven images: three from one session, and four from another [7]. Each uses 8 bits per pixel for gray scale values, and is stored as a 320x280 (width x height) bitmap. Pictures with too much eyelash or eyelid occlusion were less ideal, as shown in Figures 1.6 and 1.7. Initially they were rejected, but later the algorithm was tested with both types of images.

Figure 1.6. Image with excessive occlusion (from [7])

Figure 1.7. Image with minimal occlusion (from [7])

Chapter 2. Fixed and Variable Size Window Sampling and Stochastic Analysis

2.1 Wide Sense Stationary Approximation

In the initial phase of research, conducted and outlined in [10], a wide sense stationary approximation was used for texture characterization, and second order statistics were explored as a method for classification of iris images [10]. Initially, the iris was cropped from the image using a rectangular mask [11], and the pupil was detected by analyzing a histogram of gray scale values. The pupil and the area surrounding the iris were then assigned a gray scale value of zero (black) to make subsequent processing easier, as shown in Figure 2.1.

Figure 2.1. Cropped iris image (from [10])

Figure 2.2. Fixed size windows (from [10])

Two small fixed size windows were then extracted from the image: one at the left and one at the right boundary of the pupil, as illustrated in Figure 2.2. Next, the two dimensional (2-D) discrete auto- (R_XX(m,n;k,l)) and cross- (R_XY(m,n;k,l)) correlations [14] for the rectangular sub-images of Figure 2.2 were calculated. Since the image data is two dimensional, by definition the ensuing correlation functions would be four dimensional (4-D) [10]. In order to reduce the complexity of the system, a wide sense stationary approximation was used when calculating the autocorrelation (R_XX) and cross-correlation (R_XY) using [26]

R_XX(m,n) = E[X(k,l) X(k-m, l-n)]    (2.1)

R_XY(m,n) = E[X(k,l) Y(k-m, l-n)]    (2.2)

where E indicates the expected value, and X(k,l) is a pixel value of the sub-image X at location (k,l). The autocorrelation matrix was determined for each sub-image in the first iris image window and used as a reference. Next, the cross correlation was computed

between corresponding windows of different iris images to create a new difference matrix D defined as [10]

D_XX-XY(m,n) = R_XX(m,n) - R_XY(m,n)    (2.3)

It is prudent to note that, for a sub-image of size NxN, the correlation and difference matrices are of size (2N-1)x(2N-1) [10], which can be quite large. In testing, one of the images was used as a reference or training value, and another as a comparison or test value. The 2-D autocorrelation was found as described earlier for each rectangular sub-image on either side of the pupil (see Figure 2.2). The difference matrix was calculated, along with its mean and summation. It was found through experimentation that the magnitude of the difference matrix was generally quite large when comparing two different irises. However, it was significantly smaller when comparing two different images of the same iris. Figure 2.3 shows a sample classification using two different irises. A decision boundary could easily be defined in this case to separate the two irises. This serves to show that a low dimensional feature space might have been possible using a wide sense stationary assumption [26].
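Equations (2.1)-(2.3) can be sketched directly. The original experiments were run in MATLAB; this NumPy version (the function names are mine) estimates the WSS correlations by averaging the pixel products over the overlapping region at each lag, which yields the (2N-1)x(2N-1) matrices noted above:

```python
import numpy as np

def wss_correlation(X, Y):
    """Sample estimate of R_XY(m,n) = E[X(k,l) Y(k-m, l-n)] for N x N windows."""
    N = X.shape[0]
    R = np.zeros((2 * N - 1, 2 * N - 1))
    for m in range(-(N - 1), N):
        for n in range(-(N - 1), N):
            # Average the product over the pixels where both shifted windows overlap.
            xs = X[max(m, 0):N + min(m, 0), max(n, 0):N + min(n, 0)]
            ys = Y[max(-m, 0):N + min(-m, 0), max(-n, 0):N + min(-n, 0)]
            R[m + N - 1, n + N - 1] = np.mean(xs * ys)
    return R

def difference_matrix(X, Y):
    """Eq. (2.3): D(m,n) = R_XX(m,n) - R_XY(m,n)."""
    return wss_correlation(X, X) - wss_correlation(X, Y)
```

Comparing a window with itself gives an all-zero difference matrix; windows from different irises give large-magnitude entries, which is the basis of the separation seen in Figure 2.3.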

Figure 2.3. Difference matrix feature space (from [10])

2.2 Feature Generation with Correlation Matrices

Based on the initial results, the correlation matrices were then used to create feature vectors instead of directly comparing two pictures [10]. Again, the image was first cropped as shown in Figure 2.1. Next, moving windows of fixed size were used: starting at the inside edge of the iris, the windows were moved outwards a single pixel at a time. For each shift, the autocorrelation was calculated for both windows, as well as the cross correlation between the two [10]. Then, features were extracted from the matrices by finding the maximum, minimum, sum, mean, and diagonal summation (trace). In a similar attempt, the windows were also increased in size by a single pixel for each iteration while the inside edge remained stationary [10]. Features were generated in the same manner once the auto- and cross-correlation matrices were created.

After the feature vectors had been created, the difference between them was computed to check classification performance. Using eight images from four subjects (two images per person), a minimum distance classifier correctly identified each iris. This method was also tested using the Euclidean distance with the same results [10]. By examining a three dimensional feature space, the performance using fixed and variable sized windows can be seen in Figures 2.4 and 2.5, respectively. It is noticeable that there is not a large amount of interclass separation, but the grouping for each class is reasonably well defined.
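The five scalar features and the minimum-distance decision can be sketched as follows (a NumPy stand-in for the MATLAB originals; the helper names are illustrative):

```python
import numpy as np

def correlation_features(R):
    """Maximum, minimum, sum, mean, and diagonal summation (trace) of a
    correlation (or difference) matrix, as described above."""
    return np.array([R.max(), R.min(), R.sum(), R.mean(), np.trace(R)])

def nearest_class(feature, references):
    """Minimum (Euclidean) distance classifier over stored reference vectors."""
    distances = [np.linalg.norm(feature - ref) for ref in references]
    return int(np.argmin(distances))
```

Each window shift contributes one such five-element vector, and a test iris is assigned to the class whose reference vector lies closest.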

Figure 2.4. Three dimensional feature space for fixed window size (from [10])

Figure 2.5. Three dimensional feature space for variable window size (from [10])

Following these initial findings, further attempts were made to explore this window sampling technique on a larger number of samples. However, the results indicated that the use of these second order statistics was not sufficient for useful iris feature classification purposes. It was also discovered that the preprocessing steps that were implemented to find the iris may not have been sufficiently robust for use on a larger number of images.

Because the orientation and slope of the generated vectors seemed to be a promising feature as well, additional experiments were performed using the first and second order derivatives of the curves shown in Figures 2.4 and 2.5. However, this did not significantly improve correct identifications, and it was determined that a different approach was required.

In subsequent iterations based on this windowing approach, attempts were made to generate useful features using the Fourier transform, as well as the Hadamard transform and the singular value decomposition (SVD). None of these produced significantly better results, most likely because the image preprocessing steps used were unable to accurately locate the iris in all cases. Additionally, there was no image size normalization, so differences in the size of the iris had a profound effect on generated feature vectors. With this in mind, more powerful preprocessing steps along with the singular value decomposition were implemented to provide a far more accurate identification technique, as described in Chapters 3 and 4, respectively.

Chapter 3. Image Preprocessing

3.1 Histogram Analysis

Once a suitable number of training samples were selected, preprocessing was required to isolate the iris for identification. First, a grayscale distribution is generated from an image such as the one shown in Figure 3.1 by counting the number of pixels with each gray scale value from 0 to 255. A sample histogram showing such a distribution is given in Figure 3.2. By examining the histogram, we can see that the largest number of pixels have a gray scale value of around 48, which corresponds to the pupil in the image. In some cases, however, an image will have two peaks if there is significant glare from the eyelid, as is evident in Figure 3.3. This is detected by testing to make sure the maximum of the distribution does not occur at a gray level larger than the mean grayscale value of the image. If the test fails, as would happen in Figure 3.4, a new distribution is created by discarding all grayscale values greater than the mean, as shown in Figure 3.5. The size of the pupil is then estimated by using the maximum value in the gray level distribution as the area of a circle A and calculating the radius r using

r = sqrt(A/π)    (3.1)
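The histogram test and the radius estimate of Eq. (3.1) can be sketched as below. This is a NumPy illustration of the procedure described (the thesis code itself is MATLAB, in Appendix C); the glare test compares the gray level at which the histogram peaks against the image mean:

```python
import numpy as np

def estimate_pupil_radius(img):
    """Estimate the pupil radius from the histogram of an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    peak = int(np.argmax(hist))
    if peak > img.mean():
        # Glare case (two peaks): discard gray values above the mean, re-find the peak.
        hist[int(np.ceil(img.mean())):] = 0
        peak = int(np.argmax(hist))
    area = hist[peak]             # pixel count at the peak, taken as the circle's area A
    return np.sqrt(area / np.pi)  # Eq. (3.1): r = sqrt(A / pi)
```

On a synthetic image containing a dark disk of radius 20 on a bright background, the estimate comes out very close to 20.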

Figure 3.1. Sample iris image (from [7])

Figure 3.2. Typical grayscale distribution for iris image

Figure 3.3. Iris image with excessive eyelid glare (from [7])

Figure 3.4. Grayscale distribution of Figure 3.3

Figure 3.5. Grayscale distribution of Figure 3.3 compensated for glare

3.2 Iris Extraction

In order to define the iris boundaries for feature extraction, the pupil's center is first estimated and a cropped image is created. This is followed by edge detection of the iris and definition of the outer iris boundary. Next, a ring defined by the inner and outer boundaries is unwrapped [3,4,5] to a rectangular coordinate system and enhanced. A typical progression of these steps is shown in Figure 3.6. A more detailed discussion of each step is presented below.

Figure 3.6. Typical iris extraction progression

The center of the pupil is estimated by searching row- and column-wise for the minimum summation value along each direction. As outlined in [3,4,5], the pupil center at (X_p, Y_p) is located in the original image I(x,y) with

X_p = arg min_x Σ_y I(x,y)
Y_p = arg min_y Σ_x I(x,y)    (3.2)

Once an estimate is found for the pupil's center, the original image is cropped with a rectangular mask [11] to further isolate the iris, using the pupil's estimated radius from (3.1) as a reference for sizing. The actual cropped image must be larger than the pupil, and a suitable scaling factor for the estimated radius was found through experimentation. This method is applied once again to the subimage to further refine the location of the pupil's center. A sample cropped image is shown in Figure 3.7.
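Eq. (3.2) amounts to taking the darkest row and column projections, since the dark pupil drags those sums down. A NumPy sketch (illustrative, not the thesis's MATLAB):

```python
import numpy as np

def estimate_pupil_center(img):
    """Eq. (3.2): the column and row whose gray-value sums are minimal."""
    x_p = int(np.argmin(img.sum(axis=0)))  # darkest column
    y_p = int(np.argmin(img.sum(axis=1)))  # darkest row
    return x_p, y_p
```

On a synthetic bright image with a dark disk centered at column 60, row 35, the estimate recovers that center exactly.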

Figure 3.7. Cropped localized iris image

Following pupil and iris location, the image is binarized [3] using the gray scale distribution's maximum value, so that all pixels located in the pupil have a grayscale value of zero. Simple edge detection is then applied to find the actual boundary between the iris and the pupil. During detection, each pixel on the edge is counted, and the center of mass, or centroid, (x_c, y_c) of the iris is calculated using a discrete version of the method outlined in [19] as

x_c = (1/N_b) Σ_{i=1}^{N_b} x_i
y_c = (1/N_b) Σ_{i=1}^{N_b} y_i    (3.3)

where N_b is the total number of pixels on the detected boundary, and (x_i, y_i) is a pixel location on the boundary. Figure 3.8 shows a cropped iris image with its corresponding detected inner boundary and center of mass.

Figure 3.8. Localized iris image with corresponding pupil boundary and centroid

After finding an inner boundary for the iris, an outer boundary must also be determined. In this case, the radius of the outer boundary was calculated by simply multiplying the inner boundary's radius by an experimentally determined multiplier. The outer points (x_b, y_b) of the boundary are then created with

x_b(θ) = r cos(θ)
y_b(θ) = r sin(θ)    (3.4)

with θ varying from zero to 2π, and r being the calculated radius of the outer boundary. Sample inner and outer calculated boundaries are presented in Figure 3.9. Additional sample images are included in Appendix A.

Figure 3.9. Local iris image with inner and outer boundaries indicated
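Eqs. (3.3) and (3.4) can be sketched together: the centroid is the mean of the detected boundary pixels, and the outer boundary is a circle of the scaled radius. This NumPy illustration follows the equations as written, with the boundary circle expressed relative to the centroid:

```python
import numpy as np

def pupil_centroid(boundary_pixels):
    """Eq. (3.3): centroid (x_c, y_c) as the mean of the N_b boundary pixels."""
    pts = np.asarray(boundary_pixels, dtype=float)
    return pts.mean(axis=0)

def outer_boundary(radius, num_points=360):
    """Eq. (3.4): boundary points for theta from 0 to 2*pi."""
    theta = np.linspace(0.0, 2.0 * np.pi, num_points, endpoint=False)
    return radius * np.cos(theta), radius * np.sin(theta)
```

Every generated point lies at the chosen radius, so offsetting the pair of arrays by the centroid places the outer boundary on the image.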

3.3 Image Unwrapping

Following the boundary generation, the iris is unwrapped to a rectangular block of a fixed size as discussed in [3,4,5]. This is similar to the method employed by [1] and [2], which projected the iris image from a rectangular coordinate system to a doubly dimensionless pseudopolar coordinate system [3,4,5]. By doing so, the system can compensate for iris changes due to different lighting conditions and imaging distances, which makes the processing that follows easier to accomplish [3,4,5]. The unwrapping is performed by using [3,4,5]

I_n(X,Y) = I_o(x,y)
x = x_p(θ) + (x_i(θ) - x_p(θ)) · Y/M
y = y_p(θ) + (y_i(θ) - y_p(θ)) · Y/M
θ = 2πX/N    (3.5)

where I_n represents an MxN normalized and unwrapped image. In this case, M and N are 48 and 256, respectively. The coordinates of the inner and outer boundaries of the iris at angle θ in the original image I_o are represented by (x_p(θ), y_p(θ)) and (x_i(θ), y_i(θ)), respectively. A sample unwrapped image is included in Figure 3.10.

Figure 3.10. Unwrapped iris image
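Eq. (3.5) can be sketched as a direct loop over the output grid. This NumPy version uses nearest-neighbor sampling (an assumption on my part; any interpolation scheme would do) and expects the inner and outer boundary coordinates sampled at the N unwrapping angles:

```python
import numpy as np

def unwrap_iris(img, inner_pts, outer_pts, M=48, N=256):
    """Unwrap the iris ring to an M x N rectangle per Eq. (3.5)."""
    out = np.zeros((M, N), dtype=img.dtype)
    for X in range(N):
        xp, yp = inner_pts[X]           # inner boundary at theta = 2*pi*X/N
        xi, yi = outer_pts[X]           # outer boundary at the same angle
        for Y in range(M):
            x = xp + (xi - xp) * Y / M  # walk outward along the radius
            y = yp + (yi - yp) * Y / M
            out[Y, X] = img[int(np.rint(y)), int(np.rint(x))]
    return out
```

With M = 48 and N = 256 as in the text, each column of the output corresponds to one angle and each row to one normalized radial position.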

3.4 Image Enhancement

After unwrapping the image, the gray scale values are adjusted to account for differences in illumination and other factors. In [11], several methods are presented that help to balance the image in addition to improving contrast. After examining the available images, however, it was determined that the contrast was suitable. In order to compensate for illumination differences, the image is first scanned to find the average grayscale value of each 16x16 block in the picture, as described in [3,4,5]. This gives a rough estimate of the illumination present in each portion of the image, and makes later processing somewhat easier. Figure 3.11 is representative of a typical calculated illumination pattern. Once it has been determined, it is applied to the original image by simple subtraction to produce an enhanced image as presented in Figure 3.12. It is interesting to note that, although there are detectable rectangular artifacts in the enhanced image after using this method, it still improves feature extraction and recognition performance significantly when compared to no image enhancement. After preprocessing, the enhanced and unwrapped image is used to extract texture characterization features by employing the singular value decomposition (SVD).

Figure 3.11. Sample illumination pattern of a normalized unwrapped image

Figure 3.12. Enhanced unwrapped image after illumination subtraction
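The block-mean illumination estimate and the subtraction step can be sketched as follows (a NumPy illustration, with 16x16 blocks as in the text):

```python
import numpy as np

def illumination_background(img, block=16):
    """Per-block mean gray value, expanded back to the image's full size."""
    h, w = img.shape
    bg = np.zeros((h, w), dtype=float)
    for r in range(0, h, block):
        for c in range(0, w, block):
            bg[r:r + block, c:c + block] = img[r:r + block, c:c + block].mean()
    return bg

def enhance(img):
    """Subtract the coarse illumination pattern from the unwrapped image."""
    return img.astype(float) - illumination_background(img)
```

The hard block edges in the background estimate are what produce the rectangular artifacts mentioned above; a uniformly lit image is mapped to all zeros.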

Chapter 4. Singular Value Decomposition of Iris Biometrics

4.1 The Singular Value Decomposition

The singular value decomposition (SVD) is a linear transform often used in pattern recognition because it possesses good information packing characteristics [9]. In addition, the SVD is considered to be more resistant to small changes in a matrix than the eigenvalues alone, which can change significantly when the matrix is altered [16]. This characteristic also contributes to it being less susceptible to noise, as described in [12]. This is ideal for pattern characterization, since there will almost certainly be small variations among the images that are used. Care should be taken when dealing with the SVD, for as mentioned in [17], some of its applications rely on inconsistent assumptions about dimensionality. In the case of this thesis, however, the SVD is performed on a dimensionless matrix (consisting of only numbers), so its application is valid as described in [17].

The SVD has been used for texture characterization and image restoration in [18], in addition to its use for facial recognition in [23]. For a matrix with elements that are spatially unrelated [18], the singular values will be of similar magnitudes. However, if they are related, the singular values will decrease in size along the SVD from lower order to higher order singular values [18]. In the case of feature extraction for iris images as presented in this chapter, it was found that lower order singular values are much larger than those of higher order. Since the approximation error for an image is simply the sum

of all unused singular values [9], removing the higher order values will not significantly reduce the approximation accuracy. The SVD has also proven useful for image compression, as outlined in [20] and [21]. In both methods presented, the image data could be reduced significantly without losing much appreciable image quality. This indicates that the SVD is particularly effective at extracting spatially important data from an image matrix, making it well suited to feature generation. To define the SVD, it is useful to first define eigenvalues and eigenvectors. A nonzero vector x is an eigenvector of a square matrix A if there exists a scalar value \lambda (known as an eigenvalue) such that [13]

A x = \lambda x    (4.1)

Beginning with an M x N matrix X, it can be shown that there exist an M x M unitary matrix U and an N x N unitary matrix V such that [9]

X = U \begin{pmatrix} \Lambda^{1/2} & O \\ O & O \end{pmatrix} V^H    or    \begin{pmatrix} \Lambda^{1/2} & O \\ O & O \end{pmatrix} = U^H X V    (4.2)

where \Lambda^{1/2} is an r x r diagonal matrix with elements \lambda_i^{1/2}, and \lambda_i is the i-th nonzero eigenvalue of the matrix X^H X. H refers to the Hermitian operation of complex conjugation and transposition of a matrix [9], and O represents a zero element matrix. In this work, the Hermitian operation can be replaced by transposition since only real values are used. By further working with (4.2), we can reconstruct X with [9]

X = \sum_{i=0}^{r-1} \lambda_i^{1/2} u_i v_i^H    (4.3)

where u_i and v_i are the first r columns of U and V, respectively, as shown in [9]. This means that u_i is an eigenvector of X X^H and v_i is an eigenvector of X^H X. The values \lambda_i^{1/2} are known as singular values, and the matrix X can be reconstructed to varying levels of accuracy with different values of r [9] using (4.3).
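The rank-r reconstruction of (4.3) and the ordering of the singular values can be checked numerically. The sketch below uses NumPy's SVD on an arbitrary matrix; note that, measured in the Frobenius norm, the squared approximation error equals the sum of the squared discarded singular values.

```python
import numpy as np

# Arbitrary 20 x 40 test matrix standing in for a block of unwrapped iris rows.
X = np.random.default_rng(0).random((20, 40))
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Rank-r reconstruction as in (4.3): keep only the first r singular triplets.
r = 15
X_r = (U[:, :r] * s[:r]) @ Vt[:r, :]
```

Increasing r drives the reconstruction error toward zero; because the singular values are returned in decreasing order, the first few dominate.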

4.2 Feature Extraction with the SVD After performing image preprocessing and iris localization, features are extracted from the iris for the purpose of identification. For each unwrapped and enhanced image, the singular value decomposition is performed on a range of horizontal lines in the unwrapped image, and the singular values are stored as a vector. The optimal horizontal sampling range was determined experimentally. Initially, only one image was selected to generate the feature vectors used for classification. This method proved flawed, however, because variations in the calculated SVD produced spurious results and hindered classification; a recognition rate of only about 55% was achieved. To improve classification accuracy, an averaging approach was implemented. Because the CASIA database [7] is divided into images from two different sessions, it was deemed appropriate to generate a feature vector from each session. The SVD was again calculated separately for each image, while the feature vector was created by averaging the singular values along each dimension over several images. Each value was rounded to an integer in order to reduce the total amount of space required for storing each iris signature. Based on the nature of the SVD, higher order singular values are smaller in magnitude than those of lower order [18], and an optimal approximation can be achieved with a mean square error equal only to the sum of all singular values not included. During this phase of testing, it was determined

that only the first 15 singular values were necessary. Table 4.1 summarizes the iris signature's size and storage requirements. The first row indicates the index of the singular value, the second row shows the maximum integer value encountered during testing, and the third row shows the number of bits required to represent each of these maximum values. Summing the third row indicates that 133 bits are required for the iris representation using this method.

Table 4.1. SVD integer storage requirements SVD Index Max Value Bits Required
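A minimal sketch of the signature construction described above: the leading singular values of several images are averaged, rounded to integers, and the storage cost counted in bits. The helper names and the toy matrices are illustrative, not from the thesis.

```python
import numpy as np

def iris_signature(rows_list, n_sv=15):
    """Average the leading n_sv singular values over several unwrapped
    images (each a 2-D array of sampled rows) and round to integers."""
    svs = [np.linalg.svd(rows, compute_uv=False)[:n_sv] for rows in rows_list]
    return np.rint(np.mean(svs, axis=0)).astype(int)

def bits_required(sig):
    """Bits needed to store each integer singular value, and the total
    signature size (133 bits in the experiments reported here)."""
    per_value = [max(int(v).bit_length(), 1) for v in sig]
    return per_value, sum(per_value)
```

On rank-1 toy "images" only the first averaged singular value is nonzero, so the bit count is dominated by that leading value, mirroring the decay visible in Table 4.1.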

4.3 Classification Performance with the SVD After features were generated using the SVD, classification was implemented by calculating the Euclidean distance between a test feature vector and each training feature vector. This distance was minimized, and the image was assigned to the class it was nearest to in the feature space. In this experiment, the images from the first session in the CASIA database [7] were used to create the training feature vectors, and the pictures from the second session were used for testing. This seemed the best use of the available images to effectively simulate a real-world situation. The ability of the system to correctly identify an iris was impacted by several factors that are outlined here. Testing was conducted using 18 different subjects comprising a total of 125 available iris images. In each case, the average intraclass and interclass distances were calculated, as well as the classification performance, \eta, given by

\eta = \frac{\text{Number of correct classifications}}{\text{Total number of comparisons}}    (4.4)

The selection of the outer radius of the iris was the first major factor in determining how well the system would work. The chosen value should allow a large amount of information to be extracted without including too much of the surrounding eyelid or eyelashes. Figure 4.1 shows the system performance with different multiplying values for the outer radius. Figures 4.2 and 4.3

show the average between class and within class distances for different radius multipliers, respectively. By examining Figure 4.1, it can be determined that, once the multiplier is around 1.25, the performance reaches an 83% correct recognition rate. As expected, a multiplier very close to 1 yielded poor results because very little information was contained in the region used. Additionally, Figures 4.2 and 4.3 indicate that a radius near 1.25 yields higher confidence recognition since the classes are better clustered and separated. Increasing the radius multiplier further tended to include too much of the surrounding eyelid and other areas.

Figure 4.1. Classification performance (η) with different radius multipliers
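The minimum-distance classifier and the performance measure \eta of (4.4) can be sketched as follows; the helper names are hypothetical and Python/NumPy is assumed.

```python
import numpy as np

def classify(test_vec, train_vecs, labels):
    """Assign the test signature to the class of the nearest training
    signature, measured by Euclidean distance in the feature space."""
    d = np.linalg.norm(np.asarray(train_vecs, dtype=float) - test_vec, axis=1)
    return labels[int(np.argmin(d))]

def performance(predicted, actual):
    """Classification performance eta as defined in (4.4)."""
    predicted, actual = np.asarray(predicted), np.asarray(actual)
    return float(np.mean(predicted == actual))
```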

Figure 4.2. Between class distances with different radius multipliers

Figure 4.3. Within class distances using different radius multipliers

The number of images used for feature generation was also explored to determine its effect on the system's performance. Figure 4.4 shows the classification performance of the system with various numbers of images used for both the training and testing sections of the experiment. Figures 4.5 and 4.6 show the within class and between class distances, respectively. From Figure 4.4, it is clear that using more images yields better performance, eventually reaching a maximum of 83%. If only one image is used for the test and control signatures, the correct identification rate drops considerably. This data indicates that even more images would likely produce better results. By inspecting

Figure 4.5, it can be seen that the clustering of classes also improves with a larger number of images. However, the average between class distance shown in Figure 4.6 changes very little, possibly because of the mostly random nature of the textures being characterized.

Figure 4.4. Classification performance (η) using different numbers of iris images

Figure 4.5. Average intraclass distance using different numbers of images

Figure 4.6. Average between class distance using different numbers of images

The use of image enhancement also had a significant positive effect on the ability of the system to correctly identify an iris. Without correcting for illumination differences, the system was only able to correctly identify an iris with 61.11% accuracy. In addition, the average between class distance became 1,895 and the within class distance increased to approximately 240. This is a clear indicator that the use of image enhancement to correct for illumination differences is vital for accurate identification. Finally, the number of singular values used for the feature vector was explored to determine how many were needed to provide optimal recognition results. Because the singular values decrease in magnitude along the SVD (see Table 4.1), many with smaller

magnitudes can be omitted without appreciably decreasing the system's overall performance. Figure 4.7 depicts the system's recognition performance with different numbers of singular values. Figures 4.8 and 4.9 illustrate the intraclass and interclass separation, respectively, with different numbers of singular values. By examining Figure 4.7, it can be noted that the recognition rate does not improve when using more than 15 singular values. This is also the point where the average inter- and intraclass separations cease to change considerably.

Figure 4.7. Performance with varying numbers of singular values

Figure 4.8. Average intraclass distance with varying numbers of singular values

Figure 4.9. Average interclass separation with different numbers of singular values

4.4 Future Improvements The iris identification system presented here leaves room for improvement in several areas, such as preprocessing, iris localization, and feature generation. In the preprocessing stage, the system still had some difficulty identifying the true center of the pupil when a large number of very dark eyelashes were present. A sample cropped image created with an incorrectly located pupil is shown in Figure 4.10. An improved method for pupil location could help remedy this issue. In addition, some type of filtering to reduce the appearance of the eyelashes may also be useful, although its effect on classification performance must be considered since it would alter the spatial gray level properties of the iris as well.

Figure 4.10. Incorrectly located pupil

Another consideration for improvement is the location and unwrapping of the iris. In some instances, the unwrapped image includes too many pixels from the pupil or the surrounding parts of the eye, which follows from an incorrect determination of the pupil's location. Performance suffers because spurious pixels enter the calculation of the SVD, reducing the ability of the system to make a correct identification. An image with too much of the area surrounding the iris (the conjunctiva) [8] is shown in Figure 4.11. In the current work, this was corrected by simply selecting a range of the unwrapped image instead of using it in its entirety for feature generation. Improving this step could be very beneficial to recognition accuracy.

Figure 4.11. Unwrapped image including too much conjunctiva

Image quality assessment is another useful addition that could be made to this approach. In [5], an approach is outlined for ensuring that an iris is properly in focus and suitable for feature extraction. This method searches for three main categories of unsuitable images: those that are out of focus, those that are blurred because of movement, and those that have too much occlusion from the eyelid or eyelashes [5]. By examining the image with a quality descriptor that uses the 2-D Fourier transform, it is possible to look for key frequency ranges that indicate the presence of these types of images. In [1], a 2-D Fourier transform is also used, and frequencies in the upper and middle bands are assessed in real time to determine whether the image is in focus [1]. Both approaches produce desirable results in determining image quality. Finally, another performance enhancement to be considered involves feature generation. The singular value decomposition has shown itself to be a useful tool for texture characterization, but the recognition rates achieved with this system would have to be improved to make it an effective tool for identification. One way to improve this would be to use more iris images for each feature vector, helping to remove the effect of a large variation in a single image. In addition, it may be helpful to compare each SVD to others from the same eye, discarding signatures found to be beyond a predetermined tolerance. In this manner, it would be possible to further improve recognition performance while still allowing for a very small iris signature. Additionally, multiple biometrics could be combined, as outlined in [24], to improve recognition rates.
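The Fourier-domain focus idea can be sketched as a simple spectral-energy ratio. The exact frequency bands and thresholds used in [1] and [5] are not reproduced here; the band fraction below is an illustrative placeholder.

```python
import numpy as np

def focus_score(img, low_frac=0.1):
    """Fraction of 2-D spectral energy lying outside a central
    low-frequency band; well-focused images score higher because they
    retain more high-frequency content. The band size (low_frac) is an
    illustrative choice, not the band used in [1] or [5]."""
    F = np.fft.fftshift(np.fft.fft2(img.astype(float)))
    p = np.abs(F) ** 2
    h, w = p.shape
    ch, cw = h // 2, w // 2
    rh, rw = max(int(h * low_frac), 1), max(int(w * low_frac), 1)
    low = p[ch - rh:ch + rh + 1, cw - rw:cw + rw + 1].sum()
    return float(1.0 - low / p.sum())
```

A blurred version of an image loses high-frequency energy and therefore scores lower than the sharp original, which is the property such descriptors exploit.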

While discussing the use of multiple images, it is also worth examining the use of principal component analysis (PCA) for feature extraction, since it characterizes the statistics of a set of vectors. In [15], the two-dimensional PCA (2DPCA) is applied successfully for facial recognition purposes, making its use for iris recognition attractive. The use of the 2DPCA as a tool for improved feature extraction is discussed in Chapter 5.

Chapter 5. 2D Principal Component Analysis (2DPCA) of Iris Biometrics 5.1 The 2DPCA The two-dimensional principal component analysis (2DPCA) has been used successfully to characterize facial features in [15]. It can also be used for reconstruction of images to varying degrees of accuracy, much like the SVD. However, the 2DPCA analyzes groups of input samples as opposed to individual images. Applying a transformation matrix X to an input A produces a new matrix Y by [15]

Y = A X    (5.1)

The matrix X is determined by using all available training samples from each class. First, the covariance matrix for the entire training set is determined by [15]

G_t = \frac{1}{M} \sum_{j=1}^{M} (A_j - \bar{A})^T (A_j - \bar{A})    (5.2)

where G_t is the square covariance matrix determined from the M available images, A_j is the j-th image, and \bar{A} is the mean image of the ensemble, computed with [15]

\bar{A} = \frac{1}{M} \sum_{j=1}^{M} A_j    (5.3)

Upon determination of G_t, its first N eigenvectors corresponding to the N largest eigenvalues are used to make up the transformation matrix X. This gives the optimal directions onto which to project the input A. Similar to the SVD, the number of eigenvectors used to create X directly influences reconstruction accuracy when performing the inverse transform. Since the columns of X are orthonormal, \hat{A} can be reconstructed with

\hat{A} = Y X^T    (5.4)

If N is equal to the total number of available eigenvectors, then \hat{A} = A, and a perfect reconstruction is achieved.
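The construction of G_t and X in (5.2)–(5.3), together with the projection (5.1) and reconstruction (5.4), can be sketched as follows. This is an illustrative NumPy version; the function and variable names are not from the thesis.

```python
import numpy as np

def compute_projection(train_imgs, n_vecs):
    """Build the image covariance matrix G_t as in (5.2)-(5.3) and form
    the projection matrix X from the eigenvectors of its n_vecs largest
    eigenvalues."""
    A = np.stack([img.astype(float) for img in train_imgs])
    mean = A.mean(axis=0)                      # mean image, eq. (5.3)
    centered = A - mean
    Gt = sum(C.T @ C for C in centered) / len(A)   # eq. (5.2)
    vals, vecs = np.linalg.eigh(Gt)            # eigenvalues ascending
    X = vecs[:, ::-1][:, :n_vecs]              # largest eigenvalues first
    return X, mean

# Projection (5.1): Y = A @ X.  Reconstruction (5.4): A_hat = Y @ X.T.
```

With n_vecs equal to the full image width, X is square and orthogonal, so the reconstruction is exact, matching the remark after (5.4).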

5.2 Feature Extraction with the 2DPCA As described in Chapter 4, the CASIA iris image database [7] contains seven iris images for each individual. The images are divided into two sessions, with three images corresponding to the first session and four belonging to the second. The first session was again used for training, and the second session was used for testing. Testing was performed with the same 18 classes used in the SVD implementation of Chapter 4. First, a transformation matrix X is computed by finding the mean image and covariance matrix from all training samples as outlined in (5.2) and (5.3). The eigenvectors corresponding to the N largest eigenvalues of G_t were then used to create the transformation matrix. This matrix was then applied to each training image to yield Y_{k,l}, where k is the class and l is the corresponding image number, ranging from one to three. For each test subject, four images are available. The transformation matrix X is applied to each test image to yield W_m, where m is the image number from one to four. Classification is performed by finding the Euclidean distance from W_m to each Y_{k,l} and placing the test sample in the class where the distance is minimized.
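The matching step just described can be sketched as follows; the helper name is hypothetical, and the Euclidean distance between feature matrices is taken as the Frobenius norm of their difference.

```python
import numpy as np

def classify_2dpca(test_img, X, train_feats, labels):
    """Project the test image with X (eq. 5.1) and assign it to the
    class of the nearest training feature matrix in Euclidean
    (Frobenius) distance."""
    W = test_img.astype(float) @ X
    d = [np.linalg.norm(W - Y) for Y in train_feats]
    return labels[int(np.argmin(d))]
```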

5.3 Classification Performance with the 2DPCA As in the SVD implementation discussed in Chapter 4, experiments were conducted by altering parameters to optimize the system's performance. Maximizing classification performance as defined in (4.4) is the main consideration, but grouping and separation of classes are also discussed here. A factor that had a major impact on performance was the use of illumination compensation in the unwrapped image as discussed in Chapter 3. When illumination correction was included, the classification performance dropped below 20%. It is likely that, since the 2DPCA relies on the covariance matrix, any step that significantly alters the mean gray scale value will have a catastrophic effect on performance. Without correcting for illumination, the maximum classification accuracy was found to be 89%. The radius multiplier for determining the outer iris boundary was also explored. Once again, it was desired that the region contained by the boundaries hold as much useful information as possible. Figure 5.1 shows classification performance with different radius multipliers. Figures 5.2 and 5.3 show average intraclass and interclass separation, respectively. It is clear from Figure 5.1 that a radius multiplier of 1.4 yields the best performance of about 89%. This is also the point that appears to minimize the within class distances, as demonstrated in Figure 5.2.

Figure 5.1. Classification performance with different radius multipliers

Figure 5.2. Average intraclass distance with different radius multipliers

Figure 5.3. Average interclass distances with different radius multipliers

Another factor to consider was the number of eigenvectors to include in the transformation matrix X. Optimizing this value not only decreases storage requirements but also reduces the computational complexity of the matrix transforms. Figure 5.4 shows classification performance as a function of the number of eigenvectors used to form X. Figures 5.5 and 5.6 show average class grouping and separation versus different numbers of eigenvectors, respectively. It is clear from Figure 5.4 that using more than 64 eigenvectors in X yields no improvement in performance.

Figure 5.4. Classification performance with different numbers of eigenvectors

Figure 5.5. Average intraclass distance with different numbers of eigenvectors

Figure 5.6. Average interclass distance with different numbers of eigenvectors

Chapter 6. Conclusions and Discussion Biometrics are powerful tools for human identification with a wide range of applications. In comparison to more conventional security systems that use a password or keycard, biometrics are difficult to lose, forget, or forge. Research has been conducted on biometrics that include facial characteristics, hand geometry, retinal blood vessel patterns, fingerprints, speech and voice patterns, gait, and iris features [24]. Several novel approaches to iris classification have been investigated and reported in this thesis. A wide sense stationary approximation was first attempted using the auto- and cross-correlation. Following that, more robust preprocessing steps were implemented to more accurately locate the iris and its boundaries. The steps outlined in [3,4,5] for iris unwrapping and image enhancement were also repeated successfully to aid in image normalization and enhancement. In comparison to other reported methods, one approach outlined in this thesis has the advantage of requiring very few bits to achieve the results reported. Using only 133 bits, the SVD based system described in Chapter 4 is able to correctly identify an iris 83% of the time. This employs far fewer bits than the system presented by Daugman [1,2], which uses a total of 2048 identification bits and 2048 masking bits. However, that method is able to achieve a nearly 100% recognition rate. This leaves considerable room to investigate increasing the dimensionality of the feature space to improve system performance. In the method presented in [5], 200 features are used with a recognition accuracy of 99.43%, but the storage requirements of these features were not discussed. When using the 2DPCA based approach

presented in Chapter 5, a significant performance increase was observed, with the recognition accuracy rising to 89%. It is expected that more image samples would further improve these results. However, the storage requirements associated with the 2DPCA are greater than those of the SVD. One possibility may be to use the SVD in conjunction with the 2DPCA to further reduce the storage requirements of the system while still providing high classification accuracy. Both the SVD and 2DPCA also have the distinction of being very different from the wavelet transform methods employed in [1,2,3,4,5,6]. The iris is attractive as a biometric because it is well protected, has a very complex structure, and changes very little over time. Several new methods that use the iris as a biometric identifier have been investigated. The use of the SVD allows for very small storage requirements but is somewhat lacking in classification accuracy. The 2DPCA, however, has been shown to have higher recognition accuracy, but with greater storage requirements. Both are novel approaches that have produced desirable results for iris recognition.

Bibliography

1) J. Daugman, "How iris recognition works," IEEE Transactions on Circuits and Systems for Video Technology, vol. 14, no. 1, January 2004.
2) J. Daugman, "High confidence visual recognition of persons by a test of statistical independence," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, no. 11, November 1993.
3) L. Ma, Y. Wang, T. Tan, "Iris recognition using circular symmetric filters," 16th International Conference on Pattern Recognition Proceedings, vol. 2, August 11-15, 2002.
4) L. Ma, T. Tan, Y. Wang, "Efficient iris recognition by characterizing key local variations," IEEE Transactions on Image Processing, vol. 13, no. 6, June 2004.
5) L. Ma, T. Tan, Y. Wang, "Personal identification based on iris texture analysis," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 12, December 2003.
6) W.W. Boles and B. Boashash, "A human identification technique using images of the iris and wavelet transform," IEEE Transactions on Signal Processing, vol. 46, no. 4, April 1998.
7) CASIA Iris Image Database, Institute of Automation, Chinese Academy of Sciences.
8) Illinois Eye Institute website.
9) S. Theodoridis and K. Koutroumbas, Pattern Recognition. Elsevier Academic Press, 2003.
10) M. Celenk, M. Brown, Y. Luo, J. Kaufman, L. Ma, Q. Zhou, "Human identification using correlation metrics of iris images," Storage and Retrieval Methods and Applications for Multimedia 2005, Proc. of SPIE-IS&T Electronic Imaging, SPIE vol. 5682.
11) M. Kr. Mandal, Multimedia Systems and Signals. Kluwer Academic Publishers, 2003.
12) Jens-Rainer Ohm, Multimedia Communications Technology. Springer, 2004.
13) R. Bronson, Matrix Methods: An Introduction. Academic Press, 1990.
14) J. S. Lim, Two-Dimensional Signal and Image Processing. Prentice Hall, 1990.
15) J. Yang, D. Zhang, A. F. Frangi, J. Yang, "Two-dimensional PCA: a new approach to appearance-based face representation and recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 1, January 2004.
16) P.C. Chandrasekharan, Robust Control of Linear Dynamical Systems. Academic Press, 1996.
17) G. W. Hart, Multidimensional Analysis: Algebras and Systems for Science and Engineering. Springer-Verlag, 1995.
18) W. K. Pratt, Digital Image Processing, 2nd ed. Wiley-Interscience, 1991.
19) B. Klaus and P. Horn, Robot Vision. MIT Press, 1989.
20) C.J. Ogden and T. Huff, "The singular value decomposition and its applications in image compression," College of the Redwoods, 1997.
21) B. Arnold, "An investigation into using singular value decomposition as a method of image compression," University of Canterbury, 2000.
22) J. Vartiainen, "Iris recognition systems and methods," Lappeenranta University of Technology.
23) N. Muller, L. Magaia, B.M. Herbst, "Singular value decomposition, eigenfaces, and 3-D reconstructions," SIAM Review, vol. 46, no. 3.
24) A.K. Jain and A. Ross, "Multibiometric systems," Communications of the ACM, vol. 47, January 2004.
25) A.C. Weaver, "Biometric authentication," IEEE Computer, vol. 39, no. 2, February 2006.
26) P. Z. Peebles, Jr., Probability, Random Variables and Random Signal Principles, 4th ed. McGraw-Hill, 2001.
27) L. Wang and T. Tan, "Automatic gait recognition based on statistical shape analysis," IEEE Transactions on Image Processing, vol. 12, no. 9, September 2003.
28) L. Shapiro and G. Stockman, Computer Vision, 2001.

Appendix A. Additional Images for Four Subjects

Figure A.1. Original, boundary, and unwrapped images for subject 1

Figure A.2. Original, boundary, and unwrapped images for subject 2

Figure A.3. Original, boundary, and unwrapped images for subject 3


More information

Motion Estimation and Optical Flow Tracking

Motion Estimation and Optical Flow Tracking Image Matching Image Retrieval Object Recognition Motion Estimation and Optical Flow Tracking Example: Mosiacing (Panorama) M. Brown and D. G. Lowe. Recognising Panoramas. ICCV 2003 Example 3D Reconstruction

More information

Chapter-2 LITERATURE REVIEW ON IRIS RECOGNITION SYTSEM

Chapter-2 LITERATURE REVIEW ON IRIS RECOGNITION SYTSEM Chapter-2 LITERATURE REVIEW ON IRIS RECOGNITION SYTSEM This chapter presents a literature review of iris recognition system. The chapter is divided mainly into the six sections. Overview of prominent iris

More information

IRIS Recognition System Based On DCT - Matrix Coefficient Lokesh Sharma 1

IRIS Recognition System Based On DCT - Matrix Coefficient Lokesh Sharma 1 Volume 2, Issue 10, October 2014 International Journal of Advance Research in Computer Science and Management Studies Research Article / Survey Paper / Case Study Available online at: www.ijarcsms.com

More information

Image Compression with Singular Value Decomposition & Correlation: a Graphical Analysis

Image Compression with Singular Value Decomposition & Correlation: a Graphical Analysis ISSN -7X Volume, Issue June 7 Image Compression with Singular Value Decomposition & Correlation: a Graphical Analysis Tamojay Deb, Anjan K Ghosh, Anjan Mukherjee Tripura University (A Central University),

More information

Schedule for Rest of Semester

Schedule for Rest of Semester Schedule for Rest of Semester Date Lecture Topic 11/20 24 Texture 11/27 25 Review of Statistics & Linear Algebra, Eigenvectors 11/29 26 Eigenvector expansions, Pattern Recognition 12/4 27 Cameras & calibration

More information

EE368 Project Report CD Cover Recognition Using Modified SIFT Algorithm

EE368 Project Report CD Cover Recognition Using Modified SIFT Algorithm EE368 Project Report CD Cover Recognition Using Modified SIFT Algorithm Group 1: Mina A. Makar Stanford University mamakar@stanford.edu Abstract In this report, we investigate the application of the Scale-Invariant

More information

Color Local Texture Features Based Face Recognition

Color Local Texture Features Based Face Recognition Color Local Texture Features Based Face Recognition Priyanka V. Bankar Department of Electronics and Communication Engineering SKN Sinhgad College of Engineering, Korti, Pandharpur, Maharashtra, India

More information

Biometrics Technology: Image Processing & Pattern Recognition (by Dr. Dickson Tong)

Biometrics Technology: Image Processing & Pattern Recognition (by Dr. Dickson Tong) Biometrics Technology: Image Processing & Pattern Recognition (by Dr. Dickson Tong) References: [1] http://homepages.inf.ed.ac.uk/rbf/hipr2/index.htm [2] http://www.cs.wisc.edu/~dyer/cs540/notes/vision.html

More information

The Novel Approach for 3D Face Recognition Using Simple Preprocessing Method

The Novel Approach for 3D Face Recognition Using Simple Preprocessing Method The Novel Approach for 3D Face Recognition Using Simple Preprocessing Method Parvin Aminnejad 1, Ahmad Ayatollahi 2, Siamak Aminnejad 3, Reihaneh Asghari Abstract In this work, we presented a novel approach

More information

Image Enhancement Techniques for Fingerprint Identification

Image Enhancement Techniques for Fingerprint Identification March 2013 1 Image Enhancement Techniques for Fingerprint Identification Pankaj Deshmukh, Siraj Pathan, Riyaz Pathan Abstract The aim of this paper is to propose a new method in fingerprint enhancement

More information

UNIVERSITY OF OSLO. Faculty of Mathematics and Natural Sciences

UNIVERSITY OF OSLO. Faculty of Mathematics and Natural Sciences UNIVERSITY OF OSLO Faculty of Mathematics and Natural Sciences Exam: INF 4300 / INF 9305 Digital image analysis Date: Thursday December 21, 2017 Exam hours: 09.00-13.00 (4 hours) Number of pages: 8 pages

More information

Chapter 4 Face Recognition Using Orthogonal Transforms

Chapter 4 Face Recognition Using Orthogonal Transforms Chapter 4 Face Recognition Using Orthogonal Transforms Face recognition as a means of identification and authentication is becoming more reasonable with frequent research contributions in the area. In

More information

Recognition, SVD, and PCA

Recognition, SVD, and PCA Recognition, SVD, and PCA Recognition Suppose you want to find a face in an image One possibility: look for something that looks sort of like a face (oval, dark band near top, dark band near bottom) Another

More information

International Journal of Advance Engineering and Research Development. Iris Recognition and Automated Eye Tracking

International Journal of Advance Engineering and Research Development. Iris Recognition and Automated Eye Tracking International Journal of Advance Engineering and Research Development Scientific Journal of Impact Factor (SJIF): 4.72 Special Issue SIEICON-2017,April -2017 e-issn : 2348-4470 p-issn : 2348-6406 Iris

More information

Implementation of Reliable Open Source IRIS Recognition System

Implementation of Reliable Open Source IRIS Recognition System Implementation of Reliable Open Source IRIS Recognition System Dhananjay Ikhar 1, Vishwas Deshpande & Sachin Untawale 3 1&3 Dept. of Mechanical Engineering, Datta Meghe Institute of Engineering, Technology

More information

Fast and Efficient Automated Iris Segmentation by Region Growing

Fast and Efficient Automated Iris Segmentation by Region Growing Available Online at www.ijcsmc.com International Journal of Computer Science and Mobile Computing A Monthly Journal of Computer Science and Information Technology IJCSMC, Vol. 2, Issue. 6, June 2013, pg.325

More information

A NEW OBJECTIVE CRITERION FOR IRIS LOCALIZATION

A NEW OBJECTIVE CRITERION FOR IRIS LOCALIZATION The Nucleus The Nucleus, 47, No.1 (010) The Nucleus A Quarterly Scientific Journal of Pakistan Atomic Energy Commission NCLEAM, ISSN 009-5698 P a ki sta n A NEW OBJECTIVE CRITERION FOR IRIS LOCALIZATION

More information

Eyelid Position Detection Method for Mobile Iris Recognition. Gleb Odinokikh FRC CSC RAS, Moscow

Eyelid Position Detection Method for Mobile Iris Recognition. Gleb Odinokikh FRC CSC RAS, Moscow Eyelid Position Detection Method for Mobile Iris Recognition Gleb Odinokikh FRC CSC RAS, Moscow 1 Outline 1. Introduction Iris recognition with a mobile device 2. Problem statement Conventional eyelid

More information

Iris Recognition Using Gabor Wavelet

Iris Recognition Using Gabor Wavelet Iris Recognition Using Gabor Wavelet Kshamaraj Gulmire 1, Sanjay Ganorkar 2 1 Department of ETC Engineering,Sinhgad College Of Engineering, M.S., Pune 2 Department of ETC Engineering,Sinhgad College Of

More information

Gabor Filter for Accurate IRIS Segmentation Analysis

Gabor Filter for Accurate IRIS Segmentation Analysis Gabor Filter for Accurate IRIS Segmentation Analysis Rupesh Mude M.Tech Scholar (SE) Rungta College of Engineering and Technology, Bhilai Meenakshi R Patel HOD, Computer Science and Engineering Rungta

More information

The Elimination Eyelash Iris Recognition Based on Local Median Frequency Gabor Filters

The Elimination Eyelash Iris Recognition Based on Local Median Frequency Gabor Filters Journal of Information Hiding and Multimedia Signal Processing c 2015 ISSN 2073-4212 Ubiquitous International Volume 6, Number 3, May 2015 The Elimination Eyelash Iris Recognition Based on Local Median

More information

Mobile Human Detection Systems based on Sliding Windows Approach-A Review

Mobile Human Detection Systems based on Sliding Windows Approach-A Review Mobile Human Detection Systems based on Sliding Windows Approach-A Review Seminar: Mobile Human detection systems Njieutcheu Tassi cedrique Rovile Department of Computer Engineering University of Heidelberg

More information

Biometric Security System Using Palm print

Biometric Security System Using Palm print ISSN (Online) : 2319-8753 ISSN (Print) : 2347-6710 International Journal of Innovative Research in Science, Engineering and Technology Volume 3, Special Issue 3, March 2014 2014 International Conference

More information

MRT based Adaptive Transform Coder with Classified Vector Quantization (MATC-CVQ)

MRT based Adaptive Transform Coder with Classified Vector Quantization (MATC-CVQ) 5 MRT based Adaptive Transform Coder with Classified Vector Quantization (MATC-CVQ) Contents 5.1 Introduction.128 5.2 Vector Quantization in MRT Domain Using Isometric Transformations and Scaling.130 5.2.1

More information

Lecture 9: Hough Transform and Thresholding base Segmentation

Lecture 9: Hough Transform and Thresholding base Segmentation #1 Lecture 9: Hough Transform and Thresholding base Segmentation Saad Bedros sbedros@umn.edu Hough Transform Robust method to find a shape in an image Shape can be described in parametric form A voting

More information

IMAGE ANALYSIS, CLASSIFICATION, and CHANGE DETECTION in REMOTE SENSING

IMAGE ANALYSIS, CLASSIFICATION, and CHANGE DETECTION in REMOTE SENSING SECOND EDITION IMAGE ANALYSIS, CLASSIFICATION, and CHANGE DETECTION in REMOTE SENSING ith Algorithms for ENVI/IDL Morton J. Canty с*' Q\ CRC Press Taylor &. Francis Group Boca Raton London New York CRC

More information

Applications Video Surveillance (On-line or off-line)

Applications Video Surveillance (On-line or off-line) Face Face Recognition: Dimensionality Reduction Biometrics CSE 190-a Lecture 12 CSE190a Fall 06 CSE190a Fall 06 Face Recognition Face is the most common biometric used by humans Applications range from

More information

www.worldconferences.org Implementation of IRIS Recognition System using Phase Based Image Matching Algorithm N. MURALI KRISHNA 1, DR. P. CHANDRA SEKHAR REDDY 2 1 Assoc Prof, Dept of ECE, Dhruva Institute

More information

SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS

SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS Cognitive Robotics Original: David G. Lowe, 004 Summary: Coen van Leeuwen, s1460919 Abstract: This article presents a method to extract

More information

(Refer Slide Time 00:17) Welcome to the course on Digital Image Processing. (Refer Slide Time 00:22)

(Refer Slide Time 00:17) Welcome to the course on Digital Image Processing. (Refer Slide Time 00:22) Digital Image Processing Prof. P. K. Biswas Department of Electronics and Electrical Communications Engineering Indian Institute of Technology, Kharagpur Module Number 01 Lecture Number 02 Application

More information

An Improved Iris Segmentation Technique Using Circular Hough Transform

An Improved Iris Segmentation Technique Using Circular Hough Transform An Improved Iris Segmentation Technique Using Circular Hough Transform Kennedy Okokpujie (&), Etinosa Noma-Osaghae, Samuel John, and Akachukwu Ajulibe Department of Electrical and Information Engineering,

More information

Texture Analysis of Painted Strokes 1) Martin Lettner, Paul Kammerer, Robert Sablatnig

Texture Analysis of Painted Strokes 1) Martin Lettner, Paul Kammerer, Robert Sablatnig Texture Analysis of Painted Strokes 1) Martin Lettner, Paul Kammerer, Robert Sablatnig Vienna University of Technology, Institute of Computer Aided Automation, Pattern Recognition and Image Processing

More information

International Journal of Emerging Technologies in Computational and Applied Sciences (IJETCAS)

International Journal of Emerging Technologies in Computational and Applied Sciences (IJETCAS) International Association of Scientific Innovation and Research (IASIR) (An Association Unifying the Sciences, Engineering, and Applied Research) ISSN (Print): 2279-0047 ISSN (Online): 2279-0055 International

More information

Image Segmentation Techniques for Object-Based Coding

Image Segmentation Techniques for Object-Based Coding Image Techniques for Object-Based Coding Junaid Ahmed, Joseph Bosworth, and Scott T. Acton The Oklahoma Imaging Laboratory School of Electrical and Computer Engineering Oklahoma State University {ajunaid,bosworj,sacton}@okstate.edu

More information

IRIS RECOGNITION BASED ON FEATURE EXTRACTION DEEPTHI RAMPALLY. B.Tech, Jawaharlal Nehru Technological University, India, 2007 A REPORT

IRIS RECOGNITION BASED ON FEATURE EXTRACTION DEEPTHI RAMPALLY. B.Tech, Jawaharlal Nehru Technological University, India, 2007 A REPORT IRIS RECOGNITION BASED ON FEATURE EXTRACTION by DEEPTHI RAMPALLY B.Tech, Jawaharlal Nehru Technological University, India, 2007 A REPORT submitted in partial fulfillment of the requirements for the degree

More information

Tutorial 5. Jun Xu, Teaching Asistant March 2, COMP4134 Biometrics Authentication

Tutorial 5. Jun Xu, Teaching Asistant March 2, COMP4134 Biometrics Authentication Tutorial 5 Jun Xu, Teaching Asistant nankaimathxujun@gmail.com COMP4134 Biometrics Authentication March 2, 2017 Table of Contents Problems Problem 1: Answer The Questions Problem 2: Indeterminate Region

More information

Outline 7/2/201011/6/

Outline 7/2/201011/6/ Outline Pattern recognition in computer vision Background on the development of SIFT SIFT algorithm and some of its variations Computational considerations (SURF) Potential improvement Summary 01 2 Pattern

More information

An Efficient Iris Recognition System using Phase Based Technique

An Efficient Iris Recognition System using Phase Based Technique ISSN No: 2454-9614 An Efficient Iris Recognition System using Phase Based Technique T.Manickam, A.Sharmila, A.K.Sowmithra Department Of Electronics and Communications Engineering, Nandha Engineering College,

More information

IRIS SEGMENTATION AND RECOGNITION FOR HUMAN IDENTIFICATION

IRIS SEGMENTATION AND RECOGNITION FOR HUMAN IDENTIFICATION IRIS SEGMENTATION AND RECOGNITION FOR HUMAN IDENTIFICATION Sangini Shah, Ankita Mandowara, Mitesh Patel Computer Engineering Department Silver Oak College Of Engineering and Technology, Ahmedabad Abstract:

More information

IMPROVED FACE RECOGNITION USING ICP TECHNIQUES INCAMERA SURVEILLANCE SYSTEMS. Kirthiga, M.E-Communication system, PREC, Thanjavur

IMPROVED FACE RECOGNITION USING ICP TECHNIQUES INCAMERA SURVEILLANCE SYSTEMS. Kirthiga, M.E-Communication system, PREC, Thanjavur IMPROVED FACE RECOGNITION USING ICP TECHNIQUES INCAMERA SURVEILLANCE SYSTEMS Kirthiga, M.E-Communication system, PREC, Thanjavur R.Kannan,Assistant professor,prec Abstract: Face Recognition is important

More information

Facial Expression Detection Using Implemented (PCA) Algorithm

Facial Expression Detection Using Implemented (PCA) Algorithm Facial Expression Detection Using Implemented (PCA) Algorithm Dileep Gautam (M.Tech Cse) Iftm University Moradabad Up India Abstract: Facial expression plays very important role in the communication with

More information

Palmprint Recognition Using Transform Domain and Spatial Domain Techniques

Palmprint Recognition Using Transform Domain and Spatial Domain Techniques Palmprint Recognition Using Transform Domain and Spatial Domain Techniques Jayshri P. Patil 1, Chhaya Nayak 2 1# P. G. Student, M. Tech. Computer Science and Engineering, 2* HOD, M. Tech. Computer Science

More information

Texture Segmentation by Windowed Projection

Texture Segmentation by Windowed Projection Texture Segmentation by Windowed Projection 1, 2 Fan-Chen Tseng, 2 Ching-Chi Hsu, 2 Chiou-Shann Fuh 1 Department of Electronic Engineering National I-Lan Institute of Technology e-mail : fctseng@ccmail.ilantech.edu.tw

More information

Convolution Neural Networks for Chinese Handwriting Recognition

Convolution Neural Networks for Chinese Handwriting Recognition Convolution Neural Networks for Chinese Handwriting Recognition Xu Chen Stanford University 450 Serra Mall, Stanford, CA 94305 xchen91@stanford.edu Abstract Convolutional neural networks have been proven

More information

SSRG International Journal of Electronics and Communication Engineering (SSRG-IJECE) Volume 3 Issue 6 June 2016

SSRG International Journal of Electronics and Communication Engineering (SSRG-IJECE) Volume 3 Issue 6 June 2016 Iris Recognition using Four Level HAAR Wavelet Transform: A Literature review Anjali Soni 1, Prashant Jain 2 M.E. Scholar, Dept. of Electronics and Telecommunication Engineering, Jabalpur Engineering College,

More information

Prof. Fanny Ficuciello Robotics for Bioengineering Visual Servoing

Prof. Fanny Ficuciello Robotics for Bioengineering Visual Servoing Visual servoing vision allows a robotic system to obtain geometrical and qualitative information on the surrounding environment high level control motion planning (look-and-move visual grasping) low level

More information

Face detection and recognition. Detection Recognition Sally

Face detection and recognition. Detection Recognition Sally Face detection and recognition Detection Recognition Sally Face detection & recognition Viola & Jones detector Available in open CV Face recognition Eigenfaces for face recognition Metric learning identification

More information

ECE 176 Digital Image Processing Handout #14 Pamela Cosman 4/29/05 TEXTURE ANALYSIS

ECE 176 Digital Image Processing Handout #14 Pamela Cosman 4/29/05 TEXTURE ANALYSIS ECE 176 Digital Image Processing Handout #14 Pamela Cosman 4/29/ TEXTURE ANALYSIS Texture analysis is covered very briefly in Gonzalez and Woods, pages 66 671. This handout is intended to supplement that

More information

Feature Detectors and Descriptors: Corners, Lines, etc.

Feature Detectors and Descriptors: Corners, Lines, etc. Feature Detectors and Descriptors: Corners, Lines, etc. Edges vs. Corners Edges = maxima in intensity gradient Edges vs. Corners Corners = lots of variation in direction of gradient in a small neighborhood

More information

Linear Discriminant Analysis in Ottoman Alphabet Character Recognition

Linear Discriminant Analysis in Ottoman Alphabet Character Recognition Linear Discriminant Analysis in Ottoman Alphabet Character Recognition ZEYNEB KURT, H. IREM TURKMEN, M. ELIF KARSLIGIL Department of Computer Engineering, Yildiz Technical University, 34349 Besiktas /

More information

DYADIC WAVELETS AND DCT BASED BLIND COPY-MOVE IMAGE FORGERY DETECTION

DYADIC WAVELETS AND DCT BASED BLIND COPY-MOVE IMAGE FORGERY DETECTION DYADIC WAVELETS AND DCT BASED BLIND COPY-MOVE IMAGE FORGERY DETECTION Ghulam Muhammad*,1, Muhammad Hussain 2, Anwar M. Mirza 1, and George Bebis 3 1 Department of Computer Engineering, 2 Department of

More information

Practical Image and Video Processing Using MATLAB

Practical Image and Video Processing Using MATLAB Practical Image and Video Processing Using MATLAB Chapter 18 Feature extraction and representation What will we learn? What is feature extraction and why is it a critical step in most computer vision and

More information

6. Multimodal Biometrics

6. Multimodal Biometrics 6. Multimodal Biometrics Multimodal biometrics is based on combination of more than one type of biometric modalities or traits. The most compelling reason to combine different modalities is to improve

More information

Image Compression With Haar Discrete Wavelet Transform

Image Compression With Haar Discrete Wavelet Transform Image Compression With Haar Discrete Wavelet Transform Cory Cox ME 535: Computational Techniques in Mech. Eng. Figure 1 : An example of the 2D discrete wavelet transform that is used in JPEG2000. Source:

More information

COMPUTER AND ROBOT VISION

COMPUTER AND ROBOT VISION VOLUME COMPUTER AND ROBOT VISION Robert M. Haralick University of Washington Linda G. Shapiro University of Washington A^ ADDISON-WESLEY PUBLISHING COMPANY Reading, Massachusetts Menlo Park, California

More information

Fingerprint Recognition using Texture Features

Fingerprint Recognition using Texture Features Fingerprint Recognition using Texture Features Manidipa Saha, Jyotismita Chaki, Ranjan Parekh,, School of Education Technology, Jadavpur University, Kolkata, India Abstract: This paper proposes an efficient

More information

CHAPTER 6 DETECTION OF MASS USING NOVEL SEGMENTATION, GLCM AND NEURAL NETWORKS

CHAPTER 6 DETECTION OF MASS USING NOVEL SEGMENTATION, GLCM AND NEURAL NETWORKS 130 CHAPTER 6 DETECTION OF MASS USING NOVEL SEGMENTATION, GLCM AND NEURAL NETWORKS A mass is defined as a space-occupying lesion seen in more than one projection and it is described by its shapes and margin

More information

Handwritten Script Recognition at Block Level

Handwritten Script Recognition at Block Level Chapter 4 Handwritten Script Recognition at Block Level -------------------------------------------------------------------------------------------------------------------------- Optical character recognition

More information

Histograms of Oriented Gradients

Histograms of Oriented Gradients Histograms of Oriented Gradients Carlo Tomasi September 18, 2017 A useful question to ask of an image is whether it contains one or more instances of a certain object: a person, a face, a car, and so forth.

More information

Texture. Frequency Descriptors. Frequency Descriptors. Frequency Descriptors. Frequency Descriptors. Frequency Descriptors

Texture. Frequency Descriptors. Frequency Descriptors. Frequency Descriptors. Frequency Descriptors. Frequency Descriptors Texture The most fundamental question is: How can we measure texture, i.e., how can we quantitatively distinguish between different textures? Of course it is not enough to look at the intensity of individual

More information

CHAPTER 3 DIFFERENT DOMAINS OF WATERMARKING. domain. In spatial domain the watermark bits directly added to the pixels of the cover

CHAPTER 3 DIFFERENT DOMAINS OF WATERMARKING. domain. In spatial domain the watermark bits directly added to the pixels of the cover 38 CHAPTER 3 DIFFERENT DOMAINS OF WATERMARKING Digital image watermarking can be done in both spatial domain and transform domain. In spatial domain the watermark bits directly added to the pixels of the

More information

A Survey on Feature Extraction Techniques for Palmprint Identification

A Survey on Feature Extraction Techniques for Palmprint Identification International Journal Of Computational Engineering Research (ijceronline.com) Vol. 03 Issue. 12 A Survey on Feature Extraction Techniques for Palmprint Identification Sincy John 1, Kumudha Raimond 2 1

More information

Forensic Image Recognition using a Novel Image Fingerprinting and Hashing Technique

Forensic Image Recognition using a Novel Image Fingerprinting and Hashing Technique Forensic Image Recognition using a Novel Image Fingerprinting and Hashing Technique R D Neal, R J Shaw and A S Atkins Faculty of Computing, Engineering and Technology, Staffordshire University, Stafford

More information

Dilation Aware Multi-Image Enrollment for Iris Biometrics

Dilation Aware Multi-Image Enrollment for Iris Biometrics Dilation Aware Multi-Image Enrollment for Iris Biometrics Estefan Ortiz 1 and Kevin W. Bowyer 1 1 Abstract Current iris biometric systems enroll a person based on the best eye image taken at the time of

More information

Image Restoration and Reconstruction

Image Restoration and Reconstruction Image Restoration and Reconstruction Image restoration Objective process to improve an image, as opposed to the subjective process of image enhancement Enhancement uses heuristics to improve the image

More information

Contrast Optimization A new way to optimize performance Kenneth Moore, Technical Fellow

Contrast Optimization A new way to optimize performance Kenneth Moore, Technical Fellow Contrast Optimization A new way to optimize performance Kenneth Moore, Technical Fellow What is Contrast Optimization? Contrast Optimization (CO) is a new technique for improving performance of imaging

More information

A New Encoding of Iris Images Employing Eight Quantization Levels

A New Encoding of Iris Images Employing Eight Quantization Levels A New Encoding of Iris Images Employing Eight Quantization Levels Oktay Koçand Arban Uka Computer Engineering Department, Epoka University, Tirana, Albania Email: {okoc12, auka}@epoka.edu.al different

More information

Neural Network based textural labeling of images in multimedia applications

Neural Network based textural labeling of images in multimedia applications Neural Network based textural labeling of images in multimedia applications S.A. Karkanis +, G.D. Magoulas +, and D.A. Karras ++ + University of Athens, Dept. of Informatics, Typa Build., Panepistimiopolis,

More information

Vivekananda. Collegee of Engineering & Technology. Question and Answers on 10CS762 /10IS762 UNIT- 5 : IMAGE ENHANCEMENT.

Vivekananda. Collegee of Engineering & Technology. Question and Answers on 10CS762 /10IS762 UNIT- 5 : IMAGE ENHANCEMENT. Vivekananda Collegee of Engineering & Technology Question and Answers on 10CS762 /10IS762 UNIT- 5 : IMAGE ENHANCEMENT Dept. Prepared by Harivinod N Assistant Professor, of Computer Science and Engineering,

More information

MORPH-II: Feature Vector Documentation

MORPH-II: Feature Vector Documentation MORPH-II: Feature Vector Documentation Troy P. Kling NSF-REU Site at UNC Wilmington, Summer 2017 1 MORPH-II Subsets Four different subsets of the MORPH-II database were selected for a wide range of purposes,

More information

Face Tracking : An implementation of the Kanade-Lucas-Tomasi Tracking algorithm

Face Tracking : An implementation of the Kanade-Lucas-Tomasi Tracking algorithm Face Tracking : An implementation of the Kanade-Lucas-Tomasi Tracking algorithm Dirk W. Wagener, Ben Herbst Department of Applied Mathematics, University of Stellenbosch, Private Bag X1, Matieland 762,

More information

An Introduction to Content Based Image Retrieval

An Introduction to Content Based Image Retrieval CHAPTER -1 An Introduction to Content Based Image Retrieval 1.1 Introduction With the advancement in internet and multimedia technologies, a huge amount of multimedia data in the form of audio, video and

More information

FACE RECOGNITION USING INDEPENDENT COMPONENT

FACE RECOGNITION USING INDEPENDENT COMPONENT Chapter 5 FACE RECOGNITION USING INDEPENDENT COMPONENT ANALYSIS OF GABORJET (GABORJET-ICA) 5.1 INTRODUCTION PCA is probably the most widely used subspace projection technique for face recognition. A major

More information

Feature-level Fusion for Effective Palmprint Authentication

Feature-level Fusion for Effective Palmprint Authentication Feature-level Fusion for Effective Palmprint Authentication Adams Wai-Kin Kong 1, 2 and David Zhang 1 1 Biometric Research Center, Department of Computing The Hong Kong Polytechnic University, Kowloon,

More information

Artifacts and Textured Region Detection

Artifacts and Textured Region Detection Artifacts and Textured Region Detection 1 Vishal Bangard ECE 738 - Spring 2003 I. INTRODUCTION A lot of transformations, when applied to images, lead to the development of various artifacts in them. In

More information