On Generation and Analysis of Synthetic Iris Images


Jinyu Zuo, Student Member, IEEE, Natalia A. Schmid, Member, IEEE, and Xiaohan Chen, Student Member, IEEE
Department of Computer Science and Electrical Engineering, West Virginia University, Morgantown, WV
Phone: (304) x, {jinyuz,xiaohanc}@csee.wvu.edu, Natalia.Schmid@mail.wvu.edu
Contact author: Natalia Schmid

Abstract

The popularity of iris biometrics has grown considerably over the past two to three years, resulting in the development of a large number of new iris encoding and processing algorithms. Since no large scale, or even medium size, databases are publicly available, none of the newly designed algorithms has undergone extensive testing. Designers claim exceptionally high recognition performance when their algorithms are tested on small amounts of data; in a large scale setting, the systems have yet to be tested. Because issues of security and privacy slow down the collection and publication of iris data, an alternative solution to the problem of algorithm testing is to synthetically generate a large scale database of iris images. In this work, we describe a model based method to generate iris images and evaluate the performance of synthetic irises using a traditional Gabor filter based iris recognition system. A comprehensive comparison of synthetic and real data is performed at three levels of processing: (1) image level, (2) texture level, and (3) decision level. A sensitivity analysis is performed to assess the importance of the various parameters involved in generating iris images.

Index Terms: Iris based authentication, biometrics, texture analysis, performance extrapolation, iris anatomy

I. INTRODUCTION

Iris as a biometric has been known for a long time. However, only in recent years has it gained substantial attention from both the research community and governmental organizations, resulting in

the development of a large number of new iris encoding and processing algorithms. Most of the designed systems and algorithms are claimed to have exceptionally high performance. However, since no large scale, or even medium size, datasets are publicly available, none of the newly designed algorithms has undergone extensive testing. Several datasets of frontal view iris images are presently available for public use [19], [20], [21], [22], [23]. A brief summary of these databases is provided in Table I.

TABLE I
PUBLIC IRIS DATABASES

Database Name   Database Size   Number of Classes   Number of Images per Class   Color or Infrared (IR)
CASIA-I                                                                          IR
UBIRIS                                              N/A                          color
UPOL                                                                             color
UBATH                                                                            IR
ICE                                                 N/A                          IR

With the lack of data, two major solutions to the problem of algorithm testing are possible: (1) physically collecting a large number of iris images, or (2) synthetically generating a large scale dataset of iris images. In this work, we describe a model based, anatomy based method to synthesize iris images and evaluate the performance of synthetic irises. The purpose of this work is not to provide a large database of iris images that will replace real world data, but rather to provide an option to compare the efficiency, limitations, and capabilities of newly designed iris recognition algorithms by testing them on a large scale dataset of generated irises. Use of our dataset can also be viewed as a way to obtain a quick estimate of the recognition performance of a designed iris recognition system. We are aware of the advice by Mansfield and Wayman [3] to avoid adding synthetic data to the test set, or adding artificial noise to data for scenario testing, in order to avoid a resulting bias that often makes the results difficult to interpret. Therefore, in this work we perform an extensive performance comparison of real and generated iris data to verify the applicability of our synthetically generated iris images for the purpose of algorithm testing.
The first methodology for generating synthetic iris images was proposed by Cui et al. [18], where a sequence of small patches from a set of iris images was collected and encoded by applying the Principal Component Analysis (PCA) method. Principal components were further used to generate a number of low resolution iris images from the same iris class. The low resolution images were combined into a single

high resolution iris image using a superresolution method. A small set of random parameters was used to generate images belonging to different iris classes. Another method for generating synthetic iris images, based on Markov Random Fields, has recently been developed at WVU [6] and offered as an alternative to the model based, anatomy based method described in this paper. Lefohn et al. [24] developed an ocularist's approach using computer vision technology for both the ocular prosthetics and entertainment industries. In their work a set of textured layers was used to render each iris. Wecker et al. [27] combined characteristics of real irises to augment existing real iris databases. In their work a multi-resolution technique known as reverse subdivision was used to capture the necessary characteristics. When generating synthetic iris images, the problem one faces is to define a measure of realism. What set of requirements must a synthetic iris satisfy to be recognized and treated as a physically collected iris image? The conclusion could be: (1) it should look like a real iris; (2) it should have the statistical characteristics of a real iris. Real iris patterns are so anatomically complex that it is nearly impossible to describe any particular one mathematically. Thus, standards of realism will be limited to some degree. However, mathematical models of iris anatomical structures may be used for iris structure generation. In this work, we take a model based, anatomy based approach to the generation of iris images. We have conducted extensive anatomical studies of the iris, including a study of its constituent parts and their functionality [7], [8], [9], a study of ultra-structure images and high-resolution images [25], [26], the structure and classification of irises according to iridology [5], and available iris models [18], [24].
As a result, a few observations on common visual characteristics of irises have been made: (1) radial fibers, the radially arranged iris vessels, constitute the basis of iris tissue and dominate the structural information; (2) a large part of the iris is covered by a semitransparent layer with a bumpy look and a few furrows caused by retractor muscles; (3) the irregular edge of the top layer contributes to the iris pattern; (4) the collarette is raised due to the overlap of the sphincter and dilator muscles. Thus, the main frame of the iris pattern is formed by radial fibers, a raised collarette, and a partially covering semitransparent layer with an irregular edge. At the same time, the variation of pixel values in an infrared iris image is not only the result of the iris structure; it is also related to the types of muscles, vessels, and cells the iris is composed of, the surface color, and the lighting conditions. Incorporating all of these visual and anatomical characteristics makes each synthetic iris look similar to a real iris. To simulate the stochastic nature of individual iris patterns, the process of iris image generation is reduced to the generation of a dense fiber structure controlled by a set of random parameters and postprocessed

using a variety of image processing techniques. The influence of the controlled parameters has been carefully studied, and the operational range of each parameter has been evaluated. The value of each parameter must be limited to a certain range to preserve common iris features, but should also be allowed to vary maximally to increase the randomness of the iris pattern. We also attempt to model the effects that influence the quality of iris images. These effects influence both genuine and imposter score distributions and are therefore very important for testing the robustness and performance of iris recognition algorithms. The effects simulated in this work include occlusion, noise, defocus blur, motion blur, rotation, contrast, and lighting.

The remaining part of the paper is organized as follows. Sec. II introduces our methodology for generating iris images. Analysis and comparison of generated iris images with real iris images is performed in Secs. III and IV. Three levels of comparison are identified: (1) image level, (2) texture level, and (3) decision level. While Sec. III focuses on the comparison at the first two levels, Sec. IV provides a detailed analysis of generated iris images at the decision level. The work concludes with a summary in Sec. V.

II. METHODOLOGY

A. Generation

In this work, the generation of an iris image is subdivided into five major steps:

1) Generate continuous fibers in a cylindrical coordinate system (Z, R, Θ), where the axis Z ∈ (−∞, ∞) is the depth of the iris, R ∈ [0, ∞) is the radial distance, and Θ ∈ [0, 2π) is the azimuth, with a zero value corresponding to the 3 o'clock position. Each fiber is a continuous three-dimensional (3D) curve in this cylindrical coordinate system.
Presently we use seven random parameters, denoted p_i, i = 1, 2, ..., 7, to generate the projection of each continuous fiber onto the two-dimensional (2D) plane (R, Θ), and six random parameters, denoted p_i, i = 8, 9, ..., 13, to generate the projection of each fiber onto the (R, Z) plane. The following equations describe the dependencies of the fiber coordinates in the (R, Θ) and (R, Z) planes:

Θ = p_1 sin(p_2 (R + p_3)) + p_4 sin(p_5 (R + p_6)) + p_7 R^2   (1)

and

Z = p_8 + p_9 sin(p_10 (R + p_11)) sin(p_12 (R + p_13)).   (2)

In our work, the random parameters used for fiber generation and for generating the remaining parts and structure of the iris follow two main distributions: uniform and banded Gaussian. The uniform distribution is parameterized by the mean µ and the offset o (half of the range). The offset and

µ-shifted uniformly distributed random variable Y is generated from a random variable X uniformly distributed on [0, 1] as Y = µ + 2o(X − 0.5). The banded Gaussian distribution is also described by a mean µ and an offset o (half of the range). The banded Gaussian random parameter Y is generated from a random variable X that follows a normal distribution with mean µ and standard deviation o, using the rule: draw X until the value falls in the range [µ − o, µ + o]. The ranges of the intervals are selected to ensure that the appearance of synthetic irises is close to that of real irises. The random parameters used to generate iris images, their distributions, and the associated ranges are displayed in Tables II, III, IV, V, and VI.

TABLE II
PARAMETERS USED IN THE GENERATION (BASIC)

 #   Parameter                                           Distribution
 1   Iris radius in pixels                               Uniform
 2   Number of fibers                                    Uniform
 3   Fiber thickness                                     Uniform
 4   Pupil radius                                        Uniform
 5   Top semitransparent layer's thickness               Uniform
 6   Fiber cluster degree                                Uniform
 7   Blur kernel size of the semitransparent top layer   Uniform
 8   Iris root blur range                                Uniform
 9   Bumpy surface's roughness                           Uniform
10   Bumpy surface texture's fineness                    Banded Gaussian
11   Collarette ridge's location                         Uniform
12   Collarette ridge's range                            Uniform
13   Collarette ridge's height                           Uniform
14   Crypt generation threshold                          Uniform
15   Collarette's blur range                             Uniform

After the continuous fibers are generated, they are further sampled in the R direction to obtain matrices of Θ and Z coordinates. Note that even with 13 random parameters, the generated fibers look smooth compared to real fibers. To introduce some abrupt changes in the fiber flow, our iris generator has an additional option: the thickness of the fibers can be varied. To store information about the non-uniform width of the fibers, a new matrix of the same size as the matrix of Θ, containing the fiber widths, has to be generated. Examples of generated (R, Θ) and (R, Z) plane fibers are shown in Fig. 1.
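As a sketch of how a single fiber could be produced from these ingredients, the fragment below samples parameters from the two distributions described above and evaluates Eqs. (1) and (2). The function names and the parameter ranges are illustrative placeholders, not the paper's actual values.

```python
import numpy as np

def uniform_offset(rng, mu, o, size=None):
    """Uniform on [mu - o, mu + o]: Y = mu + 2*o*(X - 0.5), X ~ U[0, 1]."""
    return mu + 2.0 * o * (rng.random(size) - 0.5)

def banded_gaussian(rng, mu, o, size=1):
    """N(mu, o^2) redrawn until the value falls in [mu - o, mu + o]."""
    out = np.empty(size)
    filled = 0
    while filled < size:
        x = rng.normal(mu, o, size=size)
        x = x[np.abs(x - mu) <= o]            # keep only in-band draws
        take = min(x.size, size - filled)
        out[filled:filled + take] = x[:take]
        filled += take
    return out

def sample_fiber(rng, n_points=200):
    """Evaluate Eqs. (1)-(2) for one fiber with placeholder parameter ranges."""
    # p[1..13]; p[0] is unused so indices match the equations
    p = np.concatenate(([0.0], uniform_offset(rng, 0.5, 0.4, size=13)))
    R = np.linspace(0.0, 1.0, n_points)       # radial samples
    theta = (p[1] * np.sin(p[2] * (R + p[3]))
             + p[4] * np.sin(p[5] * (R + p[6]))
             + p[7] * R**2)                   # Eq. (1): in-plane meander
    z = (p[8] + p[9] * np.sin(p[10] * (R + p[11]))
              * np.sin(p[12] * (R + p[13])))  # Eq. (2): depth profile
    return R, theta, z

rng = np.random.default_rng(0)
R, theta, z = sample_fiber(rng)
```

Sampling many such fibers with independent parameter draws yields the dense fiber structure that the subsequent steps project and postprocess.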
It is believed that a more complex fiber model may provide a better result in terms of shape similarity and pattern randomness

in the fiber structure. However, this needs practical confirmation from an anatomical perspective.

TABLE III
PARAMETERS USED IN THE GENERATION (FIBER)

 #   Parameter                                                Distribution
16   Θ's average period for its sinusoidal shape function     Uniform
17   Θ's average amplitude for its sinusoidal shape function  Uniform
18   Thickness of the iris                                    Uniform
19   Z's average period for its sinusoidal shape function     Uniform
20   Z's average amplitude for its sinusoidal shape function  Uniform

TABLE IV
PARAMETERS USED IN THE GENERATION (COLLARETTE)

 #   Parameter                                            Distribution
21   Number of collarette corners                         Uniform
22   Collarette's average radius                          Uniform
23   Period of cosine functions for collarette corners    Uniform

2) The generated 3D fibers are then projected into a 2D polar space (R, Θ) to form a 2D frontal view fiber image. Since only the top layer fibers can be seen in a 2D image, the gray value of each pixel in the 2D space is determined by the highest Z value of the fiber at that point. A set of basic B-spline functions in the polar coordinate system (R, Θ) is used to model the shapes of the pupil and the iris, that is, their deviation from a circular shape. The uneven shape of the pupil can be anatomically explained by the presence of about 70 fine folds along the pupillary margin [7], [8]. The results of applying basic B-splines to generate the pupil and iris boundaries are displayed in Fig. 2.

3) A top layer with an irregular edge is added. The edge of the top layer is modeled using cosine functions. The area of the base image that should be covered by the top layer is blurred to create the effect of a semitransparent top layer. The area of the collarette is brightened to create the effect of a lifted portion of the iris. An example of a fiber structure projected into the (R, Θ) polar space, and the same structure after adding the top layer, are shown in the left and right panels of Fig. 3, respectively.

4) The root of the iris is blurred to make that area look continuous.
Then a smoothed Gaussian noise layer is added to make the top layer bumpy. The left panel in Fig. 4 shows an example

of a generated eyeball with a blurred iris root. The right panel in Fig. 4 schematically shows the locations of the collarette, the iris and pupil centers, and the circular ridge.

TABLE V
PARAMETERS USED IN THE GENERATION (SHAPE)

   #     Parameter                                                        Distribution
 24, 25  Pupil-iris center offset                                         Uniform
 26      Number of B-spline basis functions for the pupil                 Uniform
 27      Standard deviation of pupil B-spline basis function amplitudes   Uniform
 28      Number of B-spline basis functions for the iris                  Uniform
 29      Standard deviation of iris B-spline basis function amplitudes    Uniform

TABLE VI
PARAMETERS USED IN THE GENERATION (EYE)

 #   Parameter                                                    Distribution
30   Eye's size (ratio compared with original size)               Banded Gaussian
31   Probability of top eyelash generation at each pixel          Banded Gaussian
32   Direction of top eyelashes in degrees                        Banded Gaussian
33   Probability of bottom eyelash generation at each pixel       Banded Gaussian
34   Direction of bottom eyelashes in degrees                     Banded Gaussian
35   Eye's roll angle (according to the two eye corners)          Banded Gaussian
36   Top eyelid's amplitude for its sinusoidal shape function     Banded Gaussian
37   Bottom eyelid's amplitude for its sinusoidal shape function  Banded Gaussian
38   Eyelid occlusion degree                                      Banded Gaussian
39   Eye's horizontal location                                    Banded Gaussian
40   Eye's vertical location                                      Banded Gaussian

5) Based on the required degree of eyelid opening, two low frequency cosine curves for the eyelids are drawn. Then the eyelashes are randomly generated. This step simulates the occlusion coming from eyelids and eyelashes. Different degrees of eyelid opening and different directions and numbers of eyelashes result in different degrees of occlusion. Fig. 5 and Fig. 6 demonstrate the main steps of our generation procedure.

B. Effects

To mimic real world iris databases, we add a few external effects to our generated ideal iris images. The generated external effects that influence the performance of both genuine and imposter data include shot

noise, defocus blur, motion blur, contrast, specular reflections, and rotation.

Fig. 1. Example of generated fibers projected onto two planes. All fibers are generated in pairs for the purpose of forming crypts. The left plot shows the Θ coordinates of two pairs of fibers; the right plot shows the Z coordinates of two pairs of fibers.

Fig. 2. A B-spline model is used to generate the pupil and iris boundaries. The left plot shows the pupil boundary unwrapped with respect to its center; the right plot shows the iris boundary unwrapped with respect to its center.

The main details of the generation procedures are summarized below.

1) Shot noise. The shot noise, a Poisson-model based noise, is generated using the intensity of the underlying image. Input pixel values are used directly, without scaling, as the intensities of Poisson processes (see, for example, [10] for a more detailed description of shot noise).

2) Occlusion. Two low frequency sinusoidal curves with different periods and phases are generated to simulate the two eyelids. Two other random parameters are used to decide the degree of eyelid opening and the eye position. Two stripes of shadow along the inner side of the eyelids are generated to

mimic the real situation. Realistically, occlusions also result from eyelashes; hence random eyelashes are generated with different densities, lengths, and directions, some of which can occlude the iris pattern. Because the generation parameters are known, a precise occlusion mask is available and perfect segmentation of synthetic irises is possible.

3) Rotation. Rotation of the iris can occur in the roll, pitch, and yaw directions. We consider only the roll direction and use bilinear interpolation to rotate an image.

4) Defocus blur. To generate defocus blur, a two-dimensional image is filtered with a two-dimensional circular averaging filter. The filter has a banded Gaussian distributed radius in the range [ ] with mean 1.5 and standard deviation 1.

Fig. 3. The left panel shows a fiber structure projected onto the (R, Θ) polar space. The right panel shows the same structure after adding the top layer.

Fig. 4. The left panel shows the eyeball with the iris root blurred. The right panel shows the boundaries and locations of the collarette and the raised circular ridge (ring).

Fig. 5. An example of a three-dimensional fiber structure generated using our methodology.

Fig. 6. Shown are steps 2-5 of the iris image generation.

5) Motion blur. We use a two-dimensional filter to approximate the linear motion of a camera by a certain length in pixels, at a certain angle in degrees. The two parameters are random: the length follows a banded Gaussian distribution on the range [1, 31] with mean 16 and standard deviation 15, and the angle follows a uniform distribution on [0, 360] degrees.

6) Contrast. This factor is modeled using two parameters, intensity compression and intensity shift. Intensity compression squeezes the image intensities into a smaller range; the range is modeled by a uniform random variable Contrast between 0.3 and 1, and more compression yields lower contrast. Once the image has been compressed, the intensities are shifted by an amount drawn uniformly between 0 and (1 − Contrast) × 255. This intensity shift is synonymous with brightness: the higher the shift, the brighter the image.

7) Specular reflections (lighting). We model this factor using four random parameters: two model the lighting direction in terms of azimuth and elevation, and the other two adjust the strength of the specular reflection in terms of contrast and brightness. First, a sphere is generated to simulate the shape of the eyeball. The surface of this sphere is lit according to a Phong lighting model [11] to generate a lighting pattern, which is then overlaid onto the original image.

Sample images, one per group specified above, are displayed in Fig. 7. More effects, such as off-angle views (projective or affine transforms), can be incorporated into the final synthetic iris image.

C. Sensitivity Analysis

Although all parameters contribute to the appearance of the final result, their degrees of influence on recognition performance differ. Our experimental results have shown that recognition performance is extremely sensitive to variation in the fiber structure and less sensitive to variation of the other parameters.
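As an illustration of the external effects in Sec. II-B, the contrast effect (item 6) can be sketched as below. The function name and the assumption of a float image in [0, 255] are ours; the Contrast range [0.3, 1] and the shift range [0, (1 − Contrast) × 255] follow the description above.

```python
import numpy as np

def apply_contrast_effect(img, rng):
    """Compress intensities into a narrower band, then shift them up.

    img: float array with values in [0, 255] (an assumption of this sketch).
    Returns the degraded image plus the sampled Contrast and shift values.
    """
    contrast = rng.uniform(0.3, 1.0)                    # compression factor
    shift = rng.uniform(0.0, (1.0 - contrast) * 255.0)  # brightness shift
    out = img * contrast + shift                        # lower contrast, brighter
    return np.clip(out, 0.0, 255.0), contrast, shift

rng = np.random.default_rng(7)
img = rng.uniform(0.0, 255.0, size=(64, 64))
low, contrast, shift = apply_contrast_effect(img, rng)
```

Because the compression factor is at most 1 and the shift is bounded by (1 − Contrast) × 255, the degraded image stays within [0, 255] while its dynamic range shrinks.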
We conducted the following experiments. All parameters were subdivided into three major groups: (1) the fiber structure, labeled FIBER; (2) parameters related to the collarette and the edge of the top layer, labeled COLLARETTE; and (3) parameters responsible for other visual effects, such as blur, transparency, and the bumpy texture of the top layer, labeled BASIC. In the first experiment, we fixed all parameters in groups (2) and (3) and varied the parameters of the fiber structure. A set of 20 iris images was generated. Fig. 8 displays two iris images generated following this procedure. The recognition performance of the generated iris images was evaluated using Libor Masek's code [12], which implements a Gabor filter based method. The Hamming distance (HD) was calculated for each pair of generated iris images. The results of the evaluation are shown in Fig. 11 (the rightmost

histogram).

Fig. 7. Examples of poor quality images. Shown are (from left to right) an image contaminated with shot noise, a rotated image, a defocus blurred image, a motion smeared iris image, a low contrast image, and an image with specular reflections.

In the second experiment, the parameters of the fiber structure and the parameters in group (3) were fixed, while the parameters influencing the generation of the collarette and the edge of the top layer were varied. Two sample iris images generated following this procedure are shown in Fig. 9. The histogram of HDs for this case is displayed in Fig. 11 (the middle histogram). In the last experiment, the parameters in groups (1) and (2) were fixed, while the basic parameters were varied. A pair of generated iris images and the distribution of HDs for this case are shown in Fig. 10 and Fig. 11 (the leftmost histogram), respectively. From this experiment, we can see that changes in the fibers dominate the changes in the synthetic iris pattern in terms of recognition performance. Variation of the collarette and the edge of the top layer partially changes the iris pattern; however, these parameters do not influence recognition performance as much as the fiber structure does. The external effect parameters influence recognition performance the least of all.
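The pairwise comparison used in these experiments is the standard fractional Hamming distance between binary iris codes. A minimal sketch, with hypothetical bit-array inputs and an occlusion mask handled as in typical Gabor based matchers, is:

```python
import numpy as np

def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fractional HD over the bits that are valid (unoccluded) in both codes."""
    valid = mask_a & mask_b
    n = int(valid.sum())
    if n == 0:
        return 1.0  # no comparable bits: treat as maximally distant
    return int(((code_a ^ code_b) & valid).sum()) / n

rng = np.random.default_rng(1)
code = rng.integers(0, 2, size=2048).astype(bool)  # a hypothetical iris code
mask = np.ones(2048, dtype=bool)                   # all bits valid
```

For identical codes the HD is 0, while independent random codes concentrate near 0.5, which is the behavior expected of imposter comparisons.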

Fig. 8. Iris images generated by varying the FIBER parameters (crypts are associated with fibers).

Fig. 9. Iris images generated by varying the COLLARETTE parameters.

III. PERFORMANCE ANALYSIS: IMAGE AND TEXTURE LEVEL

We identified three levels at which the similarity of synthetic and real iris images can be quantified: (1) global layout, (2) features of the fine iris texture, and (3) recognition performance.

A. Visual Evaluation

A gallery of synthetic iris images generated using our model based approach is shown in Fig. 12. To ensure that the generated irises look like real irises, we borrowed a few eyelids from the CASIA dataset. Note that only one image in Fig. 12 is a real iris image, a sample from the CASIA dataset; it is placed among the synthetic irises for the purpose of comparison. To further demonstrate that synthetic iris images look similar to real iris images, we display three normalized, preprocessed iris images in Fig. 13. The samples in the upper and middle panels are normalized images from the CASIA and ICE-I datasets. The sample in the lower panel is a normalized image from our dataset of synthetic irises. Although it looks slightly oversmoothed in the bottom portion of the image, the unwrapped synthetic iris image has all

major features of real iris images.

Fig. 10. Iris images generated by varying the BASIC parameters.

Fig. 11. Shown are three histograms of HDs obtained by varying the parameters in one of the three major parameter groups. The rightmost histogram (dark) was obtained when images generated by varying the FIBER parameters were compared. The middle histogram (gray) was obtained by comparing images with random COLLARETTE parameters. The leftmost histogram (light) was generated from images generated by varying the BASIC parameters.

B. Texture Analysis

To compare real and synthetic iris images at the textural level, we appeal to Bessel K forms. The Bessel K form is a stochastic model that can be used to measure image variability. This parametric family is applied to model the lower order probability densities of pixel values resulting from bandpass filtering of images. As shown by Grenander and Srivastava [16], Bessel K forms, parameterized by only two parameters, (1) the shape parameter p, p > 0, and (2) the scale parameter c, c > 0, may provide a good statistical fit to the empirical histograms of filtered images. Bessel K forms have found an important application in the classification of clutter. In this work, we reduce the problem of iris image comparison at the textural

level to the problem of evaluating a distance between two Bessel K forms characterizing two distinct iris images.

Fig. 12. A gallery of synthetic iris images (Iris 1 through Iris 8) generated using the model based, anatomy based approach. Iris 4 is a real iris image, a sample from the CASIA dataset.

Fig. 13. Shown are three unwrapped and enhanced iris images. The images are samples from (a) the CASIA dataset, (b) the ICE-I dataset, and (c) the dataset of synthetic irises generated using our model based approach.

Denote by I an image and by F a filter; the filtered image is then given by I ∗ F, where ∗ denotes the 2D convolution operation. Under the conditions stated in [16], the probability density function of a pixel value of the filtered image can be approximated by

f_K(x; p, c) = (1 / Z(p, c)) |x|^(p − 0.5) K_(p − 0.5)(√(2/c) |x|),   (3)

where K_(p − 0.5)(·) is the modified Bessel function of the second kind and Z is the normalizing constant given by

Z(p, c) = √π Γ(p) (2c)^(0.5p + 0.25).   (4)

The corresponding cumulative distribution function is easily calculated as

F_K(x; p, c) = ∫_{−∞}^{x} f_K(r; p, c) dr.   (5)

To approximate the empirical density of a filtered image I by a Bessel K form, the parameters p and c are estimated from the observed data using

p̂ = 3 / (SK(I) − 3)  and  ĉ = SV(I) / p̂,   (6)

where SK is the sample kurtosis and SV is the sample variance of the pixel values in I. Since the moment-based estimate of p in (6) is sensitive to outliers, in our computations we replace it with an estimate based on empirical quantiles,

p̂ = 3 / (ŜK(I) − 3),  where  ŜK(I) = (q_0.995(I) − q_0.005(I)) / (q_0.75(I) − q_0.25(I))

and q_x(·) is the quantile function that returns the x quantile of a set of samples. For more information on quantile estimates we refer our readers to [30]. To provide an example of the Bessel K fit to the empirical distribution of filtered images, we selected three images: a normalized iris image from the synthetic database, a natural image [15], and a normalized iris image from the CASIA database, displayed in the left panel of Fig. 14. The images were filtered using the log-Gabor filter designed by Masek [12]. The real components of the filtered images are shown in the right panel of Fig. 14. The histograms and estimated Bessel K forms of the filtered images are displayed in Fig. 15. Note that the real and imaginary components of each filtered image are concatenated to obtain a single Bessel K form fit. For an initial comparison of real and synthetic iris images against natural images [15], we selected 12 images from each dataset. Table VII presents a summary of the estimated Bessel K parameters for all images filtered by the log-Gabor filter. Note how similar the ranges of the p parameter are for the filtered synthetic and real iris data, and how distinct the range is for the filtered natural images.
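The two estimators in this passage can be sketched directly. This is an illustrative implementation, not the authors' code, and the Laplace sample below is only a convenient test case (the Laplace density is the Bessel K form with p = 1).

```python
import numpy as np

def bessel_k_params(filtered, robust=True):
    """Estimate (p, c) of a Bessel K form from filtered-image pixel values.

    Moment version (Eq. (6)): p = 3 / (kurtosis - 3), c = variance / p.
    Robust version swaps the sample kurtosis for the quantile-based
    surrogate (q0.995 - q0.005) / (q0.75 - q0.25) used in the text.
    """
    x = np.asarray(filtered, dtype=float).ravel()
    var = x.var()
    if robust:
        q = np.quantile(x, [0.005, 0.25, 0.75, 0.995])
        sk = (q[3] - q[0]) / (q[2] - q[1])
    else:
        sk = ((x - x.mean()) ** 4).mean() / var**2  # sample kurtosis
    p = 3.0 / (sk - 3.0)
    return p, var / p

# Sanity check on a Laplace sample: true p = 1, variance 2, hence c = 2
rng = np.random.default_rng(3)
p_mom, c_mom = bessel_k_params(rng.laplace(size=200_000), robust=False)
```

On the Laplace sample the moment estimator recovers p close to 1 and c close to 2, matching the known Bessel K parameters of that density.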
To quantify the difference between two filtered images based on their distributions, three distance measures: (1) a pseudo-metric introduced by Srivastava [16], (2) the K-measure [29] between two Bessel

K forms, and (3) the integrated absolute distance, an integral of the absolute difference between two cumulative distribution functions, were used.

Fig. 14. Original images (left column) and the real components of their log-Gabor filtered results (right column).

Fig. 15. The empirical histograms (dashed lines) and their Bessel K form approximations (solid lines) for the synthetic (p = 1.18, c = 53.66), natural (p = 4.8, c = 55.26), and CASIA (p = 1.34, c = 2.76) images.

The pseudo-metric is defined as

d_I(I_1, I_2) = ∫ (f_K(x; p_1, c_1) − f_K(x; p_2, c_2))^2 dx.   (7)

The K-measure is given by

d_KL(I_1, I_2) = D(f_K(·; p_1, c_1) ‖ f_K(·; p_2, c_2)) + D(f_K(·; p_2, c_2) ‖ f_K(·; p_1, c_1)),   (8)

where D(f_K(·; p_1, c_1) ‖ f_K(·; p_2, c_2)) is the relative entropy between the two densities f_K(·; p_1, c_1) and f_K(·; p_2, c_2), given by

D(f_K(·; p_1, c_1) ‖ f_K(·; p_2, c_2)) = ∫ log( f_K(x; p_1, c_1) / f_K(x; p_2, c_2) ) f_K(x; p_1, c_1) dx.

The integrated absolute distance is given by

d_IAD(I_1, I_2) = ∫ |F_K(x; p_1, c_1) − F_K(x; p_2, c_2)| dx.   (9)

TABLE VII
THE BESSEL K PARAMETERS ESTIMATED FROM IMAGES FILTERED BY THE LOG-GABOR FILTER. CAk, SYNk, AND NAk, k = 1, ..., 12, STAND FOR CASIA, SYNTHETIC, AND NATURAL, RESPECTIVELY.

(Columns: Image, p, c, listed for CA1-CA12, SYN1-SYN12, and NA1-NA12.)

In the above expressions, f_K(·) is the Bessel K probability density function introduced in (3) and F_K(·) is the Bessel K cumulative distribution function introduced in (5). For N images {I_n, n = 1, 2, ..., N} and a bank of filters {F_j, j = 1, 2, ..., J}, a set of filtered images {I_(n,j) = I_n ∗ F_j, j = 1, 2, ..., J} is computed. After estimating the parameters p_(n,j) and c_(n,j), each image is mapped to J points in the density space. The corresponding distances in (7), (8), and (9) become

d_I(I_1, I_2) = Σ_{j=1}^{J} d(p_(1,j), c_(1,j), p_(2,j), c_(2,j)),

d_KL(I_1, I_2) = Σ_{j=1}^{J} [ D(f_K(1,j) ‖ f_K(2,j)) + D(f_K(2,j) ‖ f_K(1,j)) ],

and

d_IAD(I_1, I_2) = Σ_{j=1}^{J} ∫ |F_K(x; p_(1,j), c_(1,j)) − F_K(x; p_(2,j), c_(2,j))| dx,

where f_K(n,j) = f_K(x; p_(n,j), c_(n,j)).

Fig. 16. Original images. I_1 and I_2 are synthetic iris images, I_3 and I_4 are natural images, and I_5 and I_6 are real iris images.

To compare iris images against natural images, we implemented the following five-step procedure: (1) convolve image I_n with the filter F_j included in the filter bank; (2) estimate the parameters of the Bessel K forms for each filtered image, {(p̂_(n,j), ĉ_(n,j)), n = 1, 2, ..., N, j = 1, 2, ..., J}; (3) drop the filters that resulted in an estimated value p̂_(n,j) < 0.25 (see [16] for details); (4) calculate the pairwise distance matrices d_I, d_KL, and d_IAD; (5) use the distance matrices to perform hierarchical clustering of the filtered images.

In our initial experiments we selected two images from each of the three groups. Fig. 16 shows the normalized images I_1 and I_2 selected from the dataset of synthetic iris images, the images I_3 and I_4 selected from the set of natural images, and the normalized images I_5 and I_6 selected from the CASIA dataset. All images were filtered by the 48 filters of the Leung-Malik (LM) filter bank. The filter bank includes 2 Gaussian derivative filters at 6 orientations and 3 scales, 8 Laplacian of Gaussian filters, and 4 Gaussian filters. The left, middle, and right panels in Fig. 17 display dendrogram plots that summarize the clustering results based on the distance measures d_I, d_KL, and d_IAD, respectively. Clearly, in all three cases the filtered natural images are clustered in a separate group from the filtered real and synthetic iris data.

To further compare real and synthetic iris images against natural images at the textural level, we selected 12 iris images from the CASIA dataset characterized by the smallest occlusion, 111 natural texture images from the Brodatz texture dataset [15], and 30 synthetic iris images.
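Step (5) above, hierarchical clustering driven by a precomputed pairwise distance matrix, can be sketched as follows. The 6x6 toy matrix stands in for the pairwise distances between filtered images and is invented purely for illustration.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

# Toy pairwise-distance matrix: images 0-3 are mutually close (one group),
# images 4-5 are close to each other but far from the rest (another group).
D = np.array([
    [0.0, 0.1, 0.2, 0.1, 2.0, 2.1],
    [0.1, 0.0, 0.1, 0.2, 2.2, 2.0],
    [0.2, 0.1, 0.0, 0.1, 2.1, 2.2],
    [0.1, 0.2, 0.1, 0.0, 2.0, 2.1],
    [2.0, 2.2, 2.1, 2.0, 0.0, 0.1],
    [2.1, 2.0, 2.2, 2.1, 0.1, 0.0],
])

# linkage expects the condensed (upper-triangular) form of the matrix
Z = linkage(squareform(D), method="average")
labels = fcluster(Z, t=2, criterion="maxclust")  # cut into two clusters
```

With average linkage and a two-cluster cut, the four mutually close images land in one cluster and the two distant ones in the other, mirroring the IRIS / NON-IRIS split described in the text.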
To gain better intuition about the types of textures contained in all these images and to verify the consistency of the clustering results, the

Fig. 17. The left, middle, and right panels show the dendrogram clustering plots obtained using the distance metrics d_I, d_KL, and d_IAD, respectively.

images were filtered using four distinct sets of filters: (1) a single log-Gabor filter designed by Libor Masek [12], (2) the Leung-Malik filter bank [17], (3) the Maximum Response filter bank [17], and (4) the Schmid filter bank [17]. The pairwise distances between filtered images were evaluated using the d_IAD metric. For clustering the images based on d_IAD we once again invoked a hierarchical clustering procedure. Fig. 18 summarizes the results of clustering based on the entire set of 153 images. The goal of this experiment is to cluster the data into two broad classes: IRIS and NON-IRIS. During the experiment, the maximum number of clusters was adjusted to ensure that all images from the CASIA dataset fall into the class IRIS. The images belonging to the class IRIS are marked in a dark color, while the images belonging to the class NON-IRIS are marked in a light color. From the diagram in Fig. 18, one can observe that only a small number of natural images are assigned to the same class as the images from the CASIA dataset. Note that real and synthetic iris images exhibit limited texture variability compared to the large variability of texture in natural images. The iris images have a certain degree of overlap in terms of texture, which varies only slightly across the features extracted by the different filters. The relative relationship can be depicted using the Venn diagram displayed in Fig. 19.

IV. PERFORMANCE ANALYSIS: DECISION LEVEL

A. Verification Performance

Since a biometric system is a special case of a more general pattern recognition system, verification and recognition performance are a key consideration.
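The separation between genuine and imposter score distributions reported below is summarized by the d-prime measure. A minimal sketch, using the common pooled-variance form of d-prime (the precise definition used in [1] may differ in normalization) and simulated stand-in scores rather than the paper's data:

```python
import numpy as np

def d_prime(genuine, imposter):
    """Pooled-variance d-prime: |mu_g - mu_i| / sqrt((var_g + var_i) / 2)."""
    mg, mi = np.mean(genuine), np.mean(imposter)
    vg, vi = np.var(genuine), np.var(imposter)
    return abs(mg - mi) / np.sqrt((vg + vi) / 2.0)

# Simulated Hamming distance scores: genuine comparisons cluster low,
# imposter comparisons cluster near 0.5 (illustrative values only).
rng = np.random.default_rng(3)
genuine = rng.normal(0.25, 0.04, 10_000)
imposter = rng.normal(0.47, 0.02, 10_000)
dp = d_prime(genuine, imposter)
```

Larger d-prime means better-separated histograms and therefore lower error rates at any threshold.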
To evaluate the performance of synthetic iris images from the recognition perspective, we used the Gabor filter based encoding technique by Libor Masek [12], [14]. Synthetic iris images characterizing 200 individuals, with 2 iris classes per individual and 6 iris images per iris class, comprising an ideal image (no effects added), a noisy image with additive shot noise, a

rotated image (to simulate a head tilt), a defocused image, a smeared image (motion blur), and a low contrast image, were generated according to the procedures described in Sec. II-B.

Fig. 18. The clustering results for all 153 images (1-12: CASIA; 13-42: synthetic irises; 43-153: natural images) filtered using four distinct sets of filters (Schmid, Maximum Response, Leung-Malik, and log-Gabor).

Fig. 19. The relationship between the different groups of images based on the texture analysis.

The left panel in Fig. 20 shows the histograms of the genuine and imposter HD distributions for the generated data. The d-prime, a measure of separation between the genuine and imposter matching score distributions (see [1] for the definition), is equal to . The right panel in Fig. 20 shows the corresponding Receiver Operating Characteristic (ROC) curve. Note that both the histogram plots and the ROC curve follow traditional shapes for iris recognition.

B. Analysis of Degrees of Freedom

The False Accept Rate (FAR), one of the most important indicators of the security level of a designed biometric system, is completely determined by the imposter distribution. In this and the following sections,

Fig. 20. The left panel shows the histograms of the genuine and imposter HDs characterizing the performance of synthetic iris images processed using the recognition system implemented by Libor Masek. The right panel shows the corresponding ROC curve.

we analyze the distribution of imposter HDs for synthesized and real iris data. In his work, Daugman suggested using a Binomial distribution as the best fit to empirical imposter data [1], [13], even when the bits in the binary IrisCode are not independent. In this case, as has been shown in [13], the Binomial distribution has a reduced number of degrees of freedom. Here we adopt a similar strategy and use a Binomial distribution to analyze the difference between synthetic and real data. Since the details of Daugman's system implementation are not available (the results of his analysis are presented in [1], [13]), we adopt Masek's approach [12], [14] to processing iris image data. We use normalized templates of size 20 × 240, which results in 9600 bits in total. No compensation for rotation (registration of IrisCodes) is performed when the distribution of HDs is evaluated. Since the Binomial distribution is parameterized by two parameters, the mean p and the variance σ², we first estimate these two parameters for each set of imposter scores obtained from synthetic and real data and then find the parameter that characterizes the number of degrees of freedom of the Binomial distribution. Denote by N the number of degrees of freedom. For the fractional Binomial distribution, N is given by

N = p(1 − p) / σ².   (10)

In our computations of the degrees of freedom the parameters p and σ² are replaced with their sample estimates. In this notation, the Binomial distribution of HDs has the following form

P(k) = N! / (k!(N − k)!) p^k (1 − p)^(N−k),   k = 0, 1, 2, ..., N.   (11)
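The estimate in (10) can be sketched directly from a set of imposter HD scores; here the scores are simulated from a fractional Binomial with a known number of degrees of freedom to check that the estimator recovers it:

```python
import numpy as np

def degrees_of_freedom(hd_scores):
    """Estimate N = p(1 - p) / sigma^2 from imposter HD scores in [0, 1]."""
    p = np.mean(hd_scores)
    var = np.var(hd_scores)
    return p * (1.0 - p) / var

# Simulated imposter HDs: Binomial(N, 0.5) bit disagreements divided by N.
rng = np.random.default_rng(1)
true_N = 400
sim_hd = rng.binomial(true_N, 0.5, size=200_000) / true_N
N_hat = degrees_of_freedom(sim_hd)  # close to true_N
```

In practice `sim_hd` would be replaced by the empirical imposter HDs from a dataset; correlated IrisCode bits show up as an estimated N far below the raw bit count, as in the 376 and 507 values reported below.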

Fig. 21. Shown are the empirical distribution of imposter HDs and the Binomial curve (N = 376) that provides the best fit. The HD scores were obtained without performing compensation for rotation.

Fig. 21 shows the distribution of HDs for imposter scores obtained from synthetic data together with its best Binomial fit. The estimated number of degrees of freedom for synthetic irises is 376. We further estimated the degrees of freedom for the CASIA dataset using the same template size and the same Gabor filter. The CASIA dataset yielded 507 degrees of freedom. Note the very high number of degrees of freedom in templates from the CASIA dataset in this case. This phenomenon can be best explained by the presence of fine textures in the images from the CASIA dataset, which adds randomness to templates of large size. It appears that the random influence of this texture is preserved in IrisCodes of this size. Conversely, for normalized images of smaller size, the effect of fine texture is removed due to averaging over large image patches. This results in a major role being assigned to medium size or large size features in iris images. It appears that the CASIA dataset does not have as many large features as the images in the synthetic dataset. The results of our analysis are summarized in Table VIII.

C. Performance Extrapolation

In this section we model the tails of the imposter histogram distributions using the theory of extremes and demonstrate the similarity in the behavior of the extrapolated tails for synthetic and real iris data in a large scale identification system. As stated in Sec. IV-B, FAR is one of the most important characterizations of a designed iris recognition system.
In a traditional iris recognition system, the decision about the nature of a pair of iris images is made by comparing a test statistic (for example, a distance measured between the two iris images) with a threshold, where the threshold has to be selected to satisfy the design requirements. In Daugman's system, the normalized HD between two binary IrisCodes plays the role of

TABLE VIII
ESTIMATED DEGREES OF FREEDOM (COLUMNS: TEMPLATE SIZE, SYNTHETIC DATA SET, CASIA DATA SET)

the test statistic. Denote by d(IrisCode1, IrisCode2) the Hamming distance. Then the decision rule is given by

If d(IrisCode1, IrisCode2) ≥ γ, decide Imposter,
If d(IrisCode1, IrisCode2) < γ, decide Genuine,

where γ is a threshold. To obtain the complete characterization of the designed system, the FAR and the False Rejection Rate (FRR) have to be evaluated for each value of γ on the interval between the two empirical means of the test statistic: the empirical mean under the Genuine assumption and the empirical mean under the Imposter assumption. However, if the purpose of performance evaluation is to estimate the probability of error and predict the error in a large scale identification system, the focus has to be placed only on the tails of the Genuine and Imposter distributions. In terms of thresholds, this is a relatively narrow region. The idea of performance extrapolation based on a tail model has been studied previously. For example, in his recent work [13] Daugman used the Binomial Minimum Value distribution to fit the imposter curve obtained from rotation-compensated IrisCodes and to predict the performance of an identification system. Consider a simple verification scenario. Denote by f_0(x) the Binomial density function fitted to the empirical imposter distribution. Assume that the IrisCodes used for computing the matching scores are not compensated for rotation. Then the Binomial cumulative distribution F_0(x) is the integral of the density

F_0(x) = \int_0^x f_0(r) dr.

When compensation for rotation is performed, the distribution function fitted to the rotation-compensated data has to be modified. Under the condition that a rotated version of an IrisCode is viewed as a

new independent IrisCode, the minimum value of the HDs will follow the distribution F_m(x) given by

F_m(x) = 1 − [1 − F_0(x)]^m,   (12)

called the Binomial Minimum Value cumulative distribution [13]. The parameter m is the number of rotation angles considered during the procedure of compensation for rotation. Daugman in [13] used this distribution to predict the performance of an identification system as a function of two parameters: the number of degrees of freedom, N, and the number of distinct templates in the dataset, M. Consider now a verification scenario with the corresponding FAR denoted as FAR_1:1. For an identification system with M independent templates in the database, that is, with M iris classes, the expression for the total FAR, denoted F_1:M(x), is given by

F_1:M(x) = 1 − [1 − F_1:1(x)]^M.   (13)

The density of the FAR in the identification case, f_1:M(x), can then be found by differentiating the expression in (13). The rest of this section is organized as follows. In the first part, we adopt a strategy similar to Daugman's for performance prediction of iris based identification systems and use the results of extrapolation as a measure of realism in comparing synthetic and real iris data. In the second part, we invoke the theory of extremes.

1) Binomial Minimum Value Analysis: All normalized images in our experiments were transformed to have the same template size, 20 × 240. Masek's code was further used to generate binary IrisCodes and to calculate the imposter HDs. For each pair of iris templates, an alignment was performed by fixing one of them and shifting the other one in the range from -20 to 20 pixels (corresponding to rotation of the original iris image in the range of angles from -30 to 30 degrees, provided that the templates have the size 20 × 240). Assume that the minimum HDs obtained using distinct IrisCode pairs can be treated as independent and identically distributed.
To find the Binomial Minimum Value distribution that best fits the imposter distribution, two parameters, the number of degrees of freedom, N, and the number of rotations compensated, m, were varied, while the remaining parameter, the mean p, was fixed at 0.5. The empirical imposter distribution of HDs for synthetic data was formed using 2,872,800 imposter HD scores, resulting from 200 users with 2 classes per user and 6 images per class. The left panel in Fig. 22 shows the log scale plot of the Binomial Minimum Value distribution parameterized by N = 373 and m = 11 that provided the best fit to the imposter HD distribution obtained using synthetic data. The right panel in Fig. 22 shows the same functions on the linear scale.
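The Binomial Minimum Value CDF of (12) is straightforward to evaluate with a standard binomial CDF; a sketch, with p = 0.5 fixed as in the fitting procedure above (a full fit would wrap this in a scan over candidate (N, m) pairs against the empirical histogram):

```python
from scipy.stats import binom

def binomial_min_cdf(hd, N, m, p=0.5):
    """P(min of m HDs <= hd), eq. (12), with hd a fraction in [0, 1]."""
    F0 = binom.cdf(int(hd * N), N, p)   # fractional Binomial CDF F_0
    return 1.0 - (1.0 - F0) ** m        # minimum over m rotation positions

# Evaluate the synthetic-data fit (N = 373, m = 11) reported in the text.
left_tail = binomial_min_cdf(0.30, N=373, m=11)  # deep in the imposter tail
center = binomial_min_cdf(0.50, N=373, m=11)     # near the HD mean
```

The left tail is the quantity of interest for FAR prediction: it is vanishingly small at HD thresholds well below 0.5 and rises steeply as the threshold approaches the imposter mean.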

Fig. 22. The left panel shows, on a log scale, the Binomial Minimum Value density function that provided the best fit to the distribution of imposter HDs generated from synthetic data. The right panel compares the Binomial Minimum Value density function and the histogram distribution of HDs obtained from synthetic data. The templates used in the experiments are of size 20 × 240.

For the CASIA database, the Binomial Minimum Value distribution that provided the best fit to the empirical imposter HDs is described by N = 437 and m = 14, while for synthetic irises with the same sample size the distribution is described by N = 381 and m = 11. To obtain these results, we used 108 classes with 6 images per class from both the CASIA and synthetic iris datasets. This resulted in 208,008 imposter comparisons for CASIA. For the synthetic data, we selected the first 208,008 comparisons from the set of 2,872,800 imposter comparisons. Table IX summarizes the predicted results for database sizes M ranging upward from 10^3. The second column of the table displays the fixed values of FAR_1:M for the identification case. The third column displays the values of the corresponding FAR_1:1 in the verification case. Columns 4 and 5 show the extrapolated values of the thresholds γ, obtained using templates of size 20-by-240 from the CASIA and synthetic iris datasets, that resulted in 1% FAR_1:M. Using the Binomial Minimum Value distribution, the threshold γ can be directly calculated from FAR_1:1 using

γ = F_m^(-1)(FAR_1:1),   (14)

where F_m^(-1)(.) is the inverse of F_m(.) introduced in (12). To ensure a comprehensive evaluation, we also performed similar experiments with the UBATH and ICE-I datasets. We used 1000 freely downloadable iris images from the UBATH dataset and 2953 images from 132 subjects from the ICE-I dataset to form 490,000 and 4,331,761 imposter scores, respectively.
For the UBATH database, the parameters of the best fit are N = 531 and m = 13. For ICE-I, the best fit is provided by the Binomial Minimum Value distribution with N = 232 and m = 15. The resulting predicted thresholds are provided in columns 6 and 7 of Table IX. Note the similarity of the real and synthetic data in terms of threshold values.
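The threshold extrapolation combining (13) and (14) can be sketched numerically: convert a target identification FAR_1:M into the verification FAR_1:1, then invert the Binomial Minimum Value CDF with a root finder. The (N, m) pair below is the synthetic-data fit reported above; the specific M and target FAR are illustrative:

```python
from scipy.stats import binom
from scipy.optimize import brentq

def threshold_for_far(far_1M, M, N, m, p=0.5):
    """HD threshold gamma yielding the target identification FAR_{1:M}."""
    far_11 = 1.0 - (1.0 - far_1M) ** (1.0 / M)  # invert eq. (13)
    F_m = lambda hd: 1.0 - (1.0 - binom.cdf(hd * N, N, p)) ** m  # eq. (12)
    # Invert eq. (14) numerically on the interval (0, p).
    return brentq(lambda hd: F_m(hd) - far_11, 0.0, p)

gamma = threshold_for_far(far_1M=0.01, M=10**3, N=381, m=11)
gamma_large = threshold_for_far(far_1M=0.01, M=10**6, N=381, m=11)
```

As expected, a larger database forces a smaller (stricter) threshold for the same total false accept rate, which is the qualitative trend shown in Table IX.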

TABLE IX
THE THRESHOLD VALUES AND THEIR EFFECT ON FAR. THE RESULTS ARE OBTAINED USING THE BINOMIAL MINIMUM VALUE DISTRIBUTION (WITH 208,008 IMPOSTER COMPARISONS FOR THE SYNTHETIC DATASET). (COLUMNS: DATABASE SIZE (M), ESTIMATED FAR_1:M, ESTIMATED FAR_1:1, AND THE HD THRESHOLD γ FOR CASIA, SYNTHETIC, UBATH, AND ICE-I.)

2) Extreme Value Analysis: The results in Table IX look encouraging. However, a careful analysis indicates that the data do not exactly follow the original Binomial Minimum Value distribution. The assumption that the different relative orientations used in the angle compensation are independent does not hold in practice. In fact, we use a larger number (almost double) of different relative orientations to compensate for rotation. To obtain a better approximation of the tails of the imposter distributions, we appeal to the extrapolation method based on extreme values. This method is often used to estimate the tails of distributions from a few observed values. Jarosz et al. state in [4] that the method gives the best estimate of the accuracy possible without postulating an analytic mathematical expression for the imposter density function. The theory of extreme values assumes that the tail distribution can be approximated by an empirical distribution obtained from independent tail samples. This distribution is further approximated by a parametric extreme value distribution. The parameters of the extreme value distribution that provide the best fit to the empirical data are obtained using the Maximum Likelihood (ML) estimation procedure. We use the Type 1 (Gumbel) extreme value distribution [28] to approximate the empirical distribution. A Gumbel extreme value distribution is parameterized by two parameters, the location parameter µ and the scale parameter σ, and is given by

f_E(x; µ, σ) = σ^(-1) exp( (x − µ)/σ ) exp( −exp( (x − µ)/σ ) ).   (15)

The corresponding cumulative distribution function is given by

F_E(x; µ, σ) = \int_{−∞}^{x} f_E(r; µ, σ) dr,   (16)

and is used to estimate the False Accept Rate FAR_1:1.
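The ML fit of the Type 1 (Gumbel) extreme value distribution can be sketched with SciPy; `gumbel_l` is the minimum-type Gumbel, which matches using the smallest imposter HDs as tail samples. The tail samples below are simulated with illustrative (µ, σ) values, not taken from the paper's data:

```python
import numpy as np
from scipy.stats import gumbel_l

# Simulated tail samples (minimum imposter HDs) from a minimum-type Gumbel
# with illustrative location mu = 0.38 and scale sigma = 0.01.
rng = np.random.default_rng(2)
tail_samples = gumbel_l.rvs(loc=0.38, scale=0.01, size=5000, random_state=rng)

# Maximum Likelihood estimates of (mu, sigma) from the tail samples.
mu_hat, sigma_hat = gumbel_l.fit(tail_samples)

# F_E evaluated at a candidate threshold gives the extrapolated FAR_{1:1}.
far_estimate = gumbel_l.cdf(0.34, loc=mu_hat, scale=sigma_hat)
```

In the extrapolation procedure, `far_estimate` would be read off at thresholds below the smallest observed imposter score, which is exactly where an empirical histogram alone carries no information.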


More information

Practical Image and Video Processing Using MATLAB

Practical Image and Video Processing Using MATLAB Practical Image and Video Processing Using MATLAB Chapter 18 Feature extraction and representation What will we learn? What is feature extraction and why is it a critical step in most computer vision and

More information

Creating Synthetic IrisCodes to Feed Biometrics Experiments

Creating Synthetic IrisCodes to Feed Biometrics Experiments Creating Synthetic IrisCodes to Feed Biometrics Experiments Hugo Proença and João C. Neves Department of Computer Science IT - Instituto de Telecomunicações University of Beira Interior, Portugal Email:

More information

Image Processing Fundamentals. Nicolas Vazquez Principal Software Engineer National Instruments

Image Processing Fundamentals. Nicolas Vazquez Principal Software Engineer National Instruments Image Processing Fundamentals Nicolas Vazquez Principal Software Engineer National Instruments Agenda Objectives and Motivations Enhancing Images Checking for Presence Locating Parts Measuring Features

More information

Edge and local feature detection - 2. Importance of edge detection in computer vision

Edge and local feature detection - 2. Importance of edge detection in computer vision Edge and local feature detection Gradient based edge detection Edge detection by function fitting Second derivative edge detectors Edge linking and the construction of the chain graph Edge and local feature

More information

Multiple Model Estimation : The EM Algorithm & Applications

Multiple Model Estimation : The EM Algorithm & Applications Multiple Model Estimation : The EM Algorithm & Applications Princeton University COS 429 Lecture Dec. 4, 2008 Harpreet S. Sawhney hsawhney@sarnoff.com Plan IBR / Rendering applications of motion / pose

More information

THE preceding chapters were all devoted to the analysis of images and signals which

THE preceding chapters were all devoted to the analysis of images and signals which Chapter 5 Segmentation of Color, Texture, and Orientation Images THE preceding chapters were all devoted to the analysis of images and signals which take values in IR. It is often necessary, however, to

More information

Efficient Iris Identification with Improved Segmentation Techniques

Efficient Iris Identification with Improved Segmentation Techniques Efficient Iris Identification with Improved Segmentation Techniques Abhishek Verma and Chengjun Liu Department of Computer Science New Jersey Institute of Technology Newark, NJ 07102, USA {av56, chengjun.liu}@njit.edu

More information

Anno accademico 2006/2007. Davide Migliore

Anno accademico 2006/2007. Davide Migliore Robotica Anno accademico 6/7 Davide Migliore migliore@elet.polimi.it Today What is a feature? Some useful information The world of features: Detectors Edges detection Corners/Points detection Descriptors?!?!?

More information

Chapter 4. Clustering Core Atoms by Location

Chapter 4. Clustering Core Atoms by Location Chapter 4. Clustering Core Atoms by Location In this chapter, a process for sampling core atoms in space is developed, so that the analytic techniques in section 3C can be applied to local collections

More information

Iris Recognition: Measuring Feature s Quality for the Feature Selection in Unconstrained Image Capture Environments

Iris Recognition: Measuring Feature s Quality for the Feature Selection in Unconstrained Image Capture Environments CIHSPS 2006 - IEEE INTERNATIONAL CONFERENCE ON COMPUTATIONAL INTELLIGENCE FOR HOMELAND SECURITY AND PERSONAL SAFETY, ALEXANDRIA,VA, USA, 16-17 O Iris Recognition: Measuring Feature s Quality for the Feature

More information

Implementation of Reliable Open Source IRIS Recognition System

Implementation of Reliable Open Source IRIS Recognition System Implementation of Reliable Open Source IRIS Recognition System Dhananjay Ikhar 1, Vishwas Deshpande & Sachin Untawale 3 1&3 Dept. of Mechanical Engineering, Datta Meghe Institute of Engineering, Technology

More information

A Framework for Efficient Fingerprint Identification using a Minutiae Tree

A Framework for Efficient Fingerprint Identification using a Minutiae Tree A Framework for Efficient Fingerprint Identification using a Minutiae Tree Praveer Mansukhani February 22, 2008 Problem Statement Developing a real-time scalable minutiae-based indexing system using a

More information

Multiple Model Estimation : The EM Algorithm & Applications

Multiple Model Estimation : The EM Algorithm & Applications Multiple Model Estimation : The EM Algorithm & Applications Princeton University COS 429 Lecture Nov. 13, 2007 Harpreet S. Sawhney hsawhney@sarnoff.com Recapitulation Problem of motion estimation Parametric

More information

Feature Descriptors. CS 510 Lecture #21 April 29 th, 2013

Feature Descriptors. CS 510 Lecture #21 April 29 th, 2013 Feature Descriptors CS 510 Lecture #21 April 29 th, 2013 Programming Assignment #4 Due two weeks from today Any questions? How is it going? Where are we? We have two umbrella schemes for object recognition

More information

Motivation. Intensity Levels

Motivation. Intensity Levels Motivation Image Intensity and Point Operations Dr. Edmund Lam Department of Electrical and Electronic Engineering The University of Hong ong A digital image is a matrix of numbers, each corresponding

More information

EE 584 MACHINE VISION

EE 584 MACHINE VISION EE 584 MACHINE VISION Binary Images Analysis Geometrical & Topological Properties Connectedness Binary Algorithms Morphology Binary Images Binary (two-valued; black/white) images gives better efficiency

More information

Histograms of Oriented Gradients

Histograms of Oriented Gradients Histograms of Oriented Gradients Carlo Tomasi September 18, 2017 A useful question to ask of an image is whether it contains one or more instances of a certain object: a person, a face, a car, and so forth.

More information

International Journal of Emerging Technologies in Computational and Applied Sciences (IJETCAS)

International Journal of Emerging Technologies in Computational and Applied Sciences (IJETCAS) International Association of Scientific Innovation and Research (IASIR) (An Association Unifying the Sciences, Engineering, and Applied Research) ISSN (Print): 2279-0047 ISSN (Online): 2279-0055 International

More information

FILTERBANK-BASED FINGERPRINT MATCHING. Dinesh Kapoor(2005EET2920) Sachin Gajjar(2005EET3194) Himanshu Bhatnagar(2005EET3239)

FILTERBANK-BASED FINGERPRINT MATCHING. Dinesh Kapoor(2005EET2920) Sachin Gajjar(2005EET3194) Himanshu Bhatnagar(2005EET3239) FILTERBANK-BASED FINGERPRINT MATCHING Dinesh Kapoor(2005EET2920) Sachin Gajjar(2005EET3194) Himanshu Bhatnagar(2005EET3239) Papers Selected FINGERPRINT MATCHING USING MINUTIAE AND TEXTURE FEATURES By Anil

More information

CS443: Digital Imaging and Multimedia Binary Image Analysis. Spring 2008 Ahmed Elgammal Dept. of Computer Science Rutgers University

CS443: Digital Imaging and Multimedia Binary Image Analysis. Spring 2008 Ahmed Elgammal Dept. of Computer Science Rutgers University CS443: Digital Imaging and Multimedia Binary Image Analysis Spring 2008 Ahmed Elgammal Dept. of Computer Science Rutgers University Outlines A Simple Machine Vision System Image segmentation by thresholding

More information

Projected Texture for Hand Geometry based Authentication

Projected Texture for Hand Geometry based Authentication Projected Texture for Hand Geometry based Authentication Avinash Sharma Nishant Shobhit Anoop Namboodiri Center for Visual Information Technology International Institute of Information Technology, Hyderabad,

More information

Digital Image Processing (CS/ECE 545) Lecture 5: Edge Detection (Part 2) & Corner Detection

Digital Image Processing (CS/ECE 545) Lecture 5: Edge Detection (Part 2) & Corner Detection Digital Image Processing (CS/ECE 545) Lecture 5: Edge Detection (Part 2) & Corner Detection Prof Emmanuel Agu Computer Science Dept. Worcester Polytechnic Institute (WPI) Recall: Edge Detection Image processing

More information

Are Iris Crypts Useful in Identity Recognition?

Are Iris Crypts Useful in Identity Recognition? Are Iris Crypts Useful in Identity Recognition? Feng Shen Patrick J. Flynn Dept. of Computer Science and Engineering, University of Notre Dame 254 Fitzpatrick Hall, Notre Dame, IN 46556 fshen1@nd.edu flynn@nd.edu

More information

Lecture 7: Most Common Edge Detectors

Lecture 7: Most Common Edge Detectors #1 Lecture 7: Most Common Edge Detectors Saad Bedros sbedros@umn.edu Edge Detection Goal: Identify sudden changes (discontinuities) in an image Intuitively, most semantic and shape information from the

More information

A NOVEL IRIS RECOGNITION USING STRUCTURAL FEATURE OF COLLARETTE

A NOVEL IRIS RECOGNITION USING STRUCTURAL FEATURE OF COLLARETTE A NOVEL RS RECOGNTON USNG STRUCTURAL FEATURE OF COLLARETTE Shun-Hsun Chang VP-CCLab., Dept. of Electrical Engineering, National Chi Nan University, Taiwan s94323902@ncnu.edu.tw Wen-Shiung Chen VP-CCLab.,

More information

Processing of Iris Video frames to Detect Blink and Blurred frames

Processing of Iris Video frames to Detect Blink and Blurred frames Processing of Iris Video frames to Detect Blink and Blurred frames Asha latha.bandi Computer Science & Engineering S.R.K Institute of Technology Vijayawada, 521 108,Andhrapradesh India Latha009asha@gmail.com

More information

Three-Dimensional Computer Vision

Three-Dimensional Computer Vision \bshiaki Shirai Three-Dimensional Computer Vision With 313 Figures ' Springer-Verlag Berlin Heidelberg New York London Paris Tokyo Table of Contents 1 Introduction 1 1.1 Three-Dimensional Computer Vision

More information

Technical Report. Cross-Sensor Comparison: LG4000-to-LG2200

Technical Report. Cross-Sensor Comparison: LG4000-to-LG2200 Technical Report Cross-Sensor Comparison: LG4000-to-LG2200 Professors: PhD. Nicolaie Popescu-Bodorin, PhD. Lucian Grigore, PhD. Valentina Balas Students: MSc. Cristina M. Noaica, BSc. Ionut Axenie, BSc

More information

Classifying Depositional Environments in Satellite Images

Classifying Depositional Environments in Satellite Images Classifying Depositional Environments in Satellite Images Alex Miltenberger and Rayan Kanfar Department of Geophysics School of Earth, Energy, and Environmental Sciences Stanford University 1 Introduction

More information

SIFT: SCALE INVARIANT FEATURE TRANSFORM SURF: SPEEDED UP ROBUST FEATURES BASHAR ALSADIK EOS DEPT. TOPMAP M13 3D GEOINFORMATION FROM IMAGES 2014

SIFT: SCALE INVARIANT FEATURE TRANSFORM SURF: SPEEDED UP ROBUST FEATURES BASHAR ALSADIK EOS DEPT. TOPMAP M13 3D GEOINFORMATION FROM IMAGES 2014 SIFT: SCALE INVARIANT FEATURE TRANSFORM SURF: SPEEDED UP ROBUST FEATURES BASHAR ALSADIK EOS DEPT. TOPMAP M13 3D GEOINFORMATION FROM IMAGES 2014 SIFT SIFT: Scale Invariant Feature Transform; transform image

More information

BSB663 Image Processing Pinar Duygulu. Slides are adapted from Selim Aksoy

BSB663 Image Processing Pinar Duygulu. Slides are adapted from Selim Aksoy BSB663 Image Processing Pinar Duygulu Slides are adapted from Selim Aksoy Image matching Image matching is a fundamental aspect of many problems in computer vision. Object or scene recognition Solving

More information

Edge and corner detection

Edge and corner detection Edge and corner detection Prof. Stricker Doz. G. Bleser Computer Vision: Object and People Tracking Goals Where is the information in an image? How is an object characterized? How can I find measurements

More information

An Improved Iris Segmentation Technique Using Circular Hough Transform

An Improved Iris Segmentation Technique Using Circular Hough Transform An Improved Iris Segmentation Technique Using Circular Hough Transform Kennedy Okokpujie (&), Etinosa Noma-Osaghae, Samuel John, and Akachukwu Ajulibe Department of Electrical and Information Engineering,

More information

Short Survey on Static Hand Gesture Recognition

Short Survey on Static Hand Gesture Recognition Short Survey on Static Hand Gesture Recognition Huu-Hung Huynh University of Science and Technology The University of Danang, Vietnam Duc-Hoang Vo University of Science and Technology The University of

More information

Signature Recognition by Pixel Variance Analysis Using Multiple Morphological Dilations

Signature Recognition by Pixel Variance Analysis Using Multiple Morphological Dilations Signature Recognition by Pixel Variance Analysis Using Multiple Morphological Dilations H B Kekre 1, Department of Computer Engineering, V A Bharadi 2, Department of Electronics and Telecommunication**

More information

Feature scaling in support vector data description

Feature scaling in support vector data description Feature scaling in support vector data description P. Juszczak, D.M.J. Tax, R.P.W. Duin Pattern Recognition Group, Department of Applied Physics, Faculty of Applied Sciences, Delft University of Technology,

More information

The Impact of Diffuse Illumination on Iris Recognition

The Impact of Diffuse Illumination on Iris Recognition The Impact of Diffuse Illumination on Iris Recognition Amanda Sgroi, Kevin W. Bowyer, and Patrick J. Flynn University of Notre Dame asgroi kwb flynn @nd.edu Abstract Iris illumination typically causes

More information

Segmentation and Grouping

Segmentation and Grouping Segmentation and Grouping How and what do we see? Fundamental Problems ' Focus of attention, or grouping ' What subsets of pixels do we consider as possible objects? ' All connected subsets? ' Representation

More information

Augmented Reality VU. Computer Vision 3D Registration (2) Prof. Vincent Lepetit

Augmented Reality VU. Computer Vision 3D Registration (2) Prof. Vincent Lepetit Augmented Reality VU Computer Vision 3D Registration (2) Prof. Vincent Lepetit Feature Point-Based 3D Tracking Feature Points for 3D Tracking Much less ambiguous than edges; Point-to-point reprojection

More information

Local Feature Detectors

Local Feature Detectors Local Feature Detectors Selim Aksoy Department of Computer Engineering Bilkent University saksoy@cs.bilkent.edu.tr Slides adapted from Cordelia Schmid and David Lowe, CVPR 2003 Tutorial, Matthew Brown,

More information

Meticulously Detailed Eye Model and Its Application to Analysis of Facial Image

Meticulously Detailed Eye Model and Its Application to Analysis of Facial Image Meticulously Detailed Eye Model and Its Application to Analysis of Facial Image Tsuyoshi Moriyama Keio University moriyama@ozawa.ics.keio.ac.jp Jing Xiao Carnegie Mellon University jxiao@cs.cmu.edu Takeo

More information

Development of an Automated Fingerprint Verification System

Development of an Automated Fingerprint Verification System Development of an Automated Development of an Automated Fingerprint Verification System Fingerprint Verification System Martin Saveski 18 May 2010 Introduction Biometrics the use of distinctive anatomical

More information

Statistical image models

Statistical image models Chapter 4 Statistical image models 4. Introduction 4.. Visual worlds Figure 4. shows images that belong to different visual worlds. The first world (fig. 4..a) is the world of white noise. It is the world

More information

BCC Optical Stabilizer Filter

BCC Optical Stabilizer Filter BCC Optical Stabilizer Filter The Optical Stabilizer filter allows you to stabilize shaky video footage. The Optical Stabilizer uses optical flow technology to analyze a specified region and then adjusts

More information

CS 231A Computer Vision (Fall 2012) Problem Set 3

CS 231A Computer Vision (Fall 2012) Problem Set 3 CS 231A Computer Vision (Fall 2012) Problem Set 3 Due: Nov. 13 th, 2012 (2:15pm) 1 Probabilistic Recursion for Tracking (20 points) In this problem you will derive a method for tracking a point of interest

More information

Probabilistic Facial Feature Extraction Using Joint Distribution of Location and Texture Information

Probabilistic Facial Feature Extraction Using Joint Distribution of Location and Texture Information Probabilistic Facial Feature Extraction Using Joint Distribution of Location and Texture Information Mustafa Berkay Yilmaz, Hakan Erdogan, Mustafa Unel Sabanci University, Faculty of Engineering and Natural

More information

Filtering Images. Contents

Filtering Images. Contents Image Processing and Data Visualization with MATLAB Filtering Images Hansrudi Noser June 8-9, 010 UZH, Multimedia and Robotics Summer School Noise Smoothing Filters Sigmoid Filters Gradient Filters Contents

More information

Exploring Curve Fitting for Fingers in Egocentric Images

Exploring Curve Fitting for Fingers in Egocentric Images Exploring Curve Fitting for Fingers in Egocentric Images Akanksha Saran Robotics Institute, Carnegie Mellon University 16-811: Math Fundamentals for Robotics Final Project Report Email: asaran@andrew.cmu.edu

More information

Problem definition Image acquisition Image segmentation Connected component analysis. Machine vision systems - 1

Problem definition Image acquisition Image segmentation Connected component analysis. Machine vision systems - 1 Machine vision systems Problem definition Image acquisition Image segmentation Connected component analysis Machine vision systems - 1 Problem definition Design a vision system to see a flat world Page

More information

Fast and Efficient Automated Iris Segmentation by Region Growing

Fast and Efficient Automated Iris Segmentation by Region Growing Available Online at www.ijcsmc.com International Journal of Computer Science and Mobile Computing A Monthly Journal of Computer Science and Information Technology IJCSMC, Vol. 2, Issue. 6, June 2013, pg.325

More information

CS 223B Computer Vision Problem Set 3

CS 223B Computer Vision Problem Set 3 CS 223B Computer Vision Problem Set 3 Due: Feb. 22 nd, 2011 1 Probabilistic Recursion for Tracking In this problem you will derive a method for tracking a point of interest through a sequence of images.

More information