Face Recognition with Local Binary Patterns


Face Recognition with Local Binary Patterns
Bachelor Assignment

B.K. Julsing
University of Twente
Department of Electrical Engineering, Mathematics & Computer Science (EEMCS)
Signals & Systems Group (SAS)
P.O. Box ..., ... AE Enschede, The Netherlands

Report Number: ...
Report Date: May 11, ...
Period of Work: .../11/... to .../05/2007
Supervisor: Dr. Ir. L. Spreeuwers (SAS)

Abstract

This report describes a bachelor assignment in which research has been done into face recognition with Local Binary Patterns (LBP). LBP is a powerful method for describing the texture and shape of a digital image, which makes it suitable for feature extraction in face recognition systems. A face image is first divided into small regions from which LBP histograms are extracted and then concatenated into a single feature vector. This feature vector forms an efficient representation of the face and can be used to measure similarities between images. The report describes the principles of the method and how exactly it can be implemented to perform face recognition. Experiments have been carried out on the different sets of the FERET database. High recognition rates are obtained, especially in comparison with other face recognition methods. A few extensions are also investigated to further improve the performance of the method. It turns out that LBP can easily be combined with Principal Component Analysis to reduce the length of the feature vector, while the recognition performance only slightly decreases.


Contents

Abstract
1 Introduction
  1.1 Face recognition
  1.2 Local Binary Patterns
  1.3 Research objectives
  1.4 Report outline
2 Feature extraction methods for face recognition
  2.1 Principal Component Analysis
  2.2 Linear Discriminant Analysis
  2.3 Elastic Bunch Graph Matching
  2.4 Comparison with the LBP method
3 Principles of Local Binary Patterns
  3.1 Local Binary Patterns
  3.2 Uniform Local Binary Patterns
4 Using Local Binary Patterns for face recognition
  4.1 Feature vectors
  4.2 Comparing feature vectors
5 Experiments and results
  5.1 Experiment design
  5.2 Chosen parameters
  5.3 Results
  5.4 Conclusions
6 Extensions and improvements
  6.1 Expanding neighborhoods
  6.2 Other region weights
  6.3 LBP combined with PCA
7 Conclusions and Recommendations
  7.1 Conclusions
  7.2 Recommendations
A Bilinear interpolation
B Implementation of Local Binary Patterns in MATLAB
  B.1 Producing a Local Binary Pattern
  B.2 Checking for uniformity
  B.3 Dividing the images into regions
  B.4 Constructing the feature vector
  B.5 Generating the distance matrix
  B.6 Generating the rank list and rank curve
  B.7 Expanding neighborhoods
  B.8 Finding region weights
  B.9 LBP combined with PCA
C Rank curves
  C.1 LBP rank curves from literature
  C.2 Rank curves other feature extraction methods

1 Introduction

1.1 Face recognition

Face recognition by a computer system is a very popular and useful application of (digital) image analysis. It is a method to identify or verify the identity of a person. In an identification process based on face recognition, the face image of an unknown identity is compared with face images of known individuals from a large database. In the ideal case the system returns the determined (and of course correct) identity. In a verification process the face image of a person is compared with one face image from a database, belonging to the claimed identity. The system returns a value which is a measure for the similarity between the two images. This value needs to reach a certain, previously established, threshold to acknowledge or reject the claimed identity. An example of an application of face recognition is person identification or verification at customs at an airport. A face recognition system could replace current identification methods like PIN codes, passwords and ID cards, but also extremely reliable methods of biometric person identification, like fingerprint analysis and retinal or iris scans. The disadvantage of such methods is that they rely on the cooperation of the participants, whereas a person identification system based on the analysis of (frontal) images of the face can be effective without the participant's cooperation or knowledge. Despite the fact that numerous commercial face recognition systems are already in use, this way of identification continues to be an interesting topic for researchers. This is because the current systems perform well under relatively simple and controlled conditions, but perform much worse when variations in different factors are present, such as pose, viewpoint, facial expressions, time (when the pictures are taken)

and illumination (lighting changes). The goal in this research area is to minimize the influence of these factors and create a robust face recognition system.

Figure 1.1: Principle of an identification process with face recognition.

The process of person identification by using face recognition can be split into three main phases (figure 1.1): registration and normalization, feature extraction and classification. In the registration and normalization phase, the image is transformed (scaled and rotated) until it has the same position as the images from the database; this means, for example, that the eyes are at the same positions. In this phase, problem factors like illumination differences are also reduced. In the feature extraction phase, the most useful and unique features (properties) of the face image are extracted. With these obtained features, the face image is compared with the images from the database. This is done in the classification phase. The output of the classification phase is the identity of the face image from the database with the highest matching score, thus with the smallest differences compared to the input face image. A threshold value can also be used to determine whether the differences are small enough; after all, a certain face might not be in the database at all.

1.2 Local Binary Patterns

There exist several methods for extracting the most useful features from (preprocessed) face images to perform face recognition. One of these feature extraction methods is the Local Binary Pattern (LBP) method. This relatively new approach was introduced in 1996 by Ojala et al. [5]. With LBP it is possible to describe the texture and shape of a digital image. This is done by dividing an image into several small regions from which the features are extracted (figure 1.2). These features consist of binary patterns that describe the surroundings of pixels in the regions. The features obtained from the regions are concatenated into a single feature histogram, which forms a representation of the image. Images can then be compared by measuring the similarity (distance) between their histograms. According to several studies [2, 3, 4], face recognition using the LBP method provides very good

results, both in terms of speed and discrimination performance. Because of the way the texture and shape of images are described, the method seems to be quite robust against face images with different facial expressions, different lighting conditions, image rotation and aging of persons.

Figure 1.2: A preprocessed image divided into 49 regions.

The LBP method was originally adopted for face recognition and face verification. Thanks to its good performance and its relatively simple and efficient calculation procedure (chapter 3), the LBP method has already been used in several applications all over the world [3], including visual inspection, image retrieval, remote sensing, biomedical image analysis (images of diabetic feet), face image analysis, motion analysis, environment modeling, and outdoor scene analysis. The current form of the LBP method is quite different from its basic version; a number of extensions have been developed in the past few years. Of course, there is still a lot of research going on to find further extensions and improve the robustness of the method.

1.3 Research objectives

Face recognition with the use of LBP seems to provide very good results according to several articles on the subject. The primary objective of this assignment is to implement the LBP method (in MATLAB) to see if these high face recognition rates (see appendix C.1) can be reproduced, and to compare the performance of the LBP method with the performance of other feature extraction methods (see appendix C.2). The secondary objective is to evaluate a few extensions which could possibly improve the performance of the method even further.

1.4 Report outline

Chapter 2 starts with a global description of the working of three other, commonly used, feature extraction methods. This is done to compare the most important properties of these methods with those of the LBP method. In chapter 3 the principles of the LBP method are described: how exactly it can be used to describe the texture and shape of an image. Chapter 4 explains how the method can be applied in face recognition systems by measuring similarities between images. Based on these two chapters, the method is implemented in MATLAB (appendix B). A description of the exact experimental setup, an explanation of the chosen parameter values and finally the obtained results are given in chapter 5. Chapter 6 describes how the method could be improved; the ideas are also implemented, tested and discussed. Finally, in chapter 7 the obtained results, in the form of rank curves, are compared with the results from earlier studies. The performance is also compared with other feature extraction methods. This chapter concludes with recommendations and possible further improvements.

2 Feature extraction methods for face recognition

Besides the LBP method, many other feature extraction methods have been developed that can be used for face recognition. Three of the most important and most used methods are, according to [1], Principal Component Analysis, Linear Discriminant Analysis and Elastic Bunch Graph Matching. These methods will now briefly be described. At the end of this chapter the most important similarities and differences with the LBP method will be listed.

2.1 Principal Component Analysis

Principal Component Analysis (PCA) [9] is a statistical method for finding patterns in data with high dimensions. PCA is an orthogonal linear transformation that transforms the original data to a new coordinate system with fewer dimensions. This is done by calculating the eigenvalues and eigenvectors of the covariance matrix of the original data set. The covariance matrix contains values that indicate how much the dimensions vary from the mean with respect to each other. The eigenvectors with the highest eigenvalues contain the most information and are the principal components of the data set. By choosing only a few eigenvectors (those with the highest eigenvalues) and multiplying these vectors with the original data set, a new data set is obtained with fewer dimensions, but still with the most important characteristics of the original data. In face recognition systems PCA can be used to reduce the dimensions of the original feature vectors of the face images [10]. These dimensions (for example the number of pixels) are reduced by calculating the eigenvectors of the covariance matrix. By choosing only the most important eigenvectors, irrelevant information (for example pixels that represent background) is filtered out because of its low variance.
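To make the transformation described above concrete, the following MATLAB sketch (not part of the report's appendix B) shows how such a PCA projection could be computed. The matrix X, the number of components k and the new image x_new are illustrative names, and the training images are assumed to be already vectorized into the columns of X.

% X: d-by-n matrix with one vectorized training image per column
mu = mean(X, 2);                               % mean image
Xc = X - repmat(mu, 1, size(X, 2));            % subtract the mean from every column
C  = (Xc * Xc') / (size(X, 2) - 1);            % d-by-d covariance matrix
[V, D] = eig(C);                               % eigenvectors (columns) and eigenvalues
[evals, order] = sort(diag(D), 'descend');     % highest eigenvalues first
k = 20;                                        % number of principal components to keep
W = V(:, order(1:k));                          % d-by-k transformation matrix
Y = W' * Xc;                                   % training images in the reduced space
y_new = W' * (x_new - mu);                     % a new image projected the same way
% (For d much larger than n one would compute eig(Xc' * Xc) and map the
%  eigenvectors back, but that refinement is omitted in this sketch.)

Differences and similarities between images are then measured between the columns of Y (and y_new) instead of between the original pixel vectors.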

Along the new axes the differences and similarities between a new image and the original images from the database are measured. So the main idea of PCA, used for face recognition, is that the original data (the original feature vectors of the images) is transformed to a different space, with fewer dimensions, in which it is easier to measure the relevant differences and similarities.

2.2 Linear Discriminant Analysis

Linear Discriminant Analysis (LDA) [11] is also a statistical method. It can be used to classify (new) objects (for example people) into one or more groups based on a set of features that describe the objects (gender, age, weight, etc.). In general, the purpose is to assign a new object to one of the predetermined groups based on observations made on the object. To improve the classification process, a linear transformation of the original feature set is performed to obtain a new and reduced set of features that describes a group in the best way. The main idea is that, after the linear transformation, the differences within one group are minimized and the differences between the groups are maximized. In the ideal case, a projection (a discriminant) is found that completely separates the different groups. A new object can then be assigned to the group with the highest conditional probability. In face recognition systems LDA can be used to classify a new face image to the group of face images of the same person, based on the most appropriate features. The original features (for example pixels) are, after multiplication with the calculated linear discriminant, transformed to a new space with low differences within a group of images of the same person and high differences between the image groups of different persons.

2.3 Elastic Bunch Graph Matching

With the Elastic Bunch Graph Matching (EBGM) method for face recognition, face images are represented as graphs. These image graphs consist of nodes with so-called jets. A jet describes a small patch of gray values in an image around a given pixel. This is based on a Gabor wavelet transform [12]. The EBGM process starts with selecting jets to serve as examples of facial features. A bunch graph is created with nodes that correspond to landmark points in the face image (e.g. pupils, mouth corners and the tip of the nose). Each node contains a bunch of model jets. Then the landmark points are located for every image. This is done by extracting a novel jet from the novel image. The novel jet's displacement from the actual location is estimated by comparing it to the most similar model jet from the corresponding bunch. Finally a face

graph is created for each image by extracting a jet for each landmark. This graph contains the locations of the landmarks and the values of the jets. The similarity between two face images is then computed by comparing their graphs. It turns out that the wavelets in a graph are quite robust to lighting changes and small shifts of the image. The graphs can easily be translated, scaled or rotated during the matching process. The learning aspect of the EBGM method can be found in the process of selecting and determining the jets.

2.4 Comparison with the LBP method

By comparing PCA, LDA and EBGM with the LBP method, a few differences can be noticed. An important one is that feature extraction with LBP is a straightforward (real-time) process. This means that, in contrast to the three other methods, no training is necessary: for each image the feature vector can be constructed directly. In PCA, for example, a transformation matrix first has to be determined using a training set. The absence of the training aspect in LBP has a positive influence on the processing speed and on the integration of the method into a new environment. Another important difference between LBP and the other methods (especially PCA) is that LBP is less sensitive to illumination variations in the images, because LBP eliminates offsets in pixel values. It is also less sensitive to rotation and scaling variations of the images, thanks to the way in which the patterns are constructed.


3 Principles of Local Binary Patterns

With the Local Binary Pattern (LBP) operator it is possible to describe the texture and shape of a digital (gray scale) image. One LBP is a binary code for an image pixel which describes the local neighborhood of that pixel. To produce the LBP of a pixel, the gray value of that pixel is compared to the gray values of the pixels in its neighborhood. In this chapter some aspects of the texture and shape description with LBP will be discussed, and a few equations will be derived for this description.

3.1 Local Binary Patterns

The original LBP operator was introduced by Ojala et al. [5]. This operator works with the eight neighbors of a pixel, using the value of this center pixel as a threshold. If a neighbor pixel has a higher gray value than the center pixel (or the same gray value), then a one is assigned to that pixel, else it gets a zero. The LBP code for the center pixel is then produced by concatenating the eight ones and zeros into a binary code (figure 3.1).

Figure 3.1: The original LBP operator.

Later the LBP operator was extended to use neighborhoods of different sizes. In this case a circle is made with radius R around the center pixel. P sampling points on the edge of this circle are taken and compared with the

value of the center pixel. To get the values of all sampling points in the neighborhood for any radius and any number of sampling points, (bilinear) interpolation is necessary. (The working of bilinear interpolation is explained in appendix A.) For these neighborhoods the notation (P, R) is used. Figure 3.2 illustrates three neighbor sets for different values of P and R.

Figure 3.2: Circular neighbor sets for three different values of P and R.

If the coordinates of the center pixel are (x_c, y_c), then the coordinates of its P neighbors (x_p, y_p) on the edge of the circle with radius R can be calculated with the sine and cosine:

x_p = x_c + R \cos(2\pi p / P),   (3.1a)
y_p = y_c + R \sin(2\pi p / P).   (3.1b)

If the gray value of the center pixel is g_c and the gray values of its neighbors are g_p, with p = 0, \ldots, P-1, then the texture T in the local neighborhood of pixel (x_c, y_c) can be defined as:

T = t(g_c, g_0, \ldots, g_{P-1}).   (3.2)

Once these values of the points are obtained, it is also possible to describe the texture in another way. This is done by subtracting the value of the center pixel from the values of the points on the circle. In this way the local texture is represented as a joint distribution of the value of the center pixel and the differences:

T = t(g_c, g_0 - g_c, \ldots, g_{P-1} - g_c).   (3.3)

Since t(g_c) describes the overall luminance of an image, which is unrelated to the local image texture, it does not provide useful information for texture analysis. Therefore, much of the information about the textural characteristics in the original joint distribution (Eq. 3.3) is preserved in the joint difference distribution (Ojala et al. 2001):

T \approx t(g_0 - g_c, \ldots, g_{P-1} - g_c).   (3.4)

Although invariant against gray scale shifts, the differences are affected by scaling. To achieve invariance with respect to any monotonic transformation of the gray scale, only the signs of the differences are considered. This means that if a point on the circle has a higher gray value than the center pixel (or the same value), a one is assigned to that point, else it gets a zero:

T \approx t(s(g_0 - g_c), \ldots, s(g_{P-1} - g_c)),   (3.5a)

where

s(x) = 1 if x \geq 0, and s(x) = 0 if x < 0.   (3.5b)

In the last step to produce the LBP for pixel (x_c, y_c), a binomial weight 2^p is assigned to each sign s(g_p - g_c). These binomial weights are summed:

LBP_{P,R}(x_c, y_c) = \sum_{p=0}^{P-1} s(g_p - g_c) \, 2^p.   (3.6)

The Local Binary Pattern characterizes the local image texture around (x_c, y_c). The original LBP operator in figure 3.1 is very similar to this operator with P = 8 and R = 1, thus LBP_{8,1}. The main difference between these operators is that in LBP_{8,1} the pixels first need to be interpolated to get the values of the points on the circle.

3.2 Uniform Local Binary Patterns

A Local Binary Pattern is called uniform if it contains at most two bitwise transitions from 0 to 1 or vice versa. In effect this means that a uniform pattern has either no transitions or two transitions. Exactly one transition is not possible, since the binary string has to be considered circular. The two patterns with zero transitions, with for example eight bits, are 00000000 and 11111111. Examples of uniform patterns with eight bits and two transitions are 00001111 and 11100001. For patterns with two transitions, P(P-1) combinations are possible. For uniform patterns with P sampling points and radius R the notation LBP^{u2}_{P,R} is used.

Figure 3.3: Different texture primitives detected by the LBP^{u2}_{8,2} operator.

Using only uniform Local Binary Patterns has two important benefits. The first one is that it saves memory. With the standard LBP operator there are 2^P possible combinations, whereas with LBP^{u2}_{P,R} only P(P-1) + 2 patterns are possible. The number of possible patterns for a neighborhood of 16 (interpolated) pixels is 65536 for standard LBP and 242 for LBP^{u2}. The second benefit is that LBP^{u2} detects only the important local textures, like spots, line ends, edges and corners. See figure 3.3 for examples of these texture primitives.
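The following MATLAB sketch illustrates Eq. 3.6 for a single pixel, together with the bilinear interpolation of appendix A and the uniformity test just described. It is a compact illustration rather than the implementation of appendices B.1 and B.2; the function names are illustrative, the image I is assumed to be a gray-scale matrix indexed as I(y, x), and (xc, yc) is assumed to lie at least R pixels from the border.

function code = lbp_code(I, xc, yc, P, R)
% LBP_{P,R}(xc, yc) according to Eq. 3.6.
gc = I(yc, xc);
code = 0;
for p = 0:P-1
    x = xc + R * cos(2*pi*p/P);          % Eq. 3.1a
    y = yc + R * sin(2*pi*p/P);          % Eq. 3.1b
    gp = bilinear(I, x, y);              % gray value of the sampling point
    if gp >= gc                          % s(gp - gc), Eq. 3.5b
        code = code + 2^p;               % add the binomial weight 2^p
    end
end

function g = bilinear(I, x, y)
% Bilinear interpolation of the gray value at a non-integer position (appendix A).
x0 = floor(x); y0 = floor(y);
x1 = min(x0 + 1, size(I, 2)); y1 = min(y0 + 1, size(I, 1));
dx = x - x0; dy = y - y0;
g = (1-dx)*(1-dy)*I(y0, x0) + dx*(1-dy)*I(y0, x1) + ...
    (1-dx)*dy*I(y1, x0)     + dx*dy*I(y1, x1);

function u = is_uniform(code, P)
% True if the circular P-bit pattern contains at most two 0/1 transitions.
bits = bitget(code, 1:P);                % bit p+1 corresponds to sampling point p
u = sum(bits ~= circshift(bits, [0 1])) <= 2;

The three functions are shown together for compactness; in practice each would live in its own file (as in appendices B.1 and B.2), so that for example lbp_code(I, 60, 70, 8, 2) followed by the uniformity test classifies the pattern of pixel (60, 70) under LBP^{u2}_{8,2}.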

4 Using Local Binary Patterns for face recognition

This chapter explains how the LBP method can be applied to (face) images to extract features which can be used to obtain a measure for the similarity between these images. The main idea is that for every pixel of an image the LBP code is calculated. The number of occurrences of each possible pattern in the image is counted. The histogram of these patterns, also called labels, forms a feature vector, and is thus a representation of the texture of the image. These histograms can then be used to measure the similarity between images, by calculating the distance between the histograms.

Figure 4.1: Face image split into an image with only the pixels with uniform patterns and an image with only the pixels with non-uniform patterns, using LBP^{u2}_{16,2}.

Figure 4.1 shows an image which is split into an image with only the pixels with uniform patterns and an image with only the pixels with non-uniform patterns. These images are created using the LBP^{u2}_{16,2} operator. It turns out that the image with only the pixels with uniform patterns still contains a considerable amount of pixels, namely 79 % of the original image. So, 79 % of the pixels of the image have uniform patterns (with LBP^{u2}_{8,2} this is even 86 %). Another striking thing is the fact that, by taking only the pixels with uniform patterns, the background is also preserved. This is because the background pixels all

have the same color (the same gray value) and thus their patterns contain zero transitions. It also seems that many of the pixels around the mouth, the nose and the eyes (especially the eyebrows) have uniform patterns.

4.1 Feature vectors

Once the Local Binary Pattern for every pixel is calculated, the feature vector of the image can be constructed. For an efficient representation of the face, the image is first divided into k^2 regions. In figure 4.2 a face image is divided into 7^2 = 49 regions. For every region a histogram with all possible labels is constructed. This means that every bin in a histogram represents a pattern and contains the number of times it appears in the region. The feature vector is then constructed by concatenating the regional histograms into one big histogram.

Figure 4.2: Face image divided into 49 regions, with a histogram for every region.

All non-uniform patterns (more than two transitions) are labeled with one single label. This means that every regional histogram consists of P(P-1) + 3 bins: P(P-1) bins for the patterns with two transitions, two bins for the patterns with zero transitions and one bin for all non-uniform patterns. The total feature vector for an image contains k^2 (P(P-1) + 3) bins. So, for an image divided into 49 regions and eight sampling points on the circles, the feature vector has a size of 2891 bins. The LBP code cannot be calculated for the pixels within a distance R from the edges of the image. This means that, in constructing the feature vector, a small area on the borders of the image is not used. For an N x M image the feature vector is constructed by calculating the LBP code for every pixel (x_c, y_c) with x_c \in \{R+1, \ldots, N-R\} and y_c \in \{R+1, \ldots, M-R\}.
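Before the formal definition of the regional histograms (Eq. 4.1) is given, the construction can be sketched in MATLAB as follows. This is an illustrative sketch, not the implementation of appendix B.4: it assumes a helper lbp_label(I, x, y, P, R) that returns the histogram bin (one of the P(P-1)+3 labels) of the pattern at (x, y), and it assigns pixels to equal-sized regions.

function fv = lbp_feature_vector(I, P, R, k)
% Concatenated regional LBP histograms of a gray-scale image I.
[M, N] = size(I);                      % M rows (y direction), N columns (x direction)
nlabels = P*(P-1) + 3;                 % bins per region, including one non-uniform bin
H = zeros(nlabels, k, k);              % one histogram per region
for y = R+1 : M-R                      % the R-pixel border is skipped (no LBP code there)
    for x = R+1 : N-R
        kx = min(ceil(x / (N/k)), k);  % horizontal region index
        ky = min(ceil(y / (M/k)), k);  % vertical region index
        i = lbp_label(I, x, y, P, R);  % label (bin number) of LBP_{P,R}(x, y)
        H(i, kx, ky) = H(i, kx, ky) + 1;
    end
end
fv = H(:);                             % k^2 * (P(P-1)+3) bins, regions concatenated

For a 130 x 150 image with k = 7, P = 8 and R = 2 this gives the 49 x 59 = 2891 bins mentioned above.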

If an image is divided into k x k regions, then the histogram for region (k_x, k_y), with k_x \in \{1, \ldots, k\} and k_y \in \{1, \ldots, k\}, can be defined as:

H_i(k_x, k_y) = \sum_{x,y} I\{LBP_{P,R}(x, y) = L(i)\},   i = 1, \ldots, P(P-1)+3,   (4.1a)

where the sum runs over

x \in \{R+1, \ldots, N/k\} if k_x = 1;   x \in \{(k_x-1)(N/k)+1, \ldots, N-R\} if k_x = k;   x \in \{(k_x-1)(N/k)+1, \ldots, k_x(N/k)\} otherwise,
y \in \{R+1, \ldots, M/k\} if k_y = 1;   y \in \{(k_y-1)(M/k)+1, \ldots, M-R\} if k_y = k;   y \in \{(k_y-1)(M/k)+1, \ldots, k_y(M/k)\} otherwise,   (4.1b)

in which L(i) is the label of bin i and

I(A) = 1 if A is true, 0 if A is false.   (4.1c)

The feature vector is effectively a description of the face on three different levels of locality: the labels contain information about the patterns on a pixel level; the regions, in which the different labels are summed, contain information on a small regional level; and the concatenated histograms give a global description of the face.

4.2 Comparing feature vectors

To compare two face images, a sample (S) and a model (M), the difference between their feature vectors has to be measured. This can be done with several possible dissimilarity measures for histograms:

- Histogram intersection:

D(S, M) = \sum_{j=1}^{k^2} \sum_{i=1}^{P(P-1)+3} \min(S_{i,j}, M_{i,j})   (4.2)

- Log-likelihood statistic:

L(S, M) = -\sum_{j=1}^{k^2} \sum_{i=1}^{P(P-1)+3} S_{i,j} \log M_{i,j}   (4.3)

- Chi square statistic (\chi^2):

\chi^2(S, M) = \sum_{j=1}^{k^2} \sum_{i=1}^{P(P-1)+3} \frac{(S_{i,j} - M_{i,j})^2}{S_{i,j} + M_{i,j}}   (4.4)

In these equations S_{i,j} and M_{i,j} are the sizes of bin i from region j (the number of appearances of pattern L(i) in region j). Because some regions of the face images (for example the regions with the eyes) may contain more useful information than others, a weight can be set for each region based on the importance of the information it contains. According to [2] the \chi^2 statistic performs slightly better than histogram intersection and the log-likelihood statistic. By applying a weight w_j to region j, the equation for the weighted \chi^2 becomes:

\chi^2_w(S, M) = \sum_{j=1}^{k^2} w_j \sum_{i=1}^{P(P-1)+3} \frac{(S_{i,j} - M_{i,j})^2}{S_{i,j} + M_{i,j}}   (4.5)

This weighted \chi^2 for two (face) images, which is calculated from their histograms, is a measure for the similarity between these images. The lower the value of the \chi^2 (which is also called the distance between the two images), the greater the similarity.
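A direct MATLAB sketch of Eq. 4.5 could look as follows, assuming the feature vectors of the sample and the model are stored as (P(P-1)+3)-by-k^2 matrices S and M of bin counts (one column per region) and w is a 1-by-k^2 vector of region weights; the names are illustrative.

function d = weighted_chi_square(S, M, w)
% Weighted chi-square distance between two regional-histogram feature vectors (Eq. 4.5).
num = (S - M).^2;
den = S + M;
den(den == 0) = 1;                    % bins that are empty in both histograms contribute 0
d = sum(w .* sum(num ./ den, 1));     % sum over bins per region, then weighted sum over regions

The lower d, the more similar the two images; the non-weighted \chi^2 of Eq. 4.4 corresponds to w = ones(1, k^2).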

5 Experiments and results

The LBP method as described in chapters 3 and 4 is implemented in MATLAB (see appendix B). This implementation is used to test the performance of the LBP method on different kinds of face images. Several parameters, like the LBP operator (P and R), non-weighted or weighted regions and the division into regions, are varied to see the influence of these parameters on the performance. The main objective in this experiment is to reproduce the same rank curves as presented in [2], see appendix C.1.

5.1 Experiment design

To obtain results which can easily be compared with the results of other studies, the same experiment design is used as described in [2]. This means that the procedure of the CSU Face Recognition Evaluation System [13] and the preprocessed images from the FERET database are used. The CSU system works as follows (see figure 5.1):

1. Face image files and the eye coordinates are fed into the system. In the preprocessing part the images are registered using the eye coordinates and cropped with an elliptical mask to exclude non-face areas.
2. If needed, the algorithm is trained using a subset of the images.
3. The preprocessed images and possibly the training data are fed into the experimental algorithm which performs the feature extraction. It outputs a distance matrix containing a measure for the similarity (the distance) between each pair of images.
4. Using the distance matrix and the given gallery and probe image lists, the system calculates the rank curves. A rank curve is a cumulative matching score of the probe images against the gallery images.

Figure 5.1: The parts of the CSU face recognition system.

Because in this experiment the preprocessed images from the FERET database are used and training on image files is not (yet) needed, the first two parts of the CSU system are not necessary here. So the images are directly fed into the experimental algorithm, where the feature vector for every image is produced with the LBP method. The distance matrix is then calculated using the \chi^2 statistic. For the gallery images and the probe images the following sets from the FERET database are used:

- fa set: used as gallery set, contains frontal images of 1196 people.
- fb set: 1195 face images with alternative facial expressions.
- fc set: 194 face images from photos that were taken under different lighting conditions.
- dup I set: 722 face images from photos that were taken later in time.
- dup II set: a subset of dup I, containing those images that were taken at least a year after the corresponding gallery image.

Figure 5.2: Examples of preprocessed images from the different FERET sets.
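Steps 3 and 4 of the evaluation could be sketched in MATLAB as follows, reusing the weighted_chi_square sketch from section 4.2. The cell arrays probe_fv and gallery_fv, the weight vector w and the index vector correct_idx (the gallery index of the person shown in each probe image) are illustrative assumptions, not the interface of the CSU system or of appendices B.5 and B.6.

np = numel(probe_fv);  ng = numel(gallery_fv);
D = zeros(np, ng);                                       % distance matrix (step 3)
for i = 1:np
    for j = 1:ng
        D(i, j) = weighted_chi_square(probe_fv{i}, gallery_fv{j}, w);
    end
end
ranks = zeros(np, 1);                                    % rank of the correct match per probe (step 4)
for i = 1:np
    [dists, order] = sort(D(i, :), 'ascend');            % closest gallery images first
    ranks(i) = find(order == correct_idx(i), 1);
end
rank_curve = cumsum(accumarray(ranks, 1, [ng 1])) / np;  % cumulative matching score
rank1_score = rank_curve(1);                             % rank-1 recognition rate

Here rank_curve(r) is the fraction of probe images whose correct gallery image is among the r closest matches.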

Of course the face images in the probe sets are from the same persons as the images in the gallery set. All images are gray scale and have a resolution of 130 x 150 pixels. Figure 5.2 shows examples of preprocessed images from the different FERET sets.

5.2 Chosen parameters

In choosing the parameter values, the recognition performance and the length of the feature vector (memory and speed) are the most important factors. For the LBP method, the parameters that have to be chosen are:

Uniform patterns. As already explained in section 3.2, by using only uniform patterns the length of a regional histogram is reduced from 2^P to P(P-1) + 3 labels. With 16 sampling points, this reduces the length by 99.6 %. A test with LBP^{u2}_{16,2} showed that on average 80.3 % of all pixels in the images from the FERET set have uniform patterns. Making use of only uniform patterns is thus a good choice.

LBP operator. According to [2], LBP^{u2}_{16,2} and LBP^{u2}_{8,2} are the two best operators. The first one performs slightly better than the second one, but the length of the feature vector of LBP^{u2}_{16,2} (243 labels) is considerably larger than that of LBP^{u2}_{8,2} (59 labels). Therefore the authors of [2] selected LBP^{u2}_{8,2} as the operator. In this research, experiments with both operators will be carried out to see the differences in performance between them.

Region sizes. By dividing the images into m regions, the length of the feature vector becomes m times larger. In [2] the images are divided into 49 regions; however, it is not entirely clear how they divided the images. They took a fixed region size of 18 x 21 pixels, probably with region (4,4) exactly in the middle of the image. Therefore, in these experiments the images are divided in two ways: the one described by equation 4.1b and the one with a fixed region size of 18 x 21 pixels. (See appendix B.3 for the exact implementation.)

Region weights. In [2] a training set was classified using only one of the regions at a time. The recognition rates of corresponding regions on the left and right half of the face were averaged. Based on these recognition rates, a weight w_j was assigned to each region, with w_j \in \{0, 1, 2, 4\}. These weights are probably not optimal; however, in the first experiments of this research the same weights are used so that the results can be compared (figure 5.3). As shown in the figure, regions with mainly background pixels are left out by assigning them zero weight. The

eye regions are assigned the highest weight, because the eyes are probably important characteristics of a person.

Figure 5.3: Assigned region weights.

5.3 Results

Experiments are carried out with the non-weighted version of LBP^{u2}_{8,2} and with the weighted versions of LBP^{u2}_{8,2} and LBP^{u2}_{16,2} (with weights assigned as in figure 5.3). For the weighted version of LBP^{u2}_{8,2} the regions are divided in both ways: the one described by equation 4.1b (weighted 1) and the one with a fixed region size of 18 x 21 pixels (weighted 2). In figures 5.5, 5.7, 5.9 and 5.11 the obtained rank curves for the four different probe sets are shown. Table 5.1 contains the rank-1 scores (the percentage of probe images which are directly recognized correctly) for the different probe sets and the different LBP operators.

Table 5.1: Obtained rank-1 scores for LBP^{u2}_{8,2} (non-weighted, weighted 1 and weighted 2) and LBP^{u2}_{16,2} (weighted), on the fb, fc, dup I and dup II probe sets.

As can be seen in the table and in the figures with the obtained rank curves, the differences between the non-weighted and the weighted versions of the LBP operator are quite big. For every probe set the recognition rates are much higher when using weighted regions, especially for the fc set. However, the differences between the LBP^{u2}_{8,2} operator and the LBP^{u2}_{16,2} operator are not very big (while the size of the feature vector is more than four times larger!). Only for the fb set and the fc set are the recognition rates slightly better when using a neighborhood of 16 sampling points. It also follows that the differences between the recognition rates for the two ways of dividing the regions are very small.

Figure 5.4: Rank curves for the fb probe set from [2].
Figure 5.5: Rank curves for the fb probe set from this experiment.

Figure 5.6: Rank curves for the fc probe set from [2].
Figure 5.7: Rank curves for the fc probe set from this experiment.

Figure 5.8: Rank curves for the dup I probe set from [2].
Figure 5.9: Rank curves for the dup I probe set from this experiment.

Figure 5.10: Rank curves for the dup II probe set from [2].
Figure 5.11: Rank curves for the dup II probe set from this experiment.

5.4 Conclusions

By comparing the obtained rank curves (figures 5.5, 5.7, 5.9 and 5.11) with the rank curves from [2] (figures 5.4, 5.6, 5.8 and 5.10), it can be concluded that the objective of reproducing the same rank curves is for the most part achieved. The rank curves for the fb and dup I probe sets, obtained with the LBP^{u2}_{8,2} operator, are almost the same. Only in the rank curves for the fc and dup II probe sets can a difference be found. The rank-1 score for the fc set with the LBP^{u2}_{8,2} operator in [2] is about 0.79; the score obtained in the experiments differs somewhat from this, and a similar small difference is found between the rank-1 scores for the dup II set. The differences are probably caused by implementation differences, although it is not totally clear which ones these exactly are; possibly the exact division of the regions.


6 Extensions and improvements

To improve the performance and the robustness of the LBP method, three extensions have been explored. The first extension tries to improve the recognition performance by using larger neighborhoods. The second extension tries to improve the recognition performance by finding better weights for the different regions. The third extension is focused on reducing the length of the feature vector by applying Principal Component Analysis to the original feature vector.

6.1 Expanding neighborhoods

The idea behind the LBP method is to detect characteristic (local) textures in a face image, like spots, line ends, edges and corners (see section 3.2). With standard LBP^{u2}, the neighborhood of a pixel consists of sampling points on one circle. However, it is possible that a certain texture primitive is detected while it is not really part of an (important) face characteristic. This can be seen as noise. When the neighborhood is expanded to sampling points on two (or possibly more) circles, the chance that a noise texture primitive is detected can possibly be reduced. In effect, this is done by calculating two patterns for one pixel: one for the sampling points on the first circle and one for the sampling points on the second circle. If both patterns are uniform, the chance that the texture primitive belongs to an important characteristic is bigger. The total pattern is then formed by concatenating these two patterns; see figure 6.1. A disadvantage of this neighborhood expansion is that the length of the feature vector increases drastically. It now consists of (P_1^2 - P_1 + 2)(P_2^2 - P_2 + 2) + 1 bins (one bin for the non-uniform patterns), with P_1 the number of sampling points on the first circle and P_2 the number of sampling points on the second circle.

Figure 6.1: Neighborhood of sampling points on two circles.

There is also a possibility to expand the neighborhood while the increase of the feature vector length is restricted. In a region, for every pixel two patterns are produced (two circles with different radii), but they are not directly concatenated. So, in this case two histograms are constructed first. The total feature vector for one region is then formed by concatenating both histograms. In this way the length of the feature vector for one region becomes (P_1^2 - P_1 + 3) + (P_2^2 - P_2 + 3) bins. To see if both neighborhood expansions improve the recognition performance, they are implemented in MATLAB (see appendix B.7). For the first neighborhood expansion (NE_1) a neighborhood of 4 points on a circle with a radius of 1 pixel and 8 points on a circle with a radius of 2 pixels is used (the length of the feature vector is 813 bins). For the second one (NE_2) a neighborhood of 8 points on a circle with a radius of 2 pixels and 16 points on a circle with a radius of 3 pixels is used (the length of the feature vector is 302 bins). The images are again divided into 49 regions, with the same weight allocation as in the previous experiments (figure 5.3). The obtained rank-1 scores are shown in table 6.1.

Table 6.1: Rank-1 scores for the fb, fc, dup I and dup II probe sets, obtained with LBP^{u2}_{8,2}, LBP^{u2}_{16,2}, LBP^{u2}_{16,3} and the expanded neighborhoods NE_1 (LBP^{u2}_{4,1,8,2}) and NE_2 (LBP^{u2}_{8,2,16,3}).

It appears that the recognition rates obtained with the first neighborhood expansion are worse than the rates obtained with the standard weighted LBP^{u2}_{8,2} operator. Probably the circle with a radius of 1 pixel and 4 points is the bottleneck: apparently this neighborhood is too small, so some characteristic textures are eliminated. (The experiment showed that

fewer pixels in an image now have uniform patterns for both circles, namely 84 %.) The performance of the second neighborhood expansion has to be compared with the performance of the LBP^{u2}_{16,3} operator. The recognition rates obtained with this neighborhood expansion are quite high, but those of the LBP^{u2}_{16,3} operator are higher. Apparently the use of two circles does not provide better recognition performance in this way either. So, it can be concluded that expanding the neighborhood to sampling points on two circles is not a good extension: the length of the feature vector is drastically increased while the recognition performance does not improve.

6.2 Other region weights

In the previous experiments zero weights were assigned to the two regions in the nose area (just like in the experiments of other studies). This implies that the nose plays a totally insignificant role in face recognition, which is a little hard to imagine. To get a global indication of the contribution of each region, a script was written to determine the rank-1 rates for each region of the images separately (see appendix B.8). With this script the recognition rates for the different regions are determined using 50 images of the fb set and 50 images of the training set. 50 images is not much, but for some reason the computer that was used crashed when more images were used. This means that the obtained data is not very accurate; however, it gives a global indication of the contribution of each region.

Figure 6.2: New weights, based on rank-1 rates for the different regions.

For calculating the histograms the LBP^{u2}_{8,2} operator is used. For both sets this resulted in relatively high rank-1 rates for the nose area, as can be seen in figure 6.2. The region between the eyes also seems to be more important. Based on the obtained rates, the weight for this region and the weights in the nose area are raised. With these newly assigned weights another experiment is carried out. The results are shown in table 6.2. It appears that the recognition performance is improved only for the fb set and the dup II set. So, the area between the eyes and the nose only plays a significant role in face recognition for images from these two sets.

Table 6.2: Rank-1 scores for the fb, fc, dup I and dup II sets, obtained with LBP^{u2}_{8,2} using the old weights and the new weights.
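A rough sketch of the per-region procedure described above (determining a rank-1 rate for each region separately, cf. appendix B.8) is given below; as before, probe_fv, gallery_fv and correct_idx are illustrative assumptions, and the \chi^2 distance is computed over a single region at a time.

k2 = size(gallery_fv{1}, 2);                 % number of regions (k^2)
region_rate = zeros(1, k2);
for r = 1:k2
    hits = 0;
    for i = 1:numel(probe_fv)
        d = zeros(1, numel(gallery_fv));
        for j = 1:numel(gallery_fv)
            num = (probe_fv{i}(:, r) - gallery_fv{j}(:, r)).^2;   % chi-square, region r only
            den = probe_fv{i}(:, r) + gallery_fv{j}(:, r);
            den(den == 0) = 1;
            d(j) = sum(num ./ den);
        end
        [dmin, best] = min(d);
        hits = hits + (best == correct_idx(i));
    end
    region_rate(r) = hits / numel(probe_fv);  % rank-1 rate when only region r is used
end

The weights of figure 6.2 can then be chosen by hand from region_rate, for example again from the set {0, 1, 2, 4}.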

6.3 LBP combined with PCA

As explained in section 2.1, in face recognition systems Principal Component Analysis (PCA) can be used to reduce the length of the original feature vectors. This can save calculation time and possibly improve the recognition rates (because noise components could be filtered away). A possibility is to apply PCA to the original feature vectors of the different regions. In this case the region weights can still be used when calculating the distance matrix. The idea is that the original region histograms (obtained with the LBP operator) are transformed to new histograms with fewer bins. For this, the images from the training set are used to find the transformation matrix. This is done by first calculating, for one region, the mean value of each bin. Then the covariance matrix is calculated from the original bin values with their mean values subtracted. From this matrix the eigenvectors and eigenvalues are obtained. It turns out that the eigenvectors with the highest eigenvalues are the most important and are thus the principal components. The transformation matrix is then formed by selecting only a certain number of eigenvectors, N_{PCA}, with the highest eigenvalues. If the transformation matrix obtained from the training set is T and the mean value of bin i is \overline{bin}_i, then the new histogram H_{new} of one region of an image, with original histogram H = [h_1, h_2, \ldots, h_{P^2-P+3}], is obtained by carrying out the following linear transformation:

H_{new} = T [h_1 - \overline{bin}_1, h_2 - \overline{bin}_2, \ldots, h_{P^2-P+3} - \overline{bin}_{P^2-P+3}]   (6.1)

The size of this new histogram depends on the chosen number of principal components, N_{PCA}. So, by applying PCA the original feature vector of one region, which has a length of P^2 - P + 3 bins, is transformed to a new feature vector with N_{PCA} bins. Because the time for calculating the similarities between the images is proportional to the number of bins, this calculation time is reduced by a factor of N_{PCA} / (P^2 - P + 3). Because after the transformation the new feature vectors also contain negative values, the \chi^2 statistic is no longer a good dissimilarity measure (this was confirmed by a few experiments: the recognition rates were very low). Therefore another dissimilarity measure is used, the so-called City Block Measure [10] (equation 6.2). It is possible that this one is also not

the optimal dissimilarity measure; however, at least it is applicable to feature vectors with positive and negative values:

D(S, M) = \sum_{j=1}^{k^2} w_j \left( \sum_{i=1}^{N_{PCA}} |S_{i,j} - M_{i,j}| \right)   (6.2)

In figure 6.3 the rank-1 recognition rates for the fb set are plotted against the number of principal components. These rates are obtained using the LBP^{u2}_{8,2} operator, the original region weights (see figure 5.3) and equation 6.2 as dissimilarity measure. The figure shows four important things. First, from N_{PCA} = 1 to N_{PCA} = 5 the recognition performance increases very quickly, and after N_{PCA} = 5 only slowly. So it can be said that choosing the first five principal components is a good trade-off between recognition performance and feature vector length (and thus calculation time). Second, the score with N_{PCA} = 33 is the same as with all components (59 in the case of 8 sampling points); apparently the contribution of the other components is negligible. Third, it can be concluded that noise components have little influence, because there is no improvement in the rates with fewer than 59 components. Finally, by using all components, the same rate is obtained as without applying PCA. This means that equation 6.2 is indeed a good dissimilarity measure.

Figure 6.3: Rank-1 scores for the fb set for different numbers of principal components.
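The transformation of Eq. 6.1 and the distance of Eq. 6.2 could be sketched in MATLAB as follows; train_hists (the histograms of one fixed region over all training images), Npca and the other names are illustrative assumptions rather than the code of appendix B.9.

% Learn the transformation for one region from the training histograms (Eq. 6.1).
binmean = mean(train_hists, 2);                         % mean value of every bin
Xc = train_hists - repmat(binmean, 1, size(train_hists, 2));
C  = (Xc * Xc') / (size(train_hists, 2) - 1);           % covariance of the bin values
[V, D] = eig(C);
[evals, order] = sort(diag(D), 'descend');
Npca = 5;                                               % number of principal components kept
T = V(:, order(1:Npca))';                               % Npca-by-(P^2-P+3) transformation matrix

% Transform one region histogram H (column vector) of a new image:
Hnew = T * (H - binmean);                               % Npca bins, may contain negative values

% Weighted City Block distance (Eq. 6.2) between two transformed feature sets
% Snew and Mnew (Npca-by-k^2, one column per region), with region weights w:
d = sum(w .* sum(abs(Snew - Mnew), 1));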


7 Conclusions and Recommendations

7.1 Conclusions

In this assignment, research has been done into the performance of a face recognition system that makes use of feature extraction with Local Binary Patterns. The primary objective, to reproduce the same recognition rates as presented in a few articles on the subject, especially those from [2], is for the most part achieved. The small differences are probably caused by implementation differences, although it is not totally clear which ones these exactly are; possibly the exact division of the regions. From the obtained rank curves and the investigated extensions, the following conclusions with respect to the LBP method can be drawn:

- Assigning weights to the different regions improves the recognition performance drastically. This is especially true for faces photographed under different lighting conditions.
- Important regions for feature extraction are those containing the eyes and the mouth.
- The LBP^{u2}_{8,2} operator is a good trade-off between recognition performance and feature vector length.
- Expanding the neighborhoods of the LBP operator to sampling points on two circles is not a successful extension. The feature vector length is drastically increased while the recognition performance hardly improves.
- LBP can easily be combined with PCA. By choosing the right number of principal components, the time for calculating similarities between images is significantly reduced, while the recognition performance only slightly decreases. This may be ideal in the case of face verification.

Overall, it can be concluded that face recognition with LBP performs much better than face recognition with most other feature extraction methods (see appendix C.2). For instance, with LBP the rank-1 score for the fb probe set is around 97 %, while this score for the other methods does not even exceed 90 %. From this it follows that LBP is quite insensitive to faces with alternative expressions. Although for the other probe sets the rates with LBP are also much higher than with most other methods, these rates are still relatively poor. So, it is still difficult to recognize faces of which the images were taken under different lighting conditions or from photos taken later in time.

7.2 Recommendations

More research into the parameters of the LBP operator is recommended. It would be useful to carry out experiments for several kinds of neighborhoods in which one parameter (P or R) is varied at a time. In this way the optimal radius and the optimal number of sampling points on the circle can be determined. Experiments could also be carried out with a non-integer radius (e.g. 2.5) and an odd number of sampling points (e.g. 9).

In this research only rectangular regions were used to extract the feature histograms from. A further improvement could be to use regions of different sizes and even other shapes. By shifting, scaling and rotating a sub-window over images from a large training set, it should be possible to find the optimal regions with optimal weights. For example, in [7] a method is proposed in which first 8000 candidate features are created. A learning algorithm, called AdaBoost, is applied to select the 23 most important features, which are then used for face recognition.

In this research PCA was applied to reduce the length of the original feature vector. However, there are also other ways of reducing the feature vector length and making the method more suitable for large face databases. In [8] a distinction is made between the symmetry levels of the uniform patterns, based on the number of ones or zeros they contain. The feature vector length is reduced by using only uniform patterns with a certain symmetry level. Table 7.1 shows the symmetry levels in the case of a neighborhood with eight sampling points. By using, for example, only symmetry levels 3 and 4, the feature vector length becomes (16 + 8 + 3*2) * 49 = 1470 bins (2 bins for the patterns with other symmetry levels), instead of 2891. According to the paper the recognition performance is only slightly reduced.


More information

Finger Vein Biometric Approach for Personal Identification Using IRT Feature and Gabor Filter Implementation

Finger Vein Biometric Approach for Personal Identification Using IRT Feature and Gabor Filter Implementation Finger Vein Biometric Approach for Personal Identification Using IRT Feature and Gabor Filter Implementation Sowmya. A (Digital Electronics (MTech), BITM Ballari), Shiva kumar k.s (Associate Professor,

More information

variations labeled graphs jets Gabor-wavelet transform

variations labeled graphs jets Gabor-wavelet transform Abstract 1 For a computer vision system, the task of recognizing human faces from single pictures is made difficult by variations in face position, size, expression, and pose (front, profile,...). We present

More information

Texture Classification by Combining Local Binary Pattern Features and a Self-Organizing Map

Texture Classification by Combining Local Binary Pattern Features and a Self-Organizing Map Texture Classification by Combining Local Binary Pattern Features and a Self-Organizing Map Markus Turtinen, Topi Mäenpää, and Matti Pietikäinen Machine Vision Group, P.O.Box 4500, FIN-90014 University

More information

Periocular Biometrics: When Iris Recognition Fails

Periocular Biometrics: When Iris Recognition Fails Periocular Biometrics: When Iris Recognition Fails Samarth Bharadwaj, Himanshu S. Bhatt, Mayank Vatsa and Richa Singh Abstract The performance of iris recognition is affected if iris is captured at a distance.

More information

Figure 1. Example sample for fabric mask. In the second column, the mask is worn on the face. The picture is taken from [5].

Figure 1. Example sample for fabric mask. In the second column, the mask is worn on the face. The picture is taken from [5]. ON THE VULNERABILITY OF FACE RECOGNITION SYSTEMS TO SPOOFING MASK ATTACKS Neslihan Kose, Jean-Luc Dugelay Multimedia Department, EURECOM, Sophia-Antipolis, France {neslihan.kose, jean-luc.dugelay}@eurecom.fr

More information

EE368 Project Report CD Cover Recognition Using Modified SIFT Algorithm

EE368 Project Report CD Cover Recognition Using Modified SIFT Algorithm EE368 Project Report CD Cover Recognition Using Modified SIFT Algorithm Group 1: Mina A. Makar Stanford University mamakar@stanford.edu Abstract In this report, we investigate the application of the Scale-Invariant

More information

Tutorial 8. Jun Xu, Teaching Asistant March 30, COMP4134 Biometrics Authentication

Tutorial 8. Jun Xu, Teaching Asistant March 30, COMP4134 Biometrics Authentication Tutorial 8 Jun Xu, Teaching Asistant csjunxu@comp.polyu.edu.hk COMP4134 Biometrics Authentication March 30, 2017 Table of Contents Problems Problem 1: Answer The Questions Problem 2: Daugman s Method Problem

More information

Texture. Frequency Descriptors. Frequency Descriptors. Frequency Descriptors. Frequency Descriptors. Frequency Descriptors

Texture. Frequency Descriptors. Frequency Descriptors. Frequency Descriptors. Frequency Descriptors. Frequency Descriptors Texture The most fundamental question is: How can we measure texture, i.e., how can we quantitatively distinguish between different textures? Of course it is not enough to look at the intensity of individual

More information

Feature Descriptors. CS 510 Lecture #21 April 29 th, 2013

Feature Descriptors. CS 510 Lecture #21 April 29 th, 2013 Feature Descriptors CS 510 Lecture #21 April 29 th, 2013 Programming Assignment #4 Due two weeks from today Any questions? How is it going? Where are we? We have two umbrella schemes for object recognition

More information

Facial Expression Detection Using Implemented (PCA) Algorithm

Facial Expression Detection Using Implemented (PCA) Algorithm Facial Expression Detection Using Implemented (PCA) Algorithm Dileep Gautam (M.Tech Cse) Iftm University Moradabad Up India Abstract: Facial expression plays very important role in the communication with

More information

Face/Flesh Detection and Face Recognition

Face/Flesh Detection and Face Recognition Face/Flesh Detection and Face Recognition Linda Shapiro EE/CSE 576 1 What s Coming 1. Review of Bakic flesh detector 2. Fleck and Forsyth flesh detector 3. Details of Rowley face detector 4. The Viola

More information

TIED FACTOR ANALYSIS FOR FACE RECOGNITION ACROSS LARGE POSE DIFFERENCES

TIED FACTOR ANALYSIS FOR FACE RECOGNITION ACROSS LARGE POSE DIFFERENCES TIED FACTOR ANALYSIS FOR FACE RECOGNITION ACROSS LARGE POSE DIFFERENCES SIMON J.D. PRINCE, JAMES H. ELDER, JONATHAN WARRELL, FATIMA M. FELISBERTI IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE,

More information

MULTI ORIENTATION PERFORMANCE OF FEATURE EXTRACTION FOR HUMAN HEAD RECOGNITION

MULTI ORIENTATION PERFORMANCE OF FEATURE EXTRACTION FOR HUMAN HEAD RECOGNITION MULTI ORIENTATION PERFORMANCE OF FEATURE EXTRACTION FOR HUMAN HEAD RECOGNITION Panca Mudjirahardjo, Rahmadwati, Nanang Sulistiyanto and R. Arief Setyawan Department of Electrical Engineering, Faculty of

More information

A face recognition system based on local feature analysis

A face recognition system based on local feature analysis A face recognition system based on local feature analysis Stefano Arca, Paola Campadelli, Raffaella Lanzarotti Dipartimento di Scienze dell Informazione Università degli Studi di Milano Via Comelico, 39/41

More information

BRIEF Features for Texture Segmentation

BRIEF Features for Texture Segmentation BRIEF Features for Texture Segmentation Suraya Mohammad 1, Tim Morris 2 1 Communication Technology Section, Universiti Kuala Lumpur - British Malaysian Institute, Gombak, Selangor, Malaysia 2 School of

More information

A FRAMEWORK FOR ANALYZING TEXTURE DESCRIPTORS

A FRAMEWORK FOR ANALYZING TEXTURE DESCRIPTORS A FRAMEWORK FOR ANALYZING TEXTURE DESCRIPTORS Timo Ahonen and Matti Pietikäinen Machine Vision Group, University of Oulu, PL 4500, FI-90014 Oulun yliopisto, Finland tahonen@ee.oulu.fi, mkp@ee.oulu.fi Keywords:

More information

On Modeling Variations for Face Authentication

On Modeling Variations for Face Authentication On Modeling Variations for Face Authentication Xiaoming Liu Tsuhan Chen B.V.K. Vijaya Kumar Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA 15213 xiaoming@andrew.cmu.edu

More information

Face Recognition with Local Line Binary Pattern

Face Recognition with Local Line Binary Pattern 2009 Fifth International Conference on Image and Graphics Face Recognition with Local Line Binary Pattern Amnart Petpon and Sanun Srisuk Department of Computer Engineering, Mahanakorn University of Technology

More information

Invariant Recognition of Hand-Drawn Pictograms Using HMMs with a Rotating Feature Extraction

Invariant Recognition of Hand-Drawn Pictograms Using HMMs with a Rotating Feature Extraction Invariant Recognition of Hand-Drawn Pictograms Using HMMs with a Rotating Feature Extraction Stefan Müller, Gerhard Rigoll, Andreas Kosmala and Denis Mazurenok Department of Computer Science, Faculty of

More information

Implementing the Scale Invariant Feature Transform(SIFT) Method

Implementing the Scale Invariant Feature Transform(SIFT) Method Implementing the Scale Invariant Feature Transform(SIFT) Method YU MENG and Dr. Bernard Tiddeman(supervisor) Department of Computer Science University of St. Andrews yumeng@dcs.st-and.ac.uk Abstract The

More information

Critique: Efficient Iris Recognition by Characterizing Key Local Variations

Critique: Efficient Iris Recognition by Characterizing Key Local Variations Critique: Efficient Iris Recognition by Characterizing Key Local Variations Authors: L. Ma, T. Tan, Y. Wang, D. Zhang Published: IEEE Transactions on Image Processing, Vol. 13, No. 6 Critique By: Christopher

More information

BSB663 Image Processing Pinar Duygulu. Slides are adapted from Selim Aksoy

BSB663 Image Processing Pinar Duygulu. Slides are adapted from Selim Aksoy BSB663 Image Processing Pinar Duygulu Slides are adapted from Selim Aksoy Image matching Image matching is a fundamental aspect of many problems in computer vision. Object or scene recognition Solving

More information

Combining Gabor Features: Summing vs.voting in Human Face Recognition *

Combining Gabor Features: Summing vs.voting in Human Face Recognition * Combining Gabor Features: Summing vs.voting in Human Face Recognition * Xiaoyan Mu and Mohamad H. Hassoun Department of Electrical and Computer Engineering Wayne State University Detroit, MI 4822 muxiaoyan@wayne.edu

More information

Color Local Texture Features Based Face Recognition

Color Local Texture Features Based Face Recognition Color Local Texture Features Based Face Recognition Priyanka V. Bankar Department of Electronics and Communication Engineering SKN Sinhgad College of Engineering, Korti, Pandharpur, Maharashtra, India

More information

Eugene Borovikov. Human Head Pose Estimation by Facial Features Location. UMCP Human Head Pose Estimation by Facial Features Location

Eugene Borovikov. Human Head Pose Estimation by Facial Features Location. UMCP Human Head Pose Estimation by Facial Features Location Human Head Pose Estimation by Facial Features Location Abstract: Eugene Borovikov University of Maryland Institute for Computer Studies, College Park, MD 20742 4/21/1998 We describe a method for estimating

More information

Skin and Face Detection

Skin and Face Detection Skin and Face Detection Linda Shapiro EE/CSE 576 1 What s Coming 1. Review of Bakic flesh detector 2. Fleck and Forsyth flesh detector 3. Details of Rowley face detector 4. Review of the basic AdaBoost

More information

International Journal of Research in Advent Technology, Vol.4, No.6, June 2016 E-ISSN: Available online at

International Journal of Research in Advent Technology, Vol.4, No.6, June 2016 E-ISSN: Available online at Authentication Using Palmprint Madhavi A.Gulhane 1, Dr. G.R.Bamnote 2 Scholar M.E Computer Science & Engineering PRMIT&R Badnera Amravati 1, Professor Computer Science & Engineering at PRMIT&R Badnera

More information

Recognition of Non-symmetric Faces Using Principal Component Analysis

Recognition of Non-symmetric Faces Using Principal Component Analysis Recognition of Non-symmetric Faces Using Principal Component Analysis N. Krishnan Centre for Information Technology & Engineering Manonmaniam Sundaranar University, Tirunelveli-627012, India Krishnan17563@yahoo.com

More information

LOCAL APPEARANCE BASED FACE RECOGNITION USING DISCRETE COSINE TRANSFORM

LOCAL APPEARANCE BASED FACE RECOGNITION USING DISCRETE COSINE TRANSFORM LOCAL APPEARANCE BASED FACE RECOGNITION USING DISCRETE COSINE TRANSFORM Hazim Kemal Ekenel, Rainer Stiefelhagen Interactive Systems Labs, University of Karlsruhe Am Fasanengarten 5, 76131, Karlsruhe, Germany

More information

Learning to Recognize Faces in Realistic Conditions

Learning to Recognize Faces in Realistic Conditions 000 001 002 003 004 005 006 007 008 009 010 011 012 013 014 015 016 017 018 019 020 021 022 023 024 025 026 027 028 029 030 031 032 033 034 035 036 037 038 039 040 041 042 043 044 045 046 047 048 049 050

More information

Impact of Eye Detection Error on Face Recognition Performance

Impact of Eye Detection Error on Face Recognition Performance Impact of Eye Detection Error on Face Recognition Performance Abhishek Dutta a, Manuel Günther b, Laurent El Shafey b, Sébastien Marcel b, Raymond Veldhuis a and Luuk Spreeuwers a a University of Twente,

More information

Part-based Face Recognition Using Near Infrared Images

Part-based Face Recognition Using Near Infrared Images Part-based Face Recognition Using Near Infrared Images Ke Pan Shengcai Liao Zhijian Zhang Stan Z. Li Peiren Zhang University of Science and Technology of China Hefei 230026, China Center for Biometrics

More information

ADAPTIVE WEIGHTED LOCAL TEXTURAL FEATURES FOR ILLUMINATION, EXPRESSION AND OCCLUSION INVARIANT FACE RECOGNITION. Thesis.

ADAPTIVE WEIGHTED LOCAL TEXTURAL FEATURES FOR ILLUMINATION, EXPRESSION AND OCCLUSION INVARIANT FACE RECOGNITION. Thesis. ADAPTIVE WEIGHTED LOCAL TEXTURAL FEATURES FOR ILLUMINATION, EXPRESSION AND OCCLUSION INVARIANT FACE RECOGNITION Thesis Submitted to The School of Engineering of the UNIVERSITY OF DAYTON In Partial Fulfillment

More information

Statistical image models

Statistical image models Chapter 4 Statistical image models 4. Introduction 4.. Visual worlds Figure 4. shows images that belong to different visual worlds. The first world (fig. 4..a) is the world of white noise. It is the world

More information

Face recognition algorithms: performance evaluation

Face recognition algorithms: performance evaluation Face recognition algorithms: performance evaluation Project Report Marco Del Coco - Pierluigi Carcagnì Institute of Applied Sciences and Intelligent systems c/o Dhitech scarl Campus Universitario via Monteroni

More information

Linear Discriminant Analysis in Ottoman Alphabet Character Recognition

Linear Discriminant Analysis in Ottoman Alphabet Character Recognition Linear Discriminant Analysis in Ottoman Alphabet Character Recognition ZEYNEB KURT, H. IREM TURKMEN, M. ELIF KARSLIGIL Department of Computer Engineering, Yildiz Technical University, 34349 Besiktas /

More information

Principal Component Analysis (PCA) is a most practicable. statistical technique. Its application plays a major role in many

Principal Component Analysis (PCA) is a most practicable. statistical technique. Its application plays a major role in many CHAPTER 3 PRINCIPAL COMPONENT ANALYSIS ON EIGENFACES 2D AND 3D MODEL 3.1 INTRODUCTION Principal Component Analysis (PCA) is a most practicable statistical technique. Its application plays a major role

More information

Face Recognition under varying illumination with Local binary pattern

Face Recognition under varying illumination with Local binary pattern Face Recognition under varying illumination with Local binary pattern Ms.S.S.Ghatge 1, Prof V.V.Dixit 2 Department of E&TC, Sinhgad College of Engineering, University of Pune, India 1 Department of E&TC,

More information

Feature Detectors and Descriptors: Corners, Lines, etc.

Feature Detectors and Descriptors: Corners, Lines, etc. Feature Detectors and Descriptors: Corners, Lines, etc. Edges vs. Corners Edges = maxima in intensity gradient Edges vs. Corners Corners = lots of variation in direction of gradient in a small neighborhood

More information

Study of Local Binary Pattern for Partial Fingerprint Identification

Study of Local Binary Pattern for Partial Fingerprint Identification International OPEN ACCESS Journal Of Modern Engineering Research (IJMER) Study of Local Binary Pattern for Partial Fingerprint Identification Miss Harsha V. Talele 1, Pratvina V. Talele 2, Saranga N Bhutada

More information

Dealing with Inaccurate Face Detection for Automatic Gender Recognition with Partially Occluded Faces

Dealing with Inaccurate Face Detection for Automatic Gender Recognition with Partially Occluded Faces Dealing with Inaccurate Face Detection for Automatic Gender Recognition with Partially Occluded Faces Yasmina Andreu, Pedro García-Sevilla, and Ramón A. Mollineda Dpto. Lenguajes y Sistemas Informáticos

More information

Schedule for Rest of Semester

Schedule for Rest of Semester Schedule for Rest of Semester Date Lecture Topic 11/20 24 Texture 11/27 25 Review of Statistics & Linear Algebra, Eigenvectors 11/29 26 Eigenvector expansions, Pattern Recognition 12/4 27 Cameras & calibration

More information

Applications Video Surveillance (On-line or off-line)

Applications Video Surveillance (On-line or off-line) Face Face Recognition: Dimensionality Reduction Biometrics CSE 190-a Lecture 12 CSE190a Fall 06 CSE190a Fall 06 Face Recognition Face is the most common biometric used by humans Applications range from

More information

Robust Face Recognition via Sparse Representation Authors: John Wright, Allen Y. Yang, Arvind Ganesh, S. Shankar Sastry, and Yi Ma

Robust Face Recognition via Sparse Representation Authors: John Wright, Allen Y. Yang, Arvind Ganesh, S. Shankar Sastry, and Yi Ma Robust Face Recognition via Sparse Representation Authors: John Wright, Allen Y. Yang, Arvind Ganesh, S. Shankar Sastry, and Yi Ma Presented by Hu Han Jan. 30 2014 For CSE 902 by Prof. Anil K. Jain: Selected

More information

Learning to Fuse 3D+2D Based Face Recognition at Both Feature and Decision Levels

Learning to Fuse 3D+2D Based Face Recognition at Both Feature and Decision Levels Learning to Fuse 3D+2D Based Face Recognition at Both Feature and Decision Levels Stan Z. Li, ChunShui Zhao, Meng Ao, Zhen Lei Center for Biometrics and Security Research & National Laboratory of Pattern

More information

Connected components - 1

Connected components - 1 Connected Components Basic definitions Connectivity, Adjacency, Connected Components Background/Foreground, Boundaries Run-length encoding Component Labeling Recursive algorithm Two-scan algorithm Chain

More information

Pattern Recognition 44 (2011) Contents lists available at ScienceDirect. Pattern Recognition. journal homepage:

Pattern Recognition 44 (2011) Contents lists available at ScienceDirect. Pattern Recognition. journal homepage: Pattern Recognition 44 (2011) 951 963 Contents lists available at ScienceDirect Pattern Recognition journal homepage: www.elsevier.com/locate/pr Methodological improvement on local Gabor face recognition

More information

Face Description with Local Binary Patterns: Application to Face Recognition

Face Description with Local Binary Patterns: Application to Face Recognition IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL. 28, NO. 12, DECEMBER 2006 2037 Face Description with Local Binary Patterns: Application to Face Recognition Timo Ahonen, Student Member,

More information

MORPH-II: Feature Vector Documentation

MORPH-II: Feature Vector Documentation MORPH-II: Feature Vector Documentation Troy P. Kling NSF-REU Site at UNC Wilmington, Summer 2017 1 MORPH-II Subsets Four different subsets of the MORPH-II database were selected for a wide range of purposes,

More information

Part-based Face Recognition Using Near Infrared Images

Part-based Face Recognition Using Near Infrared Images Part-based Face Recognition Using Near Infrared Images Ke Pan Shengcai Liao Zhijian Zhang Stan Z. Li Peiren Zhang University of Science and Technology of China Hefei 230026, China Center for Biometrics

More information

Face detection and recognition. Many slides adapted from K. Grauman and D. Lowe

Face detection and recognition. Many slides adapted from K. Grauman and D. Lowe Face detection and recognition Many slides adapted from K. Grauman and D. Lowe Face detection and recognition Detection Recognition Sally History Early face recognition systems: based on features and distances

More information

Gender Classification

Gender Classification Gender Classification Thomas Witzig Master student, 76128 Karlsruhe, Germany, Computer Vision for Human-Computer Interaction Lab KIT University of the State of Baden-Wuerttemberg and National Research

More information

Pedestrian Detection with Improved LBP and Hog Algorithm

Pedestrian Detection with Improved LBP and Hog Algorithm Open Access Library Journal 2018, Volume 5, e4573 ISSN Online: 2333-9721 ISSN Print: 2333-9705 Pedestrian Detection with Improved LBP and Hog Algorithm Wei Zhou, Suyun Luo Automotive Engineering College,

More information

Recognition, SVD, and PCA

Recognition, SVD, and PCA Recognition, SVD, and PCA Recognition Suppose you want to find a face in an image One possibility: look for something that looks sort of like a face (oval, dark band near top, dark band near bottom) Another

More information

CHAPTER 5 GLOBAL AND LOCAL FEATURES FOR FACE RECOGNITION

CHAPTER 5 GLOBAL AND LOCAL FEATURES FOR FACE RECOGNITION 122 CHAPTER 5 GLOBAL AND LOCAL FEATURES FOR FACE RECOGNITION 5.1 INTRODUCTION Face recognition, means checking for the presence of a face from a database that contains many faces and could be performed

More information

Postprint.

Postprint. http://www.diva-portal.org Postprint This is the accepted version of a paper presented at 14th International Conference of the Biometrics Special Interest Group, BIOSIG, Darmstadt, Germany, 9-11 September,

More information

Face Recognition for Different Facial Expressions Using Principal Component analysis

Face Recognition for Different Facial Expressions Using Principal Component analysis Face Recognition for Different Facial Expressions Using Principal Component analysis ASHISH SHRIVASTAVA *, SHEETESH SAD # # Department of Electronics & Communications, CIIT, Indore Dewas Bypass Road, Arandiya

More information

Dr. Enrique Cabello Pardos July

Dr. Enrique Cabello Pardos July Dr. Enrique Cabello Pardos July 20 2011 Dr. Enrique Cabello Pardos July 20 2011 ONCE UPON A TIME, AT THE LABORATORY Research Center Contract Make it possible. (as fast as possible) Use the best equipment.

More information

High-Order Circular Derivative Pattern for Image Representation and Recognition

High-Order Circular Derivative Pattern for Image Representation and Recognition High-Order Circular Derivative Pattern for Image epresentation and ecognition Author Zhao Sanqiang Gao Yongsheng Caelli Terry Published 1 Conference Title Proceedings of the th International Conference

More information

CHAPTER 2 TEXTURE CLASSIFICATION METHODS GRAY LEVEL CO-OCCURRENCE MATRIX AND TEXTURE UNIT

CHAPTER 2 TEXTURE CLASSIFICATION METHODS GRAY LEVEL CO-OCCURRENCE MATRIX AND TEXTURE UNIT CHAPTER 2 TEXTURE CLASSIFICATION METHODS GRAY LEVEL CO-OCCURRENCE MATRIX AND TEXTURE UNIT 2.1 BRIEF OUTLINE The classification of digital imagery is to extract useful thematic information which is one

More information

LOCAL FEATURE EXTRACTION METHODS FOR FACIAL EXPRESSION RECOGNITION

LOCAL FEATURE EXTRACTION METHODS FOR FACIAL EXPRESSION RECOGNITION 17th European Signal Processing Conference (EUSIPCO 2009) Glasgow, Scotland, August 24-28, 2009 LOCAL FEATURE EXTRACTION METHODS FOR FACIAL EXPRESSION RECOGNITION Seyed Mehdi Lajevardi, Zahir M. Hussain

More information

ROBUST LDP BASED FACE DESCRIPTOR

ROBUST LDP BASED FACE DESCRIPTOR ROBUST LDP BASED FACE DESCRIPTOR Mahadeo D. Narlawar and Jaideep G. Rana Department of Electronics Engineering, Jawaharlal Nehru College of Engineering, Aurangabad-431004, Maharashtra, India ABSTRACT This

More information

Computer vision: models, learning and inference. Chapter 13 Image preprocessing and feature extraction

Computer vision: models, learning and inference. Chapter 13 Image preprocessing and feature extraction Computer vision: models, learning and inference Chapter 13 Image preprocessing and feature extraction Preprocessing The goal of pre-processing is to try to reduce unwanted variation in image due to lighting,

More information

Facial Expression Recognition Using Local Binary Patterns

Facial Expression Recognition Using Local Binary Patterns Facial Expression Recognition Using Local Binary Patterns Kannan Subramanian Department of MC, Bharath Institute of Science and Technology, Chennai, TamilNadu, India. ABSTRACT: The most expressive way

More information

Computationally Efficient Serial Combination of Rotation-invariant and Rotation Compensating Iris Recognition Algorithms

Computationally Efficient Serial Combination of Rotation-invariant and Rotation Compensating Iris Recognition Algorithms Computationally Efficient Serial Combination of Rotation-invariant and Rotation Compensating Iris Recognition Algorithms Andreas Uhl Department of Computer Sciences University of Salzburg, Austria uhl@cosy.sbg.ac.at

More information

An Introduction to Content Based Image Retrieval

An Introduction to Content Based Image Retrieval CHAPTER -1 An Introduction to Content Based Image Retrieval 1.1 Introduction With the advancement in internet and multimedia technologies, a huge amount of multimedia data in the form of audio, video and

More information

Dimension Reduction CS534

Dimension Reduction CS534 Dimension Reduction CS534 Why dimension reduction? High dimensionality large number of features E.g., documents represented by thousands of words, millions of bigrams Images represented by thousands of

More information

Face Recognition Using Phase-Based Correspondence Matching

Face Recognition Using Phase-Based Correspondence Matching Face Recognition Using Phase-Based Correspondence Matching Koichi Ito Takafumi Aoki Graduate School of Information Sciences, Tohoku University, 6-6-5, Aramaki Aza Aoba, Sendai-shi, 98 8579 Japan ito@aoki.ecei.tohoku.ac.jp

More information