3DLBP and HAOG fusion for Face Recognition Utilizing Kinect as a 3D Scanner


João Baptista Cardia Neto, Graduate Program in Computer Science, UNESP - São Paulo State University, Bauru, São Paulo, joaobcardia@gmail.com
Aparecido Nilceu Marana, Department of Computing - Faculty of Sciences, UNESP - São Paulo State University, Bauru, São Paulo, nilceu@fc.unesp.br

ABSTRACT
Pose and illumination variability are two major problems in 2D face recognition. Since 3D data is less sensitive to illumination changes and can be used to correct pose variations, it has been adopted to improve the performance of face recognition systems. The main problem with using 3D data is the high cost of traditional 3D scanners. The Kinect is a low-cost device that can capture 3D data from an environment quickly, but with lower accuracy than traditional scanners. Recently, the 3D Local Binary Pattern (3DLBP) method was proposed for 3D face recognition using high resolution scanners. The main goal of this work is to assess the performance of the 3DLBP method, fused with the Histogram of Averaged Oriented Gradients (HAOG) face descriptor, for face recognition when the Kinect is used as the 3D face scanner. Another goal is to compare the 3DLBP method, fused with the HAOG descriptor, with other methods proposed in the literature for face recognition using the Kinect. Experimental results on the EURECOM face dataset showed that the data generated by the Kinect are discriminative enough to allow face recognition and that 3DLBP performs better than the other methods.

Categories and Subject Descriptors
I.5.5 [Pattern Recognition]: Applications; I.4.m [Image Processing and Computer Vision]: Miscellaneous - biometrics

General Terms
Pattern Recognition, Image Processing, Computer Vision, Biometrics

Keywords
3D Face Recognition, Kinect, 3DLBP, HAOG

1. INTRODUCTION
Nowadays, the great majority of systems that need to verify a person's identity rely on passwords, a security access card, or a combination of both. Since these kinds of identification are based on something a person knows (a password) or has (a security card), it is possible for an impostor to learn the needed password or to obtain the object that grants him the same access as a genuine subject [1]. If someone steals a person's security card and the access restriction relies only on this object, it becomes impossible to differentiate a genuine subject from an impostor. The same happens when knowledge, such as a password, is compromised. The best way to overcome these drawbacks is to use what a person is (his/her biometric characteristics), instead of what a person has or knows. Biometric characteristics are physical/physiological (face, fingerprint, iris, hand veins) or behavioral (gait, voice, typing dynamics, signature) traits.
If a human characteristic satisfies some requirements, it can be used for biometric recognition [6]. Such requirements are:

Universality: the characteristic must occur in as many people as possible;
Uniqueness: the characteristic must differ from one person to another;
Permanence: the characteristic should not change over time;
Collectability: the characteristic must be easy to collect;
Performance: the characteristic must allow high accuracy, with low processing time and low computational requirements, besides being robust to uncontrolled environments;
Acceptability: the subjects must agree to be identified by the characteristic;
Circumvention: the characteristic must be difficult to circumvent.

Table 1 shows a comparison of some of the most important biometric characteristics with respect to these requirements. As one can see, the face has some advantages over the other characteristics, since it presents high universality, high collectability, and high acceptability.

Human beings normally recognize each other through their facial characteristics, which can be obtained at a distance and also in a covert manner [10]. However, this biometric trait has some disadvantages. For instance, in 2D face recognition systems, illumination and pose variations can dramatically decrease performance.

Requirement      Face  Fingerprint  Iris  Gait
Universality     H     M            H     M
Uniqueness       L     H            H     L
Permanence       M     H            H     L
Collectability   H     M            M     H
Performance      L     H            H     L
Acceptability    H     M            L     H
Circumvention    L     M            H     M

Table 1: Comparison among some of the most important biometric characteristics [7]. H = High, M = Medium and L = Low.

Since the face is a 3D object, one of the best ways to deal with illumination and pose changes is to use a 3D or a 2.5D representation of the face. However, when dealing with 3D data a problem arises: the high cost of 3D sensors. Table 2 shows the prices and other characteristics of some popular 3D sensors [9]. As one can see, the Kinect arises as a cheap alternative to the expensive 3D devices.

Device       Speed  Charge  Size  Price   Acc.
3dMD         ?      N/A     ?     >$50K   <0.2
Minolta      2.5    no      1408  >$50K   0.1
Artec Eva    ?      no      ?     >$20K   0.5
3D3 HDI R1   1.3    no      N/A   >$10K   0.3
SwissRanger  0.02   no      ?     >$5K    10
DAVID SLS    2.4    no      N/A   >$2K    0.5
Kinect       ?      no      41.5  ?       ?

Table 2: Comparison among different 3D scanners [9]. The speed is expressed in seconds, size in inch^3, price in USD, and the accuracy is an approximation expressed in mm.

The Kinect sensor captures depth data of the objects in the environment. For the face recognition task, the depth data can be used to deal with pose, illumination, and also facial expression problems. Besides the low cost, the Kinect has another advantage: speed. Considering a realistic scenario, it is not feasible to wait 2.5 seconds in front of a device in order to get the face scanned, as required by some sensors. The Kinect can scan a person's face in only 2 s.

Recently, the 3DLBP (3D Local Binary Pattern) method [4] was proposed for 3D face recognition using high resolution scanners. The main goal of this work is to assess the performance of the 3DLBP method, fused with the HAOG (Histogram of Averaged Oriented Gradients) face descriptor method [2], for face recognition when the Kinect is used as the 3D face scanner (the 3DLBP method is described in subsection 3.2 and the HAOG method in subsection 3.5). Another goal of our work is to analyze the performance of the 3DLBP method fused with the HAOG face descriptor when compared with other methods proposed in the literature for face recognition using Kinect data.

Figure 1: The Kinect sensor: 1) Depth Sensor; 2) RGB Camera; 3) Microphone array; and 4) Motorized tilt.

2. KINECT
The Microsoft Kinect device contains a depth sensor, an RGB camera, a microphone array, and a motorized tilt. The depth sensor contains an infrared emitter and an infrared camera. The Kinect estimates depth by projecting an array of infrared dots and measuring the distortion of the pattern reflected back to the camera [15]. The sensor outputs a 640x480 depth grid with a precision of 11 bits at 30 Hz [18]. The depth range is between 1.2 and 3.5 meters. The angular field of view of the cameras is 57 degrees horizontally and 43 degrees vertically [16]. These angles are relative to the horizon (and not to the base of the device). It is possible to use the motorized tilt to adjust the angle captured in the frame, so the device can focus on the desired object. Figure 1 shows the Kinect sensor.
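As an aside, the figures quoted above (a 640x480 grid and a 57 x 43 degree field of view) are already enough to back-project a Kinect depth frame into a rough 3D point cloud with the standard pinhole model. The sketch below is only an illustration (it is not part of the paper and assumes an ideal camera with the principal point at the image center):

```python
import numpy as np

# Specs quoted in the text: 640x480 depth grid, 57 x 43 degree field of view.
W, H = 640, 480
FOV_H, FOV_V = np.deg2rad(57.0), np.deg2rad(43.0)
fx = (W / 2.0) / np.tan(FOV_H / 2.0)   # horizontal focal length in pixels
fy = (H / 2.0) / np.tan(FOV_V / 2.0)   # vertical focal length in pixels
cx, cy = W / 2.0, H / 2.0              # assumed principal point (image center)

def depth_to_cloud(depth_m):
    """Back-project an HxW depth map (meters, 0 = no reading) to an Nx3 cloud."""
    v, u = np.mgrid[0:H, 0:W]
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    valid = z > 0
    return np.stack([x[valid], y[valid], z[valid]], axis=1)
```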
Observing Table 2 again and comparing the Kinect with the other 3D scanners, it is possible to see its main flaw: the low resolution. This brings new challenges and shows the importance of assessing the performance of face recognition methods proposed in the literature, in general for high resolution sensors, when using Kinect devices as the 3D sensor.

3. METHODS FOR FACE RECOGNITION
There are several methods proposed in the literature for face recognition, mainly for 2D. Amongst these methods, very few are dedicated to face recognition using the Kinect as the 3D data source. In this section, some traditional 2D face recognition methods, as well as some more recent 3D face recognition methods, are described.

3.1 Local Binary Pattern
The Local Binary Pattern (LBP) was originally introduced by [12] and consists in analyzing the difference between a pixel and its 3x3 neighborhood. Given a central pixel (x_c, y_c), the LBP operator is defined as [14]:

LBP(x_c, y_c) = \sum_{n=0}^{7} s(i_n - i_c) \, 2^n    (1)

with i_c the grey value of the center pixel, i_n the value of the n-th neighborhood pixel, and s(x) defined as:

s(x) = \begin{cases} 1 & \text{if } x \geq 0 \\ 0 & \text{if } x < 0 \end{cases}    (2)

Figure 2 shows an example of how the code is generated.

Figure 2: Example of how to calculate the LBP operator.
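To make Equations (1) and (2) concrete, the following minimal sketch computes the 3x3 LBP code of one interior pixel of a grayscale image stored as a 2D NumPy array (an illustration, not code from the paper):

```python
import numpy as np

def lbp_code(img, yc, xc):
    """8-bit LBP code of the interior pixel (yc, xc) over its 3x3 neighborhood."""
    # Neighbors enumerated clockwise, starting at the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    ic = int(img[yc, xc])
    code = 0
    for n, (dy, dx) in enumerate(offsets):
        s = 1 if int(img[yc + dy, xc + dx]) - ic >= 0 else 0   # s(i_n - i_c)
        code += s * (2 ** n)                                   # weight 2^n
    return code
```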

Later, [13] proposed an extension to the original LBP method. Instead of using a 3x3 neighborhood, the authors proposed a neighborhood given by (P, R), in which P is the number of points sampled on a circle of radius R. Figure 3 illustrates a LBP(4,1) neighborhood. If the coordinates of the central pixel are (0, 0), the coordinates of a neighbor g_p are given by:

(-R \sin(2\pi p / P), \; R \cos(2\pi p / P))    (3)

The values that do not fall at the center of a pixel are estimated by interpolation.

Figure 3: Example of a LBP(4,1) neighborhood; the black dots are the sampling points on the red circle.

3.2 3D Local Binary Pattern (3DLBP)
The LBP operator takes into consideration only the sign of the comparison between a pixel and its neighbors. The original operator is a powerful feature for texture description, but it cannot capture the behavior of the depth values. For instance, the nose tip is one of the most reliable landmarks on the face [9], but, because Kinect data is too noisy, the original LBP method is not able to detect it: other points on the face will have the same LBP code as the nose tip.

The 3D Local Binary Pattern (3DLBP) operator, proposed by Huang et al. [4] as a variation of the original LBP, considers not only the sign of the difference between a neighbor and the central pixel, but also its absolute value. Huang et al. [4] state that more than 93% of all depth differences (DD) within a radius R = 2 are smaller than 7. Due to this property, the absolute value of the DD can be stored in three binary units (i_2, i_3, i_4), so that:

|DD| = i_2 \cdot 2^2 + i_3 \cdot 2^1 + i_4 \cdot 2^0    (4)

The binary unit i_1 is defined as s(DD) (Equation 2). The four binary units are divided into four layers and, for each layer, a decimal number is obtained: P_1, P_2, P_3 and P_4. The value of P_1 is the same as the original LBP. For matching, the histograms of the local regions (P_1, P_2, P_3, P_4) are concatenated. Figure 4 shows the process for the generation of the 3DLBP face descriptors.

Figure 4: The full process of the 3DLBP proposed by [4]. Each of the absolute differences is encoded into layers 2, 3 and 4, and the sign into layer 1.
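The encoding above can be summarized in a short sketch (an illustration based on the description given here, not the authors' implementation): for each neighbor, the sign of the depth difference goes to layer 1, and the three bits of its absolute value, clipped to 7, go to layers 2 to 4.

```python
import numpy as np

def dlbp3_codes(depth, yc, xc):
    """Return the four 8-bit codes (P1, P2, P3, P4) for the depth pixel (yc, xc)."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    ic = int(depth[yc, xc])
    layers = [0, 0, 0, 0]                        # decimal value accumulated per layer
    for n, (dy, dx) in enumerate(offsets):
        dd = int(depth[yc + dy, xc + dx]) - ic   # depth difference DD
        i1 = 1 if dd >= 0 else 0                 # sign bit, as in the original LBP
        mag = min(abs(dd), 7)                    # |DD| stored in three bits (Eq. 4)
        i2, i3, i4 = (mag >> 2) & 1, (mag >> 1) & 1, mag & 1
        for layer, bit in enumerate((i1, i2, i3, i4)):
            layers[layer] += bit * (2 ** n)
    return tuple(layers)                         # P1 equals the plain LBP code
```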

3.3 Four-Patch Local Binary Pattern (FPLBP)
The Four-Patch LBP (FPLBP) [17] is a variation of the original Local Binary Pattern (LBP) operator. It works with two rings of radius r_1 and r_2 centered at a pixel. S patches of size w x w are distributed along both rings. The FPLBP codes are produced by comparing two center-symmetric patches in the inner ring with two center-symmetric patches in the outer ring. The bits of the code are set according to which pair being compared is more similar:

FPLBP_{r_1, r_2, S, w, \alpha}(p) = \sum_i f\big( d(C_{1,i}, C_{2,i+\alpha \bmod S}) - d(C_{1,i+S/2}, C_{2,i+S/2+\alpha \bmod S}) \big) \, 2^i    (5)

in which d(.,.) measures the distance between two patches and f is a thresholding function.

3.4 Histogram of Oriented Gradients (HOG)
In [3], the authors generate entropy and salience maps for the depth and RGB images. Then, the Histogram of Oriented Gradients (HOG) is applied to the entropy and salience maps. The HOG descriptors obtained from different face patches are concatenated in order to compose the facial feature vectors. In that work, the classification of the samples is carried out by a Random Decision Forest (RDF) classifier.

3.5 Histogram of Averaged Oriented Gradients (HAOG)
In [2], the authors proposed a method for inter-modality face sketch recognition. The main goal of the method is to reduce the gap caused by the different modalities of face photos and sketches. The method is based on a new gradient orientation descriptor named Histogram of Averaged Oriented Gradients (HAOG). The gap is generated by the difference in the visual information that can be seen in a photo and in a sketch.

A face has several components (e.g. eyes, eyebrows, lips) that have strong relations to each other in a spatial configuration [2]. The general shape of the face and its spatial configuration are considered the meaningful visual information. With this in mind, it is possible to affirm that the amount of shape information in a photo and in a sketch is the same (the face shape is not involved in the modality gap). While the face shape does not generate a modality gap, the same cannot be said for the texture. Since texture is related to face appearance, it is deeply related to how the modality gap presents itself. Face appearance has coarse and fine textures, belonging to the facial components and to the facial skin. Boundaries of the facial components with high contrast are coarse textures, while the low contrast details of the facial skin (e.g. flaws, moles) are fine textures. It is possible to treat each type of texture separately. Coarse textures are vital for artists to draw sketches, while fine texture details can be ignored. Since the discriminative power of the extracted orientation gradients is related to the modality gap, and only the fine textures are responsible for the gap, it can be concluded that the more robust way to deal with inter-modality recognition is to use only coarse textures for feature extraction. Since it is not feasible to separate the two texture types, the best way to deal with this situation is to use both of them, but emphasizing the coarse textures. One way to achieve this is to vote squared gradient magnitudes into histograms of orientations.

The HAOG descriptor is computed in three main steps. First, an S x S window slides over the grayscale image, starting at the upper left corner and ending at the lower right corner. For each pixel in a local patch, \bar{\rho} and \bar{\varphi} are defined. For each patch, a histogram with b bins is built, accumulating \bar{\rho} at the bin into which \bar{\varphi} falls. The final descriptor is the concatenation of the histograms of all patches.

In order to obtain \rho and \varphi, it is first necessary to calculate the gradient vector of the image. Given a grayscale image I(x, y), the gradient vector is defined as:

[g_x(x, y), \; g_y(x, y)]^T = [\partial I(x, y) / \partial x, \; \partial I(x, y) / \partial y]^T    (6)

The values of \rho and \varphi can then be calculated as:

\rho = (g_x^2 + g_y^2)^{0.5}    (7)

\varphi = \tan^{-1}(g_y / g_x)    (8)

The main problem here is that the oriented gradients cannot be averaged directly, since opposite gradients on the two sides of an edge cancel each other out. To deal with this problem, [8] proposed to double the gradient angles before averaging. To define \bar{\rho} and \bar{\varphi}, the squared gradient of each pixel is first computed,

[g_{sx}, \; g_{sy}]^T = [\rho^2 \cos 2\varphi, \; \rho^2 \sin 2\varphi]^T = [g_x^2 - g_y^2, \; 2 g_x g_y]^T    (9)

and then averaged over a local neighborhood window W:

[\bar{g}_{sx}, \; \bar{g}_{sy}]^T = [\sum_W g_{sx}, \; \sum_W g_{sy}]^T = [\sum_W (g_x^2 - g_y^2), \; \sum_W 2 g_x g_y]^T    (10)

Then, \bar{\varphi} and \bar{\rho} are defined by:

\bar{\varphi} = \tan^{-1}(\bar{g}_{sy} / \bar{g}_{sx}), \quad \bar{\varphi} \in [-\pi, \pi)    (11)

\bar{\rho} = \sum_W (g_{sx}^2 + g_{sy}^2)^{0.5} = \sum_W \rho^2    (12)
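A hedged sketch of the descriptor follows. It simplifies the computation by summing the squared gradients directly over each patch (rather than first averaging them per pixel over a window W), which keeps the two key ingredients described above: doubled angles and squared-magnitude voting. The patch size and the bin count are illustrative choices:

```python
import numpy as np

def haog_descriptor(gray, patch=16, bins=7):
    """Concatenated orientation histograms with squared-magnitude voting."""
    gy, gx = np.gradient(gray.astype(float))    # image gradients (Eq. 6)
    gsx = gx * gx - gy * gy                     # rho^2 * cos(2*phi)  (Eq. 9)
    gsy = 2.0 * gx * gy                         # rho^2 * sin(2*phi)  (Eq. 9)
    rho2 = gx * gx + gy * gy                    # squared gradient magnitude
    H, W = gray.shape
    hists = []
    for y in range(0, H - patch + 1, patch):
        for x in range(0, W - patch + 1, patch):
            # Doubled-angle orientation in [-pi, pi), per patch (Eq. 11).
            phi = np.arctan2(gsy[y:y+patch, x:x+patch], gsx[y:y+patch, x:x+patch])
            w = rho2[y:y+patch, x:x+patch]      # votes weighted by rho^2 (Eq. 12)
            hist, _ = np.histogram(phi, bins=bins, range=(-np.pi, np.pi), weights=w)
            hists.append(hist)
    return np.concatenate(hists)
```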
4. SYMMETRIC FILLING
In [9], the authors use depth and RGB images to perform face recognition. The RGB image goes through the Discriminant Color Space (DCS) transform before taking part in the method. Since the image has three channels, they are stacked. The depth map used in that work is based on the cloud of points returned by the Kinect. For each cloud of points, the pose is corrected and Symmetric Filling is applied. Symmetric Filling is a process in which each point of a mirrored cloud of points is compared with the closest point in the original cloud. If the Euclidean distance between them is smaller than a threshold, the mirrored point is added to the original cloud of points. Due to the symmetry of the human face, this strategy tends to improve the quality of the data generated by the Kinect sensor. In that work, the classification of the samples is carried out with the Sparse Representation Classifier (SRC).

5. PROPOSED METHOD
In this work we propose the fusion of the 3DLBP method with the Histogram of Averaged Oriented Gradients (HAOG) face descriptor method for face recognition when the Kinect is used as the 3D face scanner. Figure 5 shows a diagram of the proposed method.

For face normalization, the depth maps generated by the Kinect have a sphere of radius R cropped centered at the nose tip. This is the only pre-processing applied to the depth images generated by the Kinect. For feature extraction, the cropped area is divided into 8x8 small regions and the 3DLBP operator is applied to each region. The 3DLBP uses a 3x3 neighborhood. After extracting the operator for all neighborhoods in a region, the values are normalized and a histogram with 14 bins is extracted. Since each neighborhood yields four layers, this is done for each of them. The final face descriptor is the concatenation of the histograms of all small regions. Figure 6 illustrates the feature extraction with 3DLBP. The HAOG is applied to the same regions as the 3DLBP operator; the difference is that it generates a histogram with seven bins. For the classification stage, a Support Vector Machine (SVM) classifier is used.
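The block-wise extraction just described can be sketched as follows, reusing the dlbp3_codes function from the earlier 3DLBP sketch (the 8x8 grid, the 14-bin histograms per layer, and the normalization follow the description above; the seven-bin HAOG histograms of the same regions would be appended analogously). This is an illustration, not the authors' code:

```python
import numpy as np

def dlbp3_face_descriptor(depth, grid=8, bins=14):
    """Concatenate per-region, per-layer 3DLBP histograms of a cropped depth map."""
    H, W = depth.shape
    rh, rw = H // grid, W // grid
    feats = []
    for gy in range(grid):
        for gx in range(grid):
            region = depth[gy*rh:(gy+1)*rh, gx*rw:(gx+1)*rw]
            layer_codes = [[], [], [], []]
            for y in range(1, region.shape[0] - 1):
                for x in range(1, region.shape[1] - 1):
                    for layer, code in enumerate(dlbp3_codes(region, y, x)):
                        layer_codes[layer].append(code)
            for codes in layer_codes:                       # one histogram per layer
                hist, _ = np.histogram(codes, bins=bins, range=(0, 256))
                feats.append(hist / max(len(codes), 1))     # normalized histogram
    return np.concatenate(feats)
```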

Figure 5: Diagram of the proposed method. The fusion occurs at the score level.

For the classification, the gallery is the training set and the probe is the test set. Each image in the probe receives a probability of being subject n, and the probe identity is assigned to the class with the highest probability. The fusion between 3DLBP and HAOG is made at the score level. The final classification score is given by:

FS = 3DLBP_SC \cdot w_1 + HAOG_SC \cdot w_2    (13)

in which FS is the final score, 3DLBP_SC is the score from the 3DLBP method, w_1 is the weight for the 3DLBP score, HAOG_SC is the score from the HAOG classification, and w_2 is the weight for the HAOG classification.

Figure 6: Overview of the face feature extraction using the 3DLBP method. The face is divided into 8x8 windows; for each window the 3DLBP is applied to all 3x3 neighborhoods. The histograms of all windows are concatenated to form the image descriptor.

5.1 Generation of new depth maps
In our work, in order to increase the robustness of 3D face recognition from Kinect data, we generate another depth map from the cloud of points output by the Kinect. For the generation of the new depth maps, a circle with radius R is cropped centered at the nose. After the cropping stage, each face goes through the symmetric filling process. Then, the face is fitted to a smooth surface using approximation rather than interpolation; this is done with an open source code written in MATLAB. The resulting face is a 100x100 matrix that is saved to a .bmp file. Since the image file cannot contain decimal values, all depth values are rounded. Figure 12(b) shows an example of a depth map generated by this method.
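A rough sketch of this regeneration step is given below. It is not the authors' MATLAB pipeline: the crop radius and the symmetric-filling threshold are illustrative values (the text does not fix them here), and scipy's griddata is used as a simple stand-in for the smooth-surface approximation:

```python
import numpy as np
from scipy.interpolate import griddata
from scipy.spatial import cKDTree

def regenerate_depth(cloud, nose, radius=0.09, thresh=0.005, size=100):
    """cloud: Nx3 points in meters; nose: nose-tip position. Returns a size x size depth map."""
    nose = np.asarray(nose, dtype=float)
    cloud = cloud[np.linalg.norm(cloud - nose, axis=1) <= radius]     # crop around the nose tip
    # Mirror about the vertical plane through the nose tip (face symmetry).
    mirrored = cloud * np.array([-1.0, 1.0, 1.0]) + np.array([2.0 * nose[0], 0.0, 0.0])
    dist, _ = cKDTree(cloud).query(mirrored)
    filled = np.vstack([cloud, mirrored[dist < thresh]])              # symmetric filling
    xs = np.linspace(filled[:, 0].min(), filled[:, 0].max(), size)
    ys = np.linspace(filled[:, 1].min(), filled[:, 1].max(), size)
    gx, gy = np.meshgrid(xs, ys)
    depth = griddata(filled[:, :2], filled[:, 2], (gx, gy), method='linear')
    return np.nan_to_num(depth)                                       # 100 x 100 depth image
```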

6. EXPERIMENTAL RESULTS
In order to assess the proposed method, two experiments were carried out on the EURECOM Face Dataset [5, 11]. These experiments, as well as the database, are described in this section.

6.1 EURECOM Face Dataset
The EURECOM Face Dataset [5, 11] is composed of 52 subjects, 14 females and 38 males. There are two sets of images, captured within an interval of 15 days, and the images in each set have nine types of variation: neutral, smiling, open mouth, illumination variation, occlusion of the eyes, occlusion of the mouth, occlusion of half of the face, and left and right profiles. Figure 7 shows the images of a subject in the EURECOM Kinect Face dataset. The images have three different sources of information: depth (bitmap depth images and text files with all the values sensed by the Kinect), RGB images, and 3D .obj files. For each sample and for each format there are annotations for the left and right eyes, the tip of the nose, the left and right corners of the mouth, and the chin.

Figure 7: Images of a subject in the EURECOM Kinect Face dataset.

6.2 Experiment 1
The first experiment was carried out to compare the results obtained by 3DLBP, HAOG, and their fusion against the results obtained by [3]. In this experiment, only the open mouth, smile, neutral, and light on images from both sessions of the EURECOM Kinect Face Dataset were used. Those images are divided into two groups: gallery and probe. The gallery was composed of 75% of the subject images available in the sets open mouth, smile, and light on. The neutral set of faces was used as probe. Figure 8 shows an example of the two sets, probe and gallery, for a subject in the EURECOM Face dataset.

Figure 8: Gallery and probe for a subject in the EURECOM Kinect Face dataset.

Figure 9 shows the Cumulative Match Characteristic (CMC) curves obtained on the EURECOM Face Dataset. The fusion of 3DLBP and HAOG has the highest identification accuracy at Ranks 1 and 2. The 3DLBP method has the highest identification accuracy at Ranks 3 and 4; at Rank 5 it ties with the fusion. The HAOG method starts with the worst performance but passes HOG at Rank 5. Figure 10 shows the DET (Detection Error Tradeoff) curves for this experiment. One can observe that the fusion of the 3DLBP and HAOG methods obtained the best results. The results shown in this paper for all methods, except 3DLBP, HAOG, and their fusion, are the ones published in [3].

Figure 9: CMC curves obtained on the EURECOM Face dataset.

Figure 10: DET curves for 3DLBP, HAOG, and their fusion, obtained on the EURECOM Face dataset.

6.3 Experiment 2
The second experiment was carried out to assess the robustness of the face recognition method when dealing with face obstruction. The main goal was to observe the performance gain when using new depth maps generated from the cloud of points after the symmetric filling process. The 3DLBP, HAOG, and their fusion were evaluated on two subsets of the database: one obtained with the original depth maps output by the Kinect and the other with the new depth maps obtained from the cloud of points output by the Kinect.

This experiment has a change in the probe: the face images with the right side occluded were added to the probe set. Another difference is that not only the original depth maps from the Kinect were used, but also the depth maps generated from the cloud of points of each face. Figure 11 shows the type of face included in the probe set and Figure 12 illustrates an example of the original and the generated depth maps.

Figure 11: Example of a face with occlusion used in the second experiment.

Figure 12: The two types of data used in the second experiment: a) depth map output directly by the Kinect; and b) depth map generated from the cloud of points after the symmetric filling.

Figure 13 shows the DET curves comparing 3DLBP, HAOG, and their fusion on two sets of images: the original Kinect data and the new depth maps based on the cloud of points. One can see that the depth data output by the Kinect obtained better performance than the depth maps generated from the cloud of points. However, the fusion of 3DLBP and HAOG with the regenerated depth maps is not far behind. The images used in this experiment are the same as in the previous one.

Figure 13: DET curves of two experiments, one using the original depth data and the other using the regenerated depth maps. The regenerated depth maps are based on point clouds after the symmetric filling process.

Figure 14 shows the same comparison as the previous experiment, but with an image with face obstruction included in the probe. The fusion of 3DLBP and HAOG has the best result when using the new depth images, the other methods have similar performance, and only HAOG with the original data has a very low performance.

Figure 14: DET curves of two experiments, one using the original depth data and the other using the regenerated depth maps, with an image with half the face obstructed included in the probe. The regenerated depth maps are based on point clouds after the symmetric filling process.

7. CONCLUSIONS
From the experimental results presented in Section 6, we can conclude that:

1. The 3DLBP method, besides being effective for 3D face recognition using data captured with high resolution 3D sensors, can also be applied to 3D face recognition using data captured by low cost and low resolution sensors, like the Kinect;

2. The 3DLBP method, which uses only depth maps, presented better results than the Entropy and Salience map method [3], which uses depth maps and RGB information;

3. Although the HAOG descriptor has the lowest performance in several cases, when fused with 3DLBP it improves the overall performance, becoming a good alternative to increase face recognition performance;

4. The Symmetric Filling method helps to increase face recognition performance when dealing with face obstruction;

5. The results obtained in this work corroborate that the data generated by the Kinect, even though of lower resolution when compared with traditional 3D sensors, are discriminative enough to allow correct face recognition among different subjects.

It is important to emphasize that the 3DLBP and HAOG methods rely only on the depth maps and do not use the RGB images at any moment. This is important because the depth information can be reconstructed from the cloud of points provided by the 3D sensors. Therefore, the method can be applied even to non-canonical face images; that is, the method is robust to images with variations in pose and with some degree of obstruction. Besides, since depth data is less sensitive to illumination changes than RGB, the 3DLBP and HAOG methods are more robust to this kind of problem.

8. ACKNOWLEDGMENTS
We thank the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) for the financial support.

9. REFERENCES
[1] R. Bolle and S. Pankanti. Biometrics: Personal Identification in Networked Society. Kluwer Academic Publishers, Norwell, MA, USA.
[2] H. K. Galoogahi and T. Sim. Inter-modality face sketch recognition. In 2012 IEEE International Conference on Multimedia and Expo, pages 224-229, 2012.
[3] G. Goswami, S. Bharadwaj, M. Vatsa, and R. Singh. On RGB-D face recognition using Kinect. In International Conference on Biometrics: Theory, Applications and Systems, 2013.
[4] Y. Huang, Y. Wang, and T. Tan. Combining statistics of geometrical and correlative features for 3D face recognition. In Proceedings of the British Machine Vision Conference. BMVA Press, 2006.
[5] T. Huynh, R. Min, and J.-L. Dugelay. An efficient LBP-based descriptor for facial depth images applied to gender recognition using RGB-D face data. In ACCV 2012 Workshop on Computer Vision with Local Binary Pattern Variants, Daejeon, Korea, November 2012.
[6] A. K. Jain and D. Maltoni. Handbook of Fingerprint Recognition. Springer-Verlag New York, Inc., Secaucus, NJ, USA, 2003.
[7] A. K. Jain, A. Ross, and S. Prabhakar. An introduction to biometric recognition. IEEE Transactions on Circuits and Systems for Video Technology, 14(1):4-20, 2004.
[8] M. Kass and A. Witkin. Analyzing oriented patterns. Computer Vision, Graphics, and Image Processing, 37(3):362-385, 1987.
[9] B. Li, A. Mian, W. Liu, and A. Krishna. Using Kinect for face recognition under varying poses, expressions, illumination and disguise. In Applications of Computer Vision (WACV), 2013 IEEE Workshop on, 2013.
[10] S. Z. Li and A. K. Jain, editors. Handbook of Face Recognition, 2nd Edition. Springer, 2011.
[11] R. Min, N. Kose, and J.-L. Dugelay. KinectFaceDB: A Kinect database for face recognition. IEEE Transactions on Systems, Man, and Cybernetics: Systems, PP(99), 2014.
[12] T. Ojala, M. Pietikäinen, and D. Harwood. A comparative study of texture measures with classification based on featured distributions. Pattern Recognition, 29(1):51-59, 1996.
[13] T. Ojala, M. Pietikäinen, and T. Mäenpää. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(7):971-987, 2002.
[14] Y. Rodriguez and S. Marcel. Face authentication using adapted local binary pattern histograms. In 9th European Conference on Computer Vision (ECCV), 2006.
[15] A. Shpunt and Z. Zalevsky. Depth-varying light fields for three dimensional sensing, 2008. US Patent App. 11/74,068.
[16] J. Stowers, M. Hayes, and A. Bainbridge-Smith. Altitude control of a quadrotor helicopter using depth map from Microsoft Kinect sensor. In Mechatronics (ICM), 2011 IEEE International Conference on, April 2011.
[17] L. Wolf, T. Hassner, and Y. Taigman. Descriptor based methods in the wild. In Faces in Real-Life Images Workshop at ECCV, 2008.
[18] M. Zollhöfer, M. Martinek, G. Greiner, M. Stamminger, and J. Süßmuth. Automatic reconstruction of personalized avatars from 3D face scans. Computer Animation and Virtual Worlds, 22(2-3):195-202, April 2011.


More information

Fingerprint Authentication for SIS-based Healthcare Systems

Fingerprint Authentication for SIS-based Healthcare Systems Fingerprint Authentication for SIS-based Healthcare Systems Project Report Introduction In many applications there is need for access control on certain sensitive data. This is especially true when it

More information

TEXTURE CLASSIFICATION METHODS: A REVIEW

TEXTURE CLASSIFICATION METHODS: A REVIEW TEXTURE CLASSIFICATION METHODS: A REVIEW Ms. Sonal B. Bhandare Prof. Dr. S. M. Kamalapur M.E. Student Associate Professor Deparment of Computer Engineering, Deparment of Computer Engineering, K. K. Wagh

More information

Gurmeet Kaur 1, Parikshit 2, Dr. Chander Kant 3 1 M.tech Scholar, Assistant Professor 2, 3

Gurmeet Kaur 1, Parikshit 2, Dr. Chander Kant 3 1 M.tech Scholar, Assistant Professor 2, 3 Volume 8 Issue 2 March 2017 - Sept 2017 pp. 72-80 available online at www.csjournals.com A Novel Approach to Improve the Biometric Security using Liveness Detection Gurmeet Kaur 1, Parikshit 2, Dr. Chander

More information

International Journal of Computer Techniques Volume 4 Issue 1, Jan Feb 2017

International Journal of Computer Techniques Volume 4 Issue 1, Jan Feb 2017 RESEARCH ARTICLE OPEN ACCESS Facial expression recognition based on completed LBP Zicheng Lin 1, Yuanliang Huang 2 1 (College of Science and Engineering, Jinan University, Guangzhou, PR China) 2 (Institute

More information

High-Order Circular Derivative Pattern for Image Representation and Recognition

High-Order Circular Derivative Pattern for Image Representation and Recognition High-Order Circular Derivative Pattern for Image epresentation and ecognition Author Zhao Sanqiang Gao Yongsheng Caelli Terry Published 1 Conference Title Proceedings of the th International Conference

More information

COMPUTATIONALLY EFFICIENT SERIAL COMBINATION OF ROTATION-INVARIANT AND ROTATION COMPENSATING IRIS RECOGNITION ALGORITHMS

COMPUTATIONALLY EFFICIENT SERIAL COMBINATION OF ROTATION-INVARIANT AND ROTATION COMPENSATING IRIS RECOGNITION ALGORITHMS COMPUTATIONALLY EFFICIENT SERIAL COMBINATION OF ROTATION-INVARIANT AND ROTATION COMPENSATING IRIS RECOGNITION ALGORITHMS Mario Konrad, Herbert Stögner School of Communication Engineering for IT, Carinthia

More information

Computationally Efficient Serial Combination of Rotation-invariant and Rotation Compensating Iris Recognition Algorithms

Computationally Efficient Serial Combination of Rotation-invariant and Rotation Compensating Iris Recognition Algorithms Computationally Efficient Serial Combination of Rotation-invariant and Rotation Compensating Iris Recognition Algorithms Andreas Uhl Department of Computer Sciences University of Salzburg, Austria uhl@cosy.sbg.ac.at

More information

Robust Facial Expression Classification Using Shape and Appearance Features

Robust Facial Expression Classification Using Shape and Appearance Features Robust Facial Expression Classification Using Shape and Appearance Features S L Happy and Aurobinda Routray Department of Electrical Engineering, Indian Institute of Technology Kharagpur, India Abstract

More information

Spoofing detection on facial images recognition using LBP and GLCM combination

Spoofing detection on facial images recognition using LBP and GLCM combination Journal of Physics: Conference Series PAPER OPEN ACCESS Spoofing detection on facial images recognition using LBP and GLCM combination To cite this article: F Sthevanie and K N Ramadhani 2018 J. Phys.:

More information

arxiv: v3 [cs.cv] 3 Oct 2012

arxiv: v3 [cs.cv] 3 Oct 2012 Combined Descriptors in Spatial Pyramid Domain for Image Classification Junlin Hu and Ping Guo arxiv:1210.0386v3 [cs.cv] 3 Oct 2012 Image Processing and Pattern Recognition Laboratory Beijing Normal University,

More information

CAP 5415 Computer Vision Fall 2012

CAP 5415 Computer Vision Fall 2012 CAP 5415 Computer Vision Fall 01 Dr. Mubarak Shah Univ. of Central Florida Office 47-F HEC Lecture-5 SIFT: David Lowe, UBC SIFT - Key Point Extraction Stands for scale invariant feature transform Patented

More information

Spatial Frequency Domain Methods for Face and Iris Recognition

Spatial Frequency Domain Methods for Face and Iris Recognition Spatial Frequency Domain Methods for Face and Iris Recognition Dept. of Electrical and Computer Engineering Carnegie Mellon University Pittsburgh, PA 15213 e-mail: Kumar@ece.cmu.edu Tel.: (412) 268-3026

More information

A Novel Image Super-resolution Reconstruction Algorithm based on Modified Sparse Representation

A Novel Image Super-resolution Reconstruction Algorithm based on Modified Sparse Representation , pp.162-167 http://dx.doi.org/10.14257/astl.2016.138.33 A Novel Image Super-resolution Reconstruction Algorithm based on Modified Sparse Representation Liqiang Hu, Chaofeng He Shijiazhuang Tiedao University,

More information

A Fast and Accurate Eyelids and Eyelashes Detection Approach for Iris Segmentation

A Fast and Accurate Eyelids and Eyelashes Detection Approach for Iris Segmentation A Fast and Accurate Eyelids and Eyelashes Detection Approach for Iris Segmentation Walid Aydi, Lotfi Kamoun, Nouri Masmoudi Department of Electrical National Engineering School of Sfax Sfax University

More information

Morphable Displacement Field Based Image Matching for Face Recognition across Pose

Morphable Displacement Field Based Image Matching for Face Recognition across Pose Morphable Displacement Field Based Image Matching for Face Recognition across Pose Speaker: Iacopo Masi Authors: Shaoxin Li Xin Liu Xiujuan Chai Haihong Zhang Shihong Lao Shiguang Shan Work presented as

More information

Critique: Efficient Iris Recognition by Characterizing Key Local Variations

Critique: Efficient Iris Recognition by Characterizing Key Local Variations Critique: Efficient Iris Recognition by Characterizing Key Local Variations Authors: L. Ma, T. Tan, Y. Wang, D. Zhang Published: IEEE Transactions on Image Processing, Vol. 13, No. 6 Critique By: Christopher

More information