Common Feature Discriminant Analysis for Matching Infrared Face Images to Optical Face Images

Zhifeng Li, Senior Member, IEEE, Dihong Gong, Yu Qiao, Senior Member, IEEE, and Dacheng Tao, Senior Member, IEEE

Abstract— In biometrics research and industry, it is critical yet challenging to match infrared face images to optical face images. The major difficulty lies in the fact that a great discrepancy exists between an infrared face image and the corresponding optical face image, because they are captured by different devices (an optical imaging device and an infrared imaging device). This paper presents a new approach called common feature discriminant analysis to reduce this discrepancy and improve optical-infrared face recognition performance. In this approach, a new learning-based face descriptor is first proposed to extract the common features from heterogeneous face images (infrared face images and optical face images), and an effective matching method is then applied to the resulting features to obtain the final decision. Extensive experiments are conducted on two large and challenging optical-infrared face datasets to show the superiority of our approach over the state-of-the-art.

Index Terms— Face recognition, infrared face, face descriptor.

I. INTRODUCTION

TRADITIONAL optical imaging devices require appropriate illumination conditions to work properly, which are difficult to achieve in practical face recognition applications. To combat low illumination at night or indoors, infrared imaging devices have been widely applied in automatic face recognition (AFR) systems. The task of an infrared-based AFR system is to match a probe face image taken with an infrared imaging device to a gallery of face images taken with an optical imaging device, which is considered an important application of heterogeneous face recognition [1] (also known as cross-modality face recognition).

Manuscript received July 4, 2013; revised November 22, 2013 and March 5, 2014; accepted March 10, 2014. Date of publication April 8, 2014; date of current version April 28, 2014. This work was supported in part by the Natural Science Foundation of China, in part by the Shenzhen Basic Research Program, in part by the 100 Talents Program, Chinese Academy of Sciences, in part by the Guangdong Innovative Research Team Program, and in part by the Australian Research Council. The associate editor coordinating the review of this manuscript and approving it for publication was Prof. Bulent Sankur.

Z. Li, D. Gong, and Y. Qiao are with the Shenzhen Key Laboratory of Computer Vision and Pattern Recognition, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China (e-mail: zhifeng.li@siat.ac.cn; dh.gong@siat.ac.cn; yu.qiao@siat.ac.cn). D. Tao is with the Centre for Quantum Computation and Intelligent Systems, Faculty of Engineering and Information Technology, University of Technology, Sydney, NSW 2007, Australia (e-mail: dacheng.tao@uts.edu.au).

Fig. 1. Examples of optical and corresponding infrared face images for (a) the HFB dataset and (b) the CUHK VIS-NIR dataset.
The most challenging issue in heterogeneous face recognition is that face images of the same person taken with different devices may be mismatched due to the great discrepancy between the two image modalities (optical and infrared), which is referred to as the modality gap. As illustrated in Figure 1, the infrared photos are usually blurred, have low contrast, and exhibit a significantly different gray-level distribution compared to the optical photos. To combat this large modality gap, a variety of algorithms have been proposed in the literature, which can be grouped into the following three categories.

The first category of approaches converts an image from one modality to the other by synthesizing a pseudo-image from the query image, so that matching can be done within a single modality. For instance, [2] applies a holistic mapping to convert a photo into a corresponding sketch, and in [3]–[5] the authors used local patch-based mappings to convert images from one modality to the other for sketch-photo recognition. In [6], Xiong et al. synthesized VIS face images from NIR face images with pose rectification.

The second category of approaches designs a representation that is insensitive to the modality of the images. For example, Klare et al. [7] used SIFT feature descriptors and multi-scale local binary patterns to represent both sketch and photo images. Zhang et al. [8] proposed a learning-based algorithm to capture discriminative local face structures and effectively match photos and sketches. In [9], Liu et al. designed a multi-scale common feature descriptor to combat the large intra-class difference incurred by the modality (VIS-NIR) difference.

The third category of approaches compares the heterogeneous images in a common subspace where the modality difference is believed to be minimized [10]–[19].

For example, Tenenbaum et al. [10] applied the Bilinear Model (BLM) with Singular Value Decomposition (SVD) to develop a common content space (associated with identity) for a set of different styles (corresponding to modalities). In [11], Yi et al. used the Canonical Correlation Analysis (CCA) technique to construct a common subspace in which the correlations between infrared and optical images are maximized. In [12], Li et al. applied CCA to cross-pose face recognition. In [13], Sharma et al. applied the Partial Least Squares (PLS) method to derive a linear subspace in which cross-modality images are highly correlated, while at the same time preserving variances more effectively than the earlier CCA method. In [16], Lei et al. proposed an effective subspace learning framework called coupled discriminant analysis for heterogeneous face recognition. In [17], a generic HFR framework was proposed in which both probe and gallery images are represented in terms of non-linear kernel similarities to a collection of prototype face images to enhance heterogeneous face recognition accuracy.

Motivated by the challenging task of VIS-NIR face recognition, in this paper we propose a new approach called common feature discriminant analysis (CFDA) for matching infrared face images to optical face images. Although we only demonstrate the application to VIS-NIR face recognition, the basic idea behind our approach applies to general heterogeneous face recognition, so it can easily be extended to many other similar applications, such as matching sketches to photos or matching thermal images to photos. In the CFDA approach, a new learning-based face descriptor is first proposed to reduce the modality discrepancy in the feature extraction stage. The key idea is to learn an encoding mechanism that turns the optical and infrared face images into encoded images so as to reduce the modality gap between them. The proposed descriptor maximizes the correlation of the encoded images of the same subject, and is thus beneficial to recognition performance. Extensive experiments are conducted on two large and challenging optical-infrared face datasets to show the significant performance improvement of our new approach over the state-of-the-art.

The major contribution of this work lies in the proposed feature descriptor, which is well suited to matching infrared face images to optical face images. From the experimental results, we can clearly see that our feature descriptor significantly outperforms the existing ones, regardless of which matching scheme is adopted. Moreover, this work focuses on feature representation, so it can easily be combined with existing classification algorithms to produce better performance; hence it is very easy to implement.

The rest of this paper is organized as follows. In Section II, we describe the proposed CFDA approach for optical-infrared face recognition. Experimental results and analysis are given in Section III. Finally, we conclude this paper in Section IV.

II. PROPOSED APPROACH

In this section, we propose a new approach called common feature discriminant analysis (CFDA) for optical-infrared face recognition.
In the CFDA approach, a new learning-based feature descriptor is first developed to learn a set of optimal hyperplanes that quantize the continuous vector space into discrete partitions for common feature extraction, and an effective discriminant analysis technique is then applied for feature classification. The details are elaborated in the following subsections.

A. Our Descriptor

Vector quantization is an effective technique for mapping vectors in a continuous space into discrete code representations, and has been widely used to create discrete image representations for object recognition. An image can be turned into an encoded image by converting each pixel into a specific code using the vector quantization technique. Various algorithms, such as mean shift [20], k-means, random projection trees [21], [22], and random forests [23], [24], have been proposed to quantize a continuous space into discrete partition cells for vector quantization. In this section, we design a hyperplane-based encoding method for effective feature representation of heterogeneous face images.

Mathematically, a hyperplane divides a feature space $\mathbb{R}^{D \times 1}$ into two partitions, namely the positive part and the negative part. A vector $\vec{x} \in \mathbb{R}^{D \times 1}$ is classified to the negative part if

$$h(\vec{x}) = \vec{w}^T \vec{x} + b \le 0. \tag{1}$$

Otherwise, the vector is classified to the positive part. Here $\vec{w}$ is a unit normal vector and $b$ is the offset.

We encode face images by turning each pixel into a decimal code representing the spatial partition into which the pixel falls, as illustrated in Figure 2. Each pixel corresponds to a point in $\mathbb{R}^{D \times 1}$, and thus any face image can be represented as a set of points $I = \{\vec{x}_1, \ldots, \vec{x}_M\}$, where $M = H \times W$ is the total number of pixels in the image. The detailed procedure for converting a pixel into its corresponding vector is elaborated as follows. Suppose we are given $K$ pairs of training persons, represented as $\chi = \{(\vec{x}_i^1, \vec{x}_i^2)\}$, $i = 1, \ldots, N$, where superscript 1 denotes optical faces, superscript 2 denotes infrared faces, subscript $i$ indicates the $i$-th pixel vector, and $N = H \times W \times K$ is the total number of vector pairs. We then compute the projections onto the corresponding hyperplanes using (1), denoted as

$$\gamma = \{(y_i^1, y_i^2)\}, \quad i = 1, \ldots, N, \tag{2}$$

where $y_i^1 = h_1(\vec{x}_i^1)$ and $y_i^2 = h_2(\vec{x}_i^2)$. Denote $X_1 = [\vec{x}_1^1, \ldots, \vec{x}_N^1]$, $X_2 = [\vec{x}_1^2, \ldots, \vec{x}_N^2]$, $Y_1 = [y_1^1, \ldots, y_N^1]$, and $Y_2 = [y_1^2, \ldots, y_N^2]$. Since the vector pairs are associated with the same subjects, it is expected that the projections onto their hyperplanes will be highly correlated. Formally, we define a Kullback-Leibler (KL) divergence [25], [26] to measure the correlation:

$$\mathrm{KL}\big(P(Y_1, Y_2) \,\|\, P(Y_1)\,P(Y_2)\big). \tag{3}$$

The KL divergence measures the distance between the joint distribution and the distribution under the independence assumption. If the pairs $(Y_1, Y_2)$ are statistically independent, the KL divergence is zero.
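To make Eqs. (1)–(3) concrete, the following is a minimal NumPy sketch (not the authors' code) that projects paired optical and infrared pixel vectors onto a candidate pair of hyperplanes and scores the pair with the KL divergence of Eq. (3), evaluated in closed form under the joint-Gaussian assumption introduced below in Eq. (4). The array shapes, the synthetic data, and the function names are illustrative assumptions.

```python
import numpy as np

def projections(X1, X2, w1, b1, w2, b2):
    """Eqs. (1)-(2): signed distances of paired pixel vectors to their hyperplanes.

    X1, X2 : (D, N) matrices of optical / infrared pixel vectors (columns are pairs).
    w1, w2 : (D,) unit normal vectors; b1, b2 : scalar offsets.
    """
    Y1 = w1 @ X1 + b1
    Y2 = w2 @ X2 + b2
    return Y1, Y2

def gaussian_kl_score(Y1, Y2, eps=1e-12):
    """Eq. (3) under a joint-Gaussian assumption: the mutual information of a
    bivariate Gaussian, 0.5 * ln(var1 * var2 / det(Sigma))."""
    cov = np.cov(np.vstack([Y1, Y2]))          # 2x2 joint covariance matrix
    det = np.linalg.det(cov)
    return 0.5 * np.log(cov[0, 0] * cov[1, 1] / (det + eps))

# Toy usage with correlated synthetic projections (illustrative only).
rng = np.random.default_rng(0)
D, N = 10, 5000
X1 = rng.normal(size=(D, N))
X2 = 0.8 * X1 + 0.2 * rng.normal(size=(D, N))   # paired, correlated "modality"
w = np.ones(D) / np.sqrt(D)
Y1, Y2 = projections(X1, X2, w, 0.0, w, 0.0)
print("KL score:", gaussian_kl_score(Y1, Y2))   # larger means stronger correlation
```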

Fig. 2. The pipeline for extracting the feature descriptor. For each pixel, we first sample its five d-neighbor (radius = d) pixels in each direction (the figure shows one of the four directions, with blue arrows), and then subtract the center pixel value. Finally, the centered vector is normalized to unit L2 norm to form the pixel vector associated with that direction. Each pixel is associated with four vectors, forming four sets of training vectors that are used to train four encoders. Each encoder consists of two sets of mutually orthogonal hyperplanes (we only show one for illustration), which divide the vector space into four partitions. Vectors of each direction are encoded into a 2-bit value according to the partition in which the vector lies (i.e., 00 for the first partition, 01 for the second partition, and so forth). Finally, the four 2-bit values are concatenated to form an 8-bit value that is converted into a decimal value (from 0 to 255) as the code.

The greater the correlation between $(Y_1, Y_2)$, the more the joint distribution diverges from the independence assumption and the larger the KL value. Our aim is to derive two hyperplanes $H_1$ and $H_2$ such that the KL divergence is maximized. Observing that the projections of the optical vectors and infrared vectors onto their hyperplanes approximately follow a joint Gaussian distribution (empirically, we have verified that the projections are approximately Gaussian distributed), the KL divergence can be simplified as

$$\mathrm{KL} = \frac{1}{2} \ln \frac{\mathrm{cov}(Y_1, Y_1)\,\mathrm{cov}(Y_2, Y_2)}{|\Sigma|}, \tag{4}$$

where $|\Sigma|$ is the determinant of the covariance matrix given by

$$\Sigma = \begin{bmatrix} \mathrm{cov}(Y_1, Y_1) & \mathrm{cov}(Y_1, Y_2) \\ \mathrm{cov}(Y_2, Y_1) & \mathrm{cov}(Y_2, Y_2) \end{bmatrix}. \tag{5}$$

Substituting (5) into (4), we obtain the equivalent optimization problem

$$\max_{\vec{w}_1, \vec{w}_2} \frac{\vec{w}_1^T \Sigma_{1,2} \vec{w}_2}{\sqrt{\vec{w}_1^T \Sigma_{1,1} \vec{w}_1 \;\; \vec{w}_2^T \Sigma_{2,2} \vec{w}_2}}, \quad \text{s.t. } \|\vec{w}_1\| = 1, \ \|\vec{w}_2\| = 1, \tag{6}$$

where $\Sigma_{1,2} = X_1 X_2^T$, $\Sigma_{1,1} = X_1 X_1^T$, and $\Sigma_{2,2} = X_2 X_2^T$, and $X_1$ and $X_2$ are the mean-centered data matrices whose columns are the centered observations. Equation (6) is the CCA problem [27], [28], which can be solved with the following generalized eigen-decomposition:

$$C_O \begin{bmatrix} \vec{w}_1 \\ \vec{w}_2 \end{bmatrix} = \lambda\, C_D \begin{bmatrix} \vec{w}_1 \\ \vec{w}_2 \end{bmatrix}, \quad \text{where } C_D = \begin{bmatrix} \Sigma_{1,1} & 0 \\ 0 & \Sigma_{2,2} \end{bmatrix} \text{ and } C_O = \begin{bmatrix} 0 & \Sigma_{1,2} \\ \Sigma_{1,2}^T & 0 \end{bmatrix}.$$

In solving the generalized eigen-decomposition problem, we take the eigenvector corresponding to the largest eigenvalue, since the eigenvalue $\lambda$ corresponds to the objective value we want to maximize. The solution of (6) provides the normal vectors of the optimal hyperplanes that maximize the correlation between the projections of the optical and infrared feature vectors.

To determine the offset values $b_1$ and $b_2$, we apply the KL criterion again, this time on the quantized values of the feature vectors. Formally,

$$\hat{Y}_1 = \mathrm{sign}(Y_1), \tag{7}$$
$$\hat{Y}_2 = \mathrm{sign}(Y_2), \tag{8}$$

and $b_1$ and $b_2$ are solved in a brute-force manner: choose $(b_1, b_2)$, with $b_1 \in Y_1$ and $b_2 \in Y_2$, such that the following KL value is maximized:

$$\mathrm{KL}\big(P(\hat{Y}_1, \hat{Y}_2) \,\|\, P(\hat{Y}_1)\,P(\hat{Y}_2)\big). \tag{9}$$

One thing to note is that the computational cost of finding $(b_1, b_2)$ in this brute-force manner is rather low. To show this, we compute the KL distance defined in (4) for each pair of $(b_1, b_2)$ taken uniformly from $b_1 \in [\min(Y_1), \max(Y_1)]$ and $b_2 \in [\min(Y_2), \max(Y_2)]$. We found that dividing the solution regions into 1000 intervals achieves a good approximation. For the computation of each KL distance, we only need to compute three variances, which costs $O(n^2)$, where $n$ is the number of training pixel vectors.
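The optimization of Eq. (6) and the offset search of Eqs. (7)–(9) can be sketched as follows. This is a hedged NumPy/SciPy illustration, not the authors' implementation: the ridge term, the thresholding convention (offset b = -threshold), and the grid resolution are assumptions made for the sketch.

```python
import numpy as np
from scipy.linalg import eigh

def cca_hyperplane_pair(X1, X2, ridge=1e-6):
    """Solve Eq. (6) as the generalized eigenproblem C_O w = lambda C_D w.

    X1, X2 : (D, N) mean-centered optical / infrared pixel-vector matrices
    (columns are paired samples). Returns unit normals w1, w2.
    """
    D = X1.shape[0]
    S11 = X1 @ X1.T + ridge * np.eye(D)   # Sigma_{1,1}, ridge added for stability
    S22 = X2 @ X2.T + ridge * np.eye(D)   # Sigma_{2,2}
    S12 = X1 @ X2.T                       # Sigma_{1,2}

    C_D = np.block([[S11, np.zeros((D, D))], [np.zeros((D, D)), S22]])
    C_O = np.block([[np.zeros((D, D)), S12], [S12.T, np.zeros((D, D))]])

    vals, vecs = eigh(C_O, C_D)           # symmetric-definite generalized eigenproblem
    w = vecs[:, np.argmax(vals)]          # eigenvector of the largest eigenvalue
    w1, w2 = w[:D], w[D:]
    return w1 / np.linalg.norm(w1), w2 / np.linalg.norm(w2)

def search_offsets(y1, y2, n_steps=200):
    """Brute-force search of Eqs. (7)-(9): sweep thresholds over the projection
    ranges and keep the pair maximizing the discrete KL between the joint
    distribution of the sign variables and the product of their marginals.

    y1, y2 : raw projections w1^T X1 and w2^T X2 (before adding any offset).
    The paper reports 1000 intervals; a smaller grid keeps this sketch fast.
    """
    def discrete_kl(s1, s2):
        joint, _, _ = np.histogram2d(s1, s2, bins=2)
        joint = joint / joint.sum()
        prod = np.outer(joint.sum(axis=1), joint.sum(axis=0))
        mask = joint > 0
        return float(np.sum(joint[mask] * np.log(joint[mask] / prod[mask])))

    best, best_b = -np.inf, (0.0, 0.0)
    for t1 in np.linspace(y1.min(), y1.max(), n_steps):
        s1 = np.where(y1 > t1, 1, 0)
        for t2 in np.linspace(y2.min(), y2.max(), n_steps):
            s2 = np.where(y2 > t2, 1, 0)
            kl = discrete_kl(s1, s2)
            if kl > best:
                best, best_b = kl, (-t1, -t2)   # h(x) = w^T x + b, so b = -threshold
    return best_b
```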
With the above algorithm we obtain a pair of hyperplanes $H_1$ and $H_2$. Note that more pairs of hyperplanes can be introduced to further divide the vector space if necessary: the $k$-th pair of hyperplanes is obtained by solving the same eigen-decomposition problem but taking the eigenvectors corresponding to the $k$-th largest eigenvalue.
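Since all hyperplane pairs come from the same eigen-decomposition, they can be extracted at once. The following hedged sketch (a self-contained variant of the previous one) keeps the top M generalized eigenvectors and encodes a pixel vector into an M-bit partition code; the function names and bit-packing convention are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def cca_hyperplane_pairs(X1, X2, M, ridge=1e-6):
    """Top-M hyperplane pairs: the k-th pair comes from the eigenvector of the
    k-th largest eigenvalue (the eigenvectors are mutually orthogonal under the
    C_D inner product in this formulation)."""
    D = X1.shape[0]
    S11 = X1 @ X1.T + ridge * np.eye(D)
    S22 = X2 @ X2.T + ridge * np.eye(D)
    S12 = X1 @ X2.T
    C_D = np.block([[S11, np.zeros((D, D))], [np.zeros((D, D)), S22]])
    C_O = np.block([[np.zeros((D, D)), S12], [S12.T, np.zeros((D, D))]])
    vals, vecs = eigh(C_O, C_D)
    top = vecs[:, np.argsort(vals)[::-1][:M]]          # (2D, M), best pair first
    W1, W2 = top[:D], top[D:]                          # columns are w_{1,k}, w_{2,k}
    W1 = W1 / np.linalg.norm(W1, axis=0, keepdims=True)
    W2 = W2 / np.linalg.norm(W2, axis=0, keepdims=True)
    return W1, W2

def encode(x, W, b):
    """Map a pixel vector to an M-bit integer: bit k is 1 on the positive side of
    hyperplane k (Eq. (1)), giving one of 2^M partition codes."""
    bits = (W.T @ x + b) > 0                           # (M,) booleans
    return int(np.dot(bits, 2 ** np.arange(len(bits))))
```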

Fig. 3. Illustration of encoded face images from (a) the CUHK VIS-NIR face dataset and (b) the HFB VIS-NIR face dataset. The first row shows the original optical face images together with their encoded images at different radii, and the second row shows the corresponding infrared images.

In particular, if we already have $k-1$ pairs of hyperplanes $\{(H_{1,j}, H_{2,j}) \mid j = 1, 2, \ldots, k-1\}$, then the $k$-th pair $(H_{1,k}, H_{2,k})$, obtained by selecting the eigenvectors corresponding to the $k$-th largest eigenvalue, is guaranteed by the properties of the eigenvectors to be orthogonal to the previous pairs: $H_{1,k}$ is orthogonal to $\{H_{1,j} \mid j = 1, \ldots, k-1\}$, and likewise $H_{2,k}$ is orthogonal to $\{H_{2,j} \mid j = 1, \ldots, k-1\}$. With $M$ pairs of hyperplanes, we can divide each vector space (optical or infrared) into $2^M$ partitions, and any optical or infrared face image can be turned into a discrete coded representation by assigning a code to each of its pixels according to the partition in which the associated vector lies.

To regularize the partitions with the available geometric information, we encode each pixel of a face image with four associated vectors, as illustrated in Figure 2. Each pixel is associated with four vectors, one for each direction. For each direction, we split the vector space into four partitions with two orthogonal hyperplanes. The partitions of each direction are trained independently, yielding four sets of hyperplanes, with each set dividing the space of its direction into four partitions. For each direction, the two orthogonal hyperplanes yield four encoded statuses, i.e., 00, 01, 10, and 11. The encoded statuses of the four directions are concatenated to form the final encoded status; thus, each pixel of the image has $4^4 = 256$ discrete statuses, and we assign one of these status values to each pixel (by converting the 8-bit binary value into a decimal value, e.g., $(10000011)_2 \rightarrow (131)_{10}$) to form an encoded image.

In our study, we sample the vectors of each pixel at multiple scales, i.e., sampling radii = {3, 5, 7, 9}. Hyperplanes are trained independently on each of the sampling patterns, and the final feature representation of the input image is derived by concatenating the feature representations of the encoded images. Figure 3 illustrates some examples of encoded face images.

With the face images encoded, we extract feature representations using a dense sampling technique as follows (a sketch of this pooling step follows the list).
1. Divide the whole encoded image into a set of overlapping patches of size c × c pixels (the step between adjacent patches is s).
2. For each patch, compute the histogram of the frequency of each code, which gives a feature vector for the patch.
3. Concatenate the outputs of all patches into a long vector to form the final face feature.
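As a concrete illustration of the dense sampling in steps 1–3, here is a hedged NumPy sketch of the patch-histogram pooling, assuming an encoded image with values 0–255 has already been produced; the default patch size, step, and per-patch normalization are illustrative choices, not the paper's settings.

```python
import numpy as np

def patch_histogram_features(code_img, patch=16, step=8, n_codes=256):
    """Dense code histograms (steps 1-3): slide a patch x patch window with the
    given step over the encoded image, histogram the codes 0..n_codes-1 inside
    each window, and concatenate the histograms into one long feature vector."""
    H, W = code_img.shape
    feats = []
    for r in range(0, H - patch + 1, step):
        for c in range(0, W - patch + 1, step):
            window = code_img[r:r + patch, c:c + patch]
            hist = np.bincount(window.ravel(), minlength=n_codes).astype(np.float64)
            feats.append(hist / hist.sum())   # optional per-patch normalization
    return np.concatenate(feats)

# Example: a random 8-bit code image stands in for the encoder output.
rng = np.random.default_rng(0)
codes = rng.integers(0, 256, size=(128, 96), dtype=np.uint8)
f = patch_histogram_features(codes)
print(f.shape)   # (number_of_patches * 256,)
```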
B. Matching Framework

By applying the aforementioned feature descriptor, we obtain a high-dimensional feature vector for each optical or infrared image. In our study, each face image is divided into 165 patches, with each patch represented by a 256-dimensional feature vector. We use four scales in our experiments, i.e., sampling radii = {3, 5, 7, 9}, so the dimension of the resulting feature vector is 165 × 256 × 4 = 168,960. Directly processing such a high-dimensional feature vector can incur problems. First, the high dimensionality places a huge demand on computer memory. Second, and more seriously, there is a lack of sufficiently large datasets for training with such high-dimensional features, which leads to overfitting [29]. To address these issues, we design a two-level matching framework.

The matching framework involves two levels of subspace analysis. In the first level, the large feature vector is divided into multiple segments of smaller feature vectors, and discriminant analysis is performed separately on each segment to extract discriminant features. The goal of the first level is to generate more discriminative projections that reduce intra-class variations and avoid over-fitting. In the second level, the projected features from all segments are combined with PCA for efficient recognition. The two-level matching framework is based on the local feature-based discriminant analysis (LFDA) approach in [7]; this choice is motivated by its reported success in heterogeneous face recognition [7]. Figure 4 presents an illustration of this algorithm. The details are described as follows.

The first level of subspace analysis is conducted as follows (a sketch of these steps is given after the list).
1. Divide the original feature vector equally into several feature segments. Project each segment $S$ onto its PCA subspace using the transform matrix $T_1$, which is computed from the corresponding segment in the training set: $F_1 = T_1^T S$.
2. Compute the intrapersonal subspace using the within-class scatter matrix in the reduced PCA subspace. Adjust the dimension of the intrapersonal subspace to reduce the intra-class variations.
3. Compute the class center of each person in the training set. Project all the class centers onto the intrapersonal subspace and normalize the projections by the intrapersonal eigenvalues to obtain whitened feature vectors: $F_2 = T_2^T F_1$, where $T_2$ is the normalized projection matrix.
4. Calculate the between-class scatter matrix based on the whitened feature vectors.
5. Compute the final discriminant feature vector using the PCA transform matrix $T_3$ derived from the between-class scatter matrix calculated in step 4: $F_3 = T_3^T F_2$.
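A hedged NumPy sketch of the first level (steps 1–5) applied to a single feature segment follows; the subspace dimensions, the small regularizer, and the function name are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

def first_level_segment_analysis(X, labels, n_pca=100, n_intra=50, n_between=30):
    """Steps 1-5 for one feature segment (sketch).

    X      : (n_samples, d) training features of this segment (rows are images)
    labels : (n_samples,) person identities
    Returns a function projecting a segment to its discriminant feature F3.
    """
    mean = X.mean(axis=0)
    Xc = X - mean

    # Step 1: PCA subspace of the segment (T1).
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    T1 = Vt[:n_pca].T                                   # (d, n_pca)
    F1 = Xc @ T1

    # Step 2: within-class (intrapersonal) scatter in the PCA subspace.
    classes = np.unique(labels)
    Sw = np.zeros((T1.shape[1], T1.shape[1]))
    for c in classes:
        Fc = F1[labels == c] - F1[labels == c].mean(axis=0)
        Sw += Fc.T @ Fc
    evals, evecs = np.linalg.eigh(Sw)
    keep = np.argsort(evals)[::-1][:n_intra]

    # Step 3: whiten by the intrapersonal eigenvalues (T2); whiten class centers.
    T2 = evecs[:, keep] / np.sqrt(evals[keep] + 1e-8)   # (n_pca, n_intra)
    centers = np.stack([F1[labels == c].mean(axis=0) for c in classes])
    W = (centers - centers.mean(axis=0)) @ T2

    # Steps 4-5: PCA (T3) on the between-class scatter of the whitened centers.
    evals_b, evecs_b = np.linalg.eigh(W.T @ W)
    T3 = evecs_b[:, np.argsort(evals_b)[::-1][:n_between]]   # (n_intra, n_between)

    def project(segment):
        # F3 = T3^T T2^T T1^T (segment - mean), applied row-wise.
        return ((segment - mean) @ T1) @ T2 @ T3
    return project
```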

Fig. 4. Illustration of the two-level matching framework in Section II-B.

The procedures in the second level of subspace analysis are:
6. Combine the extracted discriminant feature vectors from all segments to form a new feature vector $V$.
7. To remove redundant information, apply PCA to $V$ to obtain the final feature vector $F$ for classification: $F = T_4^T V$, where $T_4$ is the corresponding PCA projection matrix.

C. Discussions

The CFDA approach is proposed specifically for handling the optical-infrared face recognition problem. In the feature extraction stage, a learning-based feature descriptor is developed to maximize the correlations between the optical face images and the corresponding infrared images. In this way, the modality gap between the two kinds of face images can be significantly reduced; hence, the resulting features are expected to be well suited to the optical-infrared face recognition problem, as supported by the experimental results in Section III.

Our new feature descriptor differs significantly from state-of-the-art descriptors in the literature, such as the widely used HOG [30] and LBP [31], [32]. Instead of encoding the images with a handcrafted encoding scheme, our feature descriptor learns a new encoding scheme that encodes the common micro-structure of both the optical and infrared face images for effective feature representation. Our experimental results also support the effectiveness of our new descriptor over state-of-the-art descriptors.

Our new feature descriptor also differs significantly from CITE [8]. The major differences are summarized as follows. (1) CITE is inherently a tree-based encoding method, while our feature descriptor is inherently a binary encoding scheme. (2) Unlike CITE, we encode a pixel with four directions to make full use of the geometric information. (3) The experimental comparison clearly shows that our new descriptor significantly outperforms CITE [8], which also reflects the significant difference between them.

III. EXPERIMENTS

In this section, we conduct extensive experiments on two large and challenging optical and infrared face datasets to investigate the effectiveness of our new approach. The first dataset, collected by CUHK, is called the CUHK VIS-NIR face dataset. It consists of 2876 different persons, each with one optical photo and one corresponding near-infrared photo. Some examples from this dataset are shown in Figure 1(b). In our experiments, we divide the dataset into two parts: the images of 1438 persons are used for training, and those of the remaining 1438 persons are used for testing. There is no overlap between the training set and the testing set.
The second dataset is a public-domain heterogeneous face dataset called heterogeneous face biometrics (HFB) [33].

Fig. 5. Illustration of normalized optical face images and the corresponding infrared face images for (a) the CUHK VIS-NIR face dataset and (b) the HFB VIS-NIR face dataset.

TABLE I. COMPARISON OF OUR DESCRIPTOR WITH STATE-OF-THE-ART DESCRIPTORS USING THE NEAREST NEIGHBOR MATCHING SCHEME

Fig. 6. Illustration of the parameter exploration for the sampling radii.

It consists of 200 subjects, with probe images captured in the near-infrared spectrum (780–1,100 nm) and gallery images captured in the visible spectrum. For a fair comparison, we use the same split scheme as [17], where 133 persons are used for training and the remaining 67 persons are used for testing. The dataset is split randomly, and the results are reported as the average over 10 random runs.

To better evaluate the recognition performance, we preprocess the face images through the following steps: 1) rotate the face images to align the vertical face orientation; 2) scale the face images so that the distance between the centers of the eyes is the same for all images; 3) crop the face images to a fixed size to remove the background and the hair region. Figure 5 shows example images of infrared and optical data after preprocessing. We use FaceVACS (a commercial face recognition engine) [34] to help detect the faces and locate the eyes. We further manually aligned a small set of training faces (98 pairs) with a face mask (which is divided into many triangular regions, based on which we warp the faces into frontal ones) to help establish a good one-to-one correspondence of pixels. We use these 98 pairs of training faces to learn our encoding model and use all 1438 pairs of training faces to learn the matching framework. For testing faces, this mask-based warping is not needed to build the pixel correspondence.
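For illustration, eye-based normalization of this kind (rotate, scale, crop) can be sketched with OpenCV as follows, given eye centers from a detector; the output size and canonical eye position are illustrative assumptions, not the values used in the paper.

```python
import cv2
import numpy as np

def align_face(img, left_eye, right_eye, out_size=(96, 128), eye_dist=40, eye_y=48):
    """Rotate so the eye line is horizontal, scale so the inter-eye distance is
    fixed, and crop a window around the face.

    left_eye, right_eye : (x, y) eye centers, e.g. from a face/eye detector.
    out_size is (width, height); out_size, eye_dist, eye_y are illustrative.
    """
    (xl, yl), (xr, yr) = left_eye, right_eye
    angle = np.degrees(np.arctan2(yr - yl, xr - xl))   # tilt of the eye line
    scale = eye_dist / np.hypot(xr - xl, yr - yl)      # fix inter-eye distance
    center = ((xl + xr) / 2.0, (yl + yr) / 2.0)        # rotate about eye midpoint

    M = cv2.getRotationMatrix2D(center, angle, scale)
    # Shift so the eye midpoint lands at (out_width/2, eye_y) in the output crop.
    M[0, 2] += out_size[0] / 2.0 - center[0]
    M[1, 2] += eye_y - center[1]
    return cv2.warpAffine(img, M, out_size, flags=cv2.INTER_LINEAR)
```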

A. Parameter Exploration

The proposed algorithm has three parameters: the sampling radius r, the patch size c, and the patch step s. In our system, we empirically fix the patch size, with a step of 8 pixels between adjacent patches, which is a popular configuration in the literature. We explore the effect of the sampling radius r by reporting the averaged 5-fold rank-1 recognition accuracies for different sampling radii on the HFB face dataset, as shown in Figure 6(a). We can see that the feature descriptor achieves the best performance around radius = 5; we therefore choose a fusion of radii = {3, 5, 7, 9} in our system.

B. Experiment on the CUHK VIS-NIR Face Dataset

TABLE II. COMPARISON OF OUR DESCRIPTOR WITH STATE-OF-THE-ART DESCRIPTORS USING THE PCA+LDA SCHEME

Fig. 7. Comparison of our descriptor with state-of-the-art descriptors using the nearest neighbor matching scheme.

TABLE III. COMPARISON OF OUR DESCRIPTOR WITH STATE-OF-THE-ART DESCRIPTORS USING THE TWO-LEVEL MATCHING FRAMEWORK

Fig. 8. Comparison of our descriptor with state-of-the-art descriptors using the PCA+LDA matching scheme.

Fig. 9. Comparison of our descriptor with state-of-the-art descriptors using the two-level matching scheme.

We first investigate the effectiveness of our descriptor and compare it with several state-of-the-art descriptors for heterogeneous face recognition: local binary patterns (LBP) [31] with 59-bin uniform (U2) encoding, multi-scale local binary patterns (MLBP) [32] with scales {1, 3, 5, 7}, histograms of oriented gradients (HOG) [30] with 12 discrete orientations, and a newly developed descriptor for heterogeneous face recognition called coupled information-theoretic encoding (CITE) [8]. We use three matching schemes for extensive performance evaluation: (i) nearest neighbor matching using the Euclidean distance, (ii) PCA+LDA [35], and (iii) the two-level matching scheme presented in Section II-B. The comparative results for both face identification and face verification are reported in Tables I–III and Figures 7–9. From these results, we make the following observations. First, it is encouraging to see that our new face descriptor outperforms the state-of-the-art descriptors by a clear margin on both the face identification and the face verification tasks, regardless of which matching scheme is used. This clearly shows the superiority of our new descriptor over existing descriptors. Second, the two-level matching scheme significantly boosts recognition performance, regardless of which descriptor is used. Finally, by integrating the proposed descriptor and the two-level matching scheme, we obtain the best identification rate, 80.19%. Considering the difficulty of this dataset, this is a very encouraging result.
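For reference, matching scheme (i) reduces to ranking gallery features by Euclidean distance. The following hedged sketch shows how rank-1 identification accuracy can be computed under that scheme; the array layout is an assumption, and feature extraction is assumed to have happened elsewhere.

```python
import numpy as np

def rank1_accuracy(probe_feats, probe_ids, gallery_feats, gallery_ids):
    """Rank-1 identification with nearest-neighbor matching (Euclidean distance).

    probe_feats   : (P, d) features of infrared probe images (NumPy arrays)
    gallery_feats : (G, d) features of optical gallery images
    probe_ids, gallery_ids : integer identity arrays of lengths P and G.
    """
    # Pairwise squared Euclidean distances, computed without explicit loops.
    d2 = (np.sum(probe_feats**2, axis=1, keepdims=True)
          - 2.0 * probe_feats @ gallery_feats.T
          + np.sum(gallery_feats**2, axis=1))
    nearest = np.argmin(d2, axis=1)                    # best gallery match per probe
    return float(np.mean(gallery_ids[nearest] == probe_ids))
```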

TABLE IV. COMPARISON OF OUR NEW APPROACH WITH STATE-OF-THE-ART HETEROGENEOUS FACE RECOGNITION ALGORITHMS ON THE INFRARED-OPTICAL DATASET. LISTED ARE THE RANK-1 IDENTIFICATION ACCURACIES

TABLE V. COMPARISON OF OUR APPROACH WITH THE STATE-OF-THE-ART RESULTS ON HFB

TABLE VI. COMPARISON OF OUR FOUR-DIRECTION ENCODING METHOD AGAINST THE ENCODING METHOD BASED ON A DECISION TREE [8]. LISTED ARE THE RANK-1 IDENTIFICATION ACCURACIES

Fig. 10. Illustration of the rank-1 results when changing the size of the training dataset.

Next, we investigate the performance of our CFDA framework and compare our approach with several recently developed heterogeneous face recognition algorithms in the literature. The comparative results are reported in Table IV. It is very clear that our approach achieves superior performance over the state-of-the-art methods in the literature.

Finally, we evaluate the performance of our approach by gradually increasing the number of training samples while keeping the testing dataset unchanged. We compare our algorithm against the approach in [17], which is the best-performing method among the existing methods in Table IV. The rank-1 recognition accuracies are reported in Figure 10, from which we can clearly see that our approach consistently outperforms the approach in [17] by a clear margin. This shows the robustness of our algorithm.

C. Experiment on the HFB Face Dataset

In this subsection, we compare our approach with several recently developed heterogeneous face recognition methods on a public-domain database, the HFB face dataset [33]. Following the training and testing split configuration for the HFB dataset in [17], we use 133 persons for training and the remaining 67 persons for testing. The dataset is split randomly, and the results are reported as the average over 10 random runs. The comparative results are reported in Table V. The parameters of the algorithms in Table V are tuned to their best settings according to the original papers. It is encouraging to see that our approach also substantially outperforms the state-of-the-art on the HFB dataset.

D. Additional Experiment

A major difference between our encoding method and CITE is that our method encodes a pixel with four directions (in order to make use of the geometric information). It is therefore desirable to design an additional experiment to show the advantage of our four-direction encoding method. We compare our four-direction encoding method against the encoding method based on a decision tree [8] with 256 partitions (the same number as ours). For a fair comparison, we use the same training and testing split on the CUHK dataset. Note that the feature length of our four-direction encoding is the same as that of the decision-tree method. The comparative results are shown in Table VI, from which we can clearly see that our four-direction encoding method notably outperforms the encoding method in [8].

IV. CONCLUSIONS AND FUTURE WORK

In this paper, we have proposed a new approach called common feature discriminant analysis (CFDA) for matching infrared face images to optical face images. In CFDA, a new descriptor is first developed to effectively represent both optical and infrared face images so as to reduce the modality gap, and a two-level matching method is subsequently applied for fast and effective matching. Extensive experiments on two large and challenging optical-infrared face datasets show the significant improvement of our new approach over the state-of-the-art. In the future, we will extend our approach to general cross-modality face recognition problems such as sketch-photo and high-low resolution face matching.

ACKNOWLEDGMENT

The authors would like to thank Prof. Anil K. Jain and Dr. Brendan Klare for their useful suggestions and comments.

REFERENCES

[1] S. Z. Li, "Heterogeneous face biometrics," in Encyclopedia of Biometrics. New York, NY, USA: Springer-Verlag.
[2] X. Tang and X. Wang, "Face sketch recognition," IEEE Trans. Circuits Syst. Video Technol., vol. 14, no. 1, Jan.
[3] B. Xiao, X. Gao, D. Tao, Y. Yuan, and J. Li, "Photo-sketch synthesis and recognition based on subspace learning," Neurocomputing, vol. 73, Jan.
[4] X. Wang and X. Tang, "Face photo-sketch synthesis and recognition," IEEE Trans. Pattern Anal. Mach. Intell., vol. 31, no. 11, Nov.
[5] Q. Liu, X. Tang, H. Jin, H. Lu, and S. Ma, "Nonlinear approach for face sketch synthesis and recognition," in Proc. IEEE Comput. Soc. Conf. CVPR, Jun. 2005.
[6] P. Xiong, L. Huang, and C. Liu, "A method for heterogeneous face image synthesis," in Proc. 5th IAPR ICB, Apr. 2012.
[7] B. Klare, Z. Li, and A. K. Jain, "Matching forensic sketches to mug shot photos," IEEE Trans. Pattern Anal. Mach. Intell., vol. 33, no. 3, Mar.
[8] W. Zhang, X. Wang, and X. Tang, "Coupled information-theoretic encoding for face photo-sketch recognition," in Proc. IEEE Conf. CVPR, Jun. 2011.
[9] S. Liu, D. Yi, Z. Lei, and S. Z. Li, "Heterogeneous face image matching using multi-scale features," in Proc. 5th IAPR ICB, Apr. 2012.
[10] J. B. Tenenbaum and W. T. Freeman, "Separating style and content with bilinear models," Neural Comput., vol. 12, no. 6.
[11] D. Yi, R. Liu, R.-F. Chu, Z. Lei, and S. Z. Li, "Face matching between near infrared and visible light images," in Proc. Int. Conf. Adv. Biometrics, 2007.
[12] A. Li, S. Shan, X. Chen, and W. Gao, "Maximizing intra-individual correlations for face recognition across pose differences," in Proc. IEEE Conf. CVPR, Jun. 2009.
[13] A. Sharma and D. W. Jacobs, "Bypassing synthesis: PLS for face recognition with pose, low-resolution and sketch," in Proc. IEEE Conf. CVPR, Jun. 2011.
[14] W. Yang, D. Yi, Z. Lei, J. Sang, and S. Z. Li, "2D-3D face matching using CCA," in Proc. 8th IEEE Int. Conf. Autom. Face Gesture Recognit., Sep. 2008.
[15] S. Liao, D. Yi, Z. Lei, R. Qin, and S. Li, "Heterogeneous face recognition from local structures of normalized appearance," in Proc. 3rd ICB, 2009.
[16] Z. Lei, S. Liao, A. K. Jain, and S. Z. Li, "Coupled discriminant analysis for heterogeneous face recognition," IEEE Trans. Inf. Forensics Security, vol. 7, no. 6, Dec.
[17] B. Klare and A. K. Jain, "Heterogeneous face recognition using kernel prototype similarities," IEEE Trans. Pattern Anal. Mach. Intell., vol. 35, no. 6, Jun.
[18] F. Nicolo and N. A. Schmid, "Long range cross-spectral face recognition: Matching SWIR against visible light images," IEEE Trans. Inf. Forensics Security, vol. 7, no. 6, Dec.
[19] Z. Lei, C. Zhou, D. Yi, A. K. Jain, and S. Z. Li, "An improved coupled spectral regression for heterogeneous face recognition," in Proc. 5th IAPR ICB, 2012.
[20] F. Jurie and B. Triggs, "Creating efficient codebooks for visual recognition," in Proc. 10th IEEE ICCV, vol. 1, Oct. 2005.
[21] Z. Cao, Q. Yin, X. Tang, and J. Sun, "Face recognition with learning-based descriptor," in Proc. IEEE Conf. CVPR, Jun. 2010.
[22] J. Wright and G. Hua, "Implicit elastic matching with random projection for pose-variant face recognition," in Proc. CVPR, 2009.
[23] F. Moosmann, B. Triggs, and F. Jurie, "Fast discriminative visual codebooks using randomized clustering forests," in Proc. NIPS, 2007.
[24] J. Shotton, M. Johnson, and R. Cipolla, "Semantic texton forests for image categorization and segmentation," in Proc. IEEE Conf. CVPR, Jun. 2008.
[25] S. Kullback and R. A. Leibler, "On information and sufficiency," Ann. Math. Statist., vol. 22, no. 1.
[26] S. Kullback, "Letter to the editor: The Kullback-Leibler distance," Amer. Statist., vol. 41, no. 4.
[27] D. R. Hardoon, S. Szedmak, and J. Shawe-Taylor, "Canonical correlation analysis: An overview with application to learning methods," Neural Comput., vol. 16, no. 12.
[28] D. Weenink, "Canonical correlation analysis," in Encyclopedia of Statistics in Behavioral Science. New York, NY, USA: Wiley, 2003.
[29] S. Raudys and A. Jain, "Small sample size effects in statistical pattern recognition: Recommendations for practitioners," IEEE Trans. Pattern Anal. Mach. Intell., vol. 13, no. 3, Mar.
[30] N. Dalal and B. Triggs, "Histograms of oriented gradients for human detection," in Proc. IEEE Comput. Soc. Conf. CVPR, vol. 1, Jun. 2005.
[31] T. Ahonen, A. Hadid, and M. Pietikäinen, "Face recognition with local binary patterns," in Proc. ECCV, 2004.
[32] T. Mäenpää and M. Pietikäinen, "Multi-scale binary patterns for texture analysis," in Proc. SCIA, Gothenberg, Sweden, 2003.
[33] S. Z. Li, Z. Lei, and M. Ao, "The HFB face database for heterogeneous face biometrics research," in Proc. 6th IEEE Workshop OTCBVS, CVPR, Miami, FL, USA, Jun. 2009.
[34] Cognitec Systems GmbH, Dresden, Germany. FaceVACS Software Developer Kit. [Online].
[35] P. Belhumeur, J. Hespanha, and D. Kriegman, "Eigenfaces vs. Fisherfaces: Recognition using class specific linear projection," IEEE Trans. Pattern Anal. Mach. Intell., vol. 19, no. 7, Jul.

Zhifeng Li (M'06–SM'11) received the Ph.D. degree from the Chinese University of Hong Kong. After that, he was a Post-Doctoral Fellow with the Chinese University of Hong Kong and Michigan State University for several years. He is currently an Associate Professor with the Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences. His research interests include computer vision, pattern recognition, and multimodal biometrics. He is a regular member of the program committees of the most important computer vision conferences, including the International Conference on Computer Vision, the International Conference on Computer Vision and Pattern Recognition, and the European Conference on Computer Vision. He also serves as a reviewer for a number of major journals, including the IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, International Journal of Computer Vision, IEEE TRANSACTIONS ON IMAGE PROCESSING, IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, and IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY.

Dihong Gong received the B.E. degree in electrical engineering from the University of Science and Technology of China. He is currently an intern student with the Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences.
His current research interests include machine learning methods for face recognition, image retrieval, and image-based human age estimation.

Yu Qiao (S'05–M'07–SM'13) received the Ph.D. degree from the University of Electro-Communications, Japan. He was a JSPS Fellow and Project Assistant Professor with the University of Tokyo starting in 2007. He is currently a Professor with the Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences. His research interests include pattern recognition, computer vision, multimedia, image processing, and machine learning. He has authored more than 90 papers. He was a recipient of the Lu Jiaxi Young Researcher Award from the Chinese Academy of Sciences.

Dacheng Tao (M'07–SM'12) is a Professor of Computer Science with the Centre for Quantum Computation and Intelligent Systems and the Faculty of Engineering and Information Technology at the University of Technology, Sydney. He mainly applies statistics and mathematics to data analysis problems in computer vision, machine learning, multimedia, data mining, and video surveillance. He has authored and co-authored more than 100 scientific articles at top venues, including the IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, TRANSACTIONS ON IMAGE PROCESSING, Artificial Intelligence and Statistics, Computer Vision and Pattern Recognition, and the International Conference on Data Mining (ICDM), and he was the runner-up for the Best Theory/Algorithm Paper Award at IEEE ICDM'07.


Face Recognition Using SIFT- PCA Feature Extraction and SVM Classifier IOSR Journal of VLSI and Signal Processing (IOSR-JVSP) Volume 5, Issue 2, Ver. II (Mar. - Apr. 2015), PP 31-35 e-issn: 2319 4200, p-issn No. : 2319 4197 www.iosrjournals.org Face Recognition Using SIFT-

More information

Part-based Face Recognition Using Near Infrared Images

Part-based Face Recognition Using Near Infrared Images Part-based Face Recognition Using Near Infrared Images Ke Pan Shengcai Liao Zhijian Zhang Stan Z. Li Peiren Zhang University of Science and Technology of China Hefei 230026, China Center for Biometrics

More information

A Survey on Matching Sketches to Facial Photographs

A Survey on Matching Sketches to Facial Photographs A Survey on Matching Sketches to Facial Photographs Archana Uphade 1, Prof. J. V. Shinde 2 1 M.E Student, Kalyani Charitable Trust s Late G.N. Sapkal College of Engineering 2 Assistant Professor, Kalyani

More information

Combining Selective Search Segmentation and Random Forest for Image Classification

Combining Selective Search Segmentation and Random Forest for Image Classification Combining Selective Search Segmentation and Random Forest for Image Classification Gediminas Bertasius November 24, 2013 1 Problem Statement Random Forest algorithm have been successfully used in many

More information

Robust Face Recognition Using Enhanced Local Binary Pattern

Robust Face Recognition Using Enhanced Local Binary Pattern Bulletin of Electrical Engineering and Informatics Vol. 7, No. 1, March 2018, pp. 96~101 ISSN: 2302-9285, DOI: 10.11591/eei.v7i1.761 96 Robust Face Recognition Using Enhanced Local Binary Pattern Srinivasa

More information

Face Photo-Sketch Recognition using Local and Global Texture Descriptors

Face Photo-Sketch Recognition using Local and Global Texture Descriptors Face Photo-Sketch Recognition using Local and Global Texture Descriptors Christian Galea Department of Communications & Computer Engineering University of Malta Msida, MSD080 Email: christian.galea.09@um.edu.mt

More information

Part-based Face Recognition Using Near Infrared Images

Part-based Face Recognition Using Near Infrared Images Part-based Face Recognition Using Near Infrared Images Ke Pan Shengcai Liao Zhijian Zhang Stan Z. Li Peiren Zhang University of Science and Technology of China Hefei 230026, China Center for Biometrics

More information

An Integrated Face Recognition Algorithm Based on Wavelet Subspace

An Integrated Face Recognition Algorithm Based on Wavelet Subspace , pp.20-25 http://dx.doi.org/0.4257/astl.204.48.20 An Integrated Face Recognition Algorithm Based on Wavelet Subspace Wenhui Li, Ning Ma, Zhiyan Wang College of computer science and technology, Jilin University,

More information

Illumination-Robust Face Recognition based on Gabor Feature Face Intrinsic Identity PCA Model

Illumination-Robust Face Recognition based on Gabor Feature Face Intrinsic Identity PCA Model Illumination-Robust Face Recognition based on Gabor Feature Face Intrinsic Identity PCA Model TAE IN SEOL*, SUN-TAE CHUNG*, SUNHO KI**, SEONGWON CHO**, YUN-KWANG HONG*** *School of Electronic Engineering

More information

Face Matching between Near Infrared and Visible Light Images

Face Matching between Near Infrared and Visible Light Images Face Matching between Near Infrared and Visible Light Images Dong Yi, Rong Liu, RuFeng Chu, Zhen Lei, and Stan Z. Li Center for Biometrics Security Research & National Laboratory of Pattern Recognition

More information

A Novel Method of Face Recognition Using Lbp, Ltp And Gabor Features

A Novel Method of Face Recognition Using Lbp, Ltp And Gabor Features A Novel Method of Face Recognition Using Lbp, Ltp And Gabor Features Koneru. Anuradha, Manoj Kumar Tyagi Abstract:- Face recognition has received a great deal of attention from the scientific and industrial

More information

A Survey on Face-Sketch Matching Techniques

A Survey on Face-Sketch Matching Techniques A Survey on Face-Sketch Matching Techniques Reshma C Mohan 1, M. Jayamohan 2, Arya Raj S 3 1 Department of Computer Science, SBCEW 2 Department of Computer Science, College of Applied Science 3 Department

More information

An Efficient Face Recognition under Varying Image Conditions

An Efficient Face Recognition under Varying Image Conditions An Efficient Face Recognition under Varying Image Conditions C.Kanimozhi,V.Nirmala Abstract Performance of the face verification system depends on many conditions. One of the most problematic is varying

More information

Previously. Part-based and local feature models for generic object recognition. Bag-of-words model 4/20/2011

Previously. Part-based and local feature models for generic object recognition. Bag-of-words model 4/20/2011 Previously Part-based and local feature models for generic object recognition Wed, April 20 UT-Austin Discriminative classifiers Boosting Nearest neighbors Support vector machines Useful for object recognition

More information

Applications Video Surveillance (On-line or off-line)

Applications Video Surveillance (On-line or off-line) Face Face Recognition: Dimensionality Reduction Biometrics CSE 190-a Lecture 12 CSE190a Fall 06 CSE190a Fall 06 Face Recognition Face is the most common biometric used by humans Applications range from

More information

[2008] IEEE. Reprinted, with permission, from [Yan Chen, Qiang Wu, Xiangjian He, Wenjing Jia,Tom Hintz, A Modified Mahalanobis Distance for Human

[2008] IEEE. Reprinted, with permission, from [Yan Chen, Qiang Wu, Xiangjian He, Wenjing Jia,Tom Hintz, A Modified Mahalanobis Distance for Human [8] IEEE. Reprinted, with permission, from [Yan Chen, Qiang Wu, Xiangian He, Wening Jia,Tom Hintz, A Modified Mahalanobis Distance for Human Detection in Out-door Environments, U-Media 8: 8 The First IEEE

More information

Final Project Face Detection and Recognition

Final Project Face Detection and Recognition Final Project Face Detection and Recognition Submission Guidelines: 1. Follow the guidelines detailed in the course website and information page.. Submission in pairs is allowed for all students registered

More information

Directional Derivative and Feature Line Based Subspace Learning Algorithm for Classification

Directional Derivative and Feature Line Based Subspace Learning Algorithm for Classification Journal of Information Hiding and Multimedia Signal Processing c 206 ISSN 2073-422 Ubiquitous International Volume 7, Number 6, November 206 Directional Derivative and Feature Line Based Subspace Learning

More information

Multiple Kernel Learning for Emotion Recognition in the Wild

Multiple Kernel Learning for Emotion Recognition in the Wild Multiple Kernel Learning for Emotion Recognition in the Wild Karan Sikka, Karmen Dykstra, Suchitra Sathyanarayana, Gwen Littlewort and Marian S. Bartlett Machine Perception Laboratory UCSD EmotiW Challenge,

More information

Face Recognition by Combining Kernel Associative Memory and Gabor Transforms

Face Recognition by Combining Kernel Associative Memory and Gabor Transforms Face Recognition by Combining Kernel Associative Memory and Gabor Transforms Author Zhang, Bai-ling, Leung, Clement, Gao, Yongsheng Published 2006 Conference Title ICPR2006: 18th International Conference

More information

[Khadse, 4(7): July, 2015] ISSN: (I2OR), Publication Impact Factor: Fig:(1) Image Samples Of FERET Dataset

[Khadse, 4(7): July, 2015] ISSN: (I2OR), Publication Impact Factor: Fig:(1) Image Samples Of FERET Dataset IJESRT INTERNATIONAL JOURNAL OF ENGINEERING SCIENCES & RESEARCH TECHNOLOGY IMPLEMENTATION OF THE DATA UNCERTAINTY MODEL USING APPEARANCE BASED METHODS IN FACE RECOGNITION Shubhangi G. Khadse, Prof. Prakash

More information

Tensor Decomposition of Dense SIFT Descriptors in Object Recognition

Tensor Decomposition of Dense SIFT Descriptors in Object Recognition Tensor Decomposition of Dense SIFT Descriptors in Object Recognition Tan Vo 1 and Dat Tran 1 and Wanli Ma 1 1- Faculty of Education, Science, Technology and Mathematics University of Canberra, Australia

More information

MULTI-POSE FACE HALLUCINATION VIA NEIGHBOR EMBEDDING FOR FACIAL COMPONENTS. Yanghao Li, Jiaying Liu, Wenhan Yang, Zongming Guo

MULTI-POSE FACE HALLUCINATION VIA NEIGHBOR EMBEDDING FOR FACIAL COMPONENTS. Yanghao Li, Jiaying Liu, Wenhan Yang, Zongming Guo MULTI-POSE FACE HALLUCINATION VIA NEIGHBOR EMBEDDING FOR FACIAL COMPONENTS Yanghao Li, Jiaying Liu, Wenhan Yang, Zongg Guo Institute of Computer Science and Technology, Peking University, Beijing, P.R.China,

More information

Deep Learning for Face Recognition. Xiaogang Wang Department of Electronic Engineering, The Chinese University of Hong Kong

Deep Learning for Face Recognition. Xiaogang Wang Department of Electronic Engineering, The Chinese University of Hong Kong Deep Learning for Face Recognition Xiaogang Wang Department of Electronic Engineering, The Chinese University of Hong Kong Deep Learning Results on LFW Method Accuracy (%) # points # training images Huang

More information

Morphable Displacement Field Based Image Matching for Face Recognition across Pose

Morphable Displacement Field Based Image Matching for Face Recognition across Pose Morphable Displacement Field Based Image Matching for Face Recognition across Pose Speaker: Iacopo Masi Authors: Shaoxin Li Xin Liu Xiujuan Chai Haihong Zhang Shihong Lao Shiguang Shan Work presented as

More information

Large-Scale Traffic Sign Recognition based on Local Features and Color Segmentation

Large-Scale Traffic Sign Recognition based on Local Features and Color Segmentation Large-Scale Traffic Sign Recognition based on Local Features and Color Segmentation M. Blauth, E. Kraft, F. Hirschenberger, M. Böhm Fraunhofer Institute for Industrial Mathematics, Fraunhofer-Platz 1,

More information

Object detection using non-redundant local Binary Patterns

Object detection using non-redundant local Binary Patterns University of Wollongong Research Online Faculty of Informatics - Papers (Archive) Faculty of Engineering and Information Sciences 2010 Object detection using non-redundant local Binary Patterns Duc Thanh

More information

Sketchable Histograms of Oriented Gradients for Object Detection

Sketchable Histograms of Oriented Gradients for Object Detection Sketchable Histograms of Oriented Gradients for Object Detection No Author Given No Institute Given Abstract. In this paper we investigate a new representation approach for visual object recognition. The

More information

Face Recognition Based on LDA and Improved Pairwise-Constrained Multiple Metric Learning Method

Face Recognition Based on LDA and Improved Pairwise-Constrained Multiple Metric Learning Method Journal of Information Hiding and Multimedia Signal Processing c 2016 ISSN 2073-4212 Ubiquitous International Volume 7, Number 5, September 2016 Face Recognition ased on LDA and Improved Pairwise-Constrained

More information

Image Based Feature Extraction Technique For Multiple Face Detection and Recognition in Color Images

Image Based Feature Extraction Technique For Multiple Face Detection and Recognition in Color Images Image Based Feature Extraction Technique For Multiple Face Detection and Recognition in Color Images 1 Anusha Nandigam, 2 A.N. Lakshmipathi 1 Dept. of CSE, Sir C R Reddy College of Engineering, Eluru,

More information

AN EXAMINING FACE RECOGNITION BY LOCAL DIRECTIONAL NUMBER PATTERN (Image Processing)

AN EXAMINING FACE RECOGNITION BY LOCAL DIRECTIONAL NUMBER PATTERN (Image Processing) AN EXAMINING FACE RECOGNITION BY LOCAL DIRECTIONAL NUMBER PATTERN (Image Processing) J.Nithya 1, P.Sathyasutha2 1,2 Assistant Professor,Gnanamani College of Engineering, Namakkal, Tamil Nadu, India ABSTRACT

More information

A Study on Different Challenges in Facial Recognition Methods

A Study on Different Challenges in Facial Recognition Methods Available Online at www.ijcsmc.com International Journal of Computer Science and Mobile Computing A Monthly Journal of Computer Science and Information Technology IJCSMC, Vol. 4, Issue. 6, June 2015, pg.521

More information

MULTI ORIENTATION PERFORMANCE OF FEATURE EXTRACTION FOR HUMAN HEAD RECOGNITION

MULTI ORIENTATION PERFORMANCE OF FEATURE EXTRACTION FOR HUMAN HEAD RECOGNITION MULTI ORIENTATION PERFORMANCE OF FEATURE EXTRACTION FOR HUMAN HEAD RECOGNITION Panca Mudjirahardjo, Rahmadwati, Nanang Sulistiyanto and R. Arief Setyawan Department of Electrical Engineering, Faculty of

More information

Writer Recognizer for Offline Text Based on SIFT

Writer Recognizer for Offline Text Based on SIFT Available Online at www.ijcsmc.com International Journal of Computer Science and Mobile Computing A Monthly Journal of Computer Science and Information Technology IJCSMC, Vol. 4, Issue. 5, May 2015, pg.1057

More information

Recognition of Non-symmetric Faces Using Principal Component Analysis

Recognition of Non-symmetric Faces Using Principal Component Analysis Recognition of Non-symmetric Faces Using Principal Component Analysis N. Krishnan Centre for Information Technology & Engineering Manonmaniam Sundaranar University, Tirunelveli-627012, India Krishnan17563@yahoo.com

More information

Image Processing and Image Representations for Face Recognition

Image Processing and Image Representations for Face Recognition Image Processing and Image Representations for Face Recognition 1 Introduction Face recognition is an active area of research in image processing and pattern recognition. Since the general topic of face

More information

GENDER CLASSIFICATION USING SUPPORT VECTOR MACHINES

GENDER CLASSIFICATION USING SUPPORT VECTOR MACHINES GENDER CLASSIFICATION USING SUPPORT VECTOR MACHINES Ashwin Swaminathan ashwins@umd.edu ENEE633: Statistical and Neural Pattern Recognition Instructor : Prof. Rama Chellappa Project 2, Part (a) 1. INTRODUCTION

More information

Part-based and local feature models for generic object recognition

Part-based and local feature models for generic object recognition Part-based and local feature models for generic object recognition May 28 th, 2015 Yong Jae Lee UC Davis Announcements PS2 grades up on SmartSite PS2 stats: Mean: 80.15 Standard Dev: 22.77 Vote on piazza

More information

Facial Expression Recognition with Emotion-Based Feature Fusion

Facial Expression Recognition with Emotion-Based Feature Fusion Facial Expression Recognition with Emotion-Based Feature Fusion Cigdem Turan 1, Kin-Man Lam 1, Xiangjian He 2 1 The Hong Kong Polytechnic University, Hong Kong, SAR, 2 University of Technology Sydney,

More information

An Improved Coupled Spectral Regression for Heterogeneous Face Recognition

An Improved Coupled Spectral Regression for Heterogeneous Face Recognition An Improved Coupled Spectral Regression for Heterogeneous Face Recognition Zhen Lei 1 Changtao Zhou 1 Dong Yi 1 Anil K. Jain 2 Stan Z. Li 1 1 Center for Biometrics and Security Research & National Laboratory

More information

An MKD-SRC Approach for Face Recognition from Partial Image

An MKD-SRC Approach for Face Recognition from Partial Image An MKD-SRC Approach for Face Recognition from Partial Image Mr.S.Krishnaraj 1, Mr. M. V. Prabhakaran 2 M.E. Computer science Dept, Adhiparasakthi college of Engineering, Melmaruvathur, Chennai, India 1,

More information

Image Classification based on Saliency Driven Nonlinear Diffusion and Multi-scale Information Fusion Ms. Swapna R. Kharche 1, Prof.B.K.

Image Classification based on Saliency Driven Nonlinear Diffusion and Multi-scale Information Fusion Ms. Swapna R. Kharche 1, Prof.B.K. Image Classification based on Saliency Driven Nonlinear Diffusion and Multi-scale Information Fusion Ms. Swapna R. Kharche 1, Prof.B.K.Chaudhari 2 1M.E. student, Department of Computer Engg, VBKCOE, Malkapur

More information

A FRAMEWORK FOR ANALYZING TEXTURE DESCRIPTORS

A FRAMEWORK FOR ANALYZING TEXTURE DESCRIPTORS A FRAMEWORK FOR ANALYZING TEXTURE DESCRIPTORS Timo Ahonen and Matti Pietikäinen Machine Vision Group, University of Oulu, PL 4500, FI-90014 Oulun yliopisto, Finland tahonen@ee.oulu.fi, mkp@ee.oulu.fi Keywords:

More information

Object Tracking using HOG and SVM

Object Tracking using HOG and SVM Object Tracking using HOG and SVM Siji Joseph #1, Arun Pradeep #2 Electronics and Communication Engineering Axis College of Engineering and Technology, Ambanoly, Thrissur, India Abstract Object detection

More information

Aggregating Descriptors with Local Gaussian Metrics

Aggregating Descriptors with Local Gaussian Metrics Aggregating Descriptors with Local Gaussian Metrics Hideki Nakayama Grad. School of Information Science and Technology The University of Tokyo Tokyo, JAPAN nakayama@ci.i.u-tokyo.ac.jp Abstract Recently,

More information

Tongue Recognition From Images

Tongue Recognition From Images Tongue Recognition From Images Ryszard S. Choraś Institute of Telecommunications and Computer Science UTP University of Sciences and Technology 85-796 Bydgoszcz, Poland Email: choras@utp.edu.pl Abstract

More information