Framework for reliable, real-time facial expression recognition for low resolution images
Framework for reliable, real-time facial expression recognition for low resolution images

Rizwan Ahmed Khan a,b, Alexandre Meyer a,b, Hubert Konik a,c, Saida Bouakaz a,b

a Université de Lyon, CNRS
b Université Lyon 1, LIRIS, UMR5205, F-69622, France
c Université Jean Monnet, Laboratoire Hubert Curien, UMR5516, Saint-Etienne, France

Abstract

Automatic recognition of facial expressions is a challenging problem, especially for low spatial resolution facial images. It has many potential applications in human-computer interaction, social robots, deceit detection, interactive video and behavior monitoring. In this study we present a novel framework that can recognize facial expressions very efficiently and with high accuracy even for very low resolution facial images. The proposed framework is memory and time efficient, as it extracts texture features in a pyramidal fashion only from the perceptually salient regions of the face. We tested the framework on different databases, which include the Cohn-Kanade (CK+) posed facial expression database, spontaneous expressions of the MMI facial expression database and the FG-NET facial expressions and emotions database (FEED),

Corresponding author: Rizwan A. Khan.
Email addresses: Rizwan-ahmed.khan@liris.cnrs.fr (Rizwan Ahmed Khan), Alexandre.meyer@liris.cnrs.fr (Alexandre Meyer), Hubert.Konik@univ-st-etienne.fr (Hubert Konik), Saida.bouakaz@liris.cnrs.fr (Saida Bouakaz)

Preprint submitted to Pattern Recognition Letters, October 19, 2012
and obtained very good results. Moreover, our proposed framework exceeds state-of-the-art methods for expression recognition on low resolution images.

Keywords: Facial expression recognition, Low Resolution Images, Local Binary Pattern, Image pyramid, Salient facial regions

Introduction

Communication in any form, i.e. verbal or non-verbal, is vital to complete various routine tasks and plays a significant role in daily life. Facial expression is the most effective form of non-verbal communication and it provides a clue about emotional state, mindset and intention [7]. The human visual system (HVS) decodes and analyzes facial expressions in real time despite having limited neural resources. As an explanation for such performance, it has been proposed that only some visual inputs are selected, by considering salient regions [36], where salient means most noticeable or most important. For the computer vision community it is a difficult task to automatically recognize facial expressions in real time with high reliability. Variability in pose, illumination and the way people show expressions across cultures are some of the parameters that make this task difficult. Low resolution input images make this task even harder. Smart meetings, video conferencing and visual surveillance are some of the real world applications that require a facial expression recognition system that works adequately on low resolution images.

Another problem that hinders the development of such a system for real world applications is the lack of databases with natural displays of expressions [27]. There are a number of publicly available benchmark databases with posed displays of the six basic emotions [6], but there is no equivalent for spontaneous basic emotions, even though it has been proved that spontaneous facial expressions differ substantially from posed expressions [2].

In this work, we propose a facial expression recognition system that caters for illumination changes and works equally well for low resolution and for good quality / high resolution images. We have tested our proposed system on spontaneous facial expressions as well and recorded encouraging results. We propose a novel descriptor for facial feature analysis, the Pyramid of Local Binary Pattern (PLBP) (refer to Section 3). PLBP is a spatial representation of the local binary pattern (LBP) [19]: it represents stimuli by their local texture (LBP) and the spatial layout of that texture. We combined the pyramidal approach with the LBP descriptor for facial feature analysis as this approach has already proved to be very effective in a variety of image processing tasks [10]. Thus, the proposed descriptor is a simple and computationally efficient extension of the LBP image representation, and it shows significantly improved performance for facial expression recognition on low resolution images. We base our framework for automatic facial expression recognition (FER) on the human visual system (HVS) (refer to Section 5), so it extracts PLBP features only from the salient regions of the face. To determine which facial region(s) are the most important or salient according to the HVS, we conducted a psycho-visual experiment using an eye-tracker (refer to Section 4). We considered the six universal facial expressions for the psycho-visual study, as these expressions are proved to be consistent across cultures [6]. These six expressions are anger, disgust, fear, happiness, sadness and surprise. The
novelty of the proposed framework is that it is illumination invariant, reliable on low resolution images and works adequately for both posed and spontaneous expressions.

Figure 1: Basic structure of the facial expression recognition system pipeline

Generally, a facial expression recognition system consists of three steps: face detection, feature extraction and expression classification, as shown in Figure 1. In our framework we tracked the face / salient facial regions using the Viola-Jones object detection algorithm [30], as it is the most cited and is considered the fastest and most accurate pattern recognition method for face detection [13]. The second step in the framework is feature extraction, which is the area where this study contributes. Optimal features should minimize within-class variations of expressions while maximizing between-class variations. If inadequate features are used, even the best classifier can fail to achieve accurate recognition [25]. Section 3 presents the novel method for facial feature extraction, which is based on the human visual system (HVS). To study and understand the HVS we performed a psycho-visual experiment, briefly described in Section 4. Expression classification or recognition is the last step in the pipeline. In the literature, two approaches are prevalent for recognizing expressions: direct
recognition of prototypic expressions, or recognition of expressions through facial action coding system (FACS) action units (AUs) [8]. In our proposed framework, which is described in Section 5, we directly classify the six universal prototypic expressions [6]. The performance of the framework is evaluated for five different classifiers (from different families, i.e. classification trees, instance-based learning, SVM, etc.) and the results are presented in Section 6. The next section presents a brief literature review of facial feature extraction methods.

Related work

In the literature, various methods are employed to extract facial features, and these methods can be categorized either as appearance-based methods or geometric feature-based methods.

Appearance-based methods. One of the widely studied methods to extract appearance information is based on Gabor wavelets [15, 26, 5]. Generally, the drawback of using Gabor filters is that they produce an extremely large number of features, and it is both time and memory intensive to convolve face images with a bank of Gabor filters to extract multi-scale and multi-orientation coefficients. Another promising approach to extract appearance information is to use Haar-like features, see Yang et al. [33]. Recently, texture descriptors and classification methods such as Local Binary Pattern (LBP) [19] and Local Phase Quantization (LPQ) [21] have also been studied to extract appearance-based facial features. Zhao et al. [35] proposed to model texture using volume local binary patterns (VLBP), an extension of LBP, for expression recognition.
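To make the texture-descriptor idea concrete, the following is a minimal sketch of the basic 3 x 3 LBP operator in Python with NumPy. This is our own illustrative code, not the implementation of any cited work; the function names are ours:

```python
import numpy as np

def lbp_3x3(img):
    """Basic 3x3 LBP: threshold each pixel's 8 neighbours against the
    centre value and pack the comparison results into an 8-bit code."""
    img = np.asarray(img, dtype=np.int32)
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.int32)
    # Clockwise neighbour offsets, starting at the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    centre = img[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        # s(u) = 1 if u >= 0, else 0; shifted into the bit position.
        codes |= ((neighbour >= centre).astype(np.int32) << bit)
    return codes

def lbp_histogram(codes, bins=256):
    """Histogram of LBP codes, used as the texture descriptor."""
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist
```

The histogram of these codes over a region is what appearance-based methods feed to a classifier; the uniform-pattern variant used later in the paper reduces the 256 bins to 59.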
Geometric-based methods. Geometric feature-based methods [34, 22, 28, 1] extract information about the shapes and locations of facial components to form a feature vector. The problem with geometric feature-based methods is that they usually require accurate and reliable facial feature detection and tracking, which is difficult to achieve in many real world applications where illumination changes over time and images are recorded at very low resolution.

Generally, we have found that all the reviewed methods for automatic facial expression recognition are computationally expensive and usually require a dimensionally large feature vector to complete the task. This explains their unsuitability for real-time applications. Secondly, very few studies in the literature tackle the issue of expression recognition from low resolution images, which further limits the applicability of expression recognition systems to real world applications. Lastly, all of the reviewed methods spend computational time on the whole face image, or divide the facial image based on some mathematical or geometric heuristic for feature extraction. We argue that the task of expression analysis and recognition could be done more efficiently if only some regions (i.e. salient regions) are selected for further processing, as happens in the human visual system. Thus, our contributions in this study are:

1. We propose a novel descriptor for facial expression analysis, the Pyramid of Local Binary Pattern (PLBP), which outperforms state-of-the-art methods for expression recognition on low resolution images (spatially degraded images). It also performs better than other state-of-the-art methods for good resolution images (with no degradation).
2. As the proposed framework is based on the human visual system, it algorithmically processes only salient facial regions, which reduces the length of the feature vector. This reduction in feature vector length makes the proposed framework suitable for real-time applications due to minimized computational complexity.

Pyramid of Local Binary Pattern

The proposed framework creates a novel feature space by extracting the proposed PLBP (pyramid of local binary pattern) features only from the visually salient facial regions (see Section 4 for the psycho-visual experiment). PLBP is a pyramidal spatial representation of the local binary pattern (LBP) descriptor. PLBP represents stimuli by their local texture (LBP) and the spatial layout of that texture. The spatial layout is acquired by tiling the image into regions at multiple resolutions; the idea is illustrated in Figure 2. If only the coarsest level is used, the descriptor reduces to a global LBP histogram. Compared to the multi-resolution LBP of Ojala et al. [20], our descriptor selects samples in a more uniformly distributed manner, whereas Ojala's LBP takes samples centered around a point, leading to missing information in the case of a face (which differs from a repetitive texture).

LBP features were initially proposed for texture analysis [19], but recently they have been successfully used for facial expression analysis [35, 25]. The most important properties of LBP features are their tolerance to illumination changes and their computational simplicity [18, 19, 20]. The operator labels the pixels of an image by thresholding the 3 x 3 neighbourhood of each pixel with the center value and considering the result as a binary number.

Figure 2: Pyramid of Local Binary Pattern. First row: stimuli at two different pyramid levels; second row: histograms of LBP at the two respective levels; third row: final descriptor

The histogram of the labels can then be used as a texture descriptor. Formally, the LBP operator takes the form:

LBP(x_c, y_c) = \sum_{n=0}^{7} s(i_n - i_c) 2^n    (1)

where n runs over the 8 neighbours of the central pixel c, i_c and i_n are the grey level values at c and n, and s(u) is 1 if u >= 0 and 0 otherwise. Later, the LBP operator was extended to use neighborhoods of different sizes
[20], as the original operator uses a 3 x 3 neighbourhood. Using circular neighborhoods and bilinearly interpolating the pixel values allows any radius and number of pixels in the neighborhood. The LBP operator with P sampling points on a circular neighborhood of radius R is given by:

LBP_{P,R} = \sum_{p=0}^{P-1} s(g_p - g_c) 2^p    (2)

Another extension of the original operator is the definition of uniform patterns, which can be used to reduce the length of the feature vector and implement a simple rotation-invariant descriptor. A local binary pattern is called uniform if the binary pattern contains at most two bitwise transitions from 0 to 1 or vice versa when the bit pattern is traversed circularly. Accumulating the patterns which have more than 2 transitions into a single bin yields an LBP operator denoted LBP^{u2}_{P,R}. These binary patterns can be used to represent texture primitives such as spots, flat areas, edges and corners.

We extend the LBP operator so that a stimulus can be represented by its local texture and the spatial layout of that texture. We call this extended LBP operator the pyramid of local binary pattern, or PLBP. PLBP creates the spatial pyramid by dividing the stimulus into finer spatial sub-regions, iteratively doubling the number of divisions in each dimension. It can be observed from Figure 2 that the pyramid at level l has 2^l sub-regions along each dimension (R_0, ..., R_m). Histograms of LBP features at the same level are concatenated. Then, their concatenation at different pyramid levels
gives the final PLBP descriptor (as shown in Figure 2). It can be defined as:

H_{l,i} = \sum_{x,y} I\{f_l(x, y) = i\} \cdot I\{(x, y) \in R_l\}    (3)

where l = 0, ..., m-1 and i = 0, ..., n-1; n is the number of different labels produced by the LBP operator and

I(A) = 1 if A is true, 0 otherwise    (4)

The dimensionality of the descriptor can be calculated as:

N \sum_{l} 4^l    (5)

where, in our experiment (see Section 6), l = 1 and N = 59, as we created the pyramid up to level 1 and extracted 59 LBP features using the LBP^{u2}_{8,2} operator, which denotes a uniform LBP operator with 8 sampling pixels in a local neighborhood of radius 2. This pattern reduces the histogram from 256 to 59 bins. In our experiment we obtained a 295 dimensional feature vector from one facial region, i.e. the mouth region (59 dimensions per sub-region), since we executed the experiment with a pyramid of level 1 (as shown in Figure 2).

Novelty of the proposed descriptor

Some methods in the literature use a pyramid of LBP for different applications and look similar to our proposed descriptor, i.e. [32, 9, 17]. Our proposition is novel, and there are differences in methodology that create differences in the extracted information. The method for face
recognition proposed in [32] creates pyramids before applying the LBP operator by down-sampling the original image, i.e. a scale-space representation, whereas we propose to create the spatial pyramid by dividing the stimulus into finer spatial sub-regions, iteratively doubling the number of divisions in each dimension. Secondly, our approach reduces memory consumption (it does not require storing the same image at different resolutions) and is computationally more efficient. Guo et al. [9] proposed an approach for face and palmprint recognition based on multiscale LBP. Their method seems similar to our method for expression recognition, but the way multiscale analysis is achieved differentiates our approach. The approach proposed in [9] achieves multiscale analysis using different values of P and R, where LBP(P, R) denotes a neighborhood of P equally spaced sampling points on a circle of radius R (discussed earlier). The same approach has been applied by Moore et al. [17] for facial feature analysis. Generally, the drawback of such an approach is that it increases the size of the feature histogram and increases the computational cost. [17] reports a feature vector dimensionality as high as 30,208 for multiscale facial expression analysis, as compared to our proposition, which creates a 590 dimensional feature vector (see Section 5) for the same task. We achieve multiscale analysis much more efficiently than these earlier proposed methods. By virtue of efficient multiscale analysis, our framework can be used for real-time applications (see Table 1 for the time and memory consumption comparison), which is not the case with the other methods.

As mentioned earlier, we base our framework for facial expression recognition on the human visual system (HVS), which selects only a few (salient) facial regions to extract information. In order to determine the saliency of facial
region(s) for a particular expression, we conducted a psycho-visual experiment with the help of an eye-tracker. The next section briefly explains the psycho-visual experimental study.

Psycho-Visual experiment

The aim of our experiment was to record the eye movement data of human observers in free viewing conditions. The data were analyzed to find which components of the face are salient for a specific displayed expression.

Participants, apparatus and stimuli

The eye movements of fifteen human observers were recorded using a video-based eye-tracker (EyeLink II system, SR Research) as the subjects watched a collection of 54 videos selected from the extended Cohn-Kanade (CK+) database [16], each showing one of the six universal facial expressions [6]. Observers included both males and females aged from 20 to 45 years with normal or corrected-to-normal vision. All observers were naïve to the purpose of the experiment.

Eye movement recording

Eye position was tracked at 500 Hz with an average noise less than . The head-mounted eye-tracker allows the flexibility to perform the experiment in free viewing conditions, as the system is designed to compensate for small head movements.

Psycho-Visual experiment results

In order to statistically quantify which region is perceptually most attractive for a specific expression, we calculated the average percentage of trial
Figure 3: Summary of the facial regions that emerged as salient for the six universal expressions. Salient regions are listed in order of importance (for example, the facial expression of fear has two salient regions, but the mouth is the most important region according to the HVS)

time for which observers fixated their gaze at specific region(s) in a particular time period. As the stimuli used for the experiment are dynamic, i.e. video sequences, it would have been incorrect to average all the fixations recorded during the trial time (run length of the video), as this could lead to a biased analysis of the data. To meaningfully observe and analyze the gaze trend across a video sequence, we divided each video sequence into three mutually exclusive time periods. The first time period corresponds to the initial frames of the video sequence, i.e. the neutral face. The last time period encapsulates the frames where the expression is shown with full intensity (apex frames). The second time period encapsulates the frames containing the transition of the facial expression, i.e. from the neutral face to the beginning of the desired expression (neutral to the onset of the expression). The fixations recorded for a particular time period were then averaged across the fifteen observers. For drawing conclusions we considered the second and third time periods, as they carry the most significant information about the specific displayed expression. The conclusions drawn are summarized in Figure 3. Refer to [11] for a detailed explanation of the psycho-visual experimental study.

Expression Recognition Framework

Feature selection, along with the region(s) from which these features are extracted, is one of the most important steps in recognizing expressions. As the proposed framework draws its inspiration from the human visual system (HVS), it extracts the proposed features, i.e. PLBP, only from the perceptually salient facial region(s) determined through the psycho-visual experiment. A schematic overview of the framework is presented in Figure 4. The steps of the proposed framework are as follows:

1. First, the framework extracts PLBP features from the mouth region, giving a feature vector of 295 dimensions (f_1, ..., f_295). Classification ("Classifier-a" in Figure 4) is carried out on the basis of the extracted features in order to divide the facial expressions into two groups. The first group comprises those expressions that have one perceptually salient region, i.e. happiness, sadness and surprise, while the second group is composed of those expressions that have two or more perceptually salient regions, i.e.
Figure 4: Schematic overview of the framework

anger, fear and disgust (see Section 4.3). The purpose of making two groups of expressions is to reduce the feature extraction computational time.

2. If the stimulus is classified into the first group, then it is classified as happiness, sadness or surprise by "Classifier-b", using the PLBP features already extracted from the mouth region.

3. If the stimulus is classified into the second group, then the framework extracts PLBP features from the eyes region (it is worth mentioning here that for the expression of disgust the nose region emerged as salient, but the framework does not explicitly extract features from the nose region, as the part of the nose that emerged as salient is the upper nose wrinkle area, which is connected to and already included in the localization of the eyes region; refer to Figure 3) and concatenates them with the PLBP features already extracted from the mouth region, giving a feature vector of 590 dimensions (f_1, ..., f_590). The concatenated feature vector is then fed to "Classifier-c" for the final classification.

Experiment and results

We performed person-independent facial expression recognition using the proposed PLBP features.¹ We performed four experiments to test different scenarios:

1. The first experiment was performed on the extended Cohn-Kanade (CK+) database [16]. This database contains 593 sequences of posed universal expressions.

2. The second experiment was performed to test the performance of the proposed framework on low resolution image sequences.

3. The third experiment tests the robustness of the proposed framework when generalizing to a new dataset.

4. The fourth experiment was performed on the MMI facial expression database (Parts IV and V of the database) [27], which contains spontaneous/natural expressions.

¹ A video showing the result of the proposed framework on good quality image sequences is available at:

For the first two experiments we used all 309 sequences from the CK+ database which have a FACS coded expression label [8]. The experiment
was carried out on the frames which cover the status from onset to apex of the expression, as done by Yang et al. [33]. The region of interest was obtained automatically using the Viola-Jones object detection algorithm [30] and processed to obtain the PLBP feature vector. We extracted LBP features only from the salient region(s) using the LBP^{u2}_{8,2} operator, which denotes a uniform LBP operator with 8 sampling pixels in a local neighborhood of radius 2. The only exception was in the second experiment, where we adopted the LBP^{u2}_{4,1} operator when the spatial facial resolution gets smaller than 36 x 48. In our framework we created the image pyramid up to level 1, and in turn obtained five sub-regions from one facial region, i.e. the mouth region (see Figure 2). In total we obtained a 295 dimensional feature vector (59 dimensions per sub-region). As mentioned earlier, we adopted the LBP^{u2}_{4,1} operator when the spatial facial resolution was 18 x 24; in this case we obtained a 75 dimensional feature vector (15 dimensions per sub-region). We recorded correct classification accuracy in the range of 95% for image pyramid level 1. We decided not to test the framework with further image pyramid levels, as this would double the size of the feature vector and thus increase the feature extraction time, while likely adding only a few percentage points of accuracy, which would be insignificant for the framework as a whole.

First experiment: posed expressions

This experiment measures the performance of the proposed framework on the classical database, i.e. the extended Cohn-Kanade (CK+) database [16]. Most methods in the literature report their performance on this database, so this experiment can be considered the benchmark experiment for a facial expression recognition framework.
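The spatial-pyramid construction described above (whole region at level 0, 2 x 2 tiles at level 1, histograms concatenated) can be sketched in a few lines. This is a minimal illustration under our own naming, assuming the LBP codes have already been computed and mapped to 59 uniform-pattern labels:

```python
import numpy as np

def plbp_descriptor(codes, levels=2, bins=59):
    """Concatenate per-tile histograms of LBP codes over a spatial
    pyramid: level l splits the region into 2**l x 2**l tiles.
    `codes` is a 2-D array of LBP labels already mapped to `bins`
    values (59 for the uniform LBP_{8,2} operator used in the paper).
    With levels=2 (pyramid levels 0 and 1) and bins=59 the output has
    (1 + 4) * 59 = 295 dimensions, matching N * sum_l 4**l."""
    h, w = codes.shape
    parts = []
    for l in range(levels):
        n = 2 ** l  # number of tiles per dimension at this level
        for i in range(n):
            for j in range(n):
                tile = codes[i * h // n:(i + 1) * h // n,
                             j * w // n:(j + 1) * w // n]
                hist, _ = np.histogram(tile, bins=bins, range=(0, bins))
                parts.append(hist)
    return np.concatenate(parts)
```

Concatenating the mouth-region descriptor with an eyes-region descriptor computed the same way gives the 590 dimensional vector used for the second expression group.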
The performance of the framework was evaluated for five different classifiers:

1. Support Vector Machine (SVM) with χ² kernel and γ = 1
2. C4.5 Decision Tree (DT) with reduced-error pruning
3. Random Forest (RF) of 10 trees
4. 2 Nearest Neighbor (2NN) based on Euclidean distance
5. Naive Bayes (NB) classifier

The above-mentioned classifiers are briefly described below.

Support vector machine (SVM). SVM performs an implicit mapping of data into a higher dimensional feature space, and then finds a linear separating hyperplane with the maximal margin to separate data in this higher dimensional space [29]. Given a training set of labeled examples {(x_i, y_i), i = 1, ..., l} where x_i ∈ R^n and y_i ∈ {-1, 1}, a new test example x is classified by the following function:

f(x) = sgn( \sum_{i=1}^{l} α_i y_i K(x_i, x) + b )    (6)

where α_i are the Lagrange multipliers of a dual optimization problem that describe the separating hyperplane, K(·,·) is a kernel function, and b is the threshold parameter of the hyperplane. We used the chi-square kernel as it is best suited for histograms. It is given by:

K(x, y) = 1 - (1/2) \sum_i (x_i - y_i)^2 / (x_i + y_i)    (7)

Classification Trees. A classification tree is a classifier composed of nodes and branches which break the set of samples into a set of covering
decision rules. At each node, a single test is made to obtain the partition. The starting node is called the root of the tree. In the final nodes, or leaves, a decision about the classification of the case is made. In this work, we used the C4.5 paradigm [24]. Random Forests (RF) are collections of decision trees (DTs) that have been constructed randomly. RFs generally perform better than DTs on unseen data.

Instance Based Learning. k-NN classifiers are instance-based algorithms taking a conceptually straightforward approach to approximating real or discrete valued target functions. The learning process consists simply in storing the presented data. All instances correspond to points in an n-dimensional space, and the nearest neighbors of a given query are defined in terms of the standard Euclidean distance. The probability of a query q belonging to a class c can be calculated as follows:

p(c|q) = \sum_{k \in K} W_k \cdot 1_{(k_c = c)} / \sum_{k \in K} W_k    (8)

W_k = 1 / d(k, q)    (9)

where K is the set of nearest neighbors, k_c the class of k, and d(k, q) the Euclidean distance of k from q.

Naive Bayes Classifiers. The Naive Bayes (NB) classifier uses Bayes' theorem to predict the class for each case, assuming that the predictive features are independent given the category. To classify a new sample characterized
by d features X = (X_1, X_2, ..., X_d), the NB classifier applies the following rule:

C_{NB} = arg max_{c_j \in C} p(c_j) \prod_{i=1}^{d} p(X_i | c_j)    (10)

where C_{NB} denotes the class label predicted by the Naive Bayes classifier and the possible classes of the problem are grouped in C = {c_1, ..., c_l}.

Results

The framework achieved average recognition rates of 96.7%, 97.9%, 96.2%, 94.7% and 90.2% for SVM, 2 Nearest Neighbor (2NN), Random Forest (RF), C4.5 Decision Tree (DT) and Naive Bayes (NB) respectively, using the 10-fold cross validation technique. One of the most interesting aspects of our approach is that it gives excellent results for a simple 2NN classifier, which is a non-parametric method. This points to the fact that the framework does not need computationally expensive methods such as SVM, random forests or decision trees to obtain good results. In general, the proposed framework achieved high expression recognition accuracies irrespective of the classifier, which proves the descriptive strength of the extracted features (the features minimize within-class variations of expressions while maximizing between-class variations). For comparison and reporting results, we have used the classification results obtained by the SVM, as it is the most cited method for classification in the literature.

Comparisons

We chose to compare the average recognition performance of our framework with the framework proposed by Shan et al. [25] with different SVM kernels.
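The chi-square kernel above is straightforward to compute over histogram features. The following is a minimal sketch under our reading of Eq. (7) (the function name and the eps guard against empty bins are ours):

```python
import numpy as np

def chi2_kernel(X, Y, eps=1e-10):
    """Chi-square kernel between rows of X and rows of Y:
    K(x, y) = 1 - 0.5 * sum_i (x_i - y_i)^2 / (x_i + y_i).
    Suited to LBP histograms, whose bins are non-negative counts;
    eps avoids division by zero when a bin is empty in both inputs."""
    X = np.asarray(X, dtype=np.float64)[:, None, :]   # shape (n, 1, d)
    Y = np.asarray(Y, dtype=np.float64)[None, :, :]   # shape (1, m, d)
    num = (X - Y) ** 2
    den = X + Y + eps
    return 1.0 - 0.5 * (num / den).sum(axis=-1)       # shape (n, m)
```

A Gram matrix built this way can, for example, be passed to an SVM implementation that accepts precomputed kernels (e.g. scikit-learn's `SVC(kernel='precomputed')`); whether that reproduces the exact classifier settings of the paper is an assumption on our part.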
Table 1: Comparison of time and memory consumption.

                              LBP [25]   Gabor [25]   Gabor [3]   PLBP
Memory (feature dimension)    2,478      42,650       92,…        590
Time (feature extraction)     0.03 s     30 s         … s         …

Our choice was based on the fact that both frameworks share a common underlying descriptor, i.e. the local binary pattern (LBP), and secondly the framework proposed by Shan et al. [25] is highly cited in the literature. Our framework obtained an average recognition rate of 93.5% for the SVM linear kernel, while for the same kernel Shan et al. [25] reported 91.5%. For SVM with polynomial kernel and SVM with RBF kernel, our framework achieved recognition accuracies of 94.7% and 94.9% respectively, compared to 91.5% and 92.6%. In terms of the time and memory costs of the feature extraction process, we measured and compared our descriptor with LBP and Gabor-wavelet features in Table 1. Table 1 shows the effectiveness of the proposed descriptor for facial feature analysis, i.e. PLBP, for real-time applications, as it is memory efficient and its extraction time is much lower than that of the other compared descriptors (see Section 5 for the dimensionality calculation). The feature dimensions reported in Table 1 are stored in the float data type, which occupies four bytes. The proposed framework is compared with other state-of-the-art frameworks using the same database (i.e. the Cohn-Kanade database) and the results are presented in Table 2. Results from [33] are presented for the
Table 2: Comparison with the state-of-the-art methods for posed expressions.

          Sequence Num   Class Num   Performance Measure   Recog. Rate (%)
[15]      …              …           leave-one-out         93.3
[35]      …              …           …-fold                …
[35]      …              …           …-fold                …
[14]      …              …           …-fold                94.5
[26]      …              …           …                     …
[33]a     …              …           …% split              92.3
[33]b     …              …           …% split              80
Ours      …              …           10-fold               96.7
Ours      …              …           …-fold                …

two configurations. [33]a shows the result when the method was evaluated on the last three frames of each sequence, while [33]b presents the reported result for the frames which encompass the status from onset to apex of the expression. It can be observed from Table 2 that the proposed framework is comparable to any other state-of-the-art method in terms of expression recognition accuracy. The method discussed in [33]b is directly comparable to our method, as we also evaluated the framework on similar frames. In this configuration, our framework is better in terms of average recognition accuracy. In general, Tables 1 and 2 show that the framework is better than the state-of-the-art frameworks in terms of average expression recognition performance and the time and memory costs of feature extraction. These
results show that the system can be used with a high degree of confidence for real-time applications, as its unoptimized Matlab implementation runs at more than 30 frames/second (30 fps).

Second experiment: low resolution image sequences

Figure 5: Robustness of different methods for facial expression recognition with decreasing image resolution. PHOG[ICIP] corresponds to the framework proposed by Khan et al. [12], Gabor[CVPRW] corresponds to Tian's work [26], and LBP[JIVC] and Gabor[JIVC] correspond to results reported by Shan et al. [25]

Most existing state-of-the-art systems for expression recognition
report their results on high resolution images without reporting results on low resolution images. As mentioned earlier, there are many real world applications that require an expression recognition system to work amicably on low resolution images; smart meetings, video conferencing and visual surveillance are some examples of such applications. To compare with Tian's work [26], we tested our proposed framework on low resolution images at four different facial resolutions (144 x 192, 72 x 96, 36 x 48, 18 x 24) based on the Cohn-Kanade database. Tian's work can be considered the pioneering work on facial expression recognition from low resolution images. Figure 5 shows the images at the different spatial resolutions along with the average recognition accuracy achieved by the different methods. Low resolution image sequences were obtained by down-sampling the original sequences. All other experimental parameters, i.e. descriptor, number of sequences and region of interest, were the same as mentioned earlier in Section 6. Figure 5 compares the recognition results of the proposed framework with the state-of-the-art methods on the four different low facial resolutions. The reported results of our proposed method are obtained using a support vector machine (SVM) with χ² kernel and γ = 1. In Figure 5 the recognition curve for our proposed method is shown as PLBP-SVM, the recognition curves of LBP [25] and Gabor [25] are shown as LBP[JIVC] and Gabor[JIVC] respectively, the curve for Tian's work [26] is shown as Gabor[CVPRW], and the curve for the system proposed by Khan et al. [12] is shown as PHOG[ICIP]. In the results reported for LBP [25] and Gabor [25], the facial image resolutions are 110 x 150, 55 x 75, 27 x 37 and 14 x 19, which are comparable to the resolutions of 144 x 192, 72 x 96, 36 x 48 and 18 x 24 pixels in our experiment. The referenced
25 figure shows the supremacy of the proposed framework for low resolution images. Specially for the smallest tested facial image resolution (18 x 24) our framework performs much better than any other compared state-of-the-art method. Results from the first and second experiment show that the proposed framework for facial expression recognition works amicably on classical dataset (CK dataset) and its performance is not effected significantly for low resolution images. Secondly, the framework has a very low memory requirement and thus it can be utilized for real-time applications Third experiment: generalization on the new dataset The aim of this experiment is to study how well the proposed framework generalizes on the new dataset. We used image sequences from CK+ dataset and FG-NET FEED (Facial Expressions and Emotion Database) [31]. FG-NET FEED contains 399 video sequences across 18 different individuals showing seven facial expressions i.e. six universal expression [6] plusone neutral. In this dataset individuals were not asked to act rather expressions were captured while showing them video clips or still images. The experiment was carried out on the frames which covers the status of onset to apex of the expression as done in the previous experiment. This experiment was performed in two different scenarios, with the same classifier parameters as the first experiment: a. In the first scenario samples from the CK+ database were used for the training of different classifiers and samples from FG-NET FEED [31] were used for the testing. Obtained results are presented in Table 3. 25
b. In the second scenario, we used samples from FG-NET FEED for training and testing was carried out with the CK+ database samples. Results obtained are presented in the last two rows of Table 3.

This experiment simulates the real-life situation in which the framework would be employed to recognize facial expressions on unseen data. The reported average recognition percentages for the training phase were calculated using the 10-fold cross validation method. The obtained results are encouraging, and they can be further improved by training the classifiers on more than one dataset before use in a real-life scenario.

Table 3: Average recognition accuracy (%) of the SVM, C4.5 DT, RF and 2NN classifiers on training and test samples, in two settings: training on the CK+ database and testing with FG-NET FEED, and training on FG-NET FEED and testing with the CK+ database.

Fourth experiment: spontaneous expressions

Spontaneous/natural facial expressions differ substantially from posed expressions [2]; the same has also been shown by psychophysical work [7]. To
test the performance of the proposed framework on spontaneous facial expressions, we used 392 video segments from parts IV and V of the MMI facial expression database [27]. Parts IV and V of the database contain spontaneous/naturalistic expressions recorded from 25 participants aged between 20 and 32 years in two different settings. Due to ethical concerns, the database contains only video recordings of the expressions of happiness, surprise and disgust [27]. The framework achieved average recognition rates of 91%, 91.4%, 90.3% and 88% for SVM, 2-nearest neighbor, random forest and C4.5 decision tree respectively, using the 10-fold cross validation technique. The algorithm of Park et al. [23] for spontaneous expression recognition achieved results for three expressions in the range of 56% to 88% for four different configurations, which is lower than the recognition rate of our proposed algorithm, although the results cannot be compared directly as they used a different database.

Conclusions and future work

We presented a novel descriptor and framework for automatic and reliable facial expression recognition. The framework is based on an initial study of human vision and works adequately on posed as well as spontaneous expressions. The key conclusions drawn from the study are:

1. Facial expressions can be analyzed automatically by mimicking the human visual system, i.e. extracting features only from the salient facial regions.

2. Features extracted using the proposed pyramidal local binary pattern (PLBP) operator have strong discriminative ability, as the recognition result for
the six universal expressions is not affected by the choice of classifier.

3. The proposed framework is robust to low resolution images and spontaneous expressions, and generalizes well to unseen data.

4. The proposed framework can be used for real-time applications, since its unoptimized Matlab implementation runs at more than 30 frames per second (30 fps) on a 64-bit Windows machine with an i7 processor running at 2.4 GHz and 6 GB of RAM.

In the future we plan to investigate the effect of occlusion, as this parameter could significantly impact the performance of the framework in real-world applications. Secondly, the notion of movement could improve the performance of the proposed framework for real-world applications, as the experimental study conducted by Bassili [4] suggested that dynamic information is important for facial expression recognition. Another parameter that needs to be investigated is variation in camera angle, as for many applications a frontal facial pose is difficult to record.

References

[1] Bai, Y., Guo, L., Jin, L., Huang, Q., A novel feature extraction method using pyramid histogram of orientation gradients for smile recognition, in: International Conference on Image Processing.

[2] Bartlett, M.S., Littlewort, G., Braathen, B., Sejnowski, T.J., Movellan, J.R., A prototype for automatic recognition of spontaneous facial actions, in: Advances in Neural Information Processing Systems.
[3] Bartlett, M.S., Littlewort, G., Fasel, I., Movellan, J.R., Real time face detection and facial expression recognition: Development and applications to human computer interaction, in: Conference on Computer Vision and Pattern Recognition Workshop.

[4] Bassili, J.N., Emotion recognition: The role of facial movement and the relative importance of upper and lower areas of the face. Journal of Personality and Social Psychology 37.

[5] Donato, G., Bartlett, M.S., Hager, J.C., Ekman, P., Sejnowski, T.J., Classifying facial actions. IEEE Transactions on Pattern Analysis and Machine Intelligence 21.

[6] Ekman, P., Universals and cultural differences in facial expressions of emotion, in: Nebraska Symposium on Motivation, Lincoln: University of Nebraska Press.

[7] Ekman, P., Telling Lies: Clues to Deceit in the Marketplace, Politics, and Marriage. W. W. Norton & Company, New York. 3rd edition.

[8] Ekman, P., Friesen, W., The facial action coding system: A technique for the measurement of facial movements. Consulting Psychologists Press.

[9] Guo, Z., Zhang, L., Zhang, D., Mou, X., Hierarchical multiscale LBP for face and palmprint recognition, in: IEEE International Conference on Image Processing.
[10] Hadjidemetriou, E., Grossberg, M., Nayar, S., Multiresolution histograms and their use for recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 26.

[11] Khan, R.A., Meyer, A., Konik, H., Bouakaz, S., 2012a. Exploring human visual system: study to aid the development of automatic facial expression recognition framework, in: Computer Vision and Pattern Recognition Workshop.

[12] Khan, R.A., Meyer, A., Konik, H., Bouakaz, S., 2012b. Human vision inspired framework for facial expressions recognition, in: IEEE International Conference on Image Processing.

[13] Kolsch, M., Turk, M., Analysis of rotational robustness of hand detection with a Viola-Jones detector, in: 17th International Conference on Pattern Recognition, Vol. 3.

[14] Kotsia, I., Zafeiriou, S., Pitas, I., Texture and shape information fusion for facial expression and facial action unit recognition. Pattern Recognition 41.

[15] Littlewort, G., Bartlett, M.S., Fasel, I., Susskind, J., Movellan, J., Dynamics of facial expression extracted automatically from video. Image and Vision Computing 24.

[16] Lucey, P., Cohn, J.F., Kanade, T., Saragih, J., Ambadar, Z., Matthews, I., The extended Cohn-Kanade dataset (CK+): A complete facial expression dataset for action unit and emotion-specified expression, in:
IEEE Conference on Computer Vision and Pattern Recognition Workshops.

[17] Moore, S., Bowden, R., Local binary patterns for multi-view facial expression recognition. Computer Vision and Image Understanding 115.

[18] Ojala, T., Pietikäinen, M., Unsupervised texture segmentation using feature distributions. Pattern Recognition 32.

[19] Ojala, T., Pietikäinen, M., Harwood, D., A comparative study of texture measures with classification based on featured distributions. Pattern Recognition 29.

[20] Ojala, T., Pietikäinen, M., Mäenpää, T., Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Transactions on Pattern Analysis and Machine Intelligence 24.

[21] Ojansivu, V., Heikkilä, J., Blur insensitive texture classification using local phase quantization, in: International Conference on Image and Signal Processing.

[22] Pantic, M., Patras, I., Dynamics of facial expression: recognition of facial actions and their temporal segments from face profile image sequences. IEEE Transactions on Systems, Man, and Cybernetics 36.

[23] Park, S., Kim, D., Spontaneous facial expression classification
with facial motion vector, in: IEEE Conference on Automatic Face and Gesture Recognition.

[24] Quinlan, J.R., C4.5: Programs for Machine Learning. Morgan Kaufmann.

[25] Shan, C., Gong, S., McOwan, P.W., Facial expression recognition based on local binary patterns: A comprehensive study. Image and Vision Computing 27.

[26] Tian, Y., Evaluation of face resolution for expression analysis, in: Computer Vision and Pattern Recognition Workshop.

[27] Valstar, M., Pantic, M., Induced disgust, happiness and surprise: an addition to the MMI facial expression database, in: International Language Resources and Evaluation Conference.

[28] Valstar, M., Patras, I., Pantic, M., Facial action unit detection using probabilistic actively learned support vector machines on tracked facial point data, in: IEEE Conference on Computer Vision and Pattern Recognition Workshop.

[29] Vapnik, V.N., The nature of statistical learning theory. New York: Springer-Verlag.

[30] Viola, P., Jones, M., Rapid object detection using a boosted cascade of simple features, in: IEEE Conference on Computer Vision and Pattern Recognition.
[31] Wallhoff, F., Facial expressions and emotion database. waf/fgnet/feedtum.html.

[32] Wang, W., Chen, W., Xu, D., Pyramid-based multi-scale LBP features for face recognition, in: International Conference on Multimedia and Signal Processing (CMSP).

[33] Yang, P., Liu, Q., Metaxas, D.N., Exploring facial expressions with compositional features, in: IEEE Conference on Computer Vision and Pattern Recognition.

[34] Zhang, Y., Ji, Q., Active and dynamic information fusion for facial expression understanding from image sequences. IEEE Transactions on Pattern Analysis and Machine Intelligence 27.

[35] Zhao, G., Pietikäinen, M., Dynamic texture recognition using local binary patterns with an application to facial expressions. IEEE Transactions on Pattern Analysis and Machine Intelligence 29.

[36] Zhaoping, L., Theoretical understanding of the early visual processes by data compression and data selection. Network: Computation in Neural Systems 17.
Gender Classification Technique Based on Facial Features using Neural Network Anushri Jaswante Dr. Asif Ullah Khan Dr. Bhupesh Gour Computer Science & Engineering, Rajiv Gandhi Proudyogiki Vishwavidyalaya,
More informationFACIAL EXPRESSION RECOGNITION AND EXPRESSION INTENSITY ESTIMATION
FACIAL EXPRESSION RECOGNITION AND EXPRESSION INTENSITY ESTIMATION BY PENG YANG A dissertation submitted to the Graduate School New Brunswick Rutgers, The State University of New Jersey in partial fulfillment
More informationFacial Expression Recognition with PCA and LBP Features Extracting from Active Facial Patches
Facial Expression Recognition with PCA and LBP Features Extracting from Active Facial Patches Yanpeng Liu a, Yuwen Cao a, Yibin Li a, Ming Liu, Rui Song a Yafang Wang, Zhigang Xu, Xin Ma a Abstract Facial
More informationAPPLICATION OF LOCAL BINARY PATTERN AND PRINCIPAL COMPONENT ANALYSIS FOR FACE RECOGNITION
APPLICATION OF LOCAL BINARY PATTERN AND PRINCIPAL COMPONENT ANALYSIS FOR FACE RECOGNITION 1 CHETAN BALLUR, 2 SHYLAJA S S P.E.S.I.T, Bangalore Email: chetanballur7@gmail.com, shylaja.sharath@pes.edu Abstract
More informationFacial Expression Recognition Using Non-negative Matrix Factorization
Facial Expression Recognition Using Non-negative Matrix Factorization Symeon Nikitidis, Anastasios Tefas and Ioannis Pitas Artificial Intelligence & Information Analysis Lab Department of Informatics Aristotle,
More informationImage Processing Pipeline for Facial Expression Recognition under Variable Lighting
Image Processing Pipeline for Facial Expression Recognition under Variable Lighting Ralph Ma, Amr Mohamed ralphma@stanford.edu, amr1@stanford.edu Abstract Much research has been done in the field of automated
More informationFace Alignment Under Various Poses and Expressions
Face Alignment Under Various Poses and Expressions Shengjun Xin and Haizhou Ai Computer Science and Technology Department, Tsinghua University, Beijing 100084, China ahz@mail.tsinghua.edu.cn Abstract.
More informationTowards Facial Expression Recognition in the Wild: A New Database and Deep Recognition System
Towards Facial Expression Recognition in the Wild: A New Database and Deep Recognition System Xianlin Peng, Zhaoqiang Xia, Lei Li, Xiaoyi Feng School of Electronics and Information, Northwestern Polytechnical
More informationFacial Expression Recognition Using Gabor Motion Energy Filters
Facial Expression Recognition Using Gabor Motion Energy Filters Tingfan Wu Marian S. Bartlett Javier R. Movellan Dept. Computer Science Engineering Institute for Neural Computation UC San Diego UC San
More informationILLUMINATION NORMALIZATION USING LOCAL GRAPH STRUCTURE
3 st January 24. Vol. 59 No.3 25-24 JATIT & LLS. All rights reserved. ILLUMINATION NORMALIZATION USING LOCAL GRAPH STRUCTURE HOUSAM KHALIFA BASHIER, 2 LIEW TZE HUI, 3 MOHD FIKRI AZLI ABDULLAH, 4 IBRAHIM
More informationSurvey on Human Face Expression Recognition
Available Online at www.ijcsmc.com International Journal of Computer Science and Mobile Computing A Monthly Journal of Computer Science and Information Technology IJCSMC, Vol. 3, Issue. 10, October 2014,
More informationRecognition of Facial Action Units with Action Unit Classifiers and An Association Network
Recognition of Facial Action Units with Action Unit Classifiers and An Association Network Junkai Chen 1, Zenghai Chen 1, Zheru Chi 1 and Hong Fu 1,2 1 Department of Electronic and Information Engineering,
More informationMULTI ORIENTATION PERFORMANCE OF FEATURE EXTRACTION FOR HUMAN HEAD RECOGNITION
MULTI ORIENTATION PERFORMANCE OF FEATURE EXTRACTION FOR HUMAN HEAD RECOGNITION Panca Mudjirahardjo, Rahmadwati, Nanang Sulistiyanto and R. Arief Setyawan Department of Electrical Engineering, Faculty of
More informationFeature Descriptors. CS 510 Lecture #21 April 29 th, 2013
Feature Descriptors CS 510 Lecture #21 April 29 th, 2013 Programming Assignment #4 Due two weeks from today Any questions? How is it going? Where are we? We have two umbrella schemes for object recognition
More informationEffective Classifiers for Detecting Objects
Effective Classifiers for Detecting Objects Michael Mayo Dept. of Computer Science University of Waikato Private Bag 3105, Hamilton, New Zealand mmayo@cs.waikato.ac.nz Abstract Several state-of-the-art
More informationSketchable Histograms of Oriented Gradients for Object Detection
Sketchable Histograms of Oriented Gradients for Object Detection No Author Given No Institute Given Abstract. In this paper we investigate a new representation approach for visual object recognition. The
More informationBridging the Gap Between Local and Global Approaches for 3D Object Recognition. Isma Hadji G. N. DeSouza
Bridging the Gap Between Local and Global Approaches for 3D Object Recognition Isma Hadji G. N. DeSouza Outline Introduction Motivation Proposed Methods: 1. LEFT keypoint Detector 2. LGS Feature Descriptor
More informationLBP with Six Intersection Points: Reducing Redundant Information in LBP-TOP for Micro-expression Recognition
LBP with Six Intersection Points: Reducing Redundant Information in LBP-TOP for Micro-expression Recognition Yandan Wang 1, John See 2, Raphael C.-W. Phan 1, Yee-Hui Oh 1 1 Faculty of Engineering, Multimedia
More informationLearning to Recognize Faces in Realistic Conditions
000 001 002 003 004 005 006 007 008 009 010 011 012 013 014 015 016 017 018 019 020 021 022 023 024 025 026 027 028 029 030 031 032 033 034 035 036 037 038 039 040 041 042 043 044 045 046 047 048 049 050
More informationAutomatic Facial Expression Recognition Using Features of Salient Facial Patches
1 Automatic Facial Expression Recognition Using Features of Salient Facial Patches S L Happy and Aurobinda Routray Abstract Extraction of discriminative features from salient facial patches plays a vital
More informationShort Paper Boosting Sex Identification Performance
International Journal of Computer Vision 71(1), 111 119, 2007 c 2006 Springer Science + Business Media, LLC. Manufactured in the United States. DOI: 10.1007/s11263-006-8910-9 Short Paper Boosting Sex Identification
More informationAn Efficient LBP-based Descriptor for Facial Depth Images applied to Gender Recognition using RGB-D Face Data
An Efficient LBP-based Descriptor for Facial Depth Images applied to Gender Recognition using RGB-D Face Data Tri Huynh, Rui Min, Jean-Luc Dugelay Department of Multimedia Communications, EURECOM, Sophia
More informationDynamic Human Fatigue Detection Using Feature-Level Fusion
Dynamic Human Fatigue Detection Using Feature-Level Fusion Xiao Fan, Bao-Cai Yin, and Yan-Feng Sun Beijing Key Laboratory of Multimedia and Intelligent Software, College of Computer Science and Technology,
More information