Uniform Local Active Forces: A Novel Gray-Scale Invariant Local Feature Representation for Facial Expression Recognition
Mohammad Shahidul Islam
Assistant Professor, Department of Computer Science and Engineering, Atish Dipankar University of Science & Technology, Dhaka, Bangladesh. suva93@gmail.com

Abstract
Facial expression recognition is a dynamic topic in multiple disciplines: psychology, behavioral science, sociology, medical science, computer science, etc. Local feature representations such as the Local Binary Pattern are widely adopted for facial expression recognition due to their simplicity and high efficiency. However, the long histogram produced by the Local Binary Pattern is not suitable for large-scale face databases. Considering this problem, a simple gray-scale invariant local feature descriptor is proposed for facial expression recognition. The local feature of a pixel is computed from the magnitudes of the gray color intensity differences between the pixel and its neighboring pixels. A facial image is divided into 9x9=81 blocks, and the histograms of local features for all 81 blocks are concatenated to serve as a locally distinctive descriptor of the facial image. The Extended Cohn-Kanade dataset is used to evaluate the efficiency of the proposed method. It is compared with other appearance-based feature representations for static images in terms of classification accuracy using a multiclass SVM. Experimental results show that the proposed feature representation, along with a Support Vector Machine, is effective for facial expression recognition.

Keywords: Facial Expression Recognition, Facial Image Representation, Feature Descriptor, Pattern Recognition, CK+, JAFFE, Image Processing, Computer Vision

I. INTRODUCTION
Facial expression is very important for daily activities, allowing someone to express feelings beyond the verbal world and understand each other in various situations [15]. Some expressions instigate human actions, and others enrich the meaning of human communication.
Mehrabian [15] observed that the verbal part of human communication contributes only 7%, the vocal part contributes 38%, and facial movement and expression yields 55% of the meaning of the communication. This means that the facial part makes the major contribution to human communication. Therefore, automatic facial expression recognition has become a challenging problem in computer vision. Due to its potentially important applications in man-machine interaction, it has attracted much attention from researchers in the past few years [29].

A. Motivation
The survey reveals that most facial expression recognition systems (FERS) are based on the Facial Action Coding System (FACS) [24], [25], [21], which involves considerable complexity due to the facial feature detection and extraction procedures. Geometric shape based models have problems with in-plane face transformations [26], [20]. The Gabor wavelet [31] is used widely in this field, but its feature vector is huge and computationally expensive. Another appearance-based [23] method, LBP (Local Binary Pattern) [17], which is adopted by many researchers, also has disadvantages: (1) it produces too-long histograms, which slows down recognition; (2) under certain circumstances it misses the local feature, as it does not consider the effect of the center pixel [10]. Though LBP_RIU2 removes the first disadvantage by making the histogram small, as a penalty it drops 197 patterns out of 256, which may contain valuable local structure. Aiming at these problems, a new methodology named LAF_U (Uniform Local Active Forces) is proposed.
LAF_U is a local feature descriptor for static images which not only overcomes the above disadvantages but also has the following advantages: simpler computation than existing methods; robust feature extraction; easy to understand and good for further research; and low memory cost due to its tiny feature vector, so integration with devices like CCTV or still cameras is easy and cost effective.

B. Related Works
There are seven basic types of facial expressions: contempt, fear, sadness, disgust, anger, surprise and happiness [21]. Most research on facial expression recognition (FER) is done on still images. The psychological experiments by Bassili [5] concluded that facial expressions are recognized more precisely from video than from a single still image. Yang et al. [34] used the Local Binary Pattern and Local Phase Quantization together to build a facial expression recognition
system. Kabir et al. [35] proposed LDPv (Local Directional Pattern - Variance), applying weights to the feature vector using local variance, and found it effective for facial expression recognition with a support vector machine as the classifier. Xiaohua et al. [36] used LBP-TOP (LBP on Three Orthogonal Planes) on the eyes, nose and mouth, developing a new method that can learn weights for multiple feature sets in facial components. Tian et al. [24] proposed a multi-state face component model of AUs with a neural network for classification. They developed an automatic system to analyze subtle changes in facial expressions based on both permanent facial features (brows, eyes, mouth) and transient facial features (deepening of facial furrows) in a nearly frontal image sequence. Cohen et al. [9] employed Naive Bayes classifiers and hidden Markov models (HMMs) together to recognize human facial expressions from video sequences. They introduced and tested different Bayesian network classifiers for classifying expressions from video, focusing on changes in distribution assumptions and feature dependency structures. Zhang et al. [30] proposed an IR-illumination camera for facial feature detection and tracking, and recognized the facial expressions using Dynamic Bayesian Networks (DBNs). They characterized facial expressions by detecting 26 facial features around the regions of the eyes, nose and mouth. Anderson and McOwan [2] proposed a fully automated, multistage system for real-time recognition of facial expressions. They used a multichannel gradient model (MCGM) to determine facial optical flow; the motion signatures obtained are then classified using Support Vector Machines. Yeasin et al. [28] presented a spatio-temporal approach to recognizing the six universal facial expressions from visual data and using them to compute levels of interest, employing discrete hidden Markov models (DHMMs) to recognize the facial expressions.
Pantic and Rothkrantz [20] presented a method which can handle a large range of human facial behavior by recognizing the facial muscle actions that produce expressions. They applied face-profile contour tracking and rule-based reasoning to recognize 20 AUs occurring alone or in combination in nearly left-profile-view face image sequences, achieving an 84.9% accuracy rate. Kotsia et al. [13] manually placed some Candide grid nodes on face landmarks to create a facial wireframe model for facial expressions, with a Support Vector Machine (SVM) for classification. Besides facial representations using AUs, some local feature representations have also been proposed; local features are much easier to extract than AUs. Ahonen et al. [1] proposed a new facial representation strategy for still images based on the Local Binary Pattern (LBP). In this method, the LBP value at the referenced center pixel of a 3x3 pixels area is computed from the gray color intensities of the pixel and its neighboring pixels as follows:

LBP = sum_{i=1..P} 2^(i-1) * f(g(i) - c)    (1)

f(x) = { 0 if x < 0; 1 if x >= 0 }    (2)

where c denotes the gray color intensity of the center pixel, g(i) is the gray color intensity of its i-th neighbor, and P stands for the number of neighbors, i.e. 8. Figure 1 shows an example of obtaining the LBP value of the referenced center pixel for a given 3x3 pixels area. An extension of the original LBP operator called LBP_RIU2 was proposed by [18]. It reduces the length of the feature vector and implements a simple rotation-invariant descriptor. An LBP pattern is called uniform if the binary pattern contains at most two bitwise transitions from 0 to 1 or vice versa. For example, patterns with 0 or 2 transitions are uniform, whereas patterns with 4 or 6 transitions are not. The uniform patterns occur more commonly in image textures than the non-uniform ones; therefore, the latter are neglected. The uniform ones yield only 59 different patterns.
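Eq. (1) can be sketched in a few lines of Python. The clockwise neighbor ordering starting at the top-left is an assumption for illustration; any fixed ordering yields a valid LBP code:

```python
def lbp_value(block):
    """Basic LBP code (Eq. 1) for the center of a 3x3 block.
    block: 3x3 nested list of gray intensities; center is block[1][1]."""
    c = block[1][1]
    # Neighbors in a fixed clockwise order starting at the top-left.
    neighbors = [block[0][0], block[0][1], block[0][2],
                 block[1][2], block[2][2], block[2][1],
                 block[2][0], block[1][0]]
    code = 0
    for i, g in enumerate(neighbors):
        if g >= c:            # f(x) = 1 when g(i) - c >= 0, Eq. (2)
            code |= 1 << i    # weight 2^(i-1) in the paper's 1-based index
    return code
```

A flat block thresholds every neighbor to 1 (all differences are zero, so f gives 1), producing the all-ones code 255.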
To create a rotation-invariant LBP descriptor, a uniform pattern can be rotated clockwise P-1 times (P = number of bits in the pattern); if the result matches another pattern, the two are taken as a single pattern. Hence, instead of 59 bins, only 8 bins are needed to construct a histogram representing the local feature for a given local block. Once the LBP local features for all blocks of a face are extracted, they are concatenated into an enhanced feature vector. This method has proven to be a growing success: it is adopted by many researchers and has been used successfully for facial expression recognition [32], [33]. Although LBP features achieve high accuracy rates for facial expression recognition, LBP extraction can be time consuming.

C. Contributions in This Paper
The proposed feature representation method LAF_U can capture more texture information from a local 3x3 pixels area. The computed feature holds information about the local differences (the differences in gray color intensity between the referenced center pixel and its surrounding pixels) along with four possible accumulative gradient directions, keeping the feature vector length down to 8 for each of the 9x9=81 blocks. It is robust to monotonic gray-scale changes caused, for example, by illumination variations. Due to its tiny feature vector length, it is appropriate for large-scale facial datasets. Unlike LBP, LAF_U never neglects the center pixel during feature extraction. So, compared with LBP [17] or LBP_RIU2 [18], the proposed method performs better in both time and classification accuracy; see Experimental Results and Analysis. The rest of the paper is organized as follows: the proposed local feature representation method and system framework in Section II, data collection and experimental setup in Section III, results and analysis in Section IV, and the conclusion in Section V.

Figure 1: Example of obtaining LBP for a 3x3 block
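The uniformity and rotation-grouping rules described above can be checked mechanically. In this sketch, counting circular bitwise transitions yields 58 distinct uniform 8-bit codes; together with the single bin that collects all non-uniform codes, this gives the 59-bin histogram mentioned above. The helper names are illustrative:

```python
def transitions(code, bits=8):
    """Count circular 0->1 / 1->0 transitions in the bit pattern."""
    t = 0
    for i in range(bits):
        a = (code >> i) & 1
        b = (code >> ((i + 1) % bits)) & 1
        t += int(a != b)
    return t

def is_uniform(code, bits=8):
    """Uniform patterns have at most two bitwise transitions."""
    return transitions(code, bits) <= 2

def rotation_min(code, bits=8):
    """Smallest value over all bit rotations; rotated versions of a
    pattern share this canonical form, as in rotation-invariant LBP."""
    mask = (1 << bits) - 1
    best = code
    for _ in range(bits - 1):
        code = ((code >> 1) | ((code & 1) << (bits - 1))) & mask
        best = min(best, code)
    return best

uniform_codes = [c for c in range(256) if is_uniform(c)]
```

Grouping the uniform codes by their canonical rotation shows they collapse into one class per number of set bits, which is what makes the rotation-invariant histogram so short.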
II. PROPOSED FEATURE REPRESENTATION METHOD
The proposed feature representation method is based on the facial texture information of a 3x3 pixels local area. The representation operates on gray-scale facial images. It is designed to capture, for each pixel, the local pattern of gray color intensities of its neighboring pixels with respect to the gray color intensity of the pixel, along with four directions of gradients. These patterns represent distinctive features for a particular pixel. The histogram of all possible patterns is constructed for each of the 9x9 blocks. These histograms are then concatenated to form the local feature representation of the input image. The representation method is as follows:

D. Gray-scale Invariant Local Active Forces (LAF)
The local pattern for a pixel is computed from its 3x3 pixels area, see Figure 2.

Figure 2: 3x3 pixels area, with pixels labeled A, B, C, D, E, F, G, H and I (E is the center).

The pattern can be derived as follows:

p = |f - e| - |d - e|    (3)
q = |c - e| - |g - e|    (4)
r = |b - e| - |h - e|    (5)
s = |a - e| - |i - e|    (6)

P = { 0 if p < 0; 1 if p >= 0 }    (7)

where a, b, c, d, e, f, g, h and i are the gray color intensities of pixels A, B, C, D, E, F, G, H and I, respectively. p represents the winning force between two opposite forces, or local differences, i.e. E to F and E to D, and P corresponds to the direction of the winning force. Q, R and S are defined and derived in the same way as P. Then

E = (P)(Q)(R)(S)

where E is the local pattern at the central pixel E. Thus, E contains 4 bits and can therefore take only 2^4 = 16 different patterns. A detailed example of local pattern extraction at pixel E is given in Figure 4. The feature vector length for each block is therefore 16 for LAF, so the feature dimension for all 81 blocks is 81x16 = 1296.

Figure 3: Example of LAF_U being gray-scale invariant: the same local 3x3 pixels area in normal, low and high light yields the same local pattern.
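Eqs. (3)-(7) can be sketched as follows. This assumes the opposing "forces" are the absolute intensity differences toward opposite neighbors, per the magnitude-based description in the text, with ties (p = 0) counted as direction 1:

```python
def laf_pattern(block):
    """4-bit LAF code for the center of a 3x3 block (Eqs. 3-7).
    block is a 3x3 nested list of gray intensities laid out as
    [[a, b, c], [d, e, f], [g, h, i]], with E the center pixel."""
    (a, b, c), (d, e, f), (g, h, i) = block
    p = abs(f - e) - abs(d - e)   # horizontal pair: E-F force vs E-D force
    q = abs(c - e) - abs(g - e)   # anti-diagonal pair: E-C vs E-G
    r = abs(b - e) - abs(h - e)   # vertical pair: E-B vs E-H
    s = abs(a - e) - abs(i - e)   # main-diagonal pair: E-A vs E-I
    P, Q, R, S = (1 if v >= 0 else 0 for v in (p, q, r, s))
    return (P << 3) | (Q << 2) | (R << 1) | S   # pattern E = (P)(Q)(R)(S)
```

Because each bit depends only on differences of intensities, adding a constant brightness offset to every pixel leaves the code unchanged, which is the gray-scale invariance property claimed for LAF.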
LAF_U is gray-scale invariant because monotonic gray-scale changes caused, for example, by illumination variations do not affect the local pattern, see Figure 3. This is because the local patterns are obtained using the magnitudes of the local differences, as shown in Figure 4, and these magnitudes are independent of the pixels' absolute gray color intensities within a 3x3 pixels local area. Figure 3 illustrates an example of LAF_U being gray-scale invariant: in all three different lighting conditions, LAF_U sustains the pattern. The standard Local Binary Pattern (LBP) [17] encodes the relationship between the referenced pixel and its surrounding neighbors by computing gray color intensity differences. The proposed method encodes the relationship between the referenced pixel and its neighbors based on the four combined gradient directions through the center, calculated using the local magnitudes of the gray color intensity differences between the referenced pixel and its surrounding pixels in the vertical, horizontal and two corner directions, see Figure 4. Thus, LAF_U holds more information than LBP and LBP_RIU2 in the following senses: a. LAF_U preserves all four possible accumulative gradient directions through the center; b. unlike LBP or LBP_RIU2, LAF_U does not neglect the gray color intensity of the center pixel; and c. LAF_U does not neglect the magnitude of the color intensity difference between two pixels, as other methods do.

E. Uniform LAF (LAF_U)
A feature vector should contain all the information the classifier needs for classification. Unnecessary information puts pressure on classification time and sometimes causes accuracy to fall by a significant amount. On the other hand, with inadequate features even a good classifier may fail. Hence, Dimensionality Reduction (DR) methods are suggested as a preprocessing step to address the curse of dimensionality [37]. DR techniques try to project the data to a low-dimensional feature space.
For a given p-dimensional random vector X = (x1, x2, ..., xp), DR tries to represent it with a q-dimensional vector Z = (z1, z2, ..., zq) such that q < p, while maintaining the characteristics of the original data as much as possible. DR can be done in two ways: 1. by transforming the existing features into a new, reduced set of features, or 2. by selecting a subset of the existing features.
Figure 4: Example of local pattern extraction at a pixel E using LAF_U. (a) Gray-scale image, (b) local 3x3 pixels area, (c) & (d) local differences, (e), (f), (g) & (h) four possible forces through the center, (i), (j), (k) & (l) directions of the forces, (P), (Q), (R) & (S) directions of the winning forces, (m) final pattern for the referenced pixel of gray color intensity 90.

To overcome this dimensionality problem, a new technique is introduced, based on the variance, term frequency and bitwise transitions of the patterns, for selecting a subset of the existing features. After feature extraction using LAF, a feature matrix of dimension m-by-n is obtained, where m is the number of subjects and n is the feature dimension (for LAF, n is 1296). The column-wise variance of that matrix is computed, which yields a 1-by-n array named VM:

VAR(n) = (1/m) * sum_{i=1..m} (P_i^n - mu(n))^2, where mu(n) = (1/m) * sum_{i=1..m} P_i^n    (8)

VM = [VAR(1) VAR(2) ... VAR(n)]

where P_i^n is the n-th feature of the i-th image and m is the number of samples in the feature matrix. Clearly, the smaller the variance of a pattern, the smaller that pattern's contribution. VM is then sorted in descending order to place the most contributing patterns at the beginning. The index of the sorted matrix is used to select the actual variables from the feature matrix for the SVM input, see Figure 5.

In a similar way, the average of each column of the feature matrix is calculated, which yields a 1-by-n matrix named the TF-matrix (Term Frequency matrix):

AVG(n) = (1/m) * sum_{i=1..m} P_i^n    (9)

TFM = [AVG(1) AVG(2) ... AVG(n)]

where m is the number of samples in the feature matrix. TFM is then sorted in descending order, so it starts with the patterns that appear most frequently and therefore contribute most to the classification. The index of this matrix is used to select the actual patterns from the feature matrix. Experiments are conducted using the topmost 100, 200 and up to 1200 most frequently occurring patterns, see Figure 6.

Figure 5: Classification accuracy vs. number of features selected using variance. The red marked line is the best selection.

Figure 6: Classification accuracy vs. number of features selected using frequency of occurrence. The red marked line is the best selection.

In both cases, the highest accuracy is obtained when the number of selected features lies in a middle range. Deeper investigation of the patterns selected by both the Variance Matrix (VM) and the Term-Frequency Matrix (TFM) shows that the patterns having at most one bitwise transition (from 1 to 0 or 0 to 1) are chiefly responsible for expression differentiation. Based on this extensive experimental evaluation, feature selection can be simplified by selecting only the low-transition patterns (here, those with at most one transition), named Uniform Local Active Forces (LAF_U), see Table 1.
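The selection pipeline of this section (column variance, term frequency, then the at-most-one-transition test of Table 1) can be sketched as follows; the helper names are illustrative, not from the paper:

```python
def column_variance_and_mean(feature_matrix):
    """Column-wise variance (Eq. 8) and mean/term frequency (Eq. 9)
    of an m-by-n feature matrix; a minimal pure-Python sketch."""
    m = len(feature_matrix)
    n = len(feature_matrix[0])
    vm, tfm = [], []
    for j in range(n):
        col = [row[j] for row in feature_matrix]
        mu = sum(col) / m
        tfm.append(mu)                                   # AVG(j), Eq. 9
        vm.append(sum((x - mu) ** 2 for x in col) / m)   # VAR(j), Eq. 8
    return vm, tfm

def top_k(scores, k):
    """Indices of the k highest-scoring columns (descending order)."""
    return sorted(range(len(scores)), key=lambda j: -scores[j])[:k]

def bit_transitions(code, bits=4):
    """Non-circular 0->1 / 1->0 transitions in a 4-bit pattern."""
    s = format(code, "0{}b".format(bits))
    return sum(s[i] != s[i + 1] for i in range(bits - 1))

# Patterns with at most one transition: the 8 LAF_U patterns of Table 1.
laf_u_patterns = [format(c, "04b") for c in range(16) if bit_transitions(c) <= 1]
```

Enumerating the 16 four-bit codes with this rule reproduces exactly the kept and dropped pattern sets listed in Table 1.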
III. EXPERIMENTS

Table 1: Patterns finally selected by LAF_U
Patterns selected by LAF_U: 0000, 0001, 0011, 0111, 1000, 1100, 1110 and 1111
Patterns dropped by LAF_U: 0010, 0100, 0101, 0110, 1001, 1010, 1011 and 1101

The Extended Cohn-Kanade dataset (CK+) [14] is used in the experiments to evaluate the effectiveness of the proposed method. It contains 326 peak facial expressions of 123 subjects in seven emotion categories: Anger, Contempt, Disgust, Fear, Happy, Sadness and Surprise. No subject with the same emotion is collected more than once, and all the facial images in the dataset are posed. Figure 10 shows examples of a posed facial image and an unposed facial image, and Figure 9 shows some example facial images from the dataset. Figure 8 shows the number of instances of each expression in the dataset.

Figure 8: CK+ dataset, seven expressions and the number of instances of each: Angry (45), Contempt (18), Disgust (59), Fear (25), Happy (69), Sad (28) and Surprise (82).

F. Proposed Framework for Facial Expression Recognition
Figure 7 shows the proposed framework for the facial expression recognition system: training and testing images each pass through detection, preprocessing and feature extraction, followed by a multiclass LIBSVM.

Figure 7: Proposed System Framework

The framework consists of the following steps:
a. For each training image, convert it to gray scale if it is in a different format.
b. Detect the face in the image, resize it to 180x180 pixels using bilinear interpolation and divide it into 81 equal-size blocks.
c. Compute the LAF_U value for each pixel of the image.
d. Construct the histogram for each of the 9x9=81 blocks.
e. Concatenate the histograms to get the 648-dimensional feature vector for each image.
f. Build a multiclass Support Vector Machine for facial expression recognition using the feature vectors of the training images.
g.
Do steps a to e for each testing image and use the multiclass Support Vector Machine from step f to identify the facial expression of the given testing image.

Figure 9: Cohn-Kanade (CK+) sample dataset

IV. EXPERIMENTAL RESULTS AND ANALYSIS

Figure 10: (a) Unposed image, (b) Posed image

Figure 11: Facial feature extraction

The steps of face detection, preprocessing and feature extraction are illustrated in Figure 11. Face detection is done using the fdlibmex library for Matlab. The library consists of a single mex file with a single function that takes an image as input and returns the frontal face at 180x180 resolution. The face is then masked using an elliptical shape, see Figure 11. According to past research on facial expression recognition, higher accuracy can be achieved if the input face is divided into several blocks [17], [18]. In the experiments, the 180x180 face is equally divided into 9x9=81 blocks, see Figure 12.

Figure 12: Dividing the facial image into sub-images or blocks.

Features are extracted from each block using the proposed method as well as the LBP_RIU2 method. Gathering the feature histograms of all the blocks produces a unique 648-dimensional feature vector for a given image. A ten-fold non-overlapping cross validation is used in the experiments. Using LIBSVM [7], a multiclass support vector machine is constructed with a randomly chosen 90% of the 326 images (90% from each expression) as training images; the remaining 10% of the images are used as testing images. There is no overlap between the folds, and the folds are person dependent. Ten rounds of training and testing are conducted, and the average confusion matrix for the proposed method is reported and compared against the others. The kernel parameters for the classifier are set as follows: s=0 for SVM type C-SVC, t=1 for the polynomial kernel function, c=1 for the SVM cost, g set to 1/(feature vector length), and b=1 for probability estimation. Other kernels and parameter settings were also tried, but this setting of LIBSVM was found to be the most suitable for the CK+ dataset with seven classes of facial expressions. Several experiments are conducted to find a suitable face dimension for better performance. Table 3 shows the results of different face dimensions vs. the classification accuracy obtained.
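The block-histogram stage described above can be sketched as a minimal version, assuming a square gray image given as a list of rows and a hypothetical `pattern_fn` that maps each 3x3 neighborhood to a bin index; border pixels without a full neighborhood are skipped:

```python
def block_histograms(gray, pattern_fn, n_bins, blocks=9):
    """Concatenated per-block histograms of local codes for a square
    gray image, as in steps b-e of the framework: divide the face into
    blocks x blocks regions, histogram the codes, and concatenate.
    pattern_fn must return a bin index in range(n_bins)."""
    n = len(gray)
    size = n // blocks
    features = []
    for by in range(blocks):
        for bx in range(blocks):
            hist = [0] * n_bins
            for y in range(by * size, (by + 1) * size):
                for x in range(bx * size, (bx + 1) * size):
                    if 1 <= y < n - 1 and 1 <= x < n - 1:  # full 3x3 exists
                        patch = [row[x - 1:x + 2] for row in gray[y - 1:y + 2]]
                        hist[pattern_fn(patch)] += 1
            features.extend(hist)
    return features
```

With the 8 uniform LAF_U bins and 9x9 blocks on a 180x180 face, this yields the 648-dimensional descriptor used in the experiments.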
Among the different face dimensions, 180x180 pixels is found to be the best, giving a classification accuracy of 92.1%. The face area is then masked to remove non-face regions (hair, ears, neck side areas, etc.) as much as possible and divided into blocks of different sizes.

Table 2: Number of blocks vs. block dimension vs. feature vector length; feature vector length = number of blocks x number of bins (in this case 8)
Dimension (pixels) | Blocks | Block dimension (pixels) | Feature vector length
180x180 | 6x6 | 30x30 | 288
180x180 | 9x9 | 20x20 | 648
180x180 | 10x10 | 18x18 | 800
180x180 | 12x12 | 15x15 | 1152
180x180 | 15x15 | 12x12 | 1800
180x180 | 18x18 | 10x10 | 2592
180x180 | 36x36 | 5x5 | 10368

Table 3: Face dimension vs. number of blocks vs. block dimension
Dimension (pixels) | Blocks | Block dimension (pixels)
240x240 | 12x12 | 20x20
220x220 | 11x11 | 20x20
200x200 | 10x10 | 20x20
180x180 | 9x9 | 20x20
160x160 | 8x8 | 20x20
140x140 | 7x7 | 20x20

Several experiments are conducted as well to find a suitable number of blocks. According to Table 2, the number of blocks chosen for further experiments is 9x9=81. 9x8, 8x9, 6x9, 9x6, 8x10, 10x8 and many other divisions were also tried, but 9x9 remained the best in terms of accuracy. The proposed method LAF_U has 8 possible bins, see Table 1, so each of the 81 blocks has a feature vector of length eight, and the LAF_U feature vector in use is of dimension 8x9x9=648. To show that the LAF_U method is gray-scale invariant, the gray color intensities of random instances from the 326 images were changed manually; some samples are shown in Figure 13. The results were consistent even after changing the illumination, confirming that the proposed method is gray-scale invariant. Table 4 compares the feature vector length of LAF_U with some popular methods.
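The feature-vector lengths in Table 2 follow directly from the stated formula (number of blocks times histogram bins per block); a quick check:

```python
def feature_length(blocks_per_side, bins=8):
    """Feature vector length = number of blocks x bins per block,
    as in Table 2 (8 bins per block for LAF_U)."""
    return blocks_per_side * blocks_per_side * bins
```

For example, 9x9 blocks with 8 bins gives the 648-dimensional descriptor, while 36x36 blocks would balloon it to 10368 dimensions.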
a) Photos in high light, b) Photos in natural light, c) Photos in low light
Figure 13: Samples from the CK+ dataset under different lighting conditions (different gray-scale intensity)

Table 4: Comparison of the feature vector dimension of LAF_U with other methods; the third column shows the feature vector dimension for the whole facial image.
Method | Feature vector length | Dimension used in our experiment
LPQ (Local Phase Quantization) [19] | 256 | 256x9x9 = 20736
LBP (general Local Binary Pattern) [17] | 256 | 256x9x9 = 20736
LBP u2 (Uniform Local Binary Pattern) [18] | 59 | 59x9x9 = 4779
LBP RI (Rotation Invariant Local Binary Pattern) [18] | 36 | 36x9x9 = 2916
LBP RIU2 (Rotation Invariant and Uniform Local Binary Pattern) [18] | 8 | 8x9x9 = 648
LAF_U (proposed method) | 8 | 8x9x9 = 648

All the methods in Table 4 were run in the same experimental setup on the same machine, and the results are compared with the proposed method, see Figure 14. From the figure, it is clear that LAF_U performs better in terms of both time and classification accuracy. The result for facial expression recognition using the proposed feature representation method is shown as a confusion matrix in Table 5.

Table 5: Confusion matrix for LAF_U. 10-fold validation; feature extraction time for 326 images = 38 seconds; average classification accuracy = 92.1%; kernel parameters for LIBSVM: (-s 0, -t 1, -c 100, -g set to 1/(feature vector length), -b 1). The rows (predictions) and columns (actual) cover Angry, Contempt, Disgust, Fear, Happy, Sad and Surprise.

Figure 14: Comparison of the proposed method LAF_U with some other popular methods in terms of (a) classification accuracy, (b) feature extraction time for 326 images from the CK+ dataset in seconds, (c) 10-fold cross validation time in seconds (it is person dependent; folds are non-overlapping and each fold contains 90% of each expression class for training the SVM and 10% for
testing.) and (d) total experiment time for each of the methods in seconds.

It should be noted that the results are not directly comparable due to different experimental setups, version differences of the CK dataset with different emotion labels, preprocessing methods, the number of sequences used, and so on, but they still point out the discriminative power of each approach. Table 6 compares the proposed method with other static-analysis methods in terms of the number of subjects, the number of sequences, the classifier and the measure used, giving the overall classification accuracy obtained on the CK [12] or CK+ [14] facial expression database.

Table 6: Comparison with different approaches on the CK/CK+ database; number of expression classes = 7, non-dynamic. (PDM: Point Distribution Model; EAR-LBP: Extended Asymmetric Region Local Binary Pattern; LPQ: Local Phase Quantization; LBP u2: Uniform Local Binary Pattern; Poly.: polynomial; RBF: radial basis function.) The table lists, for [8], [16], [27], [4], [11], [22], the manifold method of [6] and the proposed LAF_U, the number of subjects, the number of sequences, the classifier (SVM with RBF, linear or polynomial kernel, or SVM + AdaBoost) and the validation measure (2-fold, 10-fold or leave-one-subject-out cross validation), together with the classification accuracy (%).

Chew et al. [8] experimented on the Extended Cohn-Kanade (CK+) dataset using CLM-tracked CAPP (canonical appearance) and SAPP (similarity shape) features. Though the CLM (Constrained Local Model) is a very powerful face alignment algorithm, they achieved just above 80% accuracy. In [16], Naika et al. proposed an Extended Asymmetric Region Local Binary Pattern (EAR-LBP) operator with automatic face localization heuristics to mitigate localization errors in automatic facial expression recognition; even with automatic face localization, they achieved 83% accuracy on the CK+ dataset. Yang et al.
[27] used SIFT-tracked CK+, Bartlett et al. [4] used both manually and automatically tracked faces along with AdaSVM (AdaBoost + SVM), Jeni et al. [11] used 593 CLM-tracked posed sequences from the Extended CK+, and Shan et al. [22] used LBP u2 (Uniform Local Binary Pattern) along with an SVM. LAF_U clearly outperforms all of them in almost all cases, see Figure 14. The time to extract features from a 180x180 face cannot be compared with the others, as feature extraction time is not cited in any of their papers. The authors of [14], [8], [16], [4] and [11] all used different types of face localization. According to those authors, if faces are not aligned properly, crucial features at the border area may be missed when extracting features from a block or from the whole face. It is explicitly mentioned in [11] and [23] that aligned faces give an extra 5-10% increase in classification accuracy, leave-one-out validation increases accuracy by 1-2%, and AdaBoost increases it by 1-2% on the CK+ dataset [4]. So an extra 7-12% accuracy can be obtained overall by using proper alignment, increasing the training data and adding a boosting algorithm to the classifier. LAF_U is better than the best available AAM result, which uses texture plus shape information [14], the CLM result that utilizes only textural information [8], and the best CLM result that utilizes shape information [11], see Table 7. This shows that the proposed texture-based expression recognition is better than previous texture-based methods and compares favorably with state-of-the-art procedures using shape, or shape and texture information combined. The dataset has only one peak expression per subject, and some subjects do not contain all seven expressions.

Table 7: CK+ dataset. Comparison of the results of different methods on the seven main facial expressions. S: shape-based method, T: texture-based method, S + T: both shape- and texture-based. (CLM: Constrained Local Model; AAM: Active Appearance Model; PMS: personal mean shape; An. = Anger, Co. = Contempt, Di. = Disgust, Fe. = Fear, Ha. = Happy, Sa. = Sad, Su. = Surprise, Avg. = Average.) The compared rows are: [14] AAM + SVM (S), AAM + SVM (T) and AAM + SVM (T + S); [8] CLM + SVM (T); [11] CLM + SVM with AU0 normalization (S) and CLM + SVM with PMS (S); and LAF_U with no aligning + SVM (T).

Figure 15 shows the comparison between the number of training instances and the achieved accuracy for each expression type using LAF_U. The numbers of Sad, Contempt and Fear instances are small compared with the other expression classes. For this reason the classification accuracy is lower for these classes, which affects the overall accuracy. Hence, it can be concluded that the number of training instances for an expression does affect the accuracy rate achieved for the
expression. Another reason for lower classification accuracy could be the dataset itself: some facial expressions are confusing even to the human eye. The accuracy can be improved by collecting more peak expression instances from the same subjects.

Figure 15: Number of expression instances in the CK+ dataset vs. classification accuracy (%) for each facial expression.

V. CONCLUSION
A novel gray-scale invariant facial feature extraction method is proposed for facial expression recognition. For each pixel in a gray-scale image, the method extracts the local pattern using the magnitudes of the gray color intensity differences between the referenced pixel and its neighbors, together with the four possible gradient directions through the center of a local 3x3 pixels area. The method is very simple and effective for facial expression recognition, and it outperformed LPQ, LBP, LBP U2, LBP RI and LBP RIU2 in terms of both classification accuracy and feature extraction time. The facial expression recognition experiments for all the above methods were conducted with the same experimental setup on the same machine. Future research may include incorporating the AdaBoost or SimpleBoost algorithm into the facial expression classifiers, which may increase the accuracy rates substantially.

VI. REFERENCES
[1] Ahonen, T., Hadid, A., and Pietikainen, M. Face Description with Local Binary Patterns: Application to Face Recognition. IEEE Trans. Pattern Analysis and Machine Intelligence, 28, 12 (2006).
[2] Anderson, K. and McOwan, P. W. A Real-Time Automated System for the Recognition of Human Facial Expressions. IEEE Transactions on Systems, Man, and Cybernetics, 36, 1 (2006).
[3] Asthana, A., Saragih, J., Wagner, M., and Goecke, R. Evaluating AAM fitting methods for facial expression recognition.
In 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops (2009).
[4] Bartlett, M.S., Littlewort, G., Fasel, I., and Movellan, J. R. Real Time Face Detection and Facial Expression Recognition: Development and Application to Human Computer Interaction. In Proc. CVPR Workshop on Computer Vision and Pattern Recognition for Human-Computer Interaction (2003).
[5] Bassili, J.N. Emotion Recognition: The Role of Facial Movement and the Relative Importance of Upper and Lower Areas of the Face. J. Personality and Social Psychology, 37 (1979).
[6] Chang, Y., Hu, C., Feris, R., and Turk, M. Manifold-based analysis of facial expression. Image Vision Comput., 24, 6 (2006).
[7] Chang, C.C. and Lin, C.J. LIBSVM: a library for support vector machines. ACM Transactions on Intelligent Systems and Technology (2011).
[8] Chew, S.W., Lucey, P., Lucey, S., Saragih, J., Cohn, J.F., and Sridharan, S. Person-independent facial expression detection using Constrained Local Models. In 2011 IEEE International Conference on Automatic Face & Gesture Recognition and Workshops (FG 2011) (2011).
[9] Cohen, I., Sebe, N., Garg, A., Chen, L. S., and Huang, T. S. Facial expression recognition from video sequences: temporal and static modelling. Computer Vision and Image Understanding, 91 (2003).
[10] Fu, X. and Wei, W. Centralized Binary Patterns Embedded with Image Euclidean Distance for Facial Expression Recognition. In Fourth International Conference on Natural Computation, ICNC '08 (2008).
[11] Jeni, L.A., Lőrincz, A., Nagy, T., Palotai, Z., Sebők, J., Szabó, Z., and Takács, D. 3D shape estimation in video sequences provides high precision evaluation of facial expressions. Image and Vision Computing, 30, 10 (October 2012).
[12] Kanade, T., Cohn, J. F., and Tian, Y. Comprehensive database for facial expression analysis. In Fourth IEEE International Conference on Automatic Face and Gesture Recognition (2000).
[13] Kotsia, I. and Pitas, I.
Facial Expression Recognition in Image Sequences Using Geometric Deformation Features and Support Vector Machines. IEEE Transactions on Image Processing, 16, 1 (Jan 2007).
[14] Lucey, P., Cohn, J. F., Kanade, T., Saragih, J., Ambadar, Z., and Matthews, I. The Extended Cohn-Kanade Dataset (CK+): A complete dataset for action unit and emotion-specified expression. In Third IEEE Workshop on CVPR for Human Communicative Behavior Analysis (CVPR4HB 2010) (2010).
[15] Mehrabian, A. Communication without words. Psychology Today, 2, 4 (1968).
[16] Naika, C.L.S., Jha, S. Shekhar, Das, P. K., and Nair, S. B.
Automatic Facial Expression Recognition Using Extended AR-LBP. Wireless Networks and Computational Intelligence, 292 (2012).
[17] Ojala, T., Pietikäinen, M., and Harwood, D. A Comparative Study of Texture Measures with Classification Based on Feature Distributions. Pattern Recognition, 29, 1 (1996).
[18] Ojala, T., Pietikäinen, M., and Mäenpää, T. Multiresolution Gray-scale and Rotation Invariant Texture Classification with Local Binary Patterns. IEEE Trans. Pattern Analysis and Machine Intelligence, 24, 7 (2002).
[19] Ojansivu, V. and Heikkilä, J. Blur Insensitive Texture Classification Using Local Phase Quantization. In Proc. ICISP, Berlin, Germany (2008).
[20] Pantic, M. and Patras, I. Dynamics of Facial Expression: Recognition of Facial Actions and Their Temporal Segments From Face Profile Image Sequences. IEEE Transactions on Systems, Man, and Cybernetics, 36, 2 (2006).
[21] Ekman, P. Basic Emotions. In Handbook of Cognition and Emotion. John Wiley & Sons.
[22] Shan, C., Gong, S., and McOwan, P.W. Robust Facial Expression Recognition Using Local Binary Patterns. In Proc. IEEE Int'l Conf. Image Processing (2005).
[23] Shan, C., Gong, S., and McOwan, P.W. Facial expression recognition based on Local Binary Patterns: A comprehensive study. Image and Vision Computing, 27, 6 (2009).
[24] Tian, Y. L., Kanade, T., and Cohn, J. F. Recognizing action units for facial expression analysis. IEEE Trans. Pattern Anal. Mach. Intell., 23, 2 (2001).
[25] Tong, Y., Liao, W., and Ji, Q. Facial Action Unit Recognition by Exploiting Their Dynamic and Semantic Relationships. IEEE Trans. Pattern Anal. Mach. Intell., 29, 10 (2007).
[26] Valstar, M. F. and Pantic, M. Combined support vector machines and hidden Markov models for modeling facial action temporal dynamics. In Proc. IEEE Int. Conf. Comput. Vis. Pattern Recog. Workshop on Human Comput. Interact. (2007).
[27] Yang, S. and Bhanu, B. Understanding Discrete Facial Expressions in Video Using an Emotion Avatar Image.
IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics (2012).
[28] Yeasin, M., Bullot, B., and Sharma, R. Recognition of Facial Expressions and Measurement of Levels of Interest From Video. IEEE Trans. Multimedia, 8, 3 (2006).
[29] Zeng, Z., Pantic, M., Roisman, G., and Huang, T. A survey of affect recognition methods: Audio, visual, and spontaneous expressions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31, 1 (2009).
[30] Zhang, Y. and Ji, Q. Active and dynamic information fusion for facial expression understanding from image sequences. IEEE Trans. Pattern Anal. Mach. Intell., 27, 5 (2005).
[31] Zhang, Z., Lyons, M., Schuster, M., and Akamatsu, S. Comparison between geometry-based and Gabor-wavelets-based facial expression recognition using multilayer perceptron. In Third IEEE International Conference on Automatic Face and Gesture Recognition (1998).
[32] Zhao, G. and Pietikainen, M. Dynamic texture recognition using local binary patterns with an application to facial expressions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29, 6 (2007).
[33] Zhao, G. and Pietikainen, M. Facial Expression Recognition Using Constructive Feedforward Neural Networks. IEEE Transactions on Systems, Man, and Cybernetics, 34, 3 (2004).
[34] Yang, S. and Bhanu, B. Understanding Discrete Facial Expressions in Video Using an Emotion Avatar Image. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 42, 4 (Aug. 2012), 980-992.
[35] Kabir, H., Jabid, T., and Chae, O. Local Directional Pattern Variance (LDPv): A Robust Feature Descriptor for Facial Expression Recognition. The International Arab Journal of Information Technology, 9, 4 (2012).
[36] Huang, X., Zhao, G., Pietikäinen, M., and Zheng, W. Expression Recognition in Videos Using a Weighted Component-Based Feature Descriptor.
In SCIA 2011.
[37] Kumar, A. Analysis of Unsupervised Dimensionality Reduction Techniques. Computer Science and Information Systems, 6, 2.

Mohammad Shahidul Islam (PhD) received his B.Tech. degree in Computer Science and Technology from the Indian Institute of Technology Roorkee (IIT Roorkee), Uttar Pradesh, India, in 2002; an M.Sc. degree in Computer Science from American World University, London Campus, U.K., in 2005; and an M.Sc. in Mobile Computing and Communication from the University of Greenwich, London, U.K. He completed his Ph.D. degree in Computer Science & Information Systems at the National Institute of Development Administration (NIDA), Bangkok, Thailand. His fields of research interest include Image Processing, Pattern Recognition, Wireless and Mobile Communication, Satellite Communication, and Computer Networking.
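To make the conclusion's description concrete, the sketch below illustrates the general family of descriptor it refers to: for each pixel, the absolute gray-level differences of the four opposite-neighbor pairs in its 3x3 neighborhood (the four gradient directions through the center) are compared, the index of the strongest direction is taken as the local code, and per-block code histograms over a 9x9 grid are concatenated. This is an illustrative approximation under stated assumptions, not the paper's exact LAF definition (which is given in earlier sections); the function names and the argmax coding rule are choices made here for the sketch.

```python
import numpy as np

def local_direction_code(img):
    """For each interior pixel, return the index (0..3) of the 3x3
    gradient direction (horizontal, vertical, two diagonals) with the
    largest absolute intensity difference across the center pixel."""
    img = np.asarray(img, dtype=np.int32)
    # Opposite-neighbor pairs through the center of the 3x3 window.
    pairs = [
        (img[1:-1, 2:], img[1:-1, :-2]),  # horizontal
        (img[2:, 1:-1], img[:-2, 1:-1]),  # vertical
        (img[2:, 2:],   img[:-2, :-2]),   # main diagonal
        (img[2:, :-2],  img[:-2, 2:]),    # anti-diagonal
    ]
    mags = np.stack([np.abs(a - b) for a, b in pairs])  # (4, H-2, W-2)
    return np.argmax(mags, axis=0).astype(np.uint8)

def block_histogram_descriptor(img, grid=9, bins=4):
    """Divide the code map into grid x grid blocks and concatenate the
    normalized per-block histograms of the direction codes."""
    codes = local_direction_code(img)
    h, w = codes.shape
    feats = []
    for by in range(grid):
        for bx in range(grid):
            block = codes[by * h // grid:(by + 1) * h // grid,
                          bx * w // grid:(bx + 1) * w // grid]
            hist, _ = np.histogram(block, bins=np.arange(bins + 1))
            feats.append(hist / max(block.size, 1))
    return np.concatenate(feats)  # length grid * grid * bins = 324
```

Because the code depends only on intensity differences (and their ranking), adding a constant offset to the whole image leaves the descriptor unchanged, which is the sense in which such features are gray-scale invariant; a 324-dimensional vector is also far shorter than a full 256-bin-per-block LBP histogram.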
More informationRecognizing Facial Expressions Automatically from Video
Recognizing Facial Expressions Automatically from Video Caifeng Shan and Ralph Braspenning Introduction Facial expressions, resulting from movements of the facial muscles, are the face changes in response
More informationFACE RECOGNITION USING LDN CODE
FACE RECOGNITION USING LDN CODE J.K.Jeevitha 1,B.Karthika 2,E.Devipriya 3 1 2 3Assistant Professor, Department of Information Technology, Tamil Nadu, India Abstract - LDN characterizes both the texture
More informationMulti-view Facial Expression Recognition Analysis with Generic Sparse Coding Feature
0/19.. Multi-view Facial Expression Recognition Analysis with Generic Sparse Coding Feature Usman Tariq, Jianchao Yang, Thomas S. Huang Department of Electrical and Computer Engineering Beckman Institute
More informationFacial Expressions Recognition Using Eigenspaces
Journal of Computer Science 8 (10): 1674-1679, 2012 ISSN 1549-3636 2012 Science Publications Facial Expressions Recognition Using Eigenspaces 1 Senthil Ragavan Valayapalayam Kittusamy and 2 Venkatesh Chakrapani
More informationRadially Defined Local Binary Patterns for Hand Gesture Recognition
Radially Defined Local Binary Patterns for Hand Gesture Recognition J. V. Megha 1, J. S. Padmaja 2, D.D. Doye 3 1 SGGS Institute of Engineering and Technology, Nanded, M.S., India, meghavjon@gmail.com
More informationExtreme Learning Machine Ensemble Using Bagging for Facial Expression Recognition
J Inf Process Syst, Vol.10, No.3, pp.443~458, September 2014 http://dx.doi.org/10.3745/jips.02.0004 ISSN 1976-913X (Print) ISSN 2092-805X (Electronic) Extreme Learning Machine Ensemble Using Bagging for
More informationFacial Expression Recognition Using Local Binary Patterns
Facial Expression Recognition Using Local Binary Patterns Kannan Subramanian Department of MC, Bharath Institute of Science and Technology, Chennai, TamilNadu, India. ABSTRACT: The most expressive way
More informationPattern Recognition Letters
Pattern Recognition Letters 30 (2009) 1117 1127 Contents lists available at ScienceDirect Pattern Recognition Letters journal homepage: www.elsevier.com/locate/patrec Boosted multi-resolution spatiotemporal
More informationGender Classification Technique Based on Facial Features using Neural Network
Gender Classification Technique Based on Facial Features using Neural Network Anushri Jaswante Dr. Asif Ullah Khan Dr. Bhupesh Gour Computer Science & Engineering, Rajiv Gandhi Proudyogiki Vishwavidyalaya,
More informationMULTICLASS SUPPORT VECTOR MACHINES AND METRIC MULTIDIMENSIONAL SCALING FOR FACIAL EXPRESSION RECOGNITION
MULTICLASS SUPPORT VECTOR MACHINES AND METRIC MULTIDIMENSIONAL SCALING FOR FACIAL EXPRESSION RECOGNITION Irene Kotsia, Stefanos Zafeiriou, Nikolaos Nikolaidis and Ioannis Pitas Aristotle University of
More information