Intensity Rank Estimation of Facial Expressions Based on A Single Image

2013 IEEE International Conference on Systems, Man, and Cybernetics

Intensity Rank Estimation of Facial Expressions Based on a Single Image

Kuang-Yu Chang, Chu-Song Chen, and Yi-Ping Hung
Institute of Information Science, Academia Sinica, Taipei, Taiwan
Research Center for Information Technology Innovation, Academia Sinica, Taipei, Taiwan
Dept. of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan
Graduate Institute of Networking and Multimedia, National Taiwan University, Taipei, Taiwan
song@iis.sinica.edu.tw

Abstract—In this paper, we propose a framework that estimates the discrete intensity rank of a facial expression from a single image. For most people, judging whether one expression is more intense than another is easier than determining its real-valued intensity, and hence the relative order of two expressions is more distinguishable than the exact difference between them. We exploit this relative order to construct an image-based ranking approach for inferring discrete ranks. The challenge for image-based approaches is to construct a representation that captures subtle expression changes. We employ an efficient descriptor, the scattering transform, which is translation invariant and linearizes deformations; this scattering representation recovers the lost high frequencies and remains discriminative under the invariance. Our experimental results demonstrate that the proposed framework with the scattering transform outperforms the compared feature descriptors and algorithms.

Index Terms—expression intensity estimation; ordinal ranking; scattering transform

Fig. 1: Illustration of the six basic categories of facial expressions: (a) anger, (b) disgust, (c) fear, (d) happiness, (e) sadness, (f) surprise.

Fig. 2: Image sequence of the surprise expression from neutral to apex.

I. INTRODUCTION

Facial expressions reflect an individual's affect, which is essential to nonverbal communication.
Recognizing facial expressions automatically is therefore important to human-computer interaction and affective computing systems, and has been a significant research topic over the past few decades. Most automatic facial-expression recognition (AFR) research [1-5] focuses on recognizing the six universal expressions, categorized as anger, disgust, happiness, fear, surprise, and sadness, as illustrated in Fig. 1. Although this categorical information is fundamental to AFR, it is sometimes insufficient for real-world applications, where we further need to know how strong an expression is. For example, Fig. 2 illustrates an image sequence of the surprise expression from neutral to apex. Although these expressions belong to the same category, their degrees of strength differ. Such expression intensity information can be critical for choosing an appropriate reaction strategy in human-computer interaction.

A. Expression Intensity Estimation

In this paper, instead of classifying a face image into one of the facial expression categories, we focus on estimating the intensity of a facial expression. There has recently been growing interest in this problem. Some early studies [6-8] simply extended the outcomes of expression-category classification to obtain intensity estimates. In the training phase, these approaches employed only category-labeled data (which lack intensity labels) to learn a classifier; the expression intensity was then estimated directly as a value proportional to the distance between the query example and the classification boundary. For example, Littlewort et al. [6] applied SVMs to facial expression recognition and used the distances to the SVM hyperplanes to estimate expression intensities. Chang et al.
[7] employed a discriminative manifold learning approach for facial components and estimated the probability of an expression according to the distances to the manifold. However, such naive estimation easily produces inaccurate results, because distances to the classification boundary in the feature space do not necessarily reflect how strong an expression is. Later studies [9-14] employed training data with known intensity labels to obtain more accurate expression-intensity estimates. The intensity labels can be represented either as continuous values [12, 15] or as discrete ranks [9, 11]. Learning mechanisms such as nonlinear (ordinal) regression can then be applied to learn an estimator from the training data with intensity labels. Since the intensity labels have been

employed, the expression intensity degree can be predicted and validated more accurately.

Fig. 3: Relative order between two intensities of facial expressions.

However, these approaches require an image sequence or multiple images to estimate facial expression intensities. An advantage of this setting is that motion-based features can be extracted to reflect the temporal changes of facial expressions. Nevertheless, a sequence-based approach has the limitation that estimation is possible only when sufficiently many images are available.

B. Motivation of Our Approach

In this paper, we introduce a single-image-based approach to facial expression intensity estimation. Our approach is more flexible since it can infer the intensity degree when only one image is available, and it can also serve as a basic module for a sequence-based approach. To avoid the difficulty of normalizing continuous-valued intensities across individuals (and noting that it is easier for humans to distinguish intensity ranks than to infer exact values), we divide the intensity into three discrete ranks (or levels): low, medium, and high. Such descriptive levels are more intuitive than continuous values. The principle used to infer the intensity rank is as follows. When representing intensity labels as ranks, only the relative order between labels is considered; that is, we are not concerned with the exact value of the intensity difference, but only with which expression is more intense. For example, in Fig. 3, we use only the relative-order information that the expression in the right image is more intense than that in the left image. In addition, we adopt a feature descriptor that is translation invariant yet stable to local deformation for expression intensity estimation.
This descriptor tolerates local changes caused by inexact facial alignment across individuals without losing the ability to discriminate subtle expression variations. The study of Delannoy et al. [11] is the most relevant to ours. In that work, an image-based approach was developed for three-level (low, medium, and high) expression intensity estimation. They regard the problem as multi-class classification and employ one-against-all SVMs to train a classifier. However, since a multi-class approach assumes that the intensity levels are independent of one another, the ordinal relationship among labels (i.e., high ≻ medium ≻ low) is not exploited for performance enhancement. In addition, although a facial-expression image can be classified into one of the three intensity levels, the action units (AUs) must still be extracted from an image sequence as the feature representation in that approach. We compare the performance of our approach with [11] in the experiments.

Fig. 4: The intensity output of classification and ranking approaches.

This paper is organized as follows. In the next section, we describe the ranking framework for expression intensity estimation. In Section III, we review the scattering transform. Experimental results are presented in Section IV, and Section V gives the conclusions.

II. EXPRESSION INTENSITY RANKING FRAMEWORK

In this paper, we treat single-image-based expression intensity estimation as a ranking problem. Ranking has been widely used in information retrieval, where many algorithms have been proposed. As discussed above, multi-class classification approaches overlook the ordinal consistency of the predicted intensities, while regression approaches suffer from the difficulty of gathering ground-truth values for continuous intensity labels. Therefore, learning-to-rank (or ordinal regression) is a better choice for discrete intensity-level estimation.
Among learning-to-rank approaches, the thresholds model is a simple and intuitive way to employ ordinal information, and it produces consistent, unambiguous ranking results. Shashua and Levin [16] proposed a ranking model that applies the large-margin principle to the thresholds model; in this approach, a set of parallel hyperplanes divides the data monotonically. Li and Lin [17] extended the parallel-hyperplanes approach [16] with a cost-sensitive formulation to improve performance, reducing ordinal ranking to binary classification. In this work, we employ the RED-SVM developed in [17] in our intensity-level estimation framework. We are given training examples $\{(x_i, y_i)\}_{i=1}^{N}$, where $x_i \in \mathbb{R}^D$ is the $i$-th feature vector and $y_i \in \{1, \dots, K\}$ is its intensity rank. How to extract the feature vector $x_i$ from a face image is described in Section III. Note that the ranks are well ordered, i.e., $K \succ K-1 \succ \cdots \succ 1$, where $\succ$ means "larger than" and denotes the relative order between different intensities. Details of RED-SVM are presented in Algorithm 1.
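Before the formal statement in Algorithm 1, the reduction can be sketched in code. The following is a simplified illustration using scikit-learn, not the authors' implementation: it trains the K-1 threshold classifiers independently and omits the per-example costs, so unlike RED-SVM proper it does not enforce the shared parallel hyperplane; the class name and the synthetic data in the usage note are hypothetical.

```python
import numpy as np
from sklearn.svm import SVC

class ThresholdRanker:
    """Reduction of K-rank ordinal regression to K-1 binary problems.

    Simplified: the K-1 SVMs are trained independently and the example
    costs are omitted, so the parallel-hyperplane (rank-monotonic)
    property of RED-SVM [17] is not enforced here.
    """

    def __init__(self, K, C=1.0, gamma="scale"):
        self.K = K
        self.models = [SVC(C=C, gamma=gamma) for _ in range(K - 1)]

    def fit(self, X, y):
        for k, m in enumerate(self.models, start=1):
            z = np.where(y > k, 1, -1)  # binary label "is the rank above k?"
            m.fit(X, z)                 # [17] additionally weights examples by a cost
        return self

    def predict(self, X):
        # r(x) = 1 + number of threshold classifiers voting "above"
        votes = [m.decision_function(X) > 0 for m in self.models]
        return 1 + np.sum(votes, axis=0)
```

With K = 3, `predict` returns ranks in {1, 2, 3}. Training the binary problems jointly with a shared w and ordered thresholds, as in Algorithm 1, additionally guarantees that the K-1 decisions never conflict.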

Algorithm 1 RED-SVM
Input: training examples $\{(x_i, y_i)\}_{i=1}^{N}$
Output: the ranking rule $r$
1: For each $k$ with $1 \le k \le K-1$, divide the original training data into two sets,
$$X_k = \{(x_i, y_i^{(k)}) \mid y_i \le k\} \cup \{(x_i, y_i^{(k)}) \mid y_i > k\}, \qquad (1)$$
where $y_i^{(k)} = 1$ if $y_i > k$ and $y_i^{(k)} = -1$ otherwise.
2: For all $k$, obtain the binary classifier
$$g(x_i, k) = \operatorname{sign}\big(\langle w, \Phi(x_i) \rangle + b - \theta_k\big) \qquad (2)$$
by solving
$$\min_{w, \theta, b, \xi} \;\; \lambda \big( \|w\|^2 + \|\theta\|^2 \big) + C \sum_{i=1}^{N} \sum_{k=1}^{K-1} c_i^{(k)} \xi_i^{(k)} \qquad (3)$$
$$\text{s.t.} \quad y_i^{(k)} \big( \langle w, \Phi(x_i) \rangle + b - \theta_k \big) \ge 1 - \xi_i^{(k)}, \quad \xi_i^{(k)} \ge 0, \quad i = 1, \dots, N, \;\; k = 1, \dots, K-1,$$
where $\theta$ is a set of well-ordered thresholds, i.e., $\theta_1 \le \theta_2 \le \cdots \le \theta_{K-1}$, $w$ and $b$ are the hyperplane parameters in the kernel space, $\Phi(\cdot)$ is a mapping to the kernel space, $c_i^{(k)} = |y_i - k|$ is the absolute cost suggested in [17], and $C$ and $\lambda$ are the weights of the loss and the regularization, respectively.
3: Construct from $g$ a ranking rule $r$ that collects all the relative orders and predicts the expression intensity of $x$:
$$r(x) = 1 + \sum_{k=1}^{K-1} [\![\, g(x, k) > 0 \,]\!], \qquad (4)$$
where $[\![\cdot]\!]$ is 1 if the inner condition is true and 0 otherwise.
4: return $r(x)$

An important characteristic of RED-SVM is that it is rank-monotonic, i.e., $g(x, 1) \ge g(x, 2) \ge \cdots \ge g(x, K-1)$ for every $x$, so $g$ provides consistent ranking results. This consistency is important for designing a ranking rule $r$ with no conflicts among the binary classifiers $[\![\, g(x, k) > 0 \,]\!]$, $1 \le k \le K-1$. In summary, RED-SVM constructs parallel hyperplanes that divide the data in a kernel space specified by the implicit mapping $\Phi$. In our work, radial basis function (RBF) kernels are used. The parameter $C$ and the RBF kernel parameter are selected by 5-fold cross validation on the training set, and $\lambda$ takes its default value of 1 in RED-SVM.

III. SCATTERING TRANSFORM

Facial expression intensity involves subtle variations, so identifying fine facial changes is important.
In a video-based approach, the variation of expression intensity can be predicted from adjacent frames, and divergence among different persons can be mitigated by tracking facial points. In an image-based approach, a powerful feature representation is more critical because no other reference images of the same subject are available. Besides describing subtle expression variations, a common problem in face-recognition-related tasks is aligning the facial images in both the training and testing phases. An intuitive method is to use a set of extracted facial landmark points, e.g., from Active Appearance Models (AAMs) [18]. However, such an alignment requirement is too demanding in practice, because automatically detecting facial points is still unstable in general situations. A tradeoff is to align only the two eyes of the face images. Although this causes some inexact facial alignment across individuals, the feature representation introduced below can tolerate the misalignment and still achieve better performance. Bruna and Mallat [19, 20] proposed a local descriptor, called the scattering transform, which is translation invariant and linearizes deformations. In this paper, we employ the scattering transform to extract subtle expression changes while mitigating local changes across persons. For an input image defined over positions $z \in \mathbb{R}^2$, we construct the directional wavelet filter $\psi_{j,\gamma}$ by dilating a mother filter $\psi$ by $2^j$ and rotating it by an angle $\gamma$:
$$\psi_{j,\gamma}(z) = 2^{-2j}\, \psi(2^{-j} R_\gamma z), \qquad (5)$$
where $R_\gamma z$ denotes the rotation of $z$ by $\gamma$. Leung and Malik [21] have shown that complex Gabor functions are effective for texture analysis, and we use a complex Gabor function for $\psi$ in our experiments. The resulting wavelet transform of $f$ at a position $z$ is the filter bank
$$W_J f(z) = \big( f * \phi_J(z),\; f * \psi_{j,\gamma}(z) \big)_{j \le J,\, \gamma \in \Gamma}, \qquad (6)$$
where $\phi_J(z) = 2^{-2J} \phi(2^{-J} z)$ is a low-pass filter that carries the low-frequency information.
In this paper, we use a Gaussian as the low-pass filter $\phi_J(z)$. Some descriptors, such as the Scale-Invariant Feature Transform (SIFT) [22] and Histograms of Oriented Gradients (HOG) [23], achieve invariance by averaging. The wavelet coefficient amplitudes of $f$ are averaged by $\phi_J$ as
$$|f * \psi_{j,\gamma}| * \phi_J(z). \qquad (7)$$
These averaged wavelet coefficients are almost invariant to translation and deformation. Convolving with a low-pass filter increases invariance, but it sacrifices the high frequencies at the same time. The high-frequency information lost in (7) can be restored by finer-scale wavelet coefficients:
$$\big|\, |f * \psi_{j_1,\gamma_1}| * \psi_{j_2,\gamma_2} \big|, \quad j_1 < j_2, \;\; \gamma_1, \gamma_2 \in \Gamma, \qquad (8)$$
where $j_2$ is the adjacent finer scale of $j_1$. In order to preserve invariance and keep the discriminative information of (8), it is

required to reduce the variability of these coefficients. The high frequencies are therefore averaged with low-pass filtering:
$$\big|\, |f * \psi_{j_1,\gamma_1}| * \psi_{j_2,\gamma_2} \big| * \phi_J(z). \qquad (9)$$

Fig. 5: A scattering transform is a cascade of wavelet decompositions.

As shown in Fig. 5, the scattering coefficients are computed with a cascade of wavelet decompositions: the input signal is decomposed into finer-scale wavelet coefficients at different orientations. In theory, the high frequencies can be restored and averaged iteratively. In practice, the number of levels can be fixed to a constant, because distinctiveness decreases at finer scales; the results in [20] suggest that three levels are enough for most cases. In sum, high frequencies are unstable under deformation but discriminative. Conventional feature descriptors may discard them when extracting locally invariant representations. The scattering transform is computed with a cascade of wavelet decompositions and modulus operators, which recovers the lost high frequencies. The transform is also proven to be Lipschitz continuous [19], and the resulting representation guarantees translation and/or rotation invariance. Descriptors of the SIFT [22] or DAISY [24] type can be approximated by the first layer of the scattering cascade. Because of these advantages, the scattering transform has been applied to audio and image pattern recognition [20, 25, 26] with good results. In addition, recent studies on face recognition in the wild (such as [27]) demonstrate that the scattering transform outperforms other features such as Local Binary Patterns, HOG, and Gabor wavelets. We adopt the scattering transform as a translation-invariant and discriminative feature representation for single-image-based expression intensity estimation. Following the suggestion in [28], we divide a face image into several components by applying the mask in Fig. 6(a).
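To make the cascade of (5)-(9) concrete, the following sketch computes zeroth-, first-, and second-order scattering coefficient maps with a small Gabor filter bank. This is a minimal numpy illustration under assumed filter parameters (sigma, xi), not the implementation of [19, 20], which uses carefully designed Morlet filters and subsampling.

```python
import numpy as np

def gabor(shape, j, theta, sigma=0.8, xi=2.356):
    """Complex Gabor wavelet dilated by 2**j and rotated by theta (cf. Eq. (5))."""
    h, w = shape
    y, x = np.mgrid[:h, :w].astype(float)
    y = (y - h // 2) / 2.0**j
    x = (x - w // 2) / 2.0**j
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotated coordinate of R_theta z
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.exp(1j * xi * xr)
    return g / 2.0**(2 * j)

def gaussian(shape, J, sigma=0.8):
    """Low-pass filter phi_J: a Gaussian dilated by 2**J (cf. Eq. (6))."""
    h, w = shape
    y, x = np.mgrid[:h, :w].astype(float)
    y = (y - h // 2) / 2.0**J
    x = (x - w // 2) / 2.0**J
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) / 2.0**(2 * J)

def conv(f, flt):
    """Circular convolution via FFT (adequate for this sketch)."""
    return np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(np.fft.ifftshift(flt)))

def scattering(f, J=2, L=4):
    """Coefficient maps of Eqs. (7) and (9), with scale pairs j1 < j2."""
    phi = gaussian(f.shape, J)
    thetas = [l * np.pi / L for l in range(L)]
    coeffs = [np.real(conv(f, phi))]                       # zeroth order: f * phi_J
    first = []
    for j1 in range(J):
        for t1 in thetas:
            u = np.abs(conv(f, gabor(f.shape, j1, t1)))    # |f * psi_{j1,t1}|
            first.append((j1, u))
            coeffs.append(np.real(conv(u, phi)))           # first order, Eq. (7)
    for j1, u in first:
        for j2 in range(j1 + 1, J):                        # restore high frequencies
            for t2 in thetas:
                u2 = np.abs(conv(u, gabor(f.shape, j2, t2)))
                coeffs.append(np.real(conv(u2, phi)))      # second order, Eq. (9)
    return np.stack(coeffs)
```

For a 32x32 input with J = 2 and L = 4 orientations this yields 1 + JL + L^2 (one scale pair) = 25 coefficient maps. In the paper's setting, such maps, computed per facial component and for the holistic face, are vectorized and reduced by PCA.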
There are eight components in total: forehead, middle of the eyebrows, left eye, right eye, nose, mouth and chin, left cheek, and right cheek, as shown in Fig. 6(b). In addition to the facial components, holistic faces aligned by the two eyes are used for feature extraction, as in [29]. We combine the scattering features of the facial components and the holistic face, and perform PCA on the extracted features to reduce them to $D$ dimensions preserving 98% of the energy. For comparison, we also apply two widely used feature descriptors, AAMs [18] and Gabor wavelets, in our experiments.

Fig. 6: (a) Facial component mask; (b) the eight facial components used in our method.

AAMs describe both shape and appearance variations of facial images and have been widely used as a feature descriptor in expression recognition [30]. Their main limitation is the requirement of 68 landmarks on each face. The Gabor wavelet is also a powerful and popular feature descriptor for face recognition; in [31], Bartlett et al. utilized Gabor wavelets for facial expression category recognition, and we evaluate their performance for intensity estimation in this work.

IV. EXPERIMENTAL RESULTS

A. Database

Our experiments are conducted on the extended Cohn-Kanade (CK+) dataset [30], a large, publicly available dataset for facial expression recognition. CK+ extends the original Cohn-Kanade (CK) dataset with more subjects, video sequences, and expressions. CK+ contains 593 image sequences of seven prototypic emotions from 123 subjects, but provides expression labels for only a portion of the data. Like [11], we use the six universal expressions, anger, disgust, fear, happiness, sadness, and surprise, for performance comparison. In [11], the authors selected only 68 images from 10 subjects in the CK dataset and used 10-fold cross validation in their experiment; we report their results directly from [11] in the first row of Table I.
Since the subjects chosen in the experiments of [11] are too few to reflect performance reliably, we employ more data from CK+ to validate our approach. In our experiments, all labeled data in CK+ are used: 4,043 images from 91 subjects in 222 image sequences, a more demanding experimental setting than that in [11]. The image sequences are manually divided into K = 3 intensity levels from neutral to apex.

B. Features and Experiment Setup

We evaluate the performance of two widely used feature descriptors, AAMs and Gabor wavelets, in comparison with the scattering transform. For AAMs, each image is aligned with the given 68 labeled landmark points, and both texture and shape information are extracted. Gabor wavelets are set with 3 scales and 6 orientations. Both the Gabor and scattering transforms align the input images with only two

eyes. The feature vectors of AAMs and Gabor are likewise obtained by preserving 98% of the energy with PCA.

TABLE I: Mean error rate, MAE, and per-rank error rates (low/medium/high) of the compared approaches on the CK+ database: one-against-all SVMs [11] with AUs, and SVR versus our framework with scattering-transform, AAM, and Gabor features.

We apply 10-fold cross validation to evaluate performance. In each fold, the training set contains subjects disjoint from those in the testing set. We apply a 5-fold cross validation procedure on the training set only for parameter selection, and report the average performance on the testing set. We also employ a standard nonlinear regression method, Support Vector Regression (SVR), for comparison. The RBF kernel is adopted and the associated parameters are selected by 5-fold cross validation on the training set. In [11], Delannoy et al. focus on the discrimination of binary classifications and evaluate the area under the curve (AUC); to enable comparison with SVR, we replace AUC with the mean error rate. For expression intensity estimation we report the mean error rate
$$\frac{1}{N_{\text{test}}} \sum_{i=1}^{N_{\text{test}}} I(\hat{y}_i \ne y_i) \qquad (10)$$
and the mean absolute error (MAE)
$$\frac{1}{N_{\text{test}}} \sum_{i=1}^{N_{\text{test}}} |\hat{y}_i - y_i|, \qquad (11)$$
where $N_{\text{test}}$ is the number of test images and $\hat{y}_i$ is the predicted intensity level. Because the outputs of SVR are continuous, its predictions are rounded when calculating the mean error rate. The mean error rate captures only classification accuracy; for a finer comparison, we use MAE to measure the absolute difference between prediction and ground truth. MAE has been used to evaluate expression intensity estimation [9, 32], as it measures how close the predictions are to the ground truth, and could therefore be a more appropriate measurement for this task. We report both mean error and MAE in the experimental results.

C. Results and Discussion

Our experimental results have two parts. First, we compare different approaches for estimating discrete intensity ranks. Table I shows the mean error rate and MAE of the compared approaches under different feature representations; the columns labeled low, medium, and high give the error rates for the individual intensity ranks. The error rates on medium are generally higher than on the other two intensities, indicating that medium is harder to separate from the others. The results also show that, in terms of mean error rate, SVR and our framework with the scattering transform outperform the other compared approaches. Notably, although the mean error rates of SVR and our framework are close, the MAE of our framework is significantly lower: even when the classification accuracies are comparable, the misclassified ranks produced by SVR lie farther from the ground truth than those produced by our framework. Our approach outperforms conventional SVR regression and the compared approach [11] on both measures, and the proposed ranking framework performs consistently better than SVR across feature representations. The second part compares feature descriptors. Bartlett et al. [31] utilized Gabor wavelets for expression category recognition, but in our results Gabor wavelets perform worse on expression intensity estimation than the other descriptors. AAMs are a popular feature extraction approach for face-related tasks, e.g., facial expression recognition [5] and age estimation [33]; their restriction is that they require 68 landmarks to align the facial images, whereas the scattering transform uses only the two eyes for alignment in our experiments. As Table I shows, the scattering transform outperforms both AAMs (with 68 landmarks) and Gabor wavelets.
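The gap between the two measures discussed above follows directly from their definitions in (10) and (11); a direct transcription:

```python
import numpy as np

def mean_error_rate(y_true, y_pred):
    """Eq. (10): fraction of test images with a wrong predicted rank."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean(y_true != y_pred))

def mean_absolute_error(y_true, y_pred):
    """Eq. (11): average absolute gap between predicted and true ranks."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean(np.abs(y_true - y_pred)))
```

Two predictors with the same error rate can thus have very different MAEs: mispredicting a high (rank 3) expression as low (rank 1) contributes 1 to the error rate but 2 to the MAE, which is why MAE separates methods whose misclassifications land far from the ground truth.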
In brief, the scattering transform is a powerful descriptor that achieves better performance for image-based expression intensity estimation.

V. CONCLUSION

In this paper, we propose a ranking framework that estimates discrete intensity ranks from a single image. Discrete intensity ranks are closer to human experience than continuous intensity values. We employ a parallel-hyperplanes ranker for ordinal regression. Although image-based approaches ignore temporal cues, they can be employed more flexibly than video-based approaches. We have shown that the scattering-transform feature representation performs

well. Experimental studies reveal that our framework outperforms the other competing methods.

ACKNOWLEDGMENT

This work was supported in part by the National Science Council, Taiwan, under the grants NSC E.

REFERENCES

[1] Z. Zeng, J. Tu, B. Pianfetti, and T. Huang, "Audio-visual affective expression recognition through multistream fused HMM," IEEE Transactions on Multimedia, vol. 10, no. 4.
[2] P. Yang, Q. Liu, X. Cui, and D. Metaxas, "Facial expression recognition based on dynamic binary patterns," in IEEE Conference on Computer Vision and Pattern Recognition.
[3] Z. Zeng, M. Pantic, G. I. Roisman, and T. S. Huang, "A survey of affect recognition methods: Audio, visual, and spontaneous expressions," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 1.
[4] P. Yang, Q. Liu, and D. N. Metaxas, "Exploring facial expressions with compositional features," in IEEE Conference on Computer Vision and Pattern Recognition.
[5] T. Pfister, X. Li, G. Zhao, and M. Pietikainen, "Recognising spontaneous facial micro-expressions," in IEEE International Conference on Computer Vision.
[6] G. Littlewort, M. S. Bartlett, I. Fasel, J. Susskind, and J. Movellan, "Dynamics of facial expression extracted automatically from video," Image and Vision Computing, vol. 24, no. 6.
[7] W.-Y. Chang, C.-S. Chen, and Y.-P. Hung, "Analyzing facial expression by fusing manifolds," in Asian Conference on Computer Vision.
[8] S. Koelstra and M. Pantic, "Non-rigid registration using free-form deformations for recognition of facial actions and their temporal dynamics," in IEEE International Conference on Automatic Face & Gesture Recognition.
[9] M. Kim and V. Pavlovic, "Structured output ordinal regression for dynamic facial emotion intensity prediction," in European Conference on Computer Vision.
[10] M. F. Valstar and M. Pantic, "Fully automatic recognition of the temporal phases of facial actions," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 42, no. 1.
[11] J. Delannoy and J. McDonald, "Automatic estimation of the dynamics of facial expression using a three-level model of intensity," in IEEE International Conference on Automatic Face & Gesture Recognition.
[12] C. Liao, H. Chuang, and S. Lai, "Learning expression kernels for facial expression intensity estimation," in IEEE International Conference on Acoustics, Speech, and Signal Processing.
[13] A. Dhall and R. Goecke, "Group expression intensity estimation in videos via Gaussian processes," in International Conference on Pattern Recognition.
[14] A. Savran, B. Sankur, and M. Taha Bilge, "Regression-based intensity estimation of facial action units," Image and Vision Computing, vol. 30, no. 10.
[15] K. Song and S. Chien, "Facial expression recognition based on mixture of basic expressions and intensities," in IEEE International Conference on Systems, Man, and Cybernetics.
[16] A. Shashua and A. Levin, "Ranking with large margin principle: Two approaches," in Advances in Neural Information Processing Systems.
[17] L. Li and H.-T. Lin, "Ordinal regression by extended binary classification," in Advances in Neural Information Processing Systems.
[18] T. Cootes, G. Edwards, and C. Taylor, "Active appearance models," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 6.
[19] S. Mallat, "Group invariant scattering," Communications in Pure and Applied Mathematics, vol. 65, no. 10.
[20] J. Bruna and S. Mallat, "Classification with scattering operators," in IEEE Conference on Computer Vision and Pattern Recognition.
[21] T. Leung and J. Malik, "Representing and recognizing the visual appearance of materials using three-dimensional textons," International Journal of Computer Vision, vol. 43, no. 1.
[22] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2.
[23] N. Dalal and B. Triggs, "Histograms of oriented gradients for human detection," in IEEE Conference on Computer Vision and Pattern Recognition.
[24] E. Tola, V. Lepetit, and P. Fua, "DAISY: An efficient dense descriptor applied to wide-baseline stereo," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 5.
[25] J. Andén and S. Mallat, "Multiscale scattering for audio classification," in The International Society for Music Information Retrieval.
[26] L. Sifre and S. Mallat, "Combined scattering for rotation invariant texture analysis," in European Symposium on Artificial Neural Networks.
[27] K.-Y. Chang, C.-F. Lin, C.-S. Chen, and Y.-P. Hung, "Applying scattering operators for face recognition: A comparative study," in International Conference on Pattern Recognition.
[28] N. Kumar, A. C. Berg, P. N. Belhumeur, and S. K. Nayar, "Attribute and simile classifiers for face verification," in IEEE Conference on Computer Vision and Pattern Recognition.
[29] M. Bartlett, G. Littlewort, M. Frank, C. Lainscsek, I. Fasel, and J. Movellan, "Recognizing facial expression: Machine learning and application to spontaneous behavior," in IEEE Conference on Computer Vision and Pattern Recognition.
[30] P. Lucey, J. Cohn, T. Kanade, J. Saragih, Z. Ambadar, and I. Matthews, "The extended Cohn-Kanade dataset (CK+): A complete dataset for action unit and emotion-specified expression," in IEEE Conference on Computer Vision and Pattern Recognition.
[31] M. S. Bartlett, J. R. Movellan, G. Littlewort, B. Braathen, M. G. Frank, and T. J. Sejnowski, "Towards automatic recognition of spontaneous facial actions," in What the Face Reveals.
[32] O. Rudovic, V. Pavlovic, and M. Pantic, "Multi-output Laplacian dynamic ordinal regression for facial expression recognition and intensity estimation," in IEEE Conference on Computer Vision and Pattern Recognition.
[33] A. Lanitis, C. Draganova, and C. Christodoulou, "Comparing different classifiers for automatic age estimation," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 34, no. 1.


More information

Atlas Construction and Sparse Representation for the Recognition of Facial Expression

Atlas Construction and Sparse Representation for the Recognition of Facial Expression This work by IJARBEST is licensed under a Creative Commons Attribution 4.0 International License. Available at: https://www.ijarbest.com/ Atlas Construction and Sparse Representation for the Recognition

More information

Spatiotemporal Features for Effective Facial Expression Recognition

Spatiotemporal Features for Effective Facial Expression Recognition Spatiotemporal Features for Effective Facial Expression Recognition Hatice Çınar Akakın and Bülent Sankur Bogazici University, Electrical & Electronics Engineering Department, Bebek, Istanbul {hatice.cinar,bulent.sankur}@boun.edu.tr

More information

Facial expression recognition based on two-step feature histogram optimization Ling Gana, Sisi Sib

Facial expression recognition based on two-step feature histogram optimization Ling Gana, Sisi Sib 3rd International Conference on Materials Engineering, Manufacturing Technology and Control (ICMEMTC 201) Facial expression recognition based on two-step feature histogram optimization Ling Gana, Sisi

More information

Classification of Upper and Lower Face Action Units and Facial Expressions using Hybrid Tracking System and Probabilistic Neural Networks

Classification of Upper and Lower Face Action Units and Facial Expressions using Hybrid Tracking System and Probabilistic Neural Networks Classification of Upper and Lower Face Action Units and Facial Expressions using Hybrid Tracking System and Probabilistic Neural Networks HADI SEYEDARABI*, WON-SOOK LEE**, ALI AGHAGOLZADEH* AND SOHRAB

More information

Recognizing Micro-Expressions & Spontaneous Expressions

Recognizing Micro-Expressions & Spontaneous Expressions Recognizing Micro-Expressions & Spontaneous Expressions Presentation by Matthias Sperber KIT University of the State of Baden-Wuerttemberg and National Research Center of the Helmholtz Association www.kit.edu

More information

Facial Expression Recognition Using Expression- Specific Local Binary Patterns and Layer Denoising Mechanism

Facial Expression Recognition Using Expression- Specific Local Binary Patterns and Layer Denoising Mechanism Facial Expression Recognition Using Expression- Specific Local Binary Patterns and Layer Denoising Mechanism 1 2 Wei-Lun Chao, Jun-Zuo Liu, 3 Jian-Jiun Ding, 4 Po-Hung Wu 1, 2, 3, 4 Graduate Institute

More information

COMPOUND LOCAL BINARY PATTERN (CLBP) FOR PERSON-INDEPENDENT FACIAL EXPRESSION RECOGNITION

COMPOUND LOCAL BINARY PATTERN (CLBP) FOR PERSON-INDEPENDENT FACIAL EXPRESSION RECOGNITION COMPOUND LOCAL BINARY PATTERN (CLBP) FOR PERSON-INDEPENDENT FACIAL EXPRESSION RECOGNITION Priyanka Rani 1, Dr. Deepak Garg 2 1,2 Department of Electronics and Communication, ABES Engineering College, Ghaziabad

More information

Exploring Facial Expressions with Compositional Features

Exploring Facial Expressions with Compositional Features Exploring Facial Expressions with Compositional Features Peng Yang Qingshan Liu Dimitris N. Metaxas Computer Science Department, Rutgers University Frelinghuysen Road, Piscataway, NJ 88, USA peyang@cs.rutgers.edu,

More information

Image Processing Pipeline for Facial Expression Recognition under Variable Lighting

Image Processing Pipeline for Facial Expression Recognition under Variable Lighting Image Processing Pipeline for Facial Expression Recognition under Variable Lighting Ralph Ma, Amr Mohamed ralphma@stanford.edu, amr1@stanford.edu Abstract Much research has been done in the field of automated

More information

Exploring Bag of Words Architectures in the Facial Expression Domain

Exploring Bag of Words Architectures in the Facial Expression Domain Exploring Bag of Words Architectures in the Facial Expression Domain Karan Sikka, Tingfan Wu, Josh Susskind, and Marian Bartlett Machine Perception Laboratory, University of California San Diego {ksikka,ting,josh,marni}@mplab.ucsd.edu

More information

Facial Expression Recognition using Principal Component Analysis with Singular Value Decomposition

Facial Expression Recognition using Principal Component Analysis with Singular Value Decomposition ISSN: 2321-7782 (Online) Volume 1, Issue 6, November 2013 International Journal of Advance Research in Computer Science and Management Studies Research Paper Available online at: www.ijarcsms.com Facial

More information

Facial Expression Recognition with PCA and LBP Features Extracting from Active Facial Patches

Facial Expression Recognition with PCA and LBP Features Extracting from Active Facial Patches Facial Expression Recognition with PCA and LBP Features Extracting from Active Facial Patches Yanpeng Liu a, Yuwen Cao a, Yibin Li a, Ming Liu, Rui Song a Yafang Wang, Zhigang Xu, Xin Ma a Abstract Facial

More information

Dynamic Facial Expression Recognition with Atlas Construction and Sparse Representation

Dynamic Facial Expression Recognition with Atlas Construction and Sparse Representation IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. XX, NO. XX, MONTH XX 1 Dynamic Facial Expression Recognition with Atlas Construction and Sparse Representation Yimo Guo, Guoying Zhao, Senior Member, IEEE, and

More information

Facial Expression Recognition Using Gabor Motion Energy Filters

Facial Expression Recognition Using Gabor Motion Energy Filters Facial Expression Recognition Using Gabor Motion Energy Filters Tingfan Wu Marian S. Bartlett Javier R. Movellan Dept. Computer Science Engineering Institute for Neural Computation UC San Diego UC San

More information

Real-time Automatic Facial Expression Recognition in Video Sequence

Real-time Automatic Facial Expression Recognition in Video Sequence www.ijcsi.org 59 Real-time Automatic Facial Expression Recognition in Video Sequence Nivedita Singh 1 and Chandra Mani Sharma 2 1 Institute of Technology & Science (ITS) Mohan Nagar, Ghaziabad-201007,

More information

Recognition of Facial Action Units with Action Unit Classifiers and An Association Network

Recognition of Facial Action Units with Action Unit Classifiers and An Association Network Recognition of Facial Action Units with Action Unit Classifiers and An Association Network Junkai Chen 1, Zenghai Chen 1, Zheru Chi 1 and Hong Fu 1,2 1 Department of Electronic and Information Engineering,

More information

Constrained Joint Cascade Regression Framework for Simultaneous Facial Action Unit Recognition and Facial Landmark Detection

Constrained Joint Cascade Regression Framework for Simultaneous Facial Action Unit Recognition and Facial Landmark Detection Constrained Joint Cascade Regression Framework for Simultaneous Facial Action Unit Recognition and Facial Landmark Detection Yue Wu Qiang Ji ECSE Department, Rensselaer Polytechnic Institute 110 8th street,

More information

Recognizing Partial Facial Action Units Based on 3D Dynamic Range Data for Facial Expression Recognition

Recognizing Partial Facial Action Units Based on 3D Dynamic Range Data for Facial Expression Recognition Recognizing Partial Facial Action Units Based on 3D Dynamic Range Data for Facial Expression Recognition Yi Sun, Michael Reale, and Lijun Yin Department of Computer Science, State University of New York

More information

Multi-view Facial Expression Recognition Analysis with Generic Sparse Coding Feature

Multi-view Facial Expression Recognition Analysis with Generic Sparse Coding Feature 0/19.. Multi-view Facial Expression Recognition Analysis with Generic Sparse Coding Feature Usman Tariq, Jianchao Yang, Thomas S. Huang Department of Electrical and Computer Engineering Beckman Institute

More information

Dynamic Facial Expression Recognition Using A Bayesian Temporal Manifold Model

Dynamic Facial Expression Recognition Using A Bayesian Temporal Manifold Model Dynamic Facial Expression Recognition Using A Bayesian Temporal Manifold Model Caifeng Shan, Shaogang Gong, and Peter W. McOwan Department of Computer Science Queen Mary University of London Mile End Road,

More information

Recognizing Facial Expression: Machine Learning and Application to Spontaneous Behavior

Recognizing Facial Expression: Machine Learning and Application to Spontaneous Behavior Computer Vision and Pattern Recognition 2005 Recognizing Facial Expression: Machine Learning and Application to Spontaneous Behavior Marian Stewart Bartlett 1, Gwen Littlewort 1, Mark Frank 2, Claudia

More information

Person-Independent Facial Expression Recognition Based on Compound Local Binary Pattern (CLBP)

Person-Independent Facial Expression Recognition Based on Compound Local Binary Pattern (CLBP) The International Arab Journal of Information Technology, Vol. 11, No. 2, March 2014 195 Person-Independent Facial Expression Recognition Based on Compound Local Binary Pattern (CLBP) Faisal Ahmed 1, Hossain

More information

Facial Expression Recognition Based on Local Directional Pattern Using SVM Decision-level Fusion

Facial Expression Recognition Based on Local Directional Pattern Using SVM Decision-level Fusion Facial Expression Recognition Based on Local Directional Pattern Using SVM Decision-level Fusion Juxiang Zhou 1, Tianwei Xu 2, Jianhou Gan 1 1. Key Laboratory of Education Informalization for Nationalities,

More information

A Novel Feature Extraction Technique for Facial Expression Recognition

A Novel Feature Extraction Technique for Facial Expression Recognition www.ijcsi.org 9 A Novel Feature Extraction Technique for Facial Expression Recognition *Mohammad Shahidul Islam 1, Surapong Auwatanamongkol 2 1 Department of Computer Science, School of Applied Statistics,

More information

IMAGE RETRIEVAL USING VLAD WITH MULTIPLE FEATURES

IMAGE RETRIEVAL USING VLAD WITH MULTIPLE FEATURES IMAGE RETRIEVAL USING VLAD WITH MULTIPLE FEATURES Pin-Syuan Huang, Jing-Yi Tsai, Yu-Fang Wang, and Chun-Yi Tsai Department of Computer Science and Information Engineering, National Taitung University,

More information

3D Facial Action Units Recognition for Emotional Expression

3D Facial Action Units Recognition for Emotional Expression 3D Facial Action Units Recognition for Emotional Expression Norhaida Hussain 1, Hamimah Ujir, Irwandi Hipiny and Jacey-Lynn Minoi 1 Department of Information Technology and Communication, Politeknik Kuching,

More information

A Novel Extreme Point Selection Algorithm in SIFT

A Novel Extreme Point Selection Algorithm in SIFT A Novel Extreme Point Selection Algorithm in SIFT Ding Zuchun School of Electronic and Communication, South China University of Technolog Guangzhou, China zucding@gmail.com Abstract. This paper proposes

More information

3D Shape Estimation in Video Sequences Provides High Precision Evaluation of Facial Expressions

3D Shape Estimation in Video Sequences Provides High Precision Evaluation of Facial Expressions 3D Shape Estimation in Video Sequences Provides High Precision Evaluation of Facial Expressions László A. Jeni a, András Lőrincz b, Tamás Nagy b, Zsolt Palotai c, Judit Sebők b, Zoltán Szabó b, Dániel

More information

Natural Facial Expression Recognition Using Dynamic and Static Schemes

Natural Facial Expression Recognition Using Dynamic and Static Schemes Natural Facial Expression Recognition Using Dynamic and Static Schemes Bogdan Raducanu 1 and Fadi Dornaika 2,3 1 Computer Vision Center, 08193 Bellaterra, Barcelona, Spain bogdan@cvc.uab.es 2 IKERBASQUE,

More information

Convolutional Neural Networks for Facial Expression Recognition

Convolutional Neural Networks for Facial Expression Recognition Convolutional Neural Networks for Facial Expression Recognition Shima Alizadeh Stanford University shima86@stanford.edu Azar Fazel Stanford University azarf@stanford.edu Abstract In this project, we have

More information

Action Unit Based Facial Expression Recognition Using Deep Learning

Action Unit Based Facial Expression Recognition Using Deep Learning Action Unit Based Facial Expression Recognition Using Deep Learning Salah Al-Darraji 1, Karsten Berns 1, and Aleksandar Rodić 2 1 Robotics Research Lab, Department of Computer Science, University of Kaiserslautern,

More information

Classification of Face Images for Gender, Age, Facial Expression, and Identity 1

Classification of Face Images for Gender, Age, Facial Expression, and Identity 1 Proc. Int. Conf. on Artificial Neural Networks (ICANN 05), Warsaw, LNCS 3696, vol. I, pp. 569-574, Springer Verlag 2005 Classification of Face Images for Gender, Age, Facial Expression, and Identity 1

More information

Learning the Deep Features for Eye Detection in Uncontrolled Conditions

Learning the Deep Features for Eye Detection in Uncontrolled Conditions 2014 22nd International Conference on Pattern Recognition Learning the Deep Features for Eye Detection in Uncontrolled Conditions Yue Wu Dept. of ECSE, Rensselaer Polytechnic Institute Troy, NY, USA 12180

More information

Object Tracking using HOG and SVM

Object Tracking using HOG and SVM Object Tracking using HOG and SVM Siji Joseph #1, Arun Pradeep #2 Electronics and Communication Engineering Axis College of Engineering and Technology, Ambanoly, Thrissur, India Abstract Object detection

More information

Face Alignment Under Various Poses and Expressions

Face Alignment Under Various Poses and Expressions Face Alignment Under Various Poses and Expressions Shengjun Xin and Haizhou Ai Computer Science and Technology Department, Tsinghua University, Beijing 100084, China ahz@mail.tsinghua.edu.cn Abstract.

More information

An Adaptive Threshold LBP Algorithm for Face Recognition

An Adaptive Threshold LBP Algorithm for Face Recognition An Adaptive Threshold LBP Algorithm for Face Recognition Xiaoping Jiang 1, Chuyu Guo 1,*, Hua Zhang 1, and Chenghua Li 1 1 College of Electronics and Information Engineering, Hubei Key Laboratory of Intelligent

More information

Detection of asymmetric eye action units in spontaneous videos

Detection of asymmetric eye action units in spontaneous videos Detection of asymmetric eye action units in spontaneous videos The MIT Faculty has made this article openly available. Please share how this access benefits you. Your story matters. Citation As Published

More information

Describable Visual Attributes for Face Verification and Image Search

Describable Visual Attributes for Face Verification and Image Search Advanced Topics in Multimedia Analysis and Indexing, Spring 2011, NTU. 1 Describable Visual Attributes for Face Verification and Image Search Kumar, Berg, Belhumeur, Nayar. PAMI, 2011. Ryan Lei 2011/05/05

More information

Eye Detection by Haar wavelets and cascaded Support Vector Machine

Eye Detection by Haar wavelets and cascaded Support Vector Machine Eye Detection by Haar wavelets and cascaded Support Vector Machine Vishal Agrawal B.Tech 4th Year Guide: Simant Dubey / Amitabha Mukherjee Dept of Computer Science and Engineering IIT Kanpur - 208 016

More information

C.R VIMALCHAND ABSTRACT

C.R VIMALCHAND ABSTRACT International Journal of Scientific & Engineering Research, Volume 5, Issue 3, March-2014 1173 ANALYSIS OF FACE RECOGNITION SYSTEM WITH FACIAL EXPRESSION USING CONVOLUTIONAL NEURAL NETWORK AND EXTRACTED

More information

FACIAL EXPRESSION RECOGNITION USING DIGITALISED FACIAL FEATURES BASED ON ACTIVE SHAPE MODEL

FACIAL EXPRESSION RECOGNITION USING DIGITALISED FACIAL FEATURES BASED ON ACTIVE SHAPE MODEL FACIAL EXPRESSIO RECOGITIO USIG DIGITALISED FACIAL FEATURES BASED O ACTIVE SHAPE MODEL an Sun 1, Zheng Chen 2 and Richard Day 3 Institute for Arts, Science & Technology Glyndwr University Wrexham, United

More information

LOCAL APPEARANCE BASED FACE RECOGNITION USING DISCRETE COSINE TRANSFORM

LOCAL APPEARANCE BASED FACE RECOGNITION USING DISCRETE COSINE TRANSFORM LOCAL APPEARANCE BASED FACE RECOGNITION USING DISCRETE COSINE TRANSFORM Hazim Kemal Ekenel, Rainer Stiefelhagen Interactive Systems Labs, University of Karlsruhe Am Fasanengarten 5, 76131, Karlsruhe, Germany

More information

A Novel LDA and HMM-based technique for Emotion Recognition from Facial Expressions

A Novel LDA and HMM-based technique for Emotion Recognition from Facial Expressions A Novel LDA and HMM-based technique for Emotion Recognition from Facial Expressions Akhil Bansal, Santanu Chaudhary, Sumantra Dutta Roy Indian Institute of Technology, Delhi, India akhil.engg86@gmail.com,

More information

FACIAL EXPRESSION RECOGNITION AND EXPRESSION INTENSITY ESTIMATION

FACIAL EXPRESSION RECOGNITION AND EXPRESSION INTENSITY ESTIMATION FACIAL EXPRESSION RECOGNITION AND EXPRESSION INTENSITY ESTIMATION BY PENG YANG A dissertation submitted to the Graduate School New Brunswick Rutgers, The State University of New Jersey in partial fulfillment

More information

INTERNATIONAL JOURNAL FOR ADVANCE RESEARCH IN ENGINEERING AND TECHNOLOGY WINGS TO YOUR THOUGHTS.. XBeats-An Emotion Based Music Player

INTERNATIONAL JOURNAL FOR ADVANCE RESEARCH IN ENGINEERING AND TECHNOLOGY WINGS TO YOUR THOUGHTS.. XBeats-An Emotion Based Music Player XBeats-An Emotion Based Music Player Sayali Chavan 1, Ekta Malkan 2, Dipali Bhatt 3, Prakash H. Paranjape 4 1 U.G. Student, Dept. of Computer Engineering, sayalichavan17@gmail.com 2 U.G. Student, Dept.

More information

Partial Least Squares Regression on Grassmannian Manifold for Emotion Recognition

Partial Least Squares Regression on Grassmannian Manifold for Emotion Recognition Emotion Recognition In The Wild Challenge and Workshop (EmotiW 2013) Partial Least Squares Regression on Grassmannian Manifold for Emotion Recognition Mengyi Liu, Ruiping Wang, Zhiwu Huang, Shiguang Shan,

More information

Multi-Instance Hidden Markov Model For Facial Expression Recognition

Multi-Instance Hidden Markov Model For Facial Expression Recognition Multi-Instance Hidden Markov Model For Facial Expression Recognition Chongliang Wu 1, Shangfei Wang 1 and Qiang Ji 2 1 School of Computer Science and Technology, University of Science and Technology of

More information

Facial Expression Recognition with Emotion-Based Feature Fusion

Facial Expression Recognition with Emotion-Based Feature Fusion Facial Expression Recognition with Emotion-Based Feature Fusion Cigdem Turan 1, Kin-Man Lam 1, Xiangjian He 2 1 The Hong Kong Polytechnic University, Hong Kong, SAR, 2 University of Technology Sydney,

More information

Facial Expression Recognition Using Improved Artificial Bee Colony Algorithm

Facial Expression Recognition Using Improved Artificial Bee Colony Algorithm International Journal of Emerging Trends in Science and Technology IC Value: 76.89 (Index Copernicus) Impact Factor: 4.219 DOI: https://dx.doi.org/10.18535/ijetst/v4i8.38 Facial Expression Recognition

More information

Partial Least Squares Regression on Grassmannian Manifold for Emotion Recognition

Partial Least Squares Regression on Grassmannian Manifold for Emotion Recognition Partial Least Squares Regression on Grassmannian Manifold for Emotion Recognition Mengyi Liu, Ruiping Wang, Zhiwu Huang, Shiguang Shan, Xilin Chen Key Lab of Intelligence Information Processing Institute

More information

Generic Face Alignment Using an Improved Active Shape Model

Generic Face Alignment Using an Improved Active Shape Model Generic Face Alignment Using an Improved Active Shape Model Liting Wang, Xiaoqing Ding, Chi Fang Electronic Engineering Department, Tsinghua University, Beijing, China {wanglt, dxq, fangchi} @ocrserv.ee.tsinghua.edu.cn

More information

Evaluation of Expression Recognition Techniques

Evaluation of Expression Recognition Techniques Evaluation of Expression Recognition Techniques Ira Cohen 1, Nicu Sebe 2,3, Yafei Sun 3, Michael S. Lew 3, Thomas S. Huang 1 1 Beckman Institute, University of Illinois at Urbana-Champaign, USA 2 Faculty

More information

Affine-invariant scene categorization

Affine-invariant scene categorization University of Wollongong Research Online Faculty of Engineering and Information Sciences - Papers: Part A Faculty of Engineering and Information Sciences 2014 Affine-invariant scene categorization Xue

More information

An Associate-Predict Model for Face Recognition FIPA Seminar WS 2011/2012

An Associate-Predict Model for Face Recognition FIPA Seminar WS 2011/2012 An Associate-Predict Model for Face Recognition FIPA Seminar WS 2011/2012, 19.01.2012 INSTITUTE FOR ANTHROPOMATICS, FACIAL IMAGE PROCESSING AND ANALYSIS YIG University of the State of Baden-Wuerttemberg

More information

Extracting Local Binary Patterns from Image Key Points: Application to Automatic Facial Expression Recognition

Extracting Local Binary Patterns from Image Key Points: Application to Automatic Facial Expression Recognition Extracting Local Binary Patterns from Image Key Points: Application to Automatic Facial Expression Recognition Xiaoyi Feng 1, Yangming Lai 1, Xiaofei Mao 1,JinyePeng 1, Xiaoyue Jiang 1, and Abdenour Hadid

More information

Facial Emotion Recognition using Eye

Facial Emotion Recognition using Eye Facial Emotion Recognition using Eye Vishnu Priya R 1 and Muralidhar A 2 1 School of Computing Science and Engineering, VIT Chennai Campus, Tamil Nadu, India. Orcid: 0000-0002-2016-0066 2 School of Computing

More information

Real time facial expression recognition from image sequences using Support Vector Machines

Real time facial expression recognition from image sequences using Support Vector Machines Real time facial expression recognition from image sequences using Support Vector Machines I. Kotsia a and I. Pitas a a Aristotle University of Thessaloniki, Department of Informatics, Box 451, 54124 Thessaloniki,

More information

An efficient face recognition algorithm based on multi-kernel regularization learning

An efficient face recognition algorithm based on multi-kernel regularization learning Acta Technica 61, No. 4A/2016, 75 84 c 2017 Institute of Thermomechanics CAS, v.v.i. An efficient face recognition algorithm based on multi-kernel regularization learning Bi Rongrong 1 Abstract. A novel

More information

Enhanced Facial Expression Recognition using 2DPCA Principal component Analysis and Gabor Wavelets.

Enhanced Facial Expression Recognition using 2DPCA Principal component Analysis and Gabor Wavelets. Enhanced Facial Expression Recognition using 2DPCA Principal component Analysis and Gabor Wavelets. Zermi.Narima(1), Saaidia.Mohammed(2), (1)Laboratory of Automatic and Signals Annaba (LASA), Department

More information

Face Recognition using SURF Features and SVM Classifier

Face Recognition using SURF Features and SVM Classifier International Journal of Electronics Engineering Research. ISSN 0975-6450 Volume 8, Number 1 (016) pp. 1-8 Research India Publications http://www.ripublication.com Face Recognition using SURF Features

More information

Recognition of facial expressions in presence of partial occlusion

Recognition of facial expressions in presence of partial occlusion Recognition of facial expressions in presence of partial occlusion Ioan Buciu, 1 Irene Kotsia 1 and Ioannis Pitas 1 AIIA Laboratory Computer Vision and Image Processing Group Department of Informatics

More information

3D Active Appearance Model for Aligning Faces in 2D Images

3D Active Appearance Model for Aligning Faces in 2D Images 3D Active Appearance Model for Aligning Faces in 2D Images Chun-Wei Chen and Chieh-Chih Wang Abstract Perceiving human faces is one of the most important functions for human robot interaction. The active

More information

Facial Expression Recognition Using Encoded Dynamic Features

Facial Expression Recognition Using Encoded Dynamic Features Facial Expression Recognition Using Encoded Dynamic Features Peng Yang Qingshan Liu,2 Xinyi Cui Dimitris N.Metaxas Computer Science Department, Rutgers University Frelinghuysen Road Piscataway, NJ 8854

More information

Tri-modal Human Body Segmentation

Tri-modal Human Body Segmentation Tri-modal Human Body Segmentation Master of Science Thesis Cristina Palmero Cantariño Advisor: Sergio Escalera Guerrero February 6, 2014 Outline 1 Introduction 2 Tri-modal dataset 3 Proposed baseline 4

More information

NTHU Rain Removal Project

NTHU Rain Removal Project People NTHU Rain Removal Project Networked Video Lab, National Tsing Hua University, Hsinchu, Taiwan Li-Wei Kang, Institute of Information Science, Academia Sinica, Taipei, Taiwan Chia-Wen Lin *, Department

More information

The Effect of Facial Components Decomposition on Face Tracking

The Effect of Facial Components Decomposition on Face Tracking The Effect of Facial Components Decomposition on Face Tracking Vikas Choudhary, Sergey Tulyakov and Venu Govindaraju Center for Unified Biometrics and Sensors, University at Buffalo, State University of

More information

Facial Expression Recognition for HCI Applications

Facial Expression Recognition for HCI Applications acial Expression Recognition for HCI Applications adi Dornaika Institut Géographique National, rance Bogdan Raducanu Computer Vision Center, Spain INTRODUCTION acial expression plays an important role

More information

Texture Features in Facial Image Analysis

Texture Features in Facial Image Analysis Texture Features in Facial Image Analysis Matti Pietikäinen and Abdenour Hadid Machine Vision Group Infotech Oulu and Department of Electrical and Information Engineering P.O. Box 4500, FI-90014 University

More information

Smile Detection Using Multi-scale Gaussian Derivatives

Smile Detection Using Multi-scale Gaussian Derivatives Smile Detection Using Multi-scale Gaussian Derivatives Varun Jain, James L. Crowley To cite this version: Varun Jain, James L. Crowley. Smile Detection Using Multi-scale Gaussian Derivatives. 12th WSEAS

More information

A Hierarchical Face Identification System Based on Facial Components

A Hierarchical Face Identification System Based on Facial Components A Hierarchical Face Identification System Based on Facial Components Mehrtash T. Harandi, Majid Nili Ahmadabadi, and Babak N. Araabi Control and Intelligent Processing Center of Excellence Department of

More information

COSC160: Detection and Classification. Jeremy Bolton, PhD Assistant Teaching Professor

COSC160: Detection and Classification. Jeremy Bolton, PhD Assistant Teaching Professor COSC160: Detection and Classification Jeremy Bolton, PhD Assistant Teaching Professor Outline I. Problem I. Strategies II. Features for training III. Using spatial information? IV. Reducing dimensionality

More information

Discriminative classifiers for image recognition

Discriminative classifiers for image recognition Discriminative classifiers for image recognition May 26 th, 2015 Yong Jae Lee UC Davis Outline Last time: window-based generic object detection basic pipeline face detection with boosting as case study

More information

Face Recognition Technology Based On Image Processing Chen Xin, Yajuan Li, Zhimin Tian

Face Recognition Technology Based On Image Processing Chen Xin, Yajuan Li, Zhimin Tian 4th International Conference on Machinery, Materials and Computing Technology (ICMMCT 2016) Face Recognition Technology Based On Image Processing Chen Xin, Yajuan Li, Zhimin Tian Hebei Engineering and

More information

An Algorithm based on SURF and LBP approach for Facial Expression Recognition

An Algorithm based on SURF and LBP approach for Facial Expression Recognition ISSN: 2454-2377, An Algorithm based on SURF and LBP approach for Facial Expression Recognition Neha Sahu 1*, Chhavi Sharma 2, Hitesh Yadav 3 1 Assistant Professor, CSE/IT, The North Cap University, Gurgaon,

More information

SPARSE RECONSTRUCTION OF FACIAL EXPRESSIONS WITH LOCALIZED GABOR MOMENTS. André Mourão, Pedro Borges, Nuno Correia, João Magalhães
