A comparison of alternative classifiers for detecting occurrence and intensity in spontaneous facial expression of infants with their mothers


Nazanin Zaker 1, Mohammad H. Mahoor 1, Whitney I. Mattson 2, Daniel S. Messinger 2, and Jeffrey F. Cohn 3
1 Department of Electrical and Computer Engineering, University of Denver
2 Department of Psychology, University of Miami
3 Department of Psychology, University of Pittsburgh and Carnegie Mellon University
Emails: (Nazanin.ZakerHabibabadi, mmahoor)@du.edu, (dmessinger, w.mattson)@miami.edu, jeffcohn@pitt.edu

Abstract — To model the dynamics of social interaction, it is necessary both to detect specific Action Units (AUs) and to measure variation in their intensity and coordination over time. An automated method that performs well when detecting occurrence may or may not perform well for intensity measurement. We compared two dimensionality reduction approaches, Principal Components Analysis with Large Margin Nearest Neighbor (PCA+LMNN) and Laplacian Eigenmap, and two classifiers, SVM and K-Nearest Neighbor. Twelve infants were video-recorded during face-to-face interactions with their mothers. AUs related to positive and negative affect were manually coded from the video by certified FACS coders. Facial features were tracked using Active Appearance Models (AAM) and registered to a canonical view before extracting Histogram of Oriented Gradients (HOG) features. All possible combinations of dimensionality reduction approaches and classifiers were tested using leave-one-subject-out cross-validation. For consistency (i.e., reliability as measured by ICC), PCA+LMNN with an SVM classifier gave the best results.

Keywords: Facial Expressions; Action Units; Histogram of Oriented Gradients; Laplacian Eigenmap; Structural SVM.

1. INTRODUCTION

Face-to-face communication has attracted much attention in behavioral science and developmental psychology, and facial expressions are an important channel of nonverbal communication [1].
Efficient measurement of emotional responses via facial expressions is necessary to understand the functionality of face-to-face communication. The Facial Action Coding System (FACS) is the most comprehensive method for annotating facial expression [2]. FACS and BabyFACS [3] (the application of FACS to infants) provide a description of all visually detectable facial variations in terms of Action Units (AUs). For example, AU 6 (cheek raiser) codes contraction of orbicularis oculi, AU 12 (lip corner puller) codes the oblique action of zygomaticus major, and AU 20 (lip stretcher) codes the action of risorius. Psychological studies have employed manual coding of facial expressions for many applications, such as evaluating the temporal structure of face-to-face communication between mothers and infants [4] and individual differences in facial expression [5]. Because manual coding is labor-intensive and difficult to standardize within and between research groups, automated computer systems that detect AUs and measure their intensity would contribute importantly to behavioral science. Some success has been achieved in automatic measurement of facial expression and facial action units under controlled conditions [6, 7, 8]; however, automated measurement is still quite limited for real-world applications. Research psychologists are interested in analyzing the time-varying interactive dynamics of emotional expressions between partners, and in precise measurement of expression intensity over time. To measure the intensity of facial AUs, six intensity levels from absent to maximal appearance are usually considered. The intensities are denoted by labels 0 to 5, where label 0 indicates the absence of an AU and labels 1-5 correspond to the five intensity levels defined by FACS. Much work has been done on automatic action unit detection and measurement [6, 7, 9, 10], concentrating on posed facial expression data (e.g., the Cohn-Kanade database [23]).
Less work has been done on non-posed expressions [11, 12]. In automatic facial expression measurement, many challenges arise, such as head motion and individual differences in face shape and appearance. In addition, AU intensity and timing differ across applications, especially between posed and non-posed facial actions [11]. Since non-posed expressions are more representative of facial behavior in real life, they have recently gained more attention [11]. Beyond the difficulties mentioned above, unique challenges arise in measuring the intensity of AUs in infants. With few exceptions, almost all research in automated facial expression analysis has been limited to adult participants, most of whom are young. Faces change over the lifespan. Infant faces in particular have different shape and proportions than adult faces, they have less facial texture, and their head pose, movement, and expressive behaviors differ markedly. Sudden and rapid head rotation is common, some unique facial actions occur (see BabyFACS), and the likelihood of particular AU combinations differs (e.g., less AU 12+15). Diversity in participant characteristics such as these is critical to the development of automated facial expression analysis, and these characteristics of non-posed infant expressions create demand for more sophisticated automated systems able to measure the intensity of action units accurately. Many methods have been devised to detect AUs and their intensity variation, and these efforts have identified promising features and classifiers. Leading features include HOG [13, 14] and Local Binary Pattern (LBP) histograms [14]; both appear relatively robust to small to moderate registration error. Classifiers include Support Vector Machines (SVMs) and Hidden Markov Models (HMMs). HOG features are considered a descriptor well suited to facial expression variation and provide good performance in recognizing action units; in a recent study, HOG features with SVM classifiers showed better emotion recognition performance than LBP features [14]. After feature extraction, facial expression classifiers are employed to recognize the expressions. In recent studies, SVM classifiers are used as a baseline that performs well in classifying facial expressions [11, 12]. In summary, there is growing demand for automatic methods that reliably recognize non-posed facial actions in less constrained contexts, which are more common in real life [9, 10]. In the framework presented in this paper, we employed different computer vision algorithms for facial expression recognition and compared them to identify the one with the best performance. The framework is applied to an infant face dataset, a population nearly ignored in previous research.
Based on our recent study [15] and related work [14], the HOG descriptor performs better than LBP histograms in representing facial expressions. In this work we applied the combination of PCA and LMNN (Large Margin Nearest Neighbor [16]) and compared it to a nonlinear dimensionality reduction technique, the Laplacian Eigenmap [11], in order to compare the performance of these algorithms for facial expression recognition. Principal Component Analysis is employed to remove redundant information and decrease the dimensionality of the feature space for further computation. Next, LMNN is applied to rearrange data points in the feature space based on their labels. Finally, the K-Nearest Neighbor and Support Vector Machine classifiers are applied to measure the intensity of action units [15].

The rest of this paper is organized as follows. In Section 2, we present a framework to recognize and measure the non-posed facial expressions of 6-month-old infants, whose expressions are elicited through interaction with their parents/caregivers. The infants show non-posed expressions with head movement. In this framework, different classifiers are tested and compared. In Section 3, the Experiments section, the results of these classifiers are also compared with those of Laplacian Eigenmap and SVM. Conclusions are given in Section 4.

2. PROPOSED METHOD

2.1. System Overview

The overall system for detecting and measuring the intensity of Action Units (AUs) consists of four major phases: image registration, facial feature extraction, dimensionality reduction and feature projection, and intensity-based classification. Fig. 1 outlines the main parts of the proposed framework: first, all images are registered to a common view; then the HOG facial features are extracted; the process continues by applying PCA to the extracted features to decrease their dimensionality.
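As a sketch of the registration phase just mentioned, ordinary Procrustes alignment of landmark points can be written in a few lines of NumPy. This is a generic illustration under assumed 2-D landmark arrays, not the authors' implementation; the function name is hypothetical.

```python
import numpy as np

def procrustes_align(ref, pts):
    """Align one landmark set `pts` (n x 2) to a reference `ref` (n x 2)
    by translation, uniform scaling, and rotation (ordinary Procrustes)."""
    ref_c = ref - ref.mean(axis=0)            # remove translation
    pts_c = pts - pts.mean(axis=0)
    ref_n = ref_c / np.linalg.norm(ref_c)     # remove scale
    pts_n = pts_c / np.linalg.norm(pts_c)
    # Optimal rotation from the SVD of the cross-covariance matrix
    # (reflections are not explicitly excluded in this sketch).
    u, s, vt = np.linalg.svd(pts_n.T @ ref_n)
    rot = u @ vt
    return pts_n @ rot * s.sum() * np.linalg.norm(ref_c) + ref.mean(axis=0)
```

Registering every frame's landmarks to a common shape with such a transform removes rigid head motion before appearance features are computed.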
Then, the LMNN method is used to project the data into a new feature space so that classification performance improves. Different classifiers are tested and compared: KNN, which is regarded as the base classifier for LMNN, is tested, and the results of SVM classifiers are compared against it. Experimental results show that the SVM yields a better classification rate, although LMNN was not originally designed for it.

2.2. Image Registration

In the first phase, image registration, the facial landmark points are tracked using the Active Shape Model (ASM) [21]. Given the landmark points of all images, the common head position is easily computed. Since changes in pose and head movement are potential confounds, images must be registered to a common frame. This is done by Procrustes analysis, a statistical shape analysis that performs a linear transformation to minimize a measure of shape difference called the Procrustes distance. Fig. 1 illustrates this procedure.

2.3. Facial Features

Facial feature extraction comprises tracking facial landmark points using the AAM [21, 22] and extracting HOG descriptors. HOG features code local spatial patterns and shapes from the distribution of intensity gradients and edge orientations. To obtain HOG features, the image is divided into small blocks, and a histogram of gradient directions (edge orientations) is computed for the pixels in each block. These histograms are then combined to represent the HOG features of the image. To contrast-normalize the local histograms, a larger block of the image is considered and its intensity measure is used to normalize all smaller blocks within it; this normalization improves robustness to illumination changes [14]. To implement the HOG descriptors, the image is first segmented into 30 small blocks based on the landmark point positions, as shown in Fig.
2 (a). Since the coordinates of the blocks are based on the positions of the landmarks, the size of the blocks may vary when an expression occurs, as shown in Fig. 2 (b). As a result, a specific region of the face is always located in a specific block, independent of expression and face deformation, so features are better localized and covered. Then, for each block, the histogram of gradient directions (edge orientations) is computed. The concatenation of these histograms yields the HOG descriptor; each block histogram is mapped onto 64 bins to limit the size of the feature vector. The full image is thus represented by a feature vector of 1,920 features.

Figure 1. Overview of the system.
Figure 2. (a) The image is divided into 30 small blocks. (b) The size of the blocks changes when an expression occurs.

After extracting HOG features, Principal Component Analysis (PCA) is applied to decrease the dimensionality of the feature space. PCA is a mathematical method that transforms a number of correlated variables into a (possibly) smaller set of uncorrelated variables called principal components. The number of principal components retained is chosen by the percentage of cumulative energy covered; the threshold is set to 98% of the total energy. After applying PCA, the high-dimensional feature vector is replaced by a lower-dimensional one that retains enough information to describe the data and speeds up classification.

2.4. Feature Projection

Large Margin Nearest Neighbor (LMNN) is a method proposed by Weinberger et al. [16]. It learns a Mahalanobis distance metric for K-Nearest Neighbor (KNN) classification. The KNN rule is a simple and effective computer vision method for multi-way classification, and its performance depends strongly on the distance metric employed.
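The feature pipeline described above (block-wise histograms of gradient orientations, then PCA retaining 98% of the cumulative energy) can be sketched as follows. The sketch uses a fixed 5 x 6 grid rather than the landmark-driven blocks of the paper, so it is an approximation of the pipeline, not a reproduction; all names are illustrative.

```python
import numpy as np

def block_hog(gray, grid=(5, 6), bins=64):
    """Orientation histograms per block, concatenated: a 5 x 6 grid with
    64 bins gives 30 * 64 = 1920 features, matching the sizes in the text."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)          # unsigned orientation
    h, w = gray.shape
    bh, bw = h // grid[0], w // grid[1]
    feats = []
    for r in range(grid[0]):
        for c in range(grid[1]):
            sl = np.s_[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            hist, _ = np.histogram(ang[sl], bins=bins, range=(0.0, np.pi),
                                   weights=mag[sl])
            feats.append(hist / (np.linalg.norm(hist) + 1e-9))  # normalize block
    return np.concatenate(feats)

def pca_energy(X, energy=0.98):
    """Project rows of X onto the fewest principal components whose
    cumulative energy covers the given fraction (98% in the paper)."""
    Xc = X - X.mean(axis=0)
    _, s, vt = np.linalg.svd(Xc, full_matrices=False)
    covered = np.cumsum(s ** 2) / np.sum(s ** 2)
    k = int(np.searchsorted(covered, energy)) + 1
    return Xc @ vt[:k].T
```

In practice, library implementations (e.g., an off-the-shelf HOG extractor and PCA routine) would replace these helpers; the sketch only makes the block-and-bin arithmetic concrete.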
In the common KNN rule, Euclidean distance is used to compute the distance between sample points in the feature space. Without loss of generality, a Mahalanobis metric can be viewed as a linear transformation applied to all points of the input feature space, followed by Euclidean-based KNN classification. The goal of learning a new distance metric is to push impostors (points from different classes) out of a local margin around each sample, and to attract target points (neighbors from the same class) by penalizing their distance to the sample. Thereby, the KNN classification accuracy improves. The Mahalanobis metric is learned by solving a convex semi-definite program. Assume a set of feature vectors $x_i \in \mathbb{R}^d$, $i = 1, \dots, n$, along with their labels $y_i$ and the target neighbors of each point. A linear map $L$ is learned so that the squared distance

$$D_L(x_i, x_j) = \| L (x_i - x_j) \|^2 \qquad (1)$$

improves the accuracy of KNN. The binary variable $\eta_{ij} \in \{0, 1\}$ identifies whether $x_j$ is a target neighbor of $x_i$. To pull target neighbors close, $L$ should minimize the objective

$$\varepsilon_{\mathrm{pull}}(L) = \sum_{i,j} \eta_{ij} \, \| L (x_i - x_j) \|^2. \qquad (2)$$

To keep target neighbors closer than impostor neighbors, say $x_l$, the following function minimizes the sum of the standard hinge loss over the triples $(x_i, x_j, x_l)$:

$$\varepsilon_{\mathrm{push}}(L) = \sum_{i,j} \sum_{l} \eta_{ij} \, (1 - y_{il}) \left[ 1 + \| L (x_i - x_j) \|^2 - \| L (x_i - x_l) \|^2 \right]_+ \qquad (3)$$

In Equation (3), $y_{il}$ is 1 if and only if $y_i = y_l$, and $[z]_+ = \max(z, 0)$ denotes the standard hinge loss. The hinge loss is zero whenever $\| L (x_i - x_l) \|^2 \ge 1 + \| L (x_i - x_j) \|^2$, i.e., there is a margin of 1 between target and impostor neighbors. The total cost function is

$$\varepsilon(L) = (1 - \mu) \, \varepsilon_{\mathrm{pull}}(L) + \mu \, \varepsilon_{\mathrm{push}}(L),$$

where $0 < \mu < 1$ is a constant. This minimization can be formulated and solved as a convex SDP, since $M = L^{\top} L \succeq 0$. More details on solving the objective function are provided in [16]. To get a better classification rate, we applied LMNN to our feature vectors and learned the matrix $L$, which is used to project all data points into a new feature space. With this projection, the data points are rearranged so that the performance of the classifier in the next phase of the system improves. In our framework, we used LMNN before either the KNN classifier or the SVM; according to our experiments, applying LMNN improves the performance of both.

2.5. Classifiers

To test and compare the performance of the system with different classifiers, we used the K-Nearest Neighbor and SVM classifiers. After projecting the PCA-reduced features into action-unit subspaces, the problem is to measure the intensity of action units on the six levels described above. This is a multi-class classification problem, and we employ KNN and SVM classifiers to solve it. A brief explanation of each follows.

2.5.1. K-Nearest Neighbor (KNN) Classifier

KNN is a simple pattern recognition rule that classifies a new point among a number of labeled points. KNN assigns the label of the new point based on a selected number of its nearest neighbors, which are defined according to their distance to the new point. Different distance metrics can be considered; the most common is the Euclidean distance, applied in many KNN classifiers.
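Both pieces just described, the LMNN objective of Equations (1)-(3) and the plain Euclidean KNN rule, can be sketched directly in NumPy. This is a didactic sketch with illustrative names: the LMNN helper only evaluates the objective for a given $L$ (the actual optimization is a semi-definite program, as described by Weinberger et al.), and a library implementation would be used in practice.

```python
import numpy as np
from collections import Counter

def lmnn_loss(L, X, y, targets, mu=0.5):
    """Evaluate eps(L) = (1 - mu) * eps_pull + mu * eps_push (Equations 1-3)
    for a linear map L; targets[i] lists the target-neighbor indices of x_i."""
    Z = X @ L.T                                  # project all points with L
    d = lambda a, b: np.sum((Z[a] - Z[b]) ** 2)  # squared distance, Eq. (1)
    pull = push = 0.0
    for i, nbrs in enumerate(targets):
        for j in nbrs:
            pull += d(i, j)                      # pull term, Eq. (2)
            for l in range(len(X)):
                if y[l] != y[i]:                 # impostor: (1 - y_il) = 1
                    push += max(0.0, 1.0 + d(i, j) - d(i, l))  # hinge, Eq. (3)
    return (1 - mu) * pull + mu * push

def knn_predict(X_train, y_train, x, k=3):
    """Plain Euclidean KNN: label x by majority vote of its k nearest points."""
    nearest = np.argsort(np.linalg.norm(X_train - x, axis=1))[:k]
    return Counter(y_train[i] for i in nearest).most_common(1)[0][0]
```

Projecting features with the learned $L$ before calling the KNN rule is exactly the "feature projection then classification" split used in the framework.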
We used and compared the performance of the KNN classifier with the other classifiers in our experiments.

2.5.2. Support Vector Machine (SVM)

To measure the intensity of AUs, we also used SVM classifiers, which are widely used in pattern recognition and machine learning and have been shown to perform well in facial expression and facial action unit recognition [11, 17]. Technical details of the SVM classifier can be found in [18]. In brief, an SVM uses kernel functions to map data that are not linearly separable into a higher-dimensional feature space where linear methods apply. Different kernel functions provide this mapping, such as polynomial and Radial Basis Function (RBF) kernels; the RBF kernel showed the best performance in our experiments. Since our goal is to classify frames into six classes of AU intensity, we applied the one-against-one approach [19] to turn binary SVM classifiers into a multi-class classifier. For our experiments, the LIBSVM library is used [20]. The ground truth for this system is provided by an expert FACS coder who manually labeled the AU intensities frame by frame on a 0-5 scale for each AU.

3. EXPERIMENTAL RESULTS

Our experiments are performed on a parent-infant dataset. In this study, videos of facial expressions of twelve 6-month-old infants are considered. The videos were recorded at 30 frames per second during face-to-face interaction of mother and infant. The goal is to recognize and measure the intensity of the infants' facial expressions. There are 163,380 manually coded video frames for the 12 subjects; these frames are tracked by the AAM and used in this paper. Manual coding identified 1,163 discriminable AU events across all of the AUs: 531 instances of AU6, 398 instances of AU12, and 234 instances of AU20. An instance is defined as a transition from no AU to any intensity of 1 or greater.
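With 12 subjects, evaluation uses leave-one-subject-out cross-validation, as noted in the abstract. The subject-wise splitting can be sketched as follows (an illustrative helper, not the authors' code):

```python
import numpy as np

def leave_one_subject_out(subject_ids):
    """Yield (train_idx, test_idx) index pairs, holding out one subject's
    frames at a time so no subject appears in both train and test sets."""
    subject_ids = np.asarray(subject_ids)
    for s in np.unique(subject_ids):
        test = np.where(subject_ids == s)[0]
        train = np.where(subject_ids != s)[0]
        yield train, test
```

Splitting by subject rather than by frame prevents a classifier from exploiting the appearance of a subject it has already seen, which matters for per-subject reliability figures like those in Table I.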
In order to provide the ground truth for training and testing the proposed system, an expert FACS/BabyFACS coder manually labeled all frames with the intensity of AU6, AU12 and AU20. As mentioned, class 0 indicates the absence of an AU and classes 1-5 correspond to the five intensity levels of FACS. First, the face in each frame is tracked by the AAM and registered to a canonical view using Procrustes analysis. The registered faces are cropped and saved in the dataset. Then, the HOG features are extracted from each image: the images are segmented into 30 small blocks, the HOG is computed for each block, each histogram is mapped onto 64 bins, and the histograms are concatenated to build a feature vector of size 1,920. In the next step, PCA, an unsupervised algorithm for dimensionality reduction, is used to replace the high-dimensional feature vector with a smaller one of 218 elements. For each AU (i.e., AU6, AU12 and AU20), a mapping function is learned using the supervised LMNN method to project the features into a new feature space more specific to that AU. In our previous work [24], we tuned and aligned a manifold for each subject and used them for dimensionality reduction and AU classification with support vector machines, whereas in this study we learned one generic manifold for all subjects and used it for dimensionality reduction and then AU classification. Then, the resulting

feature vectors are used to train the KNN and SVM classifiers.

To measure the recognition power of the aforementioned methods against manual FACS coding, we used Intra-Class Correlation (ICC) values. The ICC lies in the range 0 to 1 and is regarded as a good measure of correlation and conformity for data sets with multiple labels. ICC quantifies the degree to which related measurements resemble each other. When n targets are rated by k judges (here k = 2 and n = 6), the ICC can be defined as in Equation (4):

$$\mathrm{ICC} = \frac{BMS - EMS}{BMS + (k - 1)\, EMS} \qquad (4)$$

BMS is the between-targets mean squares, and EMS is the residual mean squares, as defined by Analysis of Variance (ANOVA). In Table I, the ICC values for AU6, AU12 and AU20 using the different dimensionality reduction methods and classifiers are shown. Each row corresponds to one subject, and three main columns correspond to the three AUs. For each AU, two dimensionality reduction methods (PCA+LMNN and Laplacian Eigenmap) are tested; then two classifiers (KNN and SVM) are applied to the data resulting from each dimensionality reduction method.

TABLE I. THE ICC VALUES FOR 12 SUBJECTS, USING DIFFERENT DIMENSIONALITY REDUCTION METHODS (I.E. PCA+LMNN, LAPLACIAN EIGENMAP), AND DIFFERENT CLASSIFIERS (I.E. KNN AND SVM). THE ICC VALUES ARE REPORTED FOR 3 AUS (I.E. AU6, AU12 AND AU20).

By comparing different methods for measuring the intensity of AUs, we can both test the performance of the methods and evaluate our dataset. As an example, consider the ICC values for AU12 in subject 4: all methods returned 0 as an output, which shows there are not enough samples for subject 4 in which AU12 occurred (only 12 samples with intensity equal to 1, compared to the overall samples for this subject).
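Equation (4) can be computed from the two-way ANOVA mean squares. The following sketch (illustrative names, assuming an n x k matrix of ratings with n targets and k judges) makes the computation concrete:

```python
import numpy as np

def icc_consistency(R):
    """ICC = (BMS - EMS) / (BMS + (k - 1) * EMS) for an n x k ratings
    matrix R, using two-way ANOVA mean squares."""
    n, k = R.shape
    g = R.mean()                                              # grand mean
    bms = k * np.sum((R.mean(axis=1) - g) ** 2) / (n - 1)     # between targets
    jms = n * np.sum((R.mean(axis=0) - g) ** 2) / (k - 1)     # between judges
    ss_total = np.sum((R - g) ** 2)
    ems = (ss_total - bms * (n - 1) - jms * (k - 1)) / ((n - 1) * (k - 1))
    return (bms - ems) / (bms + (k - 1) * ems)
```

Perfect agreement between the two judges yields an ICC of 1; values near 0 indicate that disagreement is as large as the between-targets variation.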
As a consequence, even stronger classifiers cannot achieve higher ICC values for this subject. The same situation occurs for subject 6 on AU20. To better compare the performance of the different methods, Table II reports the overall ICC values. According to the table, each method has pros and cons; however, the combination of PCA+LMNN as the dimensionality reduction method and SVM as the classifier leads to the highest ICC values.

TABLE II. OVERALL ICC VALUES FOR ALL SUBJECTS.

Table III gives the number of occurrences of each AU per subject. As the table shows, for some subjects the number of AU occurrences is very low (e.g., only 12 instances of AU12 for subject 4), which makes interpretation of the reliability difficult due to the lack of sufficient samples.

TABLE III. NUMBER OF OCCURRENCES (INTENSITY GREATER THAN 0) OF EACH AU FOR 12 SUBJECTS. THE LAST COLUMN INDICATES THE TOTAL NUMBER OF FRAMES FOR EACH SUBJECT.

4. CONCLUSION AND FUTURE WORK

In this paper, we studied all combinations of two classifiers and two dimensionality reduction methods to

automatically measure the intensity of spontaneous facial expressions of infants interacting with their mothers. Videos of face-to-face interaction of twelve infants with their mothers were recorded, and specific Action Units related to positive and negative expressions were manually coded by expert FACS coders. The AAM is used to track facial features and register them to a canonical view. HOG descriptors are used to code local spatial patterns and shapes from the distribution of intensity gradients and edge orientations. To reduce the dimensionality of the HOG features, two approaches are tested and compared: Principal Components Analysis with Large Margin Nearest Neighbor (PCA+LMNN), and the Laplacian Eigenmap. Two classifiers are also compared, K-Nearest Neighbor and SVM: the KNN classifier is regarded as more compatible with the LMNN method, and the SVM is known as a powerful classifier for facial expression recognition. The four combinations of dimensionality reduction and classifier were compared in order to find the best automated system. On the consistency measure (i.e., the ICC value), PCA+LMNN with the SVM classifier gave the best results. To improve the reliability of our system, in future work we will study the effect of head pose on measuring the intensity of AUs, which may improve the low reliability of the AU12 measurement.

ACKNOWLEDGMENT

This research was supported by grant BCS from the National Science Foundation.

REFERENCES

[1] I. Cohen, N. Sebe, L. Chen, A. Garg, and T. S. Huang, "Facial expression recognition from video sequences: Temporal and static modelling," Computer Vision and Image Understanding, vol. 91, 2003.
[2] P. Ekman, W. V. Friesen, and J. C. Hager, Facial Action Coding System: The Manual on CD-ROM.
[3] H. Oster, Baby FACS: Facial Action Coding System for Infants and Young Children (unpublished monograph and coding manual). New York, NY: New York University.
[4] K. Kaye and A. Fogel, "The temporal structure of face-to-face communication between mothers and infants," Developmental Psychology, vol. 16, 1980.
[5] J. F. Cohn, K. Schmidt, R. Gross, and P. Ekman, "Individual differences in facial expression: Stability over time, relation to self-reported emotion, and ability to inform person identification," Proceedings of the International Conference on Multimodal User Interfaces.
[6] B. Fasel and J. Luettin, "Automatic facial expression analysis: A survey," Pattern Recognition, 2003.
[7] M. Bartlett, G. Littlewort, I. Fasel, J. Chenu, and J. Movellan, "Fully automatic facial action recognition in spontaneous behavior," Proc. Seventh IEEE Int'l Conf. Automatic Face and Gesture Recognition, 2006.
[8] Y. Zhu, F. De la Torre, J. F. Cohn, and Y.-J. Zhang, "Dynamic cascades with bidirectional bootstrapping for action unit detection in spontaneous facial behavior," IEEE Transactions on Affective Computing, 2011.
[9] Y. Tian, J. F. Cohn, and T. Kanade, "Facial expression analysis," in Handbook of Face Recognition, Springer.
[10] M. Pantic and L. Rothkrantz, "Facial action recognition for facial expression analysis from static face images," IEEE Trans. Systems, Man, and Cybernetics, vol. 34, no. 3, 2004.
[11] M. H. Mahoor, S. Cadavid, D. S. Messinger, and J. F. Cohn, "A framework for automated measurement of the intensity of non-posed facial action units," 2nd IEEE Workshop on CVPR for Human Communicative Behavior Analysis (CVPR4HB), Miami Beach, June 25.
[12] D. S. Messinger, M. Mahoor, S. Chow, and J. F. Cohn, "Automated measurement of facial expression in infant-mother interaction: A pilot study," Infancy.
[13] Y. Hu, Z. Zeng, L. Yin, X. Wei, X. Zhou, and T. S. Huang, "Multi-view facial expression recognition," IEEE International Conference on Automatic Face & Gesture Recognition, 2008, pp. 1-6.
[14] M. Dahmane and J. Meunier, "Emotion recognition using dynamic grid-based HOG features," Automatic Face & Gesture Recognition and Workshops, 2011.
[15] N. Zaker, M. H. Mahoor, W. I. Mattson, D. S. Messinger, and J. F. Cohn, "Intensity measurement of spontaneous facial actions: Evaluation of different image representations," Proceedings of the IEEE International Conference on Development and Learning and Epigenetic Robotics, November.
[16] K. Q. Weinberger, J. Blitzer, and L. K. Saul, "Distance metric learning for large margin nearest neighbor classification," Advances in Neural Information Processing Systems (NIPS).
[17] S. Lucey, A. B. Ashraf, and J. F. Cohn, "Investigating spontaneous facial action recognition through AAM representations of the face," in K. Kurihara, ed., Face Recognition Book, Pro Literatur Verlag, Mammendorf, Germany, April.
[18] V. Vapnik, S. E. Golowich, and A. J. Smola, "Support vector method for function approximation, regression estimation and signal processing," in M. Mozer, M. I. Jordan, and T. Petsche, eds., Proceedings of the 1996 Neural Information Processing Systems Conference, December 2-5, 1996, Denver, CO, USA.
[19] C.-W. Hsu, C.-C. Chang, and C.-J. Lin, "A practical guide to support vector classification," technical report, Department of Computer Science, National Taiwan University.
[20] C.-C. Chang and C.-J. Lin, "LIBSVM: a library for support vector machines," cjlin/libsvm/.
[21] T. Cootes, D. Cooper, C. Taylor, and J. Graham, "Active shape models: their training and application," Computer Vision and Image Understanding, 1995.
[22] I. Matthews and S. Baker, "Active appearance models revisited," International Journal of Computer Vision, 2004.
[23] T. Kanade, J. F. Cohn, and Y. L. Tian, "Comprehensive database for facial expression analysis," Proceedings of the 4th IEEE International Conference on Automatic Face and Gesture Recognition (FG'00), 2000.
[24] D. S. Messinger, W. I. Mattson, M. H. Mahoor, and J. F. Cohn, "The eyes have it: Making positive expressions more positive and negative expressions more negative," Emotion, in press.


More information

Facial-component-based Bag of Words and PHOG Descriptor for Facial Expression Recognition

Facial-component-based Bag of Words and PHOG Descriptor for Facial Expression Recognition Proceedings of the 2009 IEEE International Conference on Systems, Man, and Cybernetics San Antonio, TX, USA - October 2009 Facial-component-based Bag of Words and PHOG Descriptor for Facial Expression

More information

Fully Automatic Facial Action Recognition in Spontaneous Behavior

Fully Automatic Facial Action Recognition in Spontaneous Behavior Fully Automatic Facial Action Recognition in Spontaneous Behavior Marian Stewart Bartlett 1, Gwen Littlewort 1, Mark Frank 2, Claudia Lainscsek 1, Ian Fasel 1, Javier Movellan 1 1 Institute for Neural

More information

DA Progress report 2 Multi-view facial expression. classification Nikolas Hesse

DA Progress report 2 Multi-view facial expression. classification Nikolas Hesse DA Progress report 2 Multi-view facial expression classification 16.12.2010 Nikolas Hesse Motivation Facial expressions (FE) play an important role in interpersonal communication FE recognition can help

More information

AAM Derived Face Representations for Robust Facial Action Recognition

AAM Derived Face Representations for Robust Facial Action Recognition AAM Derived Face Representations for Robust Facial Action Recognition Simon Lucey, Iain Matthews, Changbo Hu, Zara Ambadar, Fernando de la Torre, Jeffrey Cohn Robotics Institute, Carnegie Mellon University

More information

Dynamic Facial Expression Recognition Using A Bayesian Temporal Manifold Model

Dynamic Facial Expression Recognition Using A Bayesian Temporal Manifold Model Dynamic Facial Expression Recognition Using A Bayesian Temporal Manifold Model Caifeng Shan, Shaogang Gong, and Peter W. McOwan Department of Computer Science Queen Mary University of London Mile End Road,

More information

The Painful Face Pain Expression Recognition Using Active Appearance Models

The Painful Face Pain Expression Recognition Using Active Appearance Models The Painful Face Pain Expression Recognition Using Active Appearance Models Ahmed Bilal Ashraf bilal@cmu.edu Ken Prkachin kmprk@unbc.ca Tsuhan Chen tsuhan@cmu.edu Simon Lucey slucey@cs.cmu.edu Patty Solomon

More information

Recognizing Partial Facial Action Units Based on 3D Dynamic Range Data for Facial Expression Recognition

Recognizing Partial Facial Action Units Based on 3D Dynamic Range Data for Facial Expression Recognition Recognizing Partial Facial Action Units Based on 3D Dynamic Range Data for Facial Expression Recognition Yi Sun, Michael Reale, and Lijun Yin Department of Computer Science, State University of New York

More information

COMPOUND LOCAL BINARY PATTERN (CLBP) FOR PERSON-INDEPENDENT FACIAL EXPRESSION RECOGNITION

COMPOUND LOCAL BINARY PATTERN (CLBP) FOR PERSON-INDEPENDENT FACIAL EXPRESSION RECOGNITION COMPOUND LOCAL BINARY PATTERN (CLBP) FOR PERSON-INDEPENDENT FACIAL EXPRESSION RECOGNITION Priyanka Rani 1, Dr. Deepak Garg 2 1,2 Department of Electronics and Communication, ABES Engineering College, Ghaziabad

More information

Face Recognition Using Vector Quantization Histogram and Support Vector Machine Classifier Rong-sheng LI, Fei-fei LEE *, Yan YAN and Qiu CHEN

Face Recognition Using Vector Quantization Histogram and Support Vector Machine Classifier Rong-sheng LI, Fei-fei LEE *, Yan YAN and Qiu CHEN 2016 International Conference on Artificial Intelligence: Techniques and Applications (AITA 2016) ISBN: 978-1-60595-389-2 Face Recognition Using Vector Quantization Histogram and Support Vector Machine

More information

Image Coding with Active Appearance Models

Image Coding with Active Appearance Models Image Coding with Active Appearance Models Simon Baker, Iain Matthews, and Jeff Schneider CMU-RI-TR-03-13 The Robotics Institute Carnegie Mellon University Abstract Image coding is the task of representing

More information

Learning to Recognize Faces in Realistic Conditions

Learning to Recognize Faces in Realistic Conditions 000 001 002 003 004 005 006 007 008 009 010 011 012 013 014 015 016 017 018 019 020 021 022 023 024 025 026 027 028 029 030 031 032 033 034 035 036 037 038 039 040 041 042 043 044 045 046 047 048 049 050

More information

Selecting Models from Videos for Appearance-Based Face Recognition

Selecting Models from Videos for Appearance-Based Face Recognition Selecting Models from Videos for Appearance-Based Face Recognition Abdenour Hadid and Matti Pietikäinen Machine Vision Group Infotech Oulu and Department of Electrical and Information Engineering P.O.

More information

Boosting Coded Dynamic Features for Facial Action Units and Facial Expression Recognition

Boosting Coded Dynamic Features for Facial Action Units and Facial Expression Recognition Boosting Coded Dynamic Features for Facial Action Units and Facial Expression Recognition Peng Yang Qingshan Liu,2 Dimitris N. Metaxas Computer Science Department, Rutgers University Frelinghuysen Road,

More information

COSC160: Detection and Classification. Jeremy Bolton, PhD Assistant Teaching Professor

COSC160: Detection and Classification. Jeremy Bolton, PhD Assistant Teaching Professor COSC160: Detection and Classification Jeremy Bolton, PhD Assistant Teaching Professor Outline I. Problem I. Strategies II. Features for training III. Using spatial information? IV. Reducing dimensionality

More information

Partial Least Squares Regression on Grassmannian Manifold for Emotion Recognition

Partial Least Squares Regression on Grassmannian Manifold for Emotion Recognition Emotion Recognition In The Wild Challenge and Workshop (EmotiW 2013) Partial Least Squares Regression on Grassmannian Manifold for Emotion Recognition Mengyi Liu, Ruiping Wang, Zhiwu Huang, Shiguang Shan,

More information

Recognition of Facial Action Units with Action Unit Classifiers and An Association Network

Recognition of Facial Action Units with Action Unit Classifiers and An Association Network Recognition of Facial Action Units with Action Unit Classifiers and An Association Network Junkai Chen 1, Zenghai Chen 1, Zheru Chi 1 and Hong Fu 1,2 1 Department of Electronic and Information Engineering,

More information

Convolutional Neural Networks for Facial Expression Recognition

Convolutional Neural Networks for Facial Expression Recognition Convolutional Neural Networks for Facial Expression Recognition Shima Alizadeh Stanford University shima86@stanford.edu Azar Fazel Stanford University azarf@stanford.edu Abstract In this project, we have

More information

Learning based face hallucination techniques: A survey

Learning based face hallucination techniques: A survey Vol. 3 (2014-15) pp. 37-45. : A survey Premitha Premnath K Department of Computer Science & Engineering Vidya Academy of Science & Technology Thrissur - 680501, Kerala, India (email: premithakpnath@gmail.com)

More information

LBP Based Facial Expression Recognition Using k-nn Classifier

LBP Based Facial Expression Recognition Using k-nn Classifier ISSN 2395-1621 LBP Based Facial Expression Recognition Using k-nn Classifier #1 Chethan Singh. A, #2 Gowtham. N, #3 John Freddy. M, #4 Kashinath. N, #5 Mrs. Vijayalakshmi. G.V 1 chethan.singh1994@gmail.com

More information

Multiple Kernel Learning for Emotion Recognition in the Wild

Multiple Kernel Learning for Emotion Recognition in the Wild Multiple Kernel Learning for Emotion Recognition in the Wild Karan Sikka, Karmen Dykstra, Suchitra Sathyanarayana, Gwen Littlewort and Marian S. Bartlett Machine Perception Laboratory UCSD EmotiW Challenge,

More information

A Novel Feature Extraction Technique for Facial Expression Recognition

A Novel Feature Extraction Technique for Facial Expression Recognition www.ijcsi.org 9 A Novel Feature Extraction Technique for Facial Expression Recognition *Mohammad Shahidul Islam 1, Surapong Auwatanamongkol 2 1 Department of Computer Science, School of Applied Statistics,

More information

Real time facial expression recognition from image sequences using Support Vector Machines

Real time facial expression recognition from image sequences using Support Vector Machines Real time facial expression recognition from image sequences using Support Vector Machines I. Kotsia a and I. Pitas a a Aristotle University of Thessaloniki, Department of Informatics, Box 451, 54124 Thessaloniki,

More information

Automated Facial Expression Recognition System

Automated Facial Expression Recognition System Automated Facial Expression Recognition System Andrew Ryan Naval Criminal Investigative Services, NCIS Washington, DC United States andrew.h.ryan@navy.mil Jeffery F. Cohn, Simon Lucey, Jason Saragih, Patrick

More information

Probabilistic Facial Feature Extraction Using Joint Distribution of Location and Texture Information

Probabilistic Facial Feature Extraction Using Joint Distribution of Location and Texture Information Probabilistic Facial Feature Extraction Using Joint Distribution of Location and Texture Information Mustafa Berkay Yilmaz, Hakan Erdogan, Mustafa Unel Sabanci University, Faculty of Engineering and Natural

More information

Face analysis : identity vs. expressions

Face analysis : identity vs. expressions Face analysis : identity vs. expressions Hugo Mercier 1,2 Patrice Dalle 1 1 IRIT - Université Paul Sabatier 118 Route de Narbonne, F-31062 Toulouse Cedex 9, France 2 Websourd 3, passage André Maurois -

More information

A Distance-Based Classifier Using Dissimilarity Based on Class Conditional Probability and Within-Class Variation. Kwanyong Lee 1 and Hyeyoung Park 2

A Distance-Based Classifier Using Dissimilarity Based on Class Conditional Probability and Within-Class Variation. Kwanyong Lee 1 and Hyeyoung Park 2 A Distance-Based Classifier Using Dissimilarity Based on Class Conditional Probability and Within-Class Variation Kwanyong Lee 1 and Hyeyoung Park 2 1. Department of Computer Science, Korea National Open

More information

Facial Expression Detection Using Implemented (PCA) Algorithm

Facial Expression Detection Using Implemented (PCA) Algorithm Facial Expression Detection Using Implemented (PCA) Algorithm Dileep Gautam (M.Tech Cse) Iftm University Moradabad Up India Abstract: Facial expression plays very important role in the communication with

More information

Image Processing Pipeline for Facial Expression Recognition under Variable Lighting

Image Processing Pipeline for Facial Expression Recognition under Variable Lighting Image Processing Pipeline for Facial Expression Recognition under Variable Lighting Ralph Ma, Amr Mohamed ralphma@stanford.edu, amr1@stanford.edu Abstract Much research has been done in the field of automated

More information

MULTI ORIENTATION PERFORMANCE OF FEATURE EXTRACTION FOR HUMAN HEAD RECOGNITION

MULTI ORIENTATION PERFORMANCE OF FEATURE EXTRACTION FOR HUMAN HEAD RECOGNITION MULTI ORIENTATION PERFORMANCE OF FEATURE EXTRACTION FOR HUMAN HEAD RECOGNITION Panca Mudjirahardjo, Rahmadwati, Nanang Sulistiyanto and R. Arief Setyawan Department of Electrical Engineering, Faculty of

More information

Facial expression recognition using shape and texture information

Facial expression recognition using shape and texture information 1 Facial expression recognition using shape and texture information I. Kotsia 1 and I. Pitas 1 Aristotle University of Thessaloniki pitas@aiia.csd.auth.gr Department of Informatics Box 451 54124 Thessaloniki,

More information

Emotion Detection System using Facial Action Coding System

Emotion Detection System using Facial Action Coding System International Journal of Engineering and Technical Research (IJETR) Emotion Detection System using Facial Action Coding System Vedant Chauhan, Yash Agrawal, Vinay Bhutada Abstract Behaviors, poses, actions,

More information

Improving Latent Fingerprint Matching Performance by Orientation Field Estimation using Localized Dictionaries

Improving Latent Fingerprint Matching Performance by Orientation Field Estimation using Localized Dictionaries Available Online at www.ijcsmc.com International Journal of Computer Science and Mobile Computing A Monthly Journal of Computer Science and Information Technology IJCSMC, Vol. 3, Issue. 11, November 2014,

More information

Fast-FACS: A Computer-Assisted System to Increase Speed and Reliability of Manual FACS Coding

Fast-FACS: A Computer-Assisted System to Increase Speed and Reliability of Manual FACS Coding Fast-FACS: A Computer-Assisted System to Increase Speed and Reliability of Manual FACS Coding Fernando De la Torre 1,TomasSimon 1, Zara Ambadar 2, and Jeffrey F. Cohn 2 1 Robotics Institute, Carnegie Mellon

More information

Facial Expressions Recognition from Image Sequences

Facial Expressions Recognition from Image Sequences Facial Expressions Recognition from Image Sequences Zahid Riaz, Christoph Mayer, Michael Beetz and Bernd Radig Department of Informatics, Technische Universität München, D-85748 Garching, Germany Abstract.

More information

Recognizing Facial Expression: Machine Learning and Application to Spontaneous Behavior

Recognizing Facial Expression: Machine Learning and Application to Spontaneous Behavior Computer Vision and Pattern Recognition 2005 Recognizing Facial Expression: Machine Learning and Application to Spontaneous Behavior Marian Stewart Bartlett 1, Gwen Littlewort 1, Mark Frank 2, Claudia

More information

Model Based Analysis of Face Images for Facial Feature Extraction

Model Based Analysis of Face Images for Facial Feature Extraction Model Based Analysis of Face Images for Facial Feature Extraction Zahid Riaz, Christoph Mayer, Michael Beetz, and Bernd Radig Technische Universität München, Boltzmannstr. 3, 85748 Garching, Germany {riaz,mayerc,beetz,radig}@in.tum.de

More information

Facial expression recognition based on two-step feature histogram optimization Ling Gana, Sisi Sib

Facial expression recognition based on two-step feature histogram optimization Ling Gana, Sisi Sib 3rd International Conference on Materials Engineering, Manufacturing Technology and Control (ICMEMTC 201) Facial expression recognition based on two-step feature histogram optimization Ling Gana, Sisi

More information

Evaluation of Expression Recognition Techniques

Evaluation of Expression Recognition Techniques Evaluation of Expression Recognition Techniques Ira Cohen 1, Nicu Sebe 2,3, Yafei Sun 3, Michael S. Lew 3, Thomas S. Huang 1 1 Beckman Institute, University of Illinois at Urbana-Champaign, USA 2 Faculty

More information

Meticulously Detailed Eye Model and Its Application to Analysis of Facial Image

Meticulously Detailed Eye Model and Its Application to Analysis of Facial Image Meticulously Detailed Eye Model and Its Application to Analysis of Facial Image Tsuyoshi Moriyama Keio University moriyama@ozawa.ics.keio.ac.jp Jing Xiao Carnegie Mellon University jxiao@cs.cmu.edu Takeo

More information

Person-Independent Facial Expression Recognition Based on Compound Local Binary Pattern (CLBP)

Person-Independent Facial Expression Recognition Based on Compound Local Binary Pattern (CLBP) The International Arab Journal of Information Technology, Vol. 11, No. 2, March 2014 195 Person-Independent Facial Expression Recognition Based on Compound Local Binary Pattern (CLBP) Faisal Ahmed 1, Hossain

More information

Mood detection of psychological and mentally disturbed patients using Machine Learning techniques

Mood detection of psychological and mentally disturbed patients using Machine Learning techniques IJCSNS International Journal of Computer Science and Network Security, VOL.16 No.8, August 2016 63 Mood detection of psychological and mentally disturbed patients using Machine Learning techniques Muhammad

More information

Edge Detection for Facial Expression Recognition

Edge Detection for Facial Expression Recognition Edge Detection for Facial Expression Recognition Jesús García-Ramírez, Ivan Olmos-Pineda, J. Arturo Olvera-López, Manuel Martín Ortíz Faculty of Computer Science, Benemérita Universidad Autónoma de Puebla,

More information

Facial Expression Recognition for HCI Applications

Facial Expression Recognition for HCI Applications acial Expression Recognition for HCI Applications adi Dornaika Institut Géographique National, rance Bogdan Raducanu Computer Vision Center, Spain INTRODUCTION acial expression plays an important role

More information

Human Motion Detection and Tracking for Video Surveillance

Human Motion Detection and Tracking for Video Surveillance Human Motion Detection and Tracking for Video Surveillance Prithviraj Banerjee and Somnath Sengupta Department of Electronics and Electrical Communication Engineering Indian Institute of Technology, Kharagpur,

More information

DISTANCE MAPS: A ROBUST ILLUMINATION PREPROCESSING FOR ACTIVE APPEARANCE MODELS

DISTANCE MAPS: A ROBUST ILLUMINATION PREPROCESSING FOR ACTIVE APPEARANCE MODELS DISTANCE MAPS: A ROBUST ILLUMINATION PREPROCESSING FOR ACTIVE APPEARANCE MODELS Sylvain Le Gallou*, Gaspard Breton*, Christophe Garcia*, Renaud Séguier** * France Telecom R&D - TECH/IRIS 4 rue du clos

More information

Active Appearance Models

Active Appearance Models Active Appearance Models Edwards, Taylor, and Cootes Presented by Bryan Russell Overview Overview of Appearance Models Combined Appearance Models Active Appearance Model Search Results Constrained Active

More information

Object Tracking using HOG and SVM

Object Tracking using HOG and SVM Object Tracking using HOG and SVM Siji Joseph #1, Arun Pradeep #2 Electronics and Communication Engineering Axis College of Engineering and Technology, Ambanoly, Thrissur, India Abstract Object detection

More information

AN EXAMINING FACE RECOGNITION BY LOCAL DIRECTIONAL NUMBER PATTERN (Image Processing)

AN EXAMINING FACE RECOGNITION BY LOCAL DIRECTIONAL NUMBER PATTERN (Image Processing) AN EXAMINING FACE RECOGNITION BY LOCAL DIRECTIONAL NUMBER PATTERN (Image Processing) J.Nithya 1, P.Sathyasutha2 1,2 Assistant Professor,Gnanamani College of Engineering, Namakkal, Tamil Nadu, India ABSTRACT

More information

Intensity Rank Estimation of Facial Expressions Based on A Single Image

Intensity Rank Estimation of Facial Expressions Based on A Single Image 2013 IEEE International Conference on Systems, Man, and Cybernetics Intensity Rank Estimation of Facial Expressions Based on A Single Image Kuang-Yu Chang ;, Chu-Song Chen : and Yi-Ping Hung ; Institute

More information

Multiview Pedestrian Detection Based on Online Support Vector Machine Using Convex Hull

Multiview Pedestrian Detection Based on Online Support Vector Machine Using Convex Hull Multiview Pedestrian Detection Based on Online Support Vector Machine Using Convex Hull Revathi M K 1, Ramya K P 2, Sona G 3 1, 2, 3 Information Technology, Anna University, Dr.Sivanthi Aditanar College

More information

IMPROVED FACE RECOGNITION USING ICP TECHNIQUES INCAMERA SURVEILLANCE SYSTEMS. Kirthiga, M.E-Communication system, PREC, Thanjavur

IMPROVED FACE RECOGNITION USING ICP TECHNIQUES INCAMERA SURVEILLANCE SYSTEMS. Kirthiga, M.E-Communication system, PREC, Thanjavur IMPROVED FACE RECOGNITION USING ICP TECHNIQUES INCAMERA SURVEILLANCE SYSTEMS Kirthiga, M.E-Communication system, PREC, Thanjavur R.Kannan,Assistant professor,prec Abstract: Face Recognition is important

More information

A Taxonomy of Semi-Supervised Learning Algorithms

A Taxonomy of Semi-Supervised Learning Algorithms A Taxonomy of Semi-Supervised Learning Algorithms Olivier Chapelle Max Planck Institute for Biological Cybernetics December 2005 Outline 1 Introduction 2 Generative models 3 Low density separation 4 Graph

More information

FACIAL EXPRESSION RECOGNITION USING DIGITALISED FACIAL FEATURES BASED ON ACTIVE SHAPE MODEL

FACIAL EXPRESSION RECOGNITION USING DIGITALISED FACIAL FEATURES BASED ON ACTIVE SHAPE MODEL FACIAL EXPRESSIO RECOGITIO USIG DIGITALISED FACIAL FEATURES BASED O ACTIVE SHAPE MODEL an Sun 1, Zheng Chen 2 and Richard Day 3 Institute for Arts, Science & Technology Glyndwr University Wrexham, United

More information

Action Unit Intensity Estimation using Hierarchical Partial Least Squares

Action Unit Intensity Estimation using Hierarchical Partial Least Squares Action Unit Intensity Estimation using Hierarchical Partial Least Squares Tobias Gehrig,, Ziad Al-Halah,, Hazım Kemal Ekenel, Rainer Stiefelhagen Institute for Anthropomatics and Robotics, Karlsruhe Institute

More information

Real-time Automatic Facial Expression Recognition in Video Sequence

Real-time Automatic Facial Expression Recognition in Video Sequence www.ijcsi.org 59 Real-time Automatic Facial Expression Recognition in Video Sequence Nivedita Singh 1 and Chandra Mani Sharma 2 1 Institute of Technology & Science (ITS) Mohan Nagar, Ghaziabad-201007,

More information

Action Unit Based Facial Expression Recognition Using Deep Learning

Action Unit Based Facial Expression Recognition Using Deep Learning Action Unit Based Facial Expression Recognition Using Deep Learning Salah Al-Darraji 1, Karsten Berns 1, and Aleksandar Rodić 2 1 Robotics Research Lab, Department of Computer Science, University of Kaiserslautern,

More information

Face Alignment Under Various Poses and Expressions

Face Alignment Under Various Poses and Expressions Face Alignment Under Various Poses and Expressions Shengjun Xin and Haizhou Ai Computer Science and Technology Department, Tsinghua University, Beijing 100084, China ahz@mail.tsinghua.edu.cn Abstract.

More information

Metric learning approaches! for image annotation! and face recognition!

Metric learning approaches! for image annotation! and face recognition! Metric learning approaches! for image annotation! and face recognition! Jakob Verbeek" LEAR Team, INRIA Grenoble, France! Joint work with :"!Matthieu Guillaumin"!!Thomas Mensink"!!!Cordelia Schmid! 1 2

More information

Exploring Facial Expressions with Compositional Features

Exploring Facial Expressions with Compositional Features Exploring Facial Expressions with Compositional Features Peng Yang Qingshan Liu Dimitris N. Metaxas Computer Science Department, Rutgers University Frelinghuysen Road, Piscataway, NJ 88, USA peyang@cs.rutgers.edu,

More information

Face Recognition using Laplacianfaces

Face Recognition using Laplacianfaces Journal homepage: www.mjret.in ISSN:2348-6953 Kunal kawale Face Recognition using Laplacianfaces Chinmay Gadgil Mohanish Khunte Ajinkya Bhuruk Prof. Ranjana M.Kedar Abstract Security of a system is an

More information

Computers and Mathematics with Applications. An embedded system for real-time facial expression recognition based on the extension theory

Computers and Mathematics with Applications. An embedded system for real-time facial expression recognition based on the extension theory Computers and Mathematics with Applications 61 (2011) 2101 2106 Contents lists available at ScienceDirect Computers and Mathematics with Applications journal homepage: www.elsevier.com/locate/camwa An

More information

Sketchable Histograms of Oriented Gradients for Object Detection

Sketchable Histograms of Oriented Gradients for Object Detection Sketchable Histograms of Oriented Gradients for Object Detection No Author Given No Institute Given Abstract. In this paper we investigate a new representation approach for visual object recognition. The

More information

Evaluation of Face Resolution for Expression Analysis

Evaluation of Face Resolution for Expression Analysis Evaluation of Face Resolution for Expression Analysis Ying-li Tian IBM T. J. Watson Research Center, PO Box 704, Yorktown Heights, NY 10598 Email: yltian@us.ibm.com Abstract Most automatic facial expression

More information

Estimation of Age Group using Histogram of Oriented gradients and Neural Network

Estimation of Age Group using Histogram of Oriented gradients and Neural Network International Journal Of Engineering Research And Development e-issn: 2278-067X, p-issn: 2278-800X, www.ijerd.com Volume 13, Issue 11 (November 2017), PP. 43-49 Estimation of Age Group using Histogram

More information

Spatiotemporal Features for Effective Facial Expression Recognition

Spatiotemporal Features for Effective Facial Expression Recognition Spatiotemporal Features for Effective Facial Expression Recognition Hatice Çınar Akakın and Bülent Sankur Bogazici University, Electrical & Electronics Engineering Department, Bebek, Istanbul {hatice.cinar,bulent.sankur}@boun.edu.tr

More information

Mobile Human Detection Systems based on Sliding Windows Approach-A Review

Mobile Human Detection Systems based on Sliding Windows Approach-A Review Mobile Human Detection Systems based on Sliding Windows Approach-A Review Seminar: Mobile Human detection systems Njieutcheu Tassi cedrique Rovile Department of Computer Engineering University of Heidelberg

More information

Face detection and recognition. Detection Recognition Sally

Face detection and recognition. Detection Recognition Sally Face detection and recognition Detection Recognition Sally Face detection & recognition Viola & Jones detector Available in open CV Face recognition Eigenfaces for face recognition Metric learning identification

More information

Face Recognition using SURF Features and SVM Classifier

Face Recognition using SURF Features and SVM Classifier International Journal of Electronics Engineering Research. ISSN 0975-6450 Volume 8, Number 1 (016) pp. 1-8 Research India Publications http://www.ripublication.com Face Recognition using SURF Features

More information

Facial Expression Recognition in Real Time

Facial Expression Recognition in Real Time Facial Expression Recognition in Real Time Jaya Prakash S M 1, Santhosh Kumar K L 2, Jharna Majumdar 3 1 M.Tech Scholar, Department of CSE, Nitte Meenakshi Institute of Technology, Bangalore, India 2 Assistant

More information

Cross-pose Facial Expression Recognition

Cross-pose Facial Expression Recognition Cross-pose Facial Expression Recognition Abstract In real world facial expression recognition (FER) applications, it is not practical for a user to enroll his/her facial expressions under different pose

More information

Human pose estimation using Active Shape Models

Human pose estimation using Active Shape Models Human pose estimation using Active Shape Models Changhyuk Jang and Keechul Jung Abstract Human pose estimation can be executed using Active Shape Models. The existing techniques for applying to human-body

More information

Gender Classification Technique Based on Facial Features using Neural Network

Gender Classification Technique Based on Facial Features using Neural Network Gender Classification Technique Based on Facial Features using Neural Network Anushri Jaswante Dr. Asif Ullah Khan Dr. Bhupesh Gour Computer Science & Engineering, Rajiv Gandhi Proudyogiki Vishwavidyalaya,

More information

Traffic Signs Recognition using HP and HOG Descriptors Combined to MLP and SVM Classifiers

Traffic Signs Recognition using HP and HOG Descriptors Combined to MLP and SVM Classifiers Traffic Signs Recognition using HP and HOG Descriptors Combined to MLP and SVM Classifiers A. Salhi, B. Minaoui, M. Fakir, H. Chakib, H. Grimech Faculty of science and Technology Sultan Moulay Slimane

More information

Evaluation of Gabor-Wavelet-Based Facial Action Unit Recognition in Image Sequences of Increasing Complexity

Evaluation of Gabor-Wavelet-Based Facial Action Unit Recognition in Image Sequences of Increasing Complexity Evaluation of Gabor-Wavelet-Based Facial Action Unit Recognition in Image Sequences of Increasing Complexity Ying-li Tian 1 Takeo Kanade 2 and Jeffrey F. Cohn 2,3 1 IBM T. J. Watson Research Center, PO

More information

LOCAL APPEARANCE BASED FACE RECOGNITION USING DISCRETE COSINE TRANSFORM
