Recognizing Partial Facial Action Units Based on 3D Dynamic Range Data for Facial Expression Recognition


Yi Sun, Michael Reale, and Lijun Yin
Department of Computer Science, State University of New York at Binghamton, Binghamton, New York, USA

Abstract

Research on automatic facial expression recognition has benefited from work in psychology, specifically the Facial Action Coding System (FACS). To date, most existing approaches are based primarily on 2D images or videos. With the emergence of real-time 3D dynamic imaging technologies, however, 3D dynamic facial data is now available, opening up an alternative way to detect facial action units in dynamic 3D space. In this paper, we investigate how to use this new modality to improve action unit (AU) detection. We select a subset of AUs from both the upper and lower parts of the facial area, apply the active appearance model (AAM) method, and use the correspondence between textures and range models to track pre-defined facial features across the 3D model sequences. A Hidden Markov Model (HMM) based classifier is employed to recognize the partial AUs. The experiments show that our 3D dynamic tracking based approach outperforms a comparable 2D feature tracking based approach, and the results are comparable with a method based on manually picked 3D facial features. Finally, we extend our approach to recognize the six prototypic facial expressions.

1. Introduction

In the past decade, a great deal of effort has been put into developing automatic approaches for facial expression recognition [19, 22, 7, 8, 6, 35]. Many successful approaches have utilized action unit (AU) recognition [27, 20, 33, 2, 11, 16] or motion unit (MU) detection [5, 24, 36]. Other successful approaches have concentrated on facial region features, such as manifold features [3] and facial texture features [37, 18]. Until recently, most existing systems worked on 2D images or 2D videos. With the recent advancement of 3D imaging technologies, real-time 3D dynamic imaging systems have become available; thus, there has been some work exploring 3D range data for facial expression recognition [4, 32, 28, 29]. The 3D dynamic facial representation is by nature a good reflection of facial actions. However, it has not been clear how such a representation could help improve AU recognition over the traditional 2D representation.

In this paper, we focus on facial action unit recognition using 3D dynamic range data. To do so, we created a 3D dynamic facial expression database. We used the active appearance model [9] and the correspondence between textures and range models to track 83 pre-defined facial features across the 3D model sequences. We also applied a Hidden Markov Model (HMM) based classifier to recognize the partial AUs from both the upper and the lower facial areas. To evaluate the proposed approach, we conducted a comparison study by implementing a 2D feature tracking based approach [15]. Facial expression recognition experiments were also conducted using our 3D dynamic facial expression database.

The Facial Action Coding System (FACS) [12] divides the face into upper and lower facial actions and uses action units (AUs) to describe facial muscle movements. Using FACS, we can decompose an observed facial expression into a combination of one or more of the 44 defined AUs. In this paper, we restrict our scope to recognizing the eight most commonly used AUs [15, 17, 13], which appear in either the upper or the lower facial region.
Table 1 and Figure 1 illustrate and annotate the eight action units analyzed in this paper. These eight AUs may occur during various expressions; for example, AU 1 may occur during a sad expression, AU 5 during surprise, and AU 27 during smile, disgust, or surprise.

The remainder of this paper is organized as follows. In Section 2, related work is reviewed. Our newly created dynamic 3D facial expression database is introduced briefly in Section 3. In Section 4, we describe the proposed 3D AU recognition system, followed by the experiments and performance evaluation in Section 5. Finally, a discussion with conclusions and future work is given in Section 6.

Table 1. Description of the action units used for recognition in this paper.

AU 1: Inner eyebrow raised
AU 2: Outer eyebrow raised
AU 1+2: Entire eyebrow raised
AU 4: Eyebrow lowered
AU 5: Upper lid raised
AU 27: Mouth stretched vertically
AU 20: Mouth stretched horizontally
AU 15: Lip corner depressed

Figure 1. Sample appearance of the eight AUs used in this paper. From left to right and top to bottom: AU 1, AU 2, AU 1+2, AU 4, AU 5, AU 27, AU 20, and AU 15.

2. Related Work

The AU recognition task can be decomposed into two stages: feature representation and classification. Lien et al. [15, 16] used facial feature point tracking or dense flow tracking to derive different types of facial features and used a discrete HMM classifier to recognize three upper facial action units. Tian et al. [26] used a lip tracking system and a template matching method to recognize 15 AUs. Lucey et al. [17] presented a method that used shape features, appearance features, or both to represent a face and classified spontaneous action units using four classifiers (support vector machine (SVM), nearest neighbor (NN), PCA, and LDA). Bartlett et al. [1] used Gabor filters, SVMs, and HMMs to detect three AUs. Tong et al. [27] applied an Adaboost classifier and wavelet features to produce a measurement score for each AU and then used a dynamic Bayesian network to infer additional AUs from the detected ones. Pantic et al. [21] proposed using feature point locations extracted from profile/frontal static images as the feature representation and applied decision rules to identify different AUs based on these feature points.

All of the methods listed above have shown success in recognizing action units; however, most existing feature tracking methods are based on 2D static images or 2D videos. 2D-based tracking methods confront the challenges of pose variation and expression subtlety. As an example, Figure 2 illustrates why 3D motion vectors reflect the true motion of facial features while 2D motion vectors may not be reliable in some circumstances. In Figure 2, the left two pictures show two faces with the mouth stretched horizontally in the frontal view. The right two pictures show the same faces with different poses. If the subject does not change her pose during the expression, we can trace the motion vectors of her two lip corners correctly (the distance between her two lip corners is enlarged from the first picture to the second picture). However, if the subject changes her expression from slight fear (the first picture) to strong fear while rotating her head (the fourth picture), similar lip corner motion vector lengths are measured in 2D on the two pictures (both marked as green lines). In reality, however, the magnitudes of the mouth corner motion vectors are enlarged in three-dimensional space. This example shows that, if we rely entirely on 2D motion vectors from 2D videos, we may not be able to detect the facial AUs completely or accurately. In this paper, we explore whether using 3D dynamic range data can help detect the correct motion of facial feature points for AU detection. We note that the major difference between traditional 2D data and the new 3D dynamic data is that the latter provides authentic, time-varying geometric surface information while the former does not.
Thus, the 3D data allows estimation of more complete (and genuine) motion trajectories than 2D data does. We therefore investigate 3D motion estimation and compare the results with those obtained from a 2D feature point based tracking method. We select a well-known 2D feature point tracking based approach developed by Lien et al. [15] as the baseline for comparison.

Figure 2. AU 20. Left two images: AU 20 from onset to apex in the frontal view. Right two images: the same AU as the first two images but with a different head orientation. The differently colored lines have different lengths in the 2D frontal view.
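To make the projection effect concrete, the following minimal sketch rotates a fixed 3D lip-corner displacement by a 40-degree yaw and compares its true 3D magnitude against the magnitude of its 2D image-plane projection. The displacement values are illustrative assumptions, not measurements from the paper's data:

```python
import numpy as np

# Hypothetical 3D displacement of a lip corner in mm (mostly lateral motion);
# these numbers are illustrative, not taken from the paper's data.
d3 = np.array([8.0, 1.0, 0.5])

# Rotate the head (and the displacement with it) by a 40-degree yaw.
theta = np.deg2rad(40.0)
R_yaw = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
                  [ 0.0,           1.0, 0.0          ],
                  [-np.sin(theta), 0.0, np.cos(theta)]])
d3_rot = R_yaw @ d3

# An orthographic camera only observes the (x, y) components.
d2 = d3_rot[:2]

print(f"true 3D magnitude:     {np.linalg.norm(d3):.2f} mm")  # ~8.08 mm
print(f"measured 2D magnitude: {np.linalg.norm(d2):.2f} mm")  # ~6.53 mm
```

The rotation preserves the true 3D magnitude, but the 2D projection underestimates it; this is exactly the pose-induced ambiguity that the 3D modality removes.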

3. Dynamic 3D Facial Expression Database

Figure 3. Dynamic 3D face capture system setup.

Due to the lack of publicly available 3D dynamic facial expression databases, we created a dynamic 3D face database [34] to investigate how the dynamics of a 3D model sequence with varied facial expressions and poses affect face analysis. We used the Dimensional Imaging 4D capture system [10] to acquire sequential stereo images. Figure 3 shows the dynamic 3D face capture system. Our database contains 101 subjects, each of whom performs six prototypic expressions (anger, disgust, fear, smile, sadness, and surprise), corresponding to 6 video clips. For each expression, a subject starts from the neutral expression, proceeds to the extreme expression, and returns to the neutral expression. The system captures each sequence at a rate of 25 frames per second, and each 3D video clip lasts about 4 seconds. The entire database includes 606 3D model sequences (3D videos) and the corresponding 606 2D texture videos with a variety of facial expressions. A snapshot of two sample video sequences taken from the database is shown in Figure 4 (a detailed description of the database can be found in [34]). Note that although the database includes only seven expressions (six prototypic expressions plus the neutral expression), different subjects may exhibit different action units for the same expression. To obtain the ground-truth AU labels for our experiments, we manually labeled temporal segments from each 3D video clip based on the observed action units.

4. 3D Range Data Based AU Recognition

Figure 5. Structure of the 3D facial action unit recognition system. The first row is the learning procedure, and the second row shows the recognition procedure.

Our proposed action unit (AU) recognition system is essentially a 3D feature point based approach. It uses the active appearance model and the texture-model correspondence to track 83 pre-defined feature points. Since the range model and the corresponding texture are aligned with each other at the data acquisition stage, the correspondence between the 2D texture and the 3D facial models is already established. We can therefore directly locate the exact 3D position of each tracked point by mapping its 2D position to the corresponding 3D position on the range model. The real 3D position of each feature point is thus found and can be used to represent the face. The whole system is composed of four steps: feature point tracking, feature representation, HMM-based learning, and recognition. The first two steps are shared by both the learning and recognition stages. Figure 5 shows the structure of the proposed 3D tracking approach. We describe each stage of the system in the following sections.

4.1. Feature Point Tracking

We applied the active appearance model (AAM) based tracking method described in [9] to track the facial features in each 2D texture frame. Figure 6 illustrates the 83 pre-defined feature points. The AAM builds a statistical shape model S and an appearance model A from a training set of images on which the 83 facial points are manually marked:

$S = \bar{S} + P_S c, \quad A = \bar{A} + P_A c$    (1)

where $\bar{S}$ is the mean shape, $\bar{A}$ is the mean texture within the mean shape patch, $P_S$ and $P_A$ are matrices describing the variations of shape and texture respectively, and c is a set of control parameters for both shape and texture. Procrustes analysis is used to align the points and Principal Component Analysis (PCA) to derive the parameters [9].
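As a rough illustration of how the shape model in Eq. (1) can be built, the following is a minimal sketch, not the authors' implementation; the alignment step here (centering and scale normalization) is a simplified stand-in for full Procrustes analysis:

```python
import numpy as np

def build_shape_model(shapes, var_kept=0.98):
    """shapes: array (n_samples, 83, 2) of manually marked landmarks."""
    # Simplified alignment: remove translation and scale per shape
    # (full Procrustes analysis would also resolve in-plane rotation).
    aligned = shapes - shapes.mean(axis=1, keepdims=True)
    aligned /= np.linalg.norm(aligned, axis=(1, 2), keepdims=True)

    X = aligned.reshape(len(shapes), -1)   # flatten to (n_samples, 166)
    s_mean = X.mean(axis=0)                # mean shape, S-bar in Eq. (1)

    # PCA via SVD of the centered data; keep modes covering var_kept.
    _, sing, Vt = np.linalg.svd(X - s_mean, full_matrices=False)
    cum = np.cumsum(sing**2) / np.sum(sing**2)
    k = int(np.searchsorted(cum, var_kept)) + 1
    P_S = Vt[:k].T                         # (166, k) shape variation matrix

    return s_mean, P_S                     # shape instance: s_mean + P_S @ c
```

A shape instance is then s_mean + P_S @ c for a coefficient vector c; the appearance model A of Eq. (1) is built analogously over the shape-normalized texture patches.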
A model instance is then generated by warping the appearance A within the base patch $\bar{S}$ onto the shape S. The tracker uses the learned correlation between errors in the model parameters and the residual texture errors to converge the search toward the best match for the face mesh in the current frame. In this way, the 83 feature points are tracked along the video sequence. The first image of Figure 7 shows an example of the 83 tracked points on a 2D textured frame. Since the spatial correlation between a 2D texture image and its corresponding 3D model is recorded during model acquisition, we can directly find the real 3D position of each point tracked in the 2D image. The second and third images of Figure 7 show the corresponding 83 tracked feature points on the 3D textured model and the 3D shaded model, respectively. The third row of Figure 4 shows the tracked results on the sequential 3D wire-frame models of two subjects.
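Because the texture and the range model are registered at acquisition time, lifting a tracked 2D point to 3D can be as simple as a per-pixel lookup. A minimal sketch, assuming a hypothetical xyz_map array that holds the calibrated 3D coordinate of every texture pixel (the exact storage format of the authors' data is not specified here):

```python
import numpy as np

def lift_to_3d(points_2d, xyz_map):
    """points_2d: (83, 2) tracked (u, v) pixel positions.
    xyz_map: (H, W, 3) per-pixel 3D coordinates from the range scan,
    registered to the texture image at acquisition time (assumed input)."""
    pts = np.rint(points_2d).astype(int)
    u = np.clip(pts[:, 0], 0, xyz_map.shape[1] - 1)
    v = np.clip(pts[:, 1], 0, xyz_map.shape[0] - 1)
    return xyz_map[v, u]          # (83, 3) real 3D feature positions
```

In practice one would interpolate between neighboring pixels and skip holes in the range scan, but the texture-to-model registration established at capture time is what makes this direct mapping possible.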

Figure 4. Two sample video sequences from the dynamic 3D facial database. Left: three rows presenting the textured models, shaded models, and wire-frame models with the 83 tracked feature points of a subject with a smile expression. Right: the same three types of models for another subject with a surprise expression.

Figure 6. Index of the 83 pre-defined facial feature points.

Figure 7. A sample tracking result. From left to right: the 83 tracked facial feature points on the 2D texture image (red points), the 3D textured model (green points), and the 3D shaded model (green points).

4.2. 3D Motion Vectors for Feature Representation

After tracking the feature points on each model, we eliminate the rigid head motion using an affine transformation. The transformation matrix is derived by registering each 3D face model to the initial neutral-expression model, similar to the method employed in [14]. After transformation, each feature point j is represented as $P_j = (x_j, y_j, z_j)$, and the face at frame i can be represented as $F_i = [P_{i1}, P_{i2}, \dots, P_{i83}]$. Not all 83 points are required to represent a face instance. Letting the neutral-expression model be the initial frame ($F_{neutral} = F_0$), we derive the displacement vector between each frame i and the initial frame as $Displace_i = F_i - F_{neutral}$. Figure 8 shows several snapshots of the displacement vectors of feature points from a sample video in both 2D and 3D, representing the onset, middle, and apex of the expression, respectively. Figure 9 shows a close-up of the displacement vectors (blue lines) of the feature points (green dots) of a sample frame in both 2D and 3D.

Figure 8. Displacement vectors of feature points from a sample video. From left to right: 2D onset, 3D onset, 2D middle, 3D middle, 2D apex, and 3D apex. The upper and lower bars show the magnitudes of the displacement vectors of the feature points from the upper half (points 1-36) and the lower half (points 49-68) of the face, respectively.

Figure 9. Tracked motion vectors (blue lines with arrows). Left: upper facial part; right: lower facial part. Top: motions of 2D AUs; bottom: motions of 3D AUs.
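A compact sketch of the Section 4.2 feature computation follows. It assumes the rigid alignment is a least-squares affine fit of each frame's 83 points to the neutral frame; the paper registers full 3D face models, so using only the feature points here is a simplification for illustration:

```python
import numpy as np

def align_affine(frame, neutral):
    """Least-squares affine transform mapping frame -> neutral.
    frame, neutral: (83, 3) arrays of tracked 3D feature points.
    Note: the paper registers whole 3D models; fitting on the deforming
    feature points alone is an approximation used here for brevity."""
    X = np.hstack([frame, np.ones((len(frame), 1))])   # homogeneous coords
    M, *_ = np.linalg.lstsq(X, neutral, rcond=None)    # (4, 3) affine matrix
    return X @ M                                       # rigid motion removed

def displacement_features(frames):
    """frames: (T, 83, 3); frames[0] is the neutral expression F_0.
    Returns Displace_i = F_i - F_neutral for every frame."""
    neutral = frames[0]
    aligned = np.stack([align_affine(f, neutral) for f in frames])
    return aligned - neutral                           # (T, 83, 3)
```

Per-AU classifiers then consume subsets of these vectors: points 1-36 for the upper facial AUs and points 49-68 for the lower facial AUs.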

4.3. HMM-Based Learning and Recognition

During the learning stage, the statistical information and the temporal dynamics of the training data are learned by HMMs [23]. For the action unit recognition task, each distinct action unit is modeled as an N-state continuous HMM, described briefly as follows. Let $\lambda = [A, B, \pi]$ denote an HMM to be trained and N the number of hidden states in the model. We denote the individual states as $S = \{S_0, S_1, \dots, S_{N-1}\}$ and the state at time t as $q_t$. $A = \{a_{ij}\}$ is the state transition probability distribution, where

$a_{ij} = P[q_{t+1} = S_j \mid q_t = S_i], \quad 0 \le i, j \le N-1$    (2)

$B = \{b_j(k)\}$ is the observation probability distribution in state j, with k as an observation. We use a Gaussian distribution to estimate each $b_j(k)$:

$b_j(k) = P[k \mid q_t = S_j] \sim \mathcal{N}(\mu_j, \Sigma_j), \quad 0 \le j \le N-1$    (3)

Let $\pi = \{\pi_i\}$ be the initial state distribution, where $\pi_i = P[q_0 = S_i], \; 0 \le i \le N-1$. Then, given an observation sequence

$O = O_1 O_2 \cdots O_T$    (4)

where $O_i$ denotes the observation at time i, the HMM training procedure is as follows:

Step 0: Take the feature representation $Displace_i$ of each 3D range face model as an observation $O_i$.

Step 1: Initialize the HMM $\lambda = [A, B, \pi]$. Each observed sequence is separated into N parts, each corresponding to one state; the observation parts for each state are used to estimate the parameters of the observation matrix B. The initial values of A and $\pi$ are set based on the observations.

Step 2: Use the forward-backward (Baum-Welch) algorithm described in [23] to derive the maximum likelihood estimate of the model parameters $\lambda = [A, B, \pi]$, i.e., the parameters that maximize $P(O \mid \lambda)$.

After the learning phase, we obtain 8 different HMMs, each representing one AU (see Table 1). Note that the upper facial AU recognition and the lower facial AU recognition take the displacement vectors from different subsets of the 83 feature points. In our experiments, for upper AU recognition we use the contour points of the eyebrows and eyes (feature points 1-36 in Figure 6); for lower AU recognition we take the contour points of the mouth (feature points 49-68 in Figure 6).

Given a query model sequence, after the model preprocessing stage we follow Step 1 of the training procedure to represent the query sequence $Q = Q_1 Q_2 \cdots Q_T$. Using the forward-backward method, we compute the probability of the observation sequence given each trained $HMM_i$ as $P(Q \mid \lambda_i)$. We then use the Bayesian decision rule to classify the query sequence:

$c = \arg\max_i \, P(\lambda_i \mid Q), \quad 1 \le i \le C$    (5)

where $P(\lambda_i \mid Q) = \frac{P(Q \mid \lambda_i) P(\lambda_i)}{\sum_{j=1}^{C} P(Q \mid \lambda_j) P(\lambda_j)}$ and C is the number of trained HMM models. In this paper, C is 5 for upper facial AU recognition, 3 for lower facial AU recognition, and 6 for six-expression recognition. Figure 10 shows the structure of the 6-state HMM used in our experiments.

Figure 10. Structure of a 6-state HMM model.
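A minimal sketch of the per-AU HMM training and the Bayes-rule classification of Eq. (5), using the hmmlearn library as a stand-in for the authors' Baum-Welch implementation; class priors are assumed uniform here, so the denominator of Eq. (5) drops out of the argmax:

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_au_hmms(sequences_per_au, n_states=6):
    """sequences_per_au: dict mapping an AU label to a list of observation
    sequences, each of shape (T, D), where D is the flattened
    displacement-vector dimension for that facial region."""
    models = {}
    for au, seqs in sequences_per_au.items():
        X = np.vstack(seqs)                   # hmmlearn takes concatenated data
        lengths = [len(s) for s in seqs]      # ...plus per-sequence lengths
        hmm = GaussianHMM(n_components=n_states, covariance_type="diag",
                          n_iter=50, random_state=0)
        hmm.fit(X, lengths)                   # Baum-Welch (EM) training
        models[au] = hmm
    return models

def classify(query, models):
    """query: one observation sequence (T, D). With uniform priors, the
    Bayes rule of Eq. (5) reduces to picking the max log-likelihood model."""
    scores = {au: m.score(query) for au, m in models.items()}
    return max(scores, key=scores.get)
```

With uniform priors $P(\lambda_i)$, ranking by $P(Q \mid \lambda_i)$ (here, the log-likelihood returned by score) is equivalent to ranking by the posterior $P(\lambda_i \mid Q)$.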
5. Experiments and Analysis

5.1. AU Recognition Results

From our database, we generated 2,000 segments, each of which includes 6 sequential frames. These segments were chosen from 30 different subjects, and each includes at least one of the 8 AUs used in this paper. The segments cover the duration between AU onset (start), apex, and offset (end). We conducted 5-fold cross-validation and took the average result as the final reported number. The 3D model based approaches take the 3D dynamic models as the observation, and the 2D feature point based approach takes the corresponding 2D videos as the observation.

It should be noted that, in this paper, we do not focus on devising a novel method for tracking facial features along 3D video sequences. Instead, we only wish to demonstrate that, with the aid of this new modality (dynamic 3D facial models), we are able to improve AU recognition results even when using established tracking methods. Our method is referred to as the 3D tracking method (3D track). We also manually picked the control points (feature points) in each individual 3D frame, used these points to produce a ground-truth tracking result, and then applied the same HMM-based learning to recognize action units; this is referred to as the 3D manual method (3D manual). In addition, we implemented the 2D feature tracking plus HMM based method of [15] (denoted 2D track) to compare the AU recognition results.

Table 2 lists the average 3D tracking errors of the proposed tracking approach. As Table 2 shows, key feature points such as the eye corners and mouth corners are tracked more accurately, which may be partially due to the corner points being more distinctive. Figure 11 shows samples of the magnitudes of the 2D and 3D motion vectors of feature points on the upper and lower facial regions. It can be seen that the additional dimension of motion (z) provided by the new 3D modality magnifies the motion compared with motion in the x-y plane alone from 2D videos. The difference in magnitude persists throughout the entire sequence, from the beginning of the expression (onset) to the extreme stage (apex).

Table 2. Tracking errors (in mm) of the feature points on the 3D facial models. The point index (ID) is identical to the point index in Figure 6.

Table 3. Upper facial AU recognition results (AU 1, AU 2, AU 1+2, AU 4, AU 5) using the 3D track, 3D manual, and 2D track [15] approaches.

Table 4. Lower facial AU recognition results (AU 27, AU 20, AU 15) using the 3D track, 3D manual, and 2D track [15] approaches.

Intuitively, these results bolster our proposal that using the 3D modality rather than 2D for AU recognition may be more advantageous. Tables 3 and 4 report the upper and lower facial AU recognition results using the proposed 3D track approach, the 3D manual approach, and the 2D track baseline approach [15]. As the results show, the 3D tracking point based method outperforms the 2D tracking point baseline method for recognizing both upper and lower facial AUs. We attribute this to the additional dimensional information provided by the dynamic 3D facial modality. Another observation is that our proposed 3D track approach provides a respectable degree of tracking accuracy; thus, its AU recognition performance is not degraded much compared to the method using manually picked tracking points (3D manual).

5.2. Facial Expression Recognition Results

Figure 11. Motion magnitudes of feature points on the upper and lower facial regions when action units are at onset or apex.

Different facial action units may imply different kinds of expressions. We conducted a statistical analysis of the relation between AUs and expressions. Figure 12 shows the probability of occurrence of different AUs for different expressions. For example, surprise is usually accompanied by the mouth-open action unit (AU 27). As one might expect, different expressions may include different action units; moreover, even for the same expression, different subjects may exhibit different combinations of action units. A straightforward way to extend the proposed AU recognition approach to facial expression recognition is to infer the expression type directly from the motion of the tracked feature points. We used different subsets of the 83 defined feature points and took the displacement vectors described in Section 4.2 as the feature representation (i.e., PU, PL, and PA).

Table 6. Confusion matrix using the 3D action unit based HMM method (subject dependent).

True/Predicted  Anger   Disgust  Fear    Smile   Sadness  Surprise
Anger           81.4%   5.3%     2.7%    2.9%    4.4%     3.3%
Disgust         1.7%    83.6%    7.3%    3.5%    2.7%     1.2%
Fear            2.3%    9.2%     69.0%   7.9%    3.8%     7.8%
Smile           0.9%    3.7%     4.7%    86.2%   2.3%     2.2%
Sadness         10.0%   3.9%     4.3%    1.1%    80.0%    0.7%
Surprise        2.4%    1.5%     8.6%    2.3%    0.1%     85.1%

Figure 12. Statistical analysis of the occurrence of AUs for different expressions.

Note that PU refers to the displacement vectors of points 1-36 on the contours of the eyebrows and eyes, PL refers to the displacement vectors of points 49-68 on the mouth contour, and PA = PU + PL (see Figure 6).

Table 5. Expression recognition results using the 2D and 3D tracking point and HMM based methods.

Test/Feature               PU       PL       PA
Subject-independent (2D)   53.76%   50.54%   59.57%
Subject-independent (3D)   62.90%   63.51%   65.13%
Subject-dependent (2D)     60.86%   56.21%   61.63%
Subject-dependent (3D)     78.52%   79.92%   80.85%

Next, the HMM-based learning and recognition method of Section 4.3 was applied in these experiments, and we again compared the results with the 2D feature point tracking method. We segmented the video sequences of 30 subjects in our database into 6-frame subsequences and used them for the experiments. For the subject-dependent experiments, half of the segments from each of the 30 subjects were used for training and the remainder for testing. For the subject-independent experiments, we followed a 5-fold cross-validation procedure, randomly selecting 24 subjects for training and the remaining 6 subjects for testing; the average recognition rate is reported.

Table 5 shows the expression recognition results using both the 2D and 3D tracking point and HMM based methods. The confusion matrix for the 3D track HMM based approach on the six universal expressions is reported in Table 6. Similar to the AU recognition results, the 3D track method outperforms the 2D track baseline method in both subject-dependent and subject-independent expression recognition. However, the performance in both cases could be improved if more accurate tracking methods were developed or more powerful classifiers were employed [25]. This demands further development in our future work.
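For reference, the subject-independent protocol above can be sketched with scikit-learn's GroupKFold so that no subject appears in both splits. This is an approximation: GroupKFold partitions deterministically while the paper selects subjects randomly, and train_and_test is a hypothetical callable assumed to wrap the HMM training and classification sketched earlier:

```python
import numpy as np
from sklearn.model_selection import GroupKFold

def subject_independent_eval(segments, labels, subject_ids, train_and_test):
    """segments: list of 6-frame feature sequences; labels: expression labels;
    subject_ids: one ID per segment (30 distinct subjects).
    train_and_test: callable(train_idx, test_idx) -> accuracy (assumed)."""
    gkf = GroupKFold(n_splits=5)   # each fold holds out 6 of the 30 subjects
    accs = []
    for train_idx, test_idx in gkf.split(segments, labels, groups=subject_ids):
        accs.append(train_and_test(train_idx, test_idx))
    return float(np.mean(accs))    # average rate, as reported in Table 5
```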
6. Conclusion and Future Work

This paper presents a pilot study on recognizing facial action units (AUs) using a new modality: 3D dynamic range model sequences. It uses an active appearance model based approach to track the positions of 83 pre-defined facial feature points and an HMM classifier to learn the properties of the tracked feature points for facial action unit recognition. The preliminary results demonstrate that, by tracking the true 3D positions of facial feature points, we are able to recognize AUs more accurately than by using only the tracked locations of facial features in 2D images. We also extended this work to facial expression recognition, and the experimental results show that the 3D tracking point approach outperforms the 2D tracking point baseline approach. We attribute this to the more accurate facial information provided by the dynamic 3D facial model sequences.

The experiments indicate that more accurate tracking results may improve recognition performance. Thus, we shall investigate better methods for tracking the 3D non-rigid facial features [31, 30]. Our current method includes only a few facial features, which may not be sufficient to identify all 44 facial action units. Moreover, we may add more spontaneous expression examples to explore more complicated combinations of action units in the future. We will further investigate the performance of facial expression recognition based on the identified AUs. The system may also integrate texture image based methods to improve its performance. Since dynamic 3D facial model sequences provide not only the true 3D positions of facial features but also the real surface shape information, we could derive shape descriptors, such as spatiotemporal curvatures, from the 3D dynamic data to learn the time-varying surface changes in an attempt to improve the current performance of both AU recognition and expression recognition.

References

[1] M. Bartlett, J. Hager, P. Ekman, and T. Sejnowski. Measuring facial expressions by computer image analysis. Psychophysiology, 36, 1999.
[2] M. Bartlett et al. Fully automatic facial action recognition in spontaneous behavior. In FGR06.
[3] Y. Chang, C. Hu, and M. Turk. Probabilistic expression analysis on manifolds. In IEEE CVPR04.
[4] Y. Chang, M. Vieira, M. Turk, and L. Velho. Automatic 3D facial expression analysis in videos. In IEEE ICCV05 Workshop on Analysis and Modeling of Faces and Gestures, Beijing.
[5] I. Cohen, N. Sebe, A. Garg, L. Chen, and T. Huang. Facial expression recognition from video sequences: temporal and static modeling. Journal of CVIU, 91(1), 2003.
[6] J. Cohn. Foundations of human computing: facial expression and emotion. In Int'l Conf. on Multimodal Interfaces.
[7] J. Cohn, Z. Ambadar, and P. Ekman. Observer-based measurement of facial expression with the Facial Action Coding System. In The Handbook of Emotion Elicitation and Assessment, J. Coan and J. Allen, eds., Oxford University Press Series in Affective Science.
[8] J. Cohn and T. Kanade. Use of automated facial image analysis for measurement of emotion expression. In The Handbook of Emotion Elicitation and Assessment, J. Coan and J. Allen, eds., New York, NY: Oxford University Press.
[9] T. F. Cootes and C. Taylor. Statistical models of appearance for computer vision. Technical report, University of Manchester, Manchester, UK.
[10] Dimensional Imaging. Di3D dynamic face capture system.
[11] G. Donato, M. Bartlett, J. Hager, P. Ekman, and T. Sejnowski. Classifying facial actions. IEEE Trans. PAMI, 21(10), 1999.
[12] P. Ekman and W. Friesen. The Facial Action Coding System. Consulting Psychologists Press, San Francisco, CA, 1978.
[13] A. Kapoor, Y. Qi, and R. Picard. Fully automatic upper facial action recognition. In IEEE International Workshop on Analysis and Modeling of Faces and Gestures (AMFG).
[14] C. Li and A. Barreto. Profile-based 3D face registration and recognition. In 7th International Conference on Information Security and Cryptology (ICISC 2004).
[15] J. Lien, T. Kanade, J. Cohn, and C. Li. Automated facial expression recognition based on FACS action units. In FG 1998.
[16] J. Lien et al. Subtly different facial expression recognition and expression intensity estimation. In CVPR98.
[17] S. Lucey, A. Ashraf, and J. Cohn. Investigating spontaneous facial action recognition through AAM representations of the face. In Face Recognition Book, Pro Literatur Verlag.
[18] M. Lyons et al. Automatic classification of single facial images. IEEE Trans. PAMI, 21(12), 1999.
[19] M. Pantic and M. Bartlett. Machine analysis of facial expressions. In Face Recognition, K. Kurihara, ed., Advanced Robotics Systems, Vienna, Austria.
[20] M. Pantic and L. Rothkrantz. Automatic analysis of facial expressions: the state of the art. IEEE Trans. PAMI, 22(12), 2000.
[21] M. Pantic and L. Rothkrantz. Facial action recognition for facial expression analysis from static face images. IEEE Trans. on SMC-Part B: Cybernetics, 34(3), 2004.
[22] M. Pantic, N. Sebe, J. Cohn, and T. Huang. Affective multimodal human-computer interaction. In ACM Multimedia, Singapore.
[23] L. Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2), 1989.
[24] N. Sebe et al. Authentic facial expression analysis. Image and Vision Computing, 25(12), 2007.
[25] J. Skelley et al. Recognizing expressions in a new database containing played and natural expressions. In IEEE ICPR06.
[26] Y. Tian et al. Recognizing action units for facial expression analysis. IEEE Trans. PAMI, 23(2), 2001.
[27] Y. Tong, W. Liao, and Q. Ji. Facial action unit recognition by exploiting their dynamic and semantic relationships. IEEE Trans. PAMI, 29(10), 2007.
[28] J. Wang, L. Yin, X. Wei, and Y. Sun. 3D facial expression recognition based on primitive surface feature distribution. In IEEE CVPR06.
[29] P. Wang et al. Quantifying facial expression abnormality in schizophrenia by combining 2D and 3D features. In CVPR07.
[30] W. Wang, Y. Wang, D. Samaras, et al. Conformal geometry and its applications on 3D shape matching, recognition and stitching. IEEE Trans. PAMI, 29(7), 2007.
[31] Y. Wang, M. Gupta, D. Samaras, et al. High resolution tracking of non-rigid 3D motion of densely sampled data using harmonic maps. In ICCV 2005.
[32] Y. Wang, X. Huang, C. Lee, S. Zhang, Z. Li, D. Samaras, D. Metaxas, A. Elgammal, and P. Huang. High resolution acquisition, learning, and transfer of dynamic 3D facial expressions. In Eurographics04.
[33] P. Yang, Q. Liu, and D. Metaxas. Boosting coded dynamic features for facial action units and facial expression recognition. In CVPR07.
[34] L. Yin, X. Chen, Y. Sun, T. Worm, and M. Reale. A high resolution 3D dynamic facial expression database. In IEEE FGR08.
[35] Z. Zeng, M. Pantic, T. Huang, et al. A survey of affect recognition methods: audio, visual, and spontaneous expressions. In Int'l Conf. on Multimodal Interfaces.
[36] Z. Zeng et al. Spontaneous emotional facial expression detection. Journal of Multimedia, 1(5):1-8, 2006.
[37] G. Zhao and M. Pietikainen. Dynamic texture recognition using local binary patterns with an application to facial expressions. IEEE Trans. PAMI, 29(6), 2007.


More information

EMOTIONAL BASED FACIAL EXPRESSION RECOGNITION USING SUPPORT VECTOR MACHINE

EMOTIONAL BASED FACIAL EXPRESSION RECOGNITION USING SUPPORT VECTOR MACHINE EMOTIONAL BASED FACIAL EXPRESSION RECOGNITION USING SUPPORT VECTOR MACHINE V. Sathya 1 T.Chakravarthy 2 1 Research Scholar, A.V.V.M.Sri Pushpam College,Poondi,Tamilnadu,India. 2 Associate Professor, Dept.of

More information

Person-Independent Facial Expression Recognition Based on Compound Local Binary Pattern (CLBP)

Person-Independent Facial Expression Recognition Based on Compound Local Binary Pattern (CLBP) The International Arab Journal of Information Technology, Vol. 11, No. 2, March 2014 195 Person-Independent Facial Expression Recognition Based on Compound Local Binary Pattern (CLBP) Faisal Ahmed 1, Hossain

More information

MULTI-POSE FACE HALLUCINATION VIA NEIGHBOR EMBEDDING FOR FACIAL COMPONENTS. Yanghao Li, Jiaying Liu, Wenhan Yang, Zongming Guo

MULTI-POSE FACE HALLUCINATION VIA NEIGHBOR EMBEDDING FOR FACIAL COMPONENTS. Yanghao Li, Jiaying Liu, Wenhan Yang, Zongming Guo MULTI-POSE FACE HALLUCINATION VIA NEIGHBOR EMBEDDING FOR FACIAL COMPONENTS Yanghao Li, Jiaying Liu, Wenhan Yang, Zongg Guo Institute of Computer Science and Technology, Peking University, Beijing, P.R.China,

More information

Data Mining Final Project Francisco R. Ortega Professor: Dr. Tao Li

Data Mining Final Project Francisco R. Ortega Professor: Dr. Tao Li Data Mining Final Project Francisco R. Ortega Professor: Dr. Tao Li FALL 2009 1.Introduction In the data mining class one of the aspects of interest were classifications. For the final project, the decision

More information

Automated Facial Expression Recognition Based on FACS Action Units

Automated Facial Expression Recognition Based on FACS Action Units Automated Facial Expression Recognition Based on FACS Action Units 1,2 James J. Lien 1 Department of Electrical Engineering University of Pittsburgh Pittsburgh, PA 15260 jjlien@cs.cmu.edu 2 Takeo Kanade

More information

Multiple Kernel Learning for Emotion Recognition in the Wild

Multiple Kernel Learning for Emotion Recognition in the Wild Multiple Kernel Learning for Emotion Recognition in the Wild Karan Sikka, Karmen Dykstra, Suchitra Sathyanarayana, Gwen Littlewort and Marian S. Bartlett Machine Perception Laboratory UCSD EmotiW Challenge,

More information

On Modeling Variations for Face Authentication

On Modeling Variations for Face Authentication On Modeling Variations for Face Authentication Xiaoming Liu Tsuhan Chen B.V.K. Vijaya Kumar Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA 15213 xiaoming@andrew.cmu.edu

More information

A Simple Approach to Facial Expression Recognition

A Simple Approach to Facial Expression Recognition Proceedings of the 2007 WSEAS International Conference on Computer Engineering and Applications, Gold Coast, Australia, January 17-19, 2007 456 A Simple Approach to Facial Expression Recognition MU-CHUN

More information

Facial Expression Analysis

Facial Expression Analysis Facial Expression Analysis Faces are special Face perception may be the most developed visual perceptual skill in humans. Infants prefer to look at faces from shortly after birth (Morton and Johnson 1991).

More information

A Facial Expression Imitation System in Human Robot Interaction

A Facial Expression Imitation System in Human Robot Interaction A Facial Expression Imitation System in Human Robot Interaction S. S. Ge, C. Wang, C. C. Hang Abstract In this paper, we propose an interactive system for reconstructing human facial expression. In the

More information

Recognizing Facial Expressions Automatically from Video

Recognizing Facial Expressions Automatically from Video Recognizing Facial Expressions Automatically from Video Caifeng Shan and Ralph Braspenning Introduction Facial expressions, resulting from movements of the facial muscles, are the face changes in response

More information

Facial Expression Recognition with PCA and LBP Features Extracting from Active Facial Patches

Facial Expression Recognition with PCA and LBP Features Extracting from Active Facial Patches Facial Expression Recognition with PCA and LBP Features Extracting from Active Facial Patches Yanpeng Liu a, Yuwen Cao a, Yibin Li a, Ming Liu, Rui Song a Yafang Wang, Zhigang Xu, Xin Ma a Abstract Facial

More information

Non-rigid registration using free-form deformations for recognition of facial actions and their temporal dynamics

Non-rigid registration using free-form deformations for recognition of facial actions and their temporal dynamics Non-rigid registration using free-form deformations for recognition of facial actions and their temporal dynamics Sander Koelstra Queen Mary, University of London Mile End Rd, London, E1 4NS, UK sander.koelstra@elec.qmul.ac.uk

More information

FACIAL EXPRESSION RECOGNITION AND EXPRESSION INTENSITY ESTIMATION

FACIAL EXPRESSION RECOGNITION AND EXPRESSION INTENSITY ESTIMATION FACIAL EXPRESSION RECOGNITION AND EXPRESSION INTENSITY ESTIMATION BY PENG YANG A dissertation submitted to the Graduate School New Brunswick Rutgers, The State University of New Jersey in partial fulfillment

More information

Face analysis : identity vs. expressions

Face analysis : identity vs. expressions Face analysis : identity vs. expressions Hugo Mercier 1,2 Patrice Dalle 1 1 IRIT - Université Paul Sabatier 118 Route de Narbonne, F-31062 Toulouse Cedex 9, France 2 Websourd 3, passage André Maurois -

More information

A Novel Approach for Face Pattern Identification and Illumination

A Novel Approach for Face Pattern Identification and Illumination A Novel Approach for Face Pattern Identification and Illumination Viniya.P 1,Peeroli.H 2 PG scholar, Applied Electronics, Mohamed sathak Engineering college,kilakarai,tamilnadu, India 1 HOD, Department

More information

Real-time Automatic Facial Expression Recognition in Video Sequence

Real-time Automatic Facial Expression Recognition in Video Sequence www.ijcsi.org 59 Real-time Automatic Facial Expression Recognition in Video Sequence Nivedita Singh 1 and Chandra Mani Sharma 2 1 Institute of Technology & Science (ITS) Mohan Nagar, Ghaziabad-201007,

More information

FACIAL MOVEMENT BASED PERSON AUTHENTICATION

FACIAL MOVEMENT BASED PERSON AUTHENTICATION FACIAL MOVEMENT BASED PERSON AUTHENTICATION Pengqing Xie Yang Liu (Presenter) Yong Guan Iowa State University Department of Electrical and Computer Engineering OUTLINE Introduction Literature Review Methodology

More information

A Hierarchical Probabilistic Model for Facial Feature Detection

A Hierarchical Probabilistic Model for Facial Feature Detection A Hierarchical Probabilistic Model for Facial Feature Detection Yue Wu Ziheng Wang Qiang Ji ECSE Department, Rensselaer Polytechnic Institute {wuy9,wangz1,jiq}@rpi.edu Abstract Facial feature detection

More information

3D Mesh Sequence Compression Using Thin-plate Spline based Prediction

3D Mesh Sequence Compression Using Thin-plate Spline based Prediction Appl. Math. Inf. Sci. 10, No. 4, 1603-1608 (2016) 1603 Applied Mathematics & Information Sciences An International Journal http://dx.doi.org/10.18576/amis/100440 3D Mesh Sequence Compression Using Thin-plate

More information

Facial Expression Analysis using Nonlinear Decomposable Generative Models

Facial Expression Analysis using Nonlinear Decomposable Generative Models Facial Expression Analysis using Nonlinear Decomposable Generative Models Chan-Su Lee and Ahmed Elgammal Computer Science, Rutgers University Piscataway NJ 8854, USA {chansu, elgammal}@cs.rutgers.edu Abstract.

More information

Facial expression recognition based on two-step feature histogram optimization Ling Gana, Sisi Sib

Facial expression recognition based on two-step feature histogram optimization Ling Gana, Sisi Sib 3rd International Conference on Materials Engineering, Manufacturing Technology and Control (ICMEMTC 201) Facial expression recognition based on two-step feature histogram optimization Ling Gana, Sisi

More information

A Facial Expression Classification using Histogram Based Method

A Facial Expression Classification using Histogram Based Method 2012 4th International Conference on Signal Processing Systems (ICSPS 2012) IPCSIT vol. 58 (2012) (2012) IACSIT Press, Singapore DOI: 10.7763/IPCSIT.2012.V58.1 A Facial Expression Classification using

More information