Decoding Mixed Emotions from Expression Map of Face Images
Decoding Mixed Emotions from Expression Map of Face Images
Swapna Agarwal and Dipti Prasad Mukherjee

Abstract: In real life, facial expressions show mixtures of emotions. This paper proposes a novel expression-descriptor-based expression map that can efficiently represent pure, mixed and transitioning facial expressions. The expression descriptor integrates optical flow and image gradient values, and the descriptor value is accumulated on a temporal scale. The expression map is realized using a self-organizing map. We develop an objective scheme to find the percentage of different prototypical pure emotions (e.g., happiness, surprise, disgust) that mix to generate a real facial expression. Experimental results show that the expression map can be used as an effective classifier for facial expressions.

I. INTRODUCTION
Two popular models of facial expressions are the continuous and the categorical. Continuous models show each expression as a vector in a continuous expression space [1]. Most research works are based on the categorical model, which tags a video sequence or a frame with one of the six prototypical expressions: happiness, surprise, disgust, fear, anger and sadness. This model assigns one of these expression tags even to a video segment morphing between two expressions, say happiness and surprise. Again, one may be happily surprised or fearfully surprised. Some facial expressions, like (disgust and anger) or (fear and sadness), are very similar in appearance. So instead of classifying them into discrete categories, we must identify which emotional categories exist in what percentage at each time instant in a given face video. Keeping this aim in mind, our paper (1) proposes a novel method, based on a feature integrating optical flow (OF) and image gradient, to identify the expression category at each time instant. We call this feature the accumulated motion descriptor (AMD).
Using these AMDs we (2) empirically develop an expression map (EM) that shows the distribution of emotional categories like happiness, surprise, etc. A self-organizing map (SOM) neural network, which emulates the memorization and categorization process of the visual cortex of the human brain, is utilized in developing the EM. Our paper also (3) proposes an algorithm to show which expressions exist in what percentage at each time instant in a given video sequence. Naturally, the EM can work as an effective classifier of facial emotional expressions. Given this, we review the related research in the next section.

(The authors are with the Indian Statistical Institute, 203 B. T. Road, Kolkata; agarwal.swapna@gmail.com, dipti@isical.ac.in.)

II. RELATED PREVIOUS WORK
The first vital step for facial expression analysis is feature extraction. Some of the features used in related works are Local Binary Patterns (LBP), Optical Flow (OF), Gabor wavelets [2] and curvelets [3]. Shan et al. used LBP to recognize facial expressions from images and tested the feature on different databases [4]. In 2007, Zhao et al. used a variant of LBP called LBP-TOP (Local Binary Patterns from Three Orthogonal Planes) to recognize the six prototypical facial expressions in videos [5]. In the same fashion as LBP-TOP, Jiang et al. [6] proposed LPQ-TOP (Local Phase Quantization from Three Orthogonal Planes). They showed that LPQ yielded better results compared to LBP and that dynamic appearance descriptors outperform static descriptors in action unit (AU) detection. AUs are subtle but visible changes in facial expressions triggered by facial muscles. Paul Ekman and Wallace V. Friesen [7] proposed the Facial Action Coding System (FACS), a comprehensive set of AUs. Wang and Yin in [8] treated the image as a 3D surface and labeled each pixel by its terrain features. The distribution of terrain labels in the expressive regions of a face is used to recognize facial expressions. Lien et al.
[9] used OF features to recognize three combinations of upper-face action units. Sanchez et al. [10] used OF for prototypical facial expression recognition based on two methods: tracking of fifteen feature points, and dense flow tracking. In contrast, the feature proposed in this paper combines OF with image gradient. Also, as the video progresses and the facial expression intensifies, our feature magnitude accumulates weighted OF values. We refer to this feature as the accumulated motion descriptor (AMD); it is described in the next section. Relatively few works in the literature have proposed emotion and/or expression spaces/models. While Ekman and Friesen proposed six basic universal expressions [11], Plutchik [12] proposed eight primary emotions: acceptance, anger, anticipation, disgust, fear, joy, sadness and surprise. According to this model, every other emotion can be produced by mixing the primary emotions. He proposed a conical emotion-wheel model. The vertical dimension of the cone represents the intensity of the emotion and the positions on the circle define the basic emotion sectors. James A. Russell [13] proposed a system of two bipolar continua, valence and arousal. Valence ranges from sadness to happiness, while arousal ranges from boredom to frantic excitement. Wu et al. [14] used values of fuzzy integrals in different facial expression spaces to describe the uncertainty of facial expressions. Liu et al. [15] represented each expression image as a Gabor feature vector projected on a PCA space. The distribution of data belonging to each expression class in the PCA space was measured using a Gaussian mixture model. These distribution functions were used to define the probability of any expression belonging to the six prototypical expressions. Martinez and Du [1] represented any emotion by a linear combination of
some distinct continuous face spaces. Their method depends on accurate detection of some facial landmark points. While all these expression models used a fixed number of distinct continuous expression models, we propose one single EM that is better at visualization and representation of the interrelations of different expressions. Importantly, this expression map is customized and learned from the available video sequences. We show the purity and transitions of emotions in this map and can objectively assess the percentage of different pure emotions in a given video. The organization of the paper is as follows. Section IV describes the details of the proposed EM. Section V discusses implementation details of the AMD and the EM; this section also reports the results of the proposed methods. The paper concludes in Section VI. In the next section we state how the expression features (AMDs) are obtained.

III. PREPARATION OF AMD
To prepare an expression map, we first need an expression descriptor that can accurately classify the six basic expressions. For this, we opt for a gradient-weighted optical flow (OF) based feature. Works in the literature like [9], [4] have used only OF, or only gradient [16], for facial expression recognition. We want to capture the movement patterns of high-contrast facial regions such as the boundaries of the lips, eyes and eyebrows, the nasolabial furrow, etc. Therefore, we calculate the OF field for each frame transition. Then, for each frame, we find the total deviation (OF accumulated from the first frame to that frame) for each pixel. We multiply the gradient magnitude at each pixel location in the first frame with the corresponding accumulated OF vector of each frame to get the motion pattern for that frame. Suppose video sequence $S$ has $f$ frames $F_k$, $k = 1, \ldots, f$. The pixel locations of $F_1$ are $X = \{x_i, i = 1, \ldots, n\}$, $x_i \in \mathbb{R}^2$. Let $G(x_i)$, $i = 1, \ldots, n$ represent the gradient vector of pixel $x_i$ in the first frame.
Let $O_k(x_i)$, $k = 1, \ldots, f-1$, $i = 1, \ldots, n$ represent the OF vector corresponding to pixel $x_i$ in the $k$th frame transition, and $A_k(x_i)$, $k = 1, \ldots, f-1$, $i = 1, \ldots, n$ represent the OF accumulated from the first frame to the $(k+1)$th frame for pixel $x_i$. That is, for pixel $x_i$, we calculate the OF in the first frame transition as $O_1(x_i)$. Pixel $x_i$ reaches pixel location $O_1(x_i) + x_i$ in the second frame. For the same pixel, the OF in the second frame transition is calculated as $O_2(O_1(x_i) + x_i)$, and so on. Therefore, $A_k(x_i) = \sum_{j=1}^{k} O_j(x_i + A_{j-1}(x_i))$, with $A_0(x_i) = 0$. Now we calculate the weighted accumulated OF vector $V_k(x_i) = \|G(x_i)\| A_k(x_i)$, where $\|G(x_i)\|$ represents the magnitude of $G(x_i)$. Next we calculate the distribution of these $V_k(x_i)$ over $L$ different directions. For simplicity, we now drop the subscripts $i$ and $k$. Let the histogram over $L$ angular orientations be represented by $H = \{h(1), h(2), \ldots, h(L)\}$, and let $\theta(x, y) = \tan^{-1}(V_y / V_x)$ and $g(x, y) = \|V\|$, where $V = (V_x, V_y)$ at pixel $x = (x, y)$. We define
$h(l) = \sum_{(x, y):\ \lceil \theta(x, y) L / 2\pi \rceil = l} g(x, y)$, with pixels in other directions contributing 0. (1)
If we calculate $H$ for the entire image, it may represent the overall motion pattern of the image, but the spatial information will be lost. For example, downward movement of the eyebrows and of the lower lip may represent two different expressions. Therefore, to capture local information inside the face, we divide the images into an equal number, say $b$, of blocks. The blocks in the last row and the last column may differ in size from the others. A separate $L$-bin histogram is calculated for each of the $b$ blocks. The $b$ histograms are normalized and then concatenated to form the final Accumulated Motion Descriptor (AMD) for each frame (see Fig. 1). We choose $L$ to be 8, as octant-level angular resolution is good enough for the 8-neighbor discrete grid representation of a digital image.
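The block-histogram construction above can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes the accumulated flow field $A_k$ and the first-frame gradient magnitude are already available as arrays (e.g., from a dense optical flow routine), and the function name and block layout are hypothetical.

```python
import numpy as np

def amd_descriptor(accum_flow, grad_mag, n_blocks=8, n_bins=8):
    """Sketch of the Accumulated Motion Descriptor (AMD).

    accum_flow : (H, W, 2) array, OF accumulated from the first frame (A_k)
    grad_mag   : (H, W) array, gradient magnitude of the first frame, |G|
    Returns a concatenation of per-block L2-normalized orientation histograms.
    """
    # Weight the accumulated flow by the first-frame gradient magnitude: V = |G| * A_k
    v = accum_flow * grad_mag[..., None]
    theta = np.arctan2(v[..., 1], v[..., 0]) % (2 * np.pi)  # orientation in [0, 2*pi)
    mag = np.linalg.norm(v, axis=-1)                        # g(x, y) = |V|
    h, w = mag.shape
    bh, bw = int(np.ceil(h / n_blocks)), int(np.ceil(w / n_blocks))
    descriptor = []
    for by in range(n_blocks):
        for bx in range(n_blocks):
            t = theta[by * bh:(by + 1) * bh, bx * bw:(bx + 1) * bw]
            m = mag[by * bh:(by + 1) * bh, bx * bw:(bx + 1) * bw]
            # Magnitude-weighted histogram over L angular bins, as in eq. (1)
            bins = np.minimum((t * n_bins / (2 * np.pi)).astype(int), n_bins - 1)
            hist = np.bincount(bins.ravel(), weights=m.ravel(), minlength=n_bins)
            norm = np.linalg.norm(hist)
            descriptor.append(hist / norm if norm > 0 else hist)
    return np.concatenate(descriptor)  # length n_blocks * n_blocks * n_bins
```

With the paper's choices ($b = 8 \times 8$ blocks, $L = 8$ bins) the descriptor is 512-dimensional per frame.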
The next section describes the procedure followed to form the EM using AMDs.

Fig. 1: (a) Partitioning a frame into 64 blocks, (b) corresponding histograms of weighted accumulated OF for each of the 64 blocks, (c) corresponding AMD.

IV. FORMATION OF EXPRESSION MAP
We want the expression map to have the following properties. The map must be able to identify each of the six prototypical expression categories, as well as represent images depicting facial expressions transiting from one category to another. It must help us find out which expression category is present in what percentage in a face depicting a mixture of emotional expressions (e.g., happily surprised). Specifically, an EM is an N-D lattice. Each node of the lattice is representative of a particular pattern of expression. Each node of this N-D lattice is connected to the $\xi$-dimensional input space through a $\xi$-dimensional weight vector (synaptic connection) $w$; each element of $w$ corresponds to one input dimension. To form an EM we use a self-organizing map (SOM). The main characteristic of the SOM is that neurons representing closely related patterns remain close together in the lattice and thus form a topographic map in which the spatial location of each neuron is indicative of the statistical pattern [17]. For ease of visualization we take a 2D lattice of $m$ neurons. Each neuron in the lattice is connected to its 8-neighbors. Each neuron is connected to the AMD space through a synaptic weight vector $w$. If the AMD is $\xi$-dimensional, $w$ is also $\xi$-dimensional. Following [17], the formation of the EM, which represents the expression pattern underlying the input expression space, is summarized in Algorithm 1.
Algorithm 1: Adaptation of the SOM to the input feature space.
1. Randomly initialize the synaptic weight vectors $w_j(0)$, $j = 1, 2, \ldots, m$.
2. Draw an input vector $a$ and find the best-matching neuron $i(a)$ by minimum Euclidean distance: $i(a) = \arg\min_j \|a - w_j\|$, $j = 1, 2, \ldots, m$.
3. Update the weight vectors of the excited neurons using the formula $w_j(k+1) = w_j(k) + \eta(k)\, h_{j,i(a)}(k)\, (a(k) - w_j(k))$, $j = 1, 2, \ldots, m$. Here $\eta(k)$ is the learning-rate parameter and $h_{j,i(a)}$ is a neighborhood function centered at $i(a)$; $h_{j,i(a)}$ is a function of the distance (in lattice space) between the $j$th node and the $i(a)$th node. The value of $\eta(k)$ and the spread of $h_{j,i(a)}$ decay exponentially with time.
4. Repeat steps 2 and 3 until the map is formed.

We take a Gaussian neighborhood function whose spread includes almost all the neurons initially and only the winning neuron at the end of learning. The three processes involved in the formation of the EM are (1) competition (step 2) among the neurons for adapting themselves to the input data (for winning), (2) cooperation (use of the neighborhood function) of the winning neuron with its neighbors for excitation, and (3) adaptation (step 3) of the excited neurons towards the input. The beauty of Algorithm 1 is that, over iterations, the lattice of neurons unfurls itself to represent the underlying pattern of the input expression space. We want the 2D SOM to adapt to the expression feature (AMD, in our case) space. We extract AMD features for each frame transition of the training video sequences using the method described in Sec. III. So we have AMDs corresponding to all six expression classes. We train a 2D lattice of neurons using Algorithm 1 with AMDs as inputs. As the 2D lattice gets trained, the AMDs corresponding to, say, expression $E_i$ adapt their closest-matching neurons towards themselves.
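A compact sketch of Algorithm 1 follows, assuming a grid size, initialization scale and exponential decay constants that the paper does not specify (all hyperparameters below are illustrative, not the authors' values):

```python
import numpy as np

def train_som(data, grid=(10, 10), iters=500, eta0=0.5, sigma0=5.0, seed=0):
    """Minimal SOM in the spirit of Algorithm 1.

    data: (N, xi) array of feature vectors (AMDs).
    Returns neuron weights of shape (grid[0] * grid[1], xi).
    """
    rng = np.random.default_rng(seed)
    m = grid[0] * grid[1]
    w = rng.standard_normal((m, data.shape[1])) * 0.1       # step 1: random init
    # Lattice coordinates of each neuron, used by the neighborhood function
    coords = np.array([(r, c) for r in range(grid[0]) for c in range(grid[1])], float)
    for k in range(iters):
        eta = eta0 * np.exp(-k / iters)                     # learning rate decays
        sigma = sigma0 * np.exp(-k / iters)                 # neighborhood shrinks
        a = data[rng.integers(len(data))]                   # step 2: draw an input
        winner = np.argmin(np.linalg.norm(a - w, axis=1))   # best-matching neuron i(a)
        d2 = np.sum((coords - coords[winner]) ** 2, axis=1)
        hood = np.exp(-d2 / (2 * sigma ** 2))               # Gaussian h_{j,i(a)}
        w += eta * hood[:, None] * (a - w)                  # step 3: adapt toward input
    return w
```

The initial `sigma0` covers essentially the whole lattice, and its exponential decay reproduces the paper's requirement that only the winning neuron is updated near the end of learning.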
Because of this cooperative adaptation (step 3 of Algorithm 1), AMDs belonging to closely related expressions (e.g., disgust and anger) map to neurons that are spatially close in the 2D lattice. We propose Algorithm 2 to label the nodes of the trained 2D lattice with expression tags and to find which expression is present in what percentage in a given test data item (a frame transition, in our case). In step 1 of Algorithm 2, a neuron is labeled with the class label held by most of the training data that map to that neuron. In step 2, each neuron casts a vote for its corresponding class when calculating the percentage of each expression class in the given test data. In this process, each neuron's vote is weighted by a monotonically decreasing function of the distance between the neuron and the test data. Therefore, if a test data item falls near the center of an expression cluster, most of the neurons belonging to that cluster get high weight compared to neurons belonging to other clusters. So the center of an expression cluster represents the highest intensity.

Algorithm 2: Finding which expression is present in what percentage in a given test data item.
1. Assign class labels to the neurons, represented by the weight vectors $w_j$, $j = 1, \ldots, m$, of the trained SOM:
(a) Initialize a 6-dimensional vector $v_j$ to zero for each of the $m$ neurons.
(b) Repeat the following for each of the $M$ input data items (AMDs) $a_k$, $k = 1, \ldots, M$, where $M$ is the total number of data items from the apex (maximum expression intensity) three frame transitions of all the training video sequences: $p = \arg\min_j \|a_k - w_j\|$, then $v_p(\mathrm{class}(a_k)) = v_p(\mathrm{class}(a_k)) + 1$, where $\mathrm{class}(a_k)$ represents the class label of $a_k$.
(c) Assign each neuron represented by $w_j$ a class label: $\mathrm{class}(w_j) = \arg\max_i v_j(i)$.
2. Assign a percentage of each of the six prototypical expressions to a test data item represented by $a^*$. Let $c(i)$, $i = 1, \ldots, 6$, represent the percentage of expression $i$ in the given test data $a^*$:
(a) For $i = 1, \ldots, 6$: $c(i) = \sum_{j:\ \mathrm{class}(w_j) = i} \exp(-\|a^* - w_j\|^2)$, with neurons of other classes contributing 0.
(b) For $i = 1, \ldots, 6$: $c(i) = c(i) / \sum_i c(i)$.
3. Assign the class label $\mathrm{class}(a^*)$ to the test data $a^*$: $\mathrm{class}(a^*) = \arg\max_i c(i)$.

Towards the boundary of a cluster, relatively few neurons belonging to that cluster get high weight, so towards the boundary the intensity of the expression decreases. After 500 iterations of Algorithm 1 and labeling of the nodes using Algorithm 2, the 2D lattice (EM) looks like the one in Fig. 2. Next we discuss the experimental setups for testing the efficiency of the AMDs for classification of expressions, and of the SOM for representing the different expressions depicted in a video sequence.

V. EXPERIMENT DESIGN AND RESULTS
We have performed our experiments on the benchmark Cohn-Kanade (CK) [18] and Multimedia Understanding Group (MUG) [19] facial expression databases. While the CK database includes subjects of different ethnicities (15% are African-American and 3% are Asian or Latino), the MUG database contains subjects depicting different expressions one after another in one video sequence in a natural way. In the CK database, in each video sequence the subject starts from a near-neutral face and gradually displays the highest-intensity expression in the last frame. In the MUG database (in the non-mixture videos), each subject starts with a neutral face, gradually attains the apex of the expression and then returns to the neutral expression. We have divided each dataset into two parts: training and test. The training set is used for testing the
efficiency of the AMD in recognizing basic expressions and for training the SOM to form the EM, whereas the test set is used for testing the expression-representation capability of the map on completely unknown data. The distributions of data into {training, test} sets for both databases are given in Table I.

Fig. 2: EM after training with AMDs.

TABLE I: Distribution of data for CK and MUG based datasets. Happiness: Ha, Surprise: Su, Disgust: Di, Fear: Fe, Anger: An, Sadness: Sa. CK MUG Ha Su Di Fe An Sa Training Test Training Test

We use the Viola-Jones face detection algorithm [20] to find the bounding box containing the face only. We find the bounding boxes of the first and the apex expression image of the sequence and take their union. Henceforth, by frame, we mean the bounding box thus calculated, containing the face. No other pre-processing, like alignment of the eyes [4], [5] or normalization of the face to a fixed size [9], [10], is required. Illumination conditions, skin color and the presence of make-up in the images are also not the same for different sequences. Some face images in the CK database, and many in the MUG database, have in-plane and out-of-plane rotation of the head. However, we do not have permission to publish the images of these subjects. The face images need not be marked with fiducials as in [9], [10]. The proposed method is person-invariant, i.e., to train the classifier we do not need data from the persons in the test set. Before we test the expression map, we establish the efficiency of the AMD in the next section.

A. AMD
While displaying an expression in a video sequence, a person's face displays some AUs. For example, as per the CK database, the AU combination for fear for subject number 125 is { }; for anger for subject number 55 it is { }; and for sadness for subject number 130 it is { }. For descriptions of the AUs see [18]. A subject may display all the AUs in these combinations simultaneously or in any order.
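The fixed face crop described above (the union of the first-frame and apex-frame detection boxes) can be sketched as a small helper; the `(x, y, w, h)` box convention and the function name are assumptions for illustration only:

```python
def union_box(b1, b2):
    """Union of two (x, y, w, h) boxes, where (x, y) is the top-left corner.
    Used once per sequence so every frame is cropped identically."""
    x = min(b1[0], b2[0])
    y = min(b1[1], b2[1])
    x2 = max(b1[0] + b1[2], b2[0] + b2[2])  # right edge of the union
    y2 = max(b1[1] + b1[3], b2[1] + b2[3])  # bottom edge of the union
    return (x, y, x2 - x, y2 - y)
```

Using one crop for the whole sequence keeps the accumulated optical flow spatially consistent across frames.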
In the first few frames, where the subject starts to display the expression from a near-neutral face, some AUs that are not representative of any particular expression may appear alone. Fig. 3 shows such an example, where one of the initial frames is taken from each of three sequences displaying three different emotional expressions. All three frames show only AU4 (eyebrows drawn medially and down). Therefore, the initial few frames of a video sequence may not represent a particular expression, whereas the apex expression frames, where all the concerned AUs are present, display the perceived expression. Therefore, following [4], we consider only the three frames of a video sequence where the expression is in the apex state as displaying the concerned expression.

Fig. 3: AU4 in initial sequences of (a) fear, (b) anger and (c) sadness.

So we extract AMD features from the last three frames of all 262 training video sequences of the CK database. For the MUG database we consider the automatically annotated apex frames provided with the database. For classification, we use an SVM classifier with polynomial and RBF kernels. We train a total of $\binom{6}{2} = 15$ two-class classifiers, each classifying data into one of two expressions out of the total of 6 expressions. Each test data item goes through all the classifiers, and the class with the maximum number of votes is designated as the observed class for the test data. We have used a 10-fold cross-validation scheme. For comparing the proposed AMD features with the state of the art, we have implemented [4]. We have divided each image into 8×8 blocks for AMD, and into 9×8 blocks with 70% overlap for extracting the LBP based feature ($\mathrm{LBP}^{u2}_{8,8,8,3,3,3}$) from both databases. The results are given in Table II. The last two columns of Table II compare the results when considering all the frames of a video as belonging to the same class.
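The one-vs-one voting scheme above can be sketched as follows. The class names and the `fit`/`predict` interface are assumptions; the paper uses kernel SVMs as the binary classifiers, while the sketch plugs in a tiny nearest-centroid stand-in so the example stays self-contained:

```python
import numpy as np
from itertools import combinations

class NearestCentroid:
    """Toy binary classifier standing in for the paper's pairwise SVMs."""
    def fit(self, X, y):
        self.labels = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.labels])
        return self
    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.mu[None], axis=2)
        return self.labels[np.argmin(d, axis=1)]

class OneVsOneVoting:
    """Train C(n,2) pairwise classifiers; the class with the most votes wins."""
    def __init__(self, make_binary_clf, classes):
        self.classes = classes
        self.make_binary_clf = make_binary_clf
        self.clfs = {}

    def fit(self, X, y):
        y = np.asarray(y)
        for a, b in combinations(self.classes, 2):
            mask = (y == a) | (y == b)          # keep only the two classes of this pair
            self.clfs[(a, b)] = self.make_binary_clf().fit(X[mask], y[mask])
        return self

    def predict(self, X):
        idx = {c: i for i, c in enumerate(self.classes)}
        votes = np.zeros((len(X), len(self.classes)), int)
        for clf in self.clfs.values():
            for row, label in enumerate(clf.predict(X)):
                votes[row, idx[label]] += 1     # each pairwise classifier casts one vote
        return [self.classes[i] for i in np.argmax(votes, axis=1)]
```

With six expression classes this builds exactly the 15 pairwise classifiers described in the text.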
Columns four and six show results for 7-class classification, where the features corresponding to the first frame of each video are taken as members of the neutral class. The classification percentage here means the percentage of correct classifications of the data in the validation set, averaged over all folds and expression classes. It can be noted that Sanchez et al. have used OF based features to classify facial expressions. They have reported 92.81% classification accuracy for the 6-class classification problem. Table II shows the superiority of AMD over the LBP based feature for classification into basic expressions. Now, to explain the temporal sequence of each video in more detail, we test the EM next.
TABLE II: Performance comparison of LBP with the proposed AMD features for the CK and MUG databases. Columns: apex frames (6-class and 7-class, each with polynomial and RBF kernels) and all frames (6-class and 7-class, RBF kernel). Rows: AMD (CK, MUG) and LBP (CK, MUG).

B. Expression Map
For formation of the expression map using Algorithm 1, we extract the expression feature from each frame transition of all training video sequences. For the two types of expression features (AMD and LBP) and the two databases (CK and MUG), we have four feature-database combinations. To test the universality of the EM over different databases and different expression descriptors, we construct one EM for each combination and find out whether these maps can represent the pattern underlying the corresponding expression databases. We have a total of 4613 and 8157 AMDs from the CK and MUG databases respectively. Likewise, we have 4875 and 8268 LBP based features from the CK and MUG databases respectively. We have used a 2D lattice of neurons for the formation of the EM. To avoid confusion, from now on we will denote the AMD or LBP based feature for each frame transition or frame, respectively, as data. As shown in Table I, there is a large imbalance in the distribution of data across the different expressions, so the SOM may get biased towards the expression class with the larger number of data. To overcome this problem, we take the least common multiple, say $\alpha$, of the numbers of data items, say $\beta_i$, from each expression $E_i$. Let $d = \alpha / \beta_i$ and $r = \alpha \% \beta_i$, where $\%$ is the remainder operator. Each data item belonging to expression class $E_i$ is repeated $d$ times, plus $r$ randomly selected data items from class $E_i$ appear in the dataset for training the SOM. This way, each expression is expected to have an equal influence on the training of the SOM. We run Algorithm 1 for 500 iterations. In step 1 of Algorithm 2, for assigning class labels to the neurons of the SOM, we consider only the data corresponding to the apex three frames of the training video sequences.
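The balancing step above can be sketched as follows (function name and data layout assumed for illustration). Note that when $\alpha$ is the exact LCM, $r = 0$ and every class is oversampled purely by repetition; the $d$/$r$ formula also covers any other common target size $\alpha$:

```python
from math import lcm
import random

def balance_by_lcm(data_by_class, seed=0):
    """Oversample each class to alpha = lcm of the class sizes.

    data_by_class: dict mapping class label -> list of feature vectors.
    Returns a shuffled list of (class, item) pairs with exactly alpha
    items per class."""
    rng = random.Random(seed)
    alpha = lcm(*(len(v) for v in data_by_class.values()))
    out = []
    for cls, items in data_by_class.items():
        d, r = divmod(alpha, len(items))            # d = alpha / beta_i, r = alpha % beta_i
        out.extend([(cls, x) for x in items] * d)   # each item repeated d times
        out.extend((cls, x) for x in rng.sample(items, r))  # plus r random extras
    rng.shuffle(out)
    return out
```

Shuffling matters here because Algorithm 1 draws inputs sequentially; presenting one class in a long block would bias the adaptation.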
The reason is that, as explained in Subsection V-A, we are not sure of the class labels of the other frame transitions/frames. Fig. 2 shows the EM after assignment of the class labels (including neutral) for AMD features extracted from the CK database based training data. (1) It can be seen that the nodes representing each expression have formed noticeable clusters. Further, notice that closely related expressions are represented by clusters spatially close to each other in the EM. For example, (disgust and anger) or (surprise, and fear or happiness) are close together. Neutral falls between sets of two different clusters. Note that, except for the labeling of the neurons with expression classes, the training of the EM is completely unsupervised. The neuron most similar to a test data item represents that test data in the EM. Fig. 4 shows the positions, in the EM, of four frames at certain intervals in a video sequence displaying anger from the CK database. This figure also shows the percentage of each of the prototypical expressions, calculated using Algorithm 2, for each of the frames (transitions) displayed. From Fig. 4 it can be seen that the video sequence starts from the neutral region of the EM. Then, depending on the expression being displayed, the representation of each frame transition in the EM moves towards the corresponding cluster. Specifically, the second row shows a considerable 24% disgust, which changes to 99% and 100% anger in the third and fourth rows respectively. (2) Following Algorithm 2, each test frame is assigned an expression class. A set of consecutive frames assigned the same expression class is taken to form a video segment of that expression class.

(Footnote 1: The EM constructed using AMD features extracted from the MUG database and the map constructed using LBP features extracted from the CK database are shown as Fig. 1(a) and 1(b) in the supplementary material. These figures are available on the web at dipti/.)
The next set of consecutive frames that are assigned another class forms another segment. Thus we divide a given video into different segments assigned different classes. For classifying a given test video sequence into one of the six prototypical expressions, we segment the sequence into neutral and other expression classes. The expression class of the video segment with the highest number of frames is taken to be the expression class of the video. Following this process, the classification percentages for the set of 262 videos from the CK database and 111 videos from the MUG database are shown in Table III. Table III shows that the proposed EM can work as an efficient classifier.

TABLE III: Classification % of 262 CK videos and 111 MUG videos following Algorithm 2. The first column shows the database-feature combination used to form the EM. Rows: CK+AMD, MUG+AMD, CK+LBP, MUG+LBP; columns: Ha, Su, Di, Fe, An, Sa.

The same classification process correctly classifies all the CK test videos except two: the fear video of subject number 54 and the sadness video of subject number 504 are classified as anger and fear respectively. The reason can be explained from the proposed EM. For the sadness video sequence concerned, we found that the first frame transition received the sadness label, the next few the disgust label, and the rest fear. Though the last few frames received the fear label, sadness also had a significant percentage in those frames. The video sequence concerned shows that AU15 (corners of the mouth pulled downward and inward) is almost absent there. It may be noted that in the CK database the combination of AU15 and AU17 (skin of the chin elevated) represents sadness. AU17 and AU4 are present

(Footnote 2: A video showing the temporal trajectory of test videos in the EM is uploaded as supplementary material. This video also includes bar graphs depicting the different prototypical expression percentages in the temporal sequences.)
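Step 2 of Algorithm 2 and the segment-based video rule above can be sketched together. This is an illustrative reading of the text, with hypothetical function names; in particular, whether a neutral segment may win the majority vote is left open in the paper, so the sketch makes it an option:

```python
import numpy as np
from itertools import groupby

def expression_percentages(a, weights, labels, n_classes=6):
    """Step 2 of Algorithm 2: Gaussian-weighted votes from labelled SOM neurons.

    a: (xi,) test descriptor; weights: (m, xi) trained neuron weights;
    labels: (m,) class index of each neuron, from step 1."""
    d2 = np.sum((weights - a) ** 2, axis=1)
    vote = np.exp(-d2)                       # each neuron's distance-weighted vote
    c = np.zeros(n_classes)
    for i in range(n_classes):
        c[i] = vote[labels == i].sum()       # only neurons of class i contribute to c(i)
    return c / c.sum()                       # normalized percentages; argmax is the label

def classify_video(frame_labels, neutral=None):
    """Split consecutive identical frame labels into segments; the class of the
    longest segment wins. Neutral segments are skipped when `neutral` is given."""
    runs = [(lab, len(list(g))) for lab, g in groupby(frame_labels)]
    if neutral is not None:
        runs = [r for r in runs if r[0] != neutral] or runs
    return max(runs, key=lambda r: r[1])[0]
```

A frame near an expression cluster's center drives the exponential votes of that cluster's neurons close to 1, which is why the percentages peak at the cluster center, as the text argues.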
in this sequence, which occur both for disgust (in combination with AU9/10, nasolabial furrow deepened/upper lip raised) and for fear (in combination with AU20, lip corners pulled laterally). Given this, we can say that the proposed EM is able to represent/explain the temporal changes in a facial expression efficiently.

Fig. 4: Representation of different frame transitions in the proposed EM. The big red triangles in the first column show the positions in the EM corresponding to the frames in the second column. Two successive positions (red triangles) are connected by a red line. The third column shows the percentage of each prototypical expression in the corresponding frames, calculated using Algorithm 2.

VI. CONCLUSIONS
We have proposed an OF and gradient based facial expression descriptor, the AMD, that is person-independent and outperforms related features for facial expression recognition. Using these AMDs we have proposed a method to form an expression map that visualizes the expression space in 2D. This expression map can efficiently represent the mixed emotional expressions of any natural facial video. The capability of representing different expressions in one single expression map makes facial expression analysis easier. The nonlinear adaptation of Algorithm 1 makes the proposed expression map flexible compared to linear PCA based approaches. Construction of the proposed expression map using both the proposed AMD and the LBP based features used in [4] shows the universality of the map over different facial expression features. The concept of the expression map can be extended to creating synthetic facial expressions, which will be of interest for animation and movie making.

REFERENCES
[1] A. Martinez and S. Du, A model of the perception of facial expressions of emotion by humans: Research overview and perspectives, Journal of Machine Learning Research, vol. 13, pp. ,
[2] Z. Zhang, M. J. Lyons, M. Schuster, and S.
Akamatsu, Comparison between geometry-based and Gabor-wavelets-based facial expression recognition using multi-layer perceptron, in IEEE International Conference on Automatic Face and Gesture Recognition, April 1998, pp.
[3] X. Wu and J. Zhao, Curvelet feature extraction for face recognition and facial expression recognition, in Sixth IEEE International Conference on Natural Computation, 2010, pp.
[4] C. Shan, S. Gong, and P. W. McOwan, Facial expression recognition based on local binary patterns: A comprehensive study, Image Vision Comput., vol. 27, pp. , May
[5] G. Zhao and M. Pietikainen, Dynamic texture recognition using local binary patterns with an application to facial expressions, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 6, pp. , June
[6] B. Jiang, M. F. Valstar, and M. Pantic, Action unit detection using sparse appearance descriptors in space-time video volumes, in IEEE International Conference on Automatic Face and Gesture Recognition and Workshops, March 2011, pp.
[7] P. Ekman and W. V. Friesen, The Facial Action Coding System: A technique for the measurement of facial movement, Consulting Psychologists Press Inc.,
[8] J. Wang and L. Yin, Static topographic modeling for facial expression recognition and analysis, Computer Vision and Image Understanding, vol. 108, no. 1-2, pp. ,
[9] J. J. Lien, T. Kanade, J. F. Cohn, and C.-C. Li, Automated facial expression recognition based on FACS action units, in Third IEEE International Conference on Automatic Face and Gesture Recognition, April 1998, pp.
[10] A. Sánchez, J. V. Ruiz, A. B. Moreno, A. S. Montemayor, J. Hernández, and J. J. Pantrigo, Differential optical flow applied to automatic facial expression recognition, Neurocomputing, vol. 74, pp. ,
[11] P. Ekman and W. V. Friesen, The repertoire of nonverbal behavior: categories, origins, usage and coding, Semiotica, vol. 1, pp. ,
[12] R. Plutchik, Emotion: A Psychoevolutionary Synthesis. New York: Harper and Row,
[13] J.
A. Russell, A circumplex model of affect, Journal of Personality and Social Psychology, vol. 39, pp. ,
[14] Y. Wu, H. Liu, and H. Zha, Modeling facial expression space for recognition, in IEEE/RSJ International Conference on Intelligent Robots and Systems, 2005, pp.
[15] W.-F. Liu, J.-L. Lu, Z.-F. Wang, and H.-J. Song, An expression space model for facial expression analysis, in Congress on Image and Signal Processing, CISP 08, 2008, pp.
[16] A. Dhall, A. Asthana, R. Goecke, and T. Gedeon, Emotion recognition using PHOG and LPQ features, in FG, 2011, pp.
[17] S. Haykin, Neural Networks: A Comprehensive Foundation. Dorling Kindersley (India) Pvt. Ltd.,
[18] T. Kanade, J. Cohn, and Y.-L. Tian, Comprehensive database for facial expression analysis, in Proceedings of the 4th IEEE International Conference on Automatic Face and Gesture Recognition (FG 00), March 2000, pp.
[19] N. Aifanti, C. Papachristou, and A. Delopoulos, The MUG facial expression database, in Int. Workshop on Image Analysis for Multimedia Interactive Services, April 2010, pp.
[20] P. Viola and M. Jones, Rapid object detection using a boosted cascade of simple features, in IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2001, pp.