Survey on Human Face Expression Recognition


Available Online at www.ijcsmc.com
International Journal of Computer Science and Mobile Computing, A Monthly Journal of Computer Science and Information Technology
IJCSMC, Vol. 3, Issue 10, October 2014, pg. 20-24
SURVEY ARTICLE, ISSN 2320-088X

Atul Chinchamalatpure¹, Prof. Anurag Jain²
¹,²Department of Computer Science & Engineering, REC, Bhopal, India

Abstract: Facial expression recognition plays an important and effective role in human interaction. There are seven basic facial expression categories, namely fear, surprise, happiness, sadness, disgust, neutral and anger. Facial expression recognition is a process performed by computers that consists of detecting the face in an image, pre-processing the face region, extracting facial expression features by analysing the motion of facial features or the change in their appearance, and classifying this information into facial expression categories. Facial expression recognition has always been a very challenging task in real-life applications because of variations in illumination, pose and occlusion. This paper presents a methodology for facial expression recognition.

Keywords: Face Detection, Facial Expression Recognition (FER), Feature Extraction, Facial Action Coding System, Action Units (AU)

I. INTRODUCTION

The human face is a very useful and powerful source of communicative information about human behaviour. It provides information about human personality, emotions and thoughts. Facial expression provides sensitive cues about emotional response and plays a major role in human interaction and nonverbal communication [1]. It can complement verbal communication, or can convey complete thoughts by itself. Facial expression analysis has attracted considerable attention in the advancement of the human-machine interface since it provides a natural and efficient way for humans to communicate [2].
Some application areas related to the face and its expressions include personal identification and access control, videophone and teleconferencing, forensic applications, and human-computer interaction [5]. Most facial expression recognition methods reported to date focus on expression categories such as happy, sad, fear and anger. A system that can recognize AUs in real time without human intervention is desirable for various application fields, including automated tools for behavioural research, videoconferencing, affective computing, perceptual human-machine interfaces, 3D face reconstruction and animation, and others [8].

2014, IJCSMC All Rights Reserved

II. LITERATURE REVIEW

Several approaches have been taken in the literature to learning classifiers for emotion recognition [6]. From the feature extraction point of view, techniques aiming at the recognition of facial expression can be categorized into methods that use appearance features and methods that use geometric features, although hybrid approaches are also possible. In appearance-based methods, the face images are processed by an image filter or filter bank, applied to the whole face or to specific regions, to extract changes in facial appearance. Typically, the entire facial image or some specific facial regions are convolved with filters, e.g. Gabor wavelets, and the extracted filter responses at the fiducial points (most of the time manually selected) form vectors that are further used for facial expression classification. In geometric feature extraction, the shape, distances, angles or coordinates of the fiducial points form a feature vector that represents facial geometry. The highest recognition rates appear to have been obtained when both types of features were combined [6].

From the temporal perspective, facial expression recognition techniques are divided into static approaches (typically using still images) and dynamic approaches (using image sequences). Another division is between global techniques, which analyze the texture of the whole face without explicit knowledge about the location of individual facial features, and local approaches, which try to extract local features of the face or to fit a holistic face model containing a set of feature points to the face.

Description of Approaches. The Facial Action Coding System (FACS) developed by Ekman [13] is a source of inspiration for many research papers. The first paper analysed presents a method for recognizing emotions from facial expressions in a video sequence.
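As a rough illustration of the appearance-based pipeline described above, the sketch below builds a small bank of Gabor kernels and samples the filter responses at a few fiducial points. This is a minimal NumPy sketch; the function names and parameter values are illustrative assumptions, not taken from any of the surveyed papers.

```python
import numpy as np

def gabor_kernel(size=15, theta=0.0, lam=6.0, sigma=3.0, gamma=0.5):
    """Real part of a Gabor filter: a sinusoid modulated by a Gaussian."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates by the filter orientation theta.
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / lam)
    return envelope * carrier

def gabor_responses(image, points, thetas):
    """Filter `image` with one kernel per orientation and sample the
    responses at the given fiducial (row, col) points."""
    feats = []
    for theta in thetas:
        k = gabor_kernel(theta=theta)
        half = k.shape[0] // 2
        padded = np.pad(image, half, mode='edge')
        for (r, c) in points:
            # (r, c) in the original image is the centre of this patch.
            patch = padded[r:r + k.shape[0], c:c + k.shape[1]]
            feats.append(float((patch * k).sum()))
    return np.array(feats)
```

In a real system the points would come from a fiducial-point detector and several scales `lam` would be used; concatenating the responses over orientations and points yields the feature vector passed to the classifier.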
The novelty of this work consists in introducing the Tree-Augmented Naive Bayes (TAN) classifier, which incorporates the dependencies between features. This is in opposition to the Naive Bayes (NB) assumption, in which features are considered independent, as in the work of Sebe et al. [9]. The method presented by Ma and Khorasani [10] is based on a combination of the two-dimensional discrete cosine transform (2D-DCT) and a constructive one-hidden-layer feedforward neural network. The novelty of the approach comes from the proposed pruning technique, which substantially reduces the size of the neural network while improving the generalization capability and the recognition rate. A novel low-computation discriminative feature space is introduced in [11] by Shan et al. It is based on Local Binary Pattern (LBP) features, which can be extracted quickly. Moreover, it is shown that LBP features are robust to low resolution. For the classification stage, template matching and Support Vector Machines (SVM) were considered and compared, and a comparison with the previously mentioned work of Cohen et al. [8] (geometric features + TAN) showed the superiority of the proposed technique.

Until 2006, little work had been done on 3D-based facial expression recognition. One of the first attempts is the work of Zeng et al. [12], who employed a 3D face tracker for feature extraction. In the same year, Wang et al. [13] classified the prototypic facial expressions by extracting primitive 3D facial expression features and calculating the feature distribution. Other innovative 3D solutions are the two approaches of Dornaika et al. [14] and Kotsia & Pitas [15], which use AUs together with a 3D face model named Candide, initially proposed by Ahlberg [16].
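The LBP features mentioned above can be sketched as follows: each pixel is encoded by thresholding its eight neighbours against the centre, and the resulting codes are pooled into a histogram that serves as the texture descriptor. This is a minimal NumPy version of the basic 3x3 operator with a 256-bin histogram; practical systems (e.g. Shan et al.) typically use uniform patterns computed over image sub-blocks, and the function name here is our own.

```python
import numpy as np

def lbp_histogram(image, bins=256):
    """Basic 3x3 Local Binary Patterns pooled into a normalised histogram."""
    img = np.asarray(image, dtype=np.int32)
    center = img[1:-1, 1:-1]
    code = np.zeros_like(center)
    # Neighbour offsets, clockwise from the top-left pixel; each neighbour
    # contributes one bit of the 8-bit LBP code.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dr, dc) in enumerate(offsets):
        neigh = img[1 + dr:img.shape[0] - 1 + dr,
                    1 + dc:img.shape[1] - 1 + dc]
        code |= ((neigh >= center).astype(np.int32) << bit)
    hist, _ = np.histogram(code, bins=bins, range=(0, bins))
    return hist / hist.sum()
```

Because the descriptor is a histogram of local codes, it is insensitive to monotonic gray-level changes, which is one reason LBP features hold up well at low resolution.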
Some of the most important advantages of these techniques are texture independence, view independence (since the tracker simultaneously provides the 3D head pose and the facial actions), and a simple learning phase which only needs to fit second-order auto-regressive models to sequences of facial actions. For an excellent 3D facial expression survey, please consult [15]. One of the most interesting approaches is presented by Panning in [11]. The authors propose an approach that is novel from multiple perspectives: it uses facial feature detection in colour image sequences, and the feature extraction is initialized without manual input. For expression classification, a three-layer feedforward artificial neural network is employed. In the next work, Buciu et al. [11] present a comparison of ICA approaches, with the classification stage implemented using either a Cosine Similarity Measure (CSM) or an SVM classifier.

III. BASIC METHODOLOGY

This paper describes the basic methods of human facial expression recognition, as follows.

A. Face Detection and Tracking

The first step in facial expression analysis is to detect the face in the given image or frame and then follow it across different frames of a video. Locating the face within an image is termed face detection, while locating and tracking it across multiple frames is termed face tracking. Face detection and tracking algorithms originate from feature extraction algorithms, which look for a certain representation within an image. One of the most popular methods developed to detect and track faces is the Kanade-Lucas-Tomasi (KLT) tracker [15]. In earlier studies, Kanade and Lucas developed a feature extraction algorithm [10] which matches two images for stereo matching, assuming that the second frame in a continuous sequence of images is a translation of the first because of the small inter-frame motion. Their implementation can successfully determine the distance from the camera to the object and can also estimate brightness, contrast and five other camera parameters. In the presence of human supervision the system worked very well, but the procedure was also prone to errors, so Tomasi and Kanade updated the feature extraction algorithm [15] so that it iterates a few times over the basic solution and converges to a fast and simple solution. They define a feature as good based on how well it can be tracked; by construction, their good-feature selection criterion is thus the optimal one. They represented a feature as a function of three variables x, y and t, where x and y are the spatial coordinates and t is time. They experimented with a stream of 100 frames showing the surfaces of different objects (for example, furry puppets and mugs) and found that the results from surface markings are very accurate, typically within one tenth of a pixel or better. As a result, this technique is well suited for motion and shape determination.

Skin colour plays a vital role in differentiating human and non-human faces. From the study it is observed that skin-colour pixels have decimal values in the range 120 to 140. In this project, we widen the skin-colour pixel range to 120 to 180, which gives better accuracy than the previous range. We used a trial-and-error method to separate skin-colour and non-skin-colour pixels. However, the system often fails to decide whether an image contains a human face (for example, for images with a skin-coloured background) once the image is segmented into skin-colour and non-skin-colour pixels.

B. Feature Extraction

To develop any new detection system, the choice of database is very important [2].
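The intensity thresholding described in the tracking section above can be sketched as a simple mask. This is a hedged illustration only: the 120-180 range is the one quoted in the text, the function names and the minimum-fraction heuristic are our own assumptions, and practical skin detectors usually threshold chrominance channels (e.g. Cb/Cr in YCbCr) rather than raw gray levels.

```python
import numpy as np

def skin_mask(gray, lo=120, hi=180):
    """Label pixels whose intensity falls in the quoted skin range.
    Crude sketch of the thresholding described in the text; real
    detectors work in a colour space, not on raw intensity."""
    g = np.asarray(gray)
    return (g >= lo) & (g <= hi)

def contains_face_candidate(gray, min_fraction=0.05):
    """Flag the image as possibly containing a face when enough pixels
    fall inside the skin range (illustrative heuristic)."""
    return skin_mask(gray).mean() >= min_fraction
```

The failure mode noted in the text is visible directly in this sketch: a skin-coloured background pushes the mask fraction up regardless of whether a face is present.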
If all researchers used a common database, it would be very easy to test a new detection system and compare it against existing systems, so a lot of effort is put into building good databases. After the database is built, researchers have used a feature-based approach in which the permanent and transient features of the face are tracked separately [1]. Permanent features, for example the eyes, can be tracked with an eye tracker and the lips with a lip tracker, while edge detection methods are used for transient features such as wrinkles. All of this, however, depends on the availability of a good database. The human face is made up of the eyes, nose, mouth, chin and so on, and these organs differ in shape, size and structure, so faces differ in countless ways. One common method for facial expression recognition is to extract the shapes of the eyes and mouth and then distinguish faces by the distances and scales of these organs.

C. Expression Classification

After face detection and feature extraction, the final piece of the puzzle in a facial expression system is a good classification module that classifies the extracted features into particular expressions. We cover some recent research on different facial expression classifiers. Among the static classifiers, the naive Bayes classifier assumes all features are conditionally independent, which is rarely the case in real life. In a tree-based classifier, on the other hand, each feature has at most one other feature as a parent, resulting in a tree structure. For example, in text, the probability of the word "you" appearing after the word "thank" is higher than that of other words, but a naive Bayes classifier does not account for this. Cohen et al. [5] noted that this property applies to facial expression recognition as well, and found the tree-augmented naive Bayes classifier to perform better.
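The geometric cue just described, distances and scales between facial organs, can be sketched as a pairwise-distance descriptor normalised by the inter-ocular distance so that it is scale-invariant. The landmark names below are illustrative assumptions, not a standard annotation scheme.

```python
import numpy as np

def geometric_features(landmarks):
    """Pairwise distances between fiducial points, normalised by the
    inter-ocular distance. `landmarks` maps a name to an (x, y) point;
    'left_eye' and 'right_eye' are assumed present for normalisation."""
    pts = {k: np.asarray(v, dtype=float) for k, v in landmarks.items()}
    iod = np.linalg.norm(pts['left_eye'] - pts['right_eye'])
    names = sorted(pts)
    feats = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            # Each pair contributes one normalised distance.
            feats.append(np.linalg.norm(pts[names[i]] - pts[names[j]]) / iod)
    return np.array(feats)
```

Dividing by the inter-ocular distance removes the effect of face size and camera distance, so the descriptor captures only the relative geometry of the organs.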
They did not fix the tree to any particular structure; rather, they developed an algorithm that finds the optimal structure. Their experimental methodology consists of two types of test: person-dependent, where part of the data for each subject is used as training data, and person-independent, where all but one subject are used to train the system and the left-out subject is used for the classification test. The results show that static classifiers perform poorly when the video feed is person-dependent, because dynamic classifiers take into account the differences in temporal patterns as well as the change in an expression's appearance across individuals. Finally, they integrated these classifiers to build a real-time facial expression recognition system.

The set of features extracted from the face region is used in the classification stage to describe the facial expression. Classification requires supervised training, so the training set should consist of labeled data. Once the classifier is trained, it can recognize input images by assigning them a particular class label. Facial expression classification is most commonly done either in terms of Action Units, proposed in the Facial Action Coding System (FACS) [12], or in terms of the six universal emotions defined by Ekman [13]: happiness, sadness, anger, surprise, disgust and fear.
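The conditional-independence assumption discussed above can be made concrete with a from-scratch Gaussian naive Bayes sketch: each feature is modelled as an independent Gaussian per expression class, and the class log-likelihood is just the sum of per-feature terms. This is a generic illustration of the assumption, not the classifier used in any particular surveyed paper.

```python
import numpy as np

class GaussianNB:
    """Minimal Gaussian naive Bayes: one independent Gaussian per
    feature and class, mirroring the conditional-independence
    assumption criticised in the text."""

    def fit(self, X, y):
        X, y = np.asarray(X, float), np.asarray(y)
        self.classes_ = np.unique(y)
        self.mu_, self.var_, self.prior_ = {}, {}, {}
        for c in self.classes_:
            Xc = X[y == c]
            self.mu_[c] = Xc.mean(axis=0)
            self.var_[c] = Xc.var(axis=0) + 1e-9  # guard against zero variance
            self.prior_[c] = len(Xc) / len(X)
        return self

    def predict(self, X):
        X = np.asarray(X, float)
        scores = []
        for c in self.classes_:
            # Summing per-feature log-likelihoods IS the independence assumption.
            ll = -0.5 * np.sum(np.log(2 * np.pi * self.var_[c])
                               + (X - self.mu_[c]) ** 2 / self.var_[c], axis=1)
            scores.append(ll + np.log(self.prior_[c]))
        return self.classes_[np.argmax(scores, axis=0)]
```

A TAN classifier would replace the per-feature sum with terms conditioned on each feature's parent in a learned tree, which is exactly the dependency structure the naive model throws away.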

1) Facial Action Coding System (FACS): The Facial Action Coding System (FACS), developed by Paul Ekman and Wallace Friesen in 1976, is a system for measuring facial expression. FACS is based on the analysis of the relations between muscle contractions and changes in the appearance of the face. It is a common standard for systematically categorizing physical expressions, and it has been useful to psychologists and animators alike. The face can be divided into upper-face and lower-face Action Units [13], and the corresponding expressions are also identified; fig 5 shows some of the combined Action Units. Muscle contractions caused by an expression are marked as Action Units (AUs): changes in the face caused by one muscle or a combination of muscles. Expression analysis with FACS is based on decomposing the observed expression into a set of Action Units. There are 46 AUs that represent changes in facial expression and 12 AUs connected with eye-gaze direction and head orientation. Action Units are highly descriptive in terms of facial movements; however, they do not provide any information about the message they convey. AUs are labeled with a description of the action.

2) Prototypic Facial Expressions: Instead of describing detailed facial features, most facial expression recognition systems attempt to recognize a small set of prototypic emotional expressions. According to Ekman's theory [13], there are six basic emotion expressions that are universal across nations and cultures: anger, disgust, fear, happiness, sadness and surprise. There are many machine learning techniques for the classification task, namely: K-Nearest Neighbors [10], Artificial Neural Networks [11], Support Vector Machines [3], Hidden Markov Models [3], and boosting techniques such as the AdaBoost classifier [2].

IV. CONCLUSIONS

This paper has briefly overviewed the methodology of facial expression recognition. Humans detect and identify faces and facial expressions in a scene with little or no effort. Feature extraction is an important stage of an expression recognition system because the extracted features feed the classification stage. Feature extraction using geometric features is more difficult because it depends on the shapes and sizes of the features, so appearance-based features are easier to extract.

ACKNOWLEDGEMENT

First I would like to express my sincere gratitude to Prof. Anurag Jain for his valuable guidance, encouragement and optimism. I also wish to thank all who directly or indirectly helped me and provided all the needed facilities during this work.

REFERENCES
[1] Nidhi N. Khatri, Zankhana H. Shah, and Samip A. Patel, "Facial Expression Recognition: A Survey", Vol. 5 (1), 2014, pp. 149-152.
[2] Cătălin-Daniel Căleanu, "Face Expression Recognition: a Brief Overview of the Last Decade", 8th IEEE International Symposium on Applied Computational Intelligence and Informatics, May 23-25, 2013, Timisoara, Romania.
[3] Maja Pantic, Leon J. M. Rothkrantz, "Automatic Analysis of Facial Expressions: The State of the Art", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 22, No. 12, December 2000.
[4] Shilpa Choudhary, Kamlesh Lakhwani, Shubhlakshmi Agrwal, "An Efficient Hybrid Technique of Feature Extraction for Facial Expression Recognition Using AdaBoost Classifier", International Journal of Engineering Research & Technology, Vol. 1, Issue 8, October 2012.
[5] Shuai-Shi Liu, Yan-Tao Tian, Dong Li, "New Research Advances of Facial Expression Recognition", IEEE Proceedings of the Eighth International Conference on Machine Learning and Cybernetics, Baoding, 12-15 July 2009.
[6] Aruna Bhadu, Rajbala Tokas, Dr. Vijay Kumar, "Facial Expression Recognition Using DCT, Gabor and Wavelet Feature Extraction Techniques", International Journal of Engineering and Innovative Technology, Volume 2, Issue 1, July 2012.
[7] Senthil Ragavan Valayapalayam Kittusamy and Venkatesh Chakrapani, "Facial Expressions Recognition Using Eigenspaces", Journal of Computer Science 8, 2012.
[8] Jung-Wei Hong and Kai-Tai Song, "Facial Expression Recognition Under Illumination Variation", IEEE, 2007; Shiqing Zhang, Xiaoming Zhao, Bicheng Lei, "Facial Expression Recognition Based on Local Binary Patterns and Local Fisher Discriminant Analysis", WSEAS Transactions on Signal Processing, January 2012.
[9] B. J. Matuszewski, W. Quan, L.-K. Shark, "Facial Expression Recognition", in Biometrics: Unique and Diverse Applications in Nature, Science, and Technology, M. Albert (Ed.), InTech, 2011.

[10] C. Shan, S. Gong, and P. W. McOwan, "Robust Facial Expression Recognition Using Local Binary Patterns", Proc. IEEE International Conference on Image Processing (ICIP'05), Genova, Italy, September 2005.
[11] Z. Zeng, Y. Fu, G. I. Roisman, Z. Wen, Y. Hu, and T. S. Huang, "Spontaneous Emotional Facial Expression Detection", Journal of Multimedia, vol. 1, no. 5, pp. 1-8, 2006.
[12] J. Wang, L. Yin, X. Wei, and Y. Sun, "3D facial expression recognition based on primitive surface feature distribution", in Proc. Conf. Computer Vision and Pattern Recognition, 2006, pp. 1399-1406.
[13] P. Ekman and W. Friesen, Facial Action Coding System, Consulting Psychologists Press, 1977.
[14] I. Kotsia, I. Pitas, "Facial Expression Recognition in Image Sequences Using Geometric Deformation Features and Support Vector Machines", IEEE Transactions on Image Processing, vol. 16, issue 1, pp. 172-187, Jan. 2007.
[15] J. Ahlberg, "CANDIDE-3: an updated parameterized face", Report No. LiTH-ISY-R-2326, Dept. of Electrical Engineering, Linköping University, Sweden, 2001.
[16] F. Dornaika and B. Raducanu, "Inferring facial expressions from videos: Tool and application", Image Commun., vol. 22, pp. 769-784, October 2007.