Facial Expression Recognition in Real Time


Jaya Prakash S M 1, Santhosh Kumar K L 2, Jharna Majumdar 3
1 M.Tech Scholar, Department of CSE, Nitte Meenakshi Institute of Technology, Bangalore, India
2 Assistant Professor, Department of CSE (PG), Nitte Meenakshi Institute of Technology, Bangalore, India
3 Dean R&D, Professor and Head, CSE (PG), Nitte Meenakshi Institute of Technology, Bangalore, India

Abstract: Automatic recognition of facial expressions is a necessary step toward the design of more natural human-computer interaction systems. This work presents an approach for recognizing facial expressions in real-time image or video sequences. The facial features are tracked using SDM (Supervised Descent Method), and the expressions in the video sequences are classified using rule-based and SVM classification methods. For SVM training we used the TFEID (Taiwanese Facial Expression Image Database). We tested our approach and achieved good results.

Keywords: Face Expression Recognition, Face Feature Extraction, Classification

1. Introduction

Facial expressions are one of the most important ways for humans to transmit and recognize feelings and intentions. Paul Ekman et al. [14] suggested that all emotions belong to a small set of categories. These basic emotions, such as anger, disgust, fear, happiness, sadness, and surprise, are expressed by the same facial movements across different cultures and therefore represent an appealing choice when designing automatic methods for facial expression classification.

Automated facial image analysis confronts a series of challenges. The face and facial features must be detected in video; shape or appearance information must be extracted and then normalized for variation in pose, illumination, and individual differences; the resulting normalized features are used to segment and classify facial actions. Partial occlusion is a frequent challenge that may be intermittent or continuous.
While human observers easily accommodate changes in pose, scale, illumination, occlusion, and individual differences, these sources of variation represent considerable challenges for computer vision.

2. Related Work

Liu et al. [2] proposed a geometric warping algorithm in conjunction with the Expression Ratio Image (the ratio between the neutral image and the image of a given expression) to synthesize new expressions while preserving subtle details such as wrinkles and cast shadows. Zhang et al. [3] synthesized facial expressions using a local face model: each region of the face was reconstructed as a convex combination of the corresponding regions in the training set, and the synthesized face regions were later blended along the region boundaries. Regression-based approaches find the solution as a weighted combination of the training data; however, it is unclear how a combination of training data can reproduce subtle local appearance features present only in the testing samples, such as wrinkles, glasses, beards, or pimples. Nguyen et al. [4] used extensions of Principal Component Analysis (PCA) to remove glasses and beards in images, and used regression techniques to fill in the missing information. Bettinger et al. [5] used a sampled mean shift and a variable-length Markov model to generate person-specific sequences of facial expressions. Zalewski et al. [6] clustered the shape and texture components with a mixture of probabilistic PCA; each cluster corresponds to a facial expression, and the clusters are used for facial expression synthesis. Chang et al. [7] introduced a probabilistic model to learn a nonlinear dynamical model on a manifold of expressions containing the neutral and six universal expressions.

3. Proposed Method

We propose a new method for recognizing facial expressions. Our method contains three phases:
1. Face Detection
2. Face Feature Extraction
3. Classification of Facial Expression
In this paper we use SDM (Supervised Descent Method) for facial landmark tracking.

3.1 Face Detection

The system takes video input from the camera and uses the OpenCV library to analyze it. If a face is detected in the video, the OpenCV library returns the coordinates of the face. OpenCV uses a type of face detector called a Haar cascade classifier. Given a frame, from a stored file or live video, the face detector examines each image location and classifies it as "Face" or "Not Face." Classification assumes a fixed scale for the face, say 50x50 pixels. Since faces in an image might be smaller or larger than this, the classifier runs over the image several times to search for faces across a range of scales. The classification is very fast, even when applied at several scales. The classifier uses data stored in an XML file to decide how to classify each image location.

Paper ID: 02015110 2318
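As a concrete illustration of the fixed-scale, multi-scale search just described, the sketch below slides a fixed-size window over a rescaled image pyramid. Here `looks_like_face` is a hypothetical stub standing in for OpenCV's trained Haar cascade, and a 4x4 window replaces the 50x50 one for brevity; this is a minimal sketch of the search strategy, not the cascade itself.

```python
def looks_like_face(window):
    # Hypothetical stub: real code would evaluate the trained Haar
    # cascade stages stored in the XML file at this point.
    flat = [v for row in window for v in row]
    return sum(flat) / len(flat) > 200

def downscale(image, factor):
    # Nearest-neighbour downscaling by an integer factor.
    return [row[::factor] for row in image[::factor]]

def detect_faces(image, win=4, factors=(1, 2)):
    # Slide the fixed-size window over every location at each scale,
    # just as the cascade search reruns its fixed-scale classifier
    # over a pyramid of rescaled images.
    hits = []
    for f in factors:
        scaled = downscale(image, f)
        h, w = len(scaled), len(scaled[0])
        for y in range(h - win + 1):
            for x in range(w - win + 1):
                window = [row[x:x + win] for row in scaled[y:y + win]]
                if looks_like_face(window):
                    hits.append((x * f, y * f, win * f))  # original coords
    return hits
```

Because each scale only rescales the image and reuses the same fixed-size classifier, adding scales multiplies the number of window evaluations but leaves the classifier unchanged, which is why the search stays fast.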

International Journal of Science and Research (IJSR)

3.2 Face Feature Extraction

3.2.1 Landmarks Extraction
The facial landmarks are extracted using the SDM (Supervised Descent Method) [1], which takes the face coordinates detected by OpenCV as input. In the first frame the method initializes the landmarks; it then tracks them as shown in Figure 1. Normalization is performed on the obtained landmarks [13].

Figure 1: Detection of Facial Landmarks

3.2.2 Normalization Process
First, the coordinates of the center pixel of each eye are computed as the mean of the corresponding landmarks (0 to 7 for the left eye and 8 to 12 for the right eye). Second, the inter-eye distance d is computed and all landmark coordinates are multiplied by 50/d. This sets the inter-eye distance to 50 pixels.

3.2.3 SDM (Supervised Descent Method)
The Supervised Descent Method (SDM) [1] is used to localize the facial landmarks. It starts from training images with manually labeled landmarks and an initial configuration of landmarks, described with SIFT features. In the training phase, SDM minimizes the difference Δx between the manually labeled landmarks and the initially located ones. The method does not learn any shape or appearance model in advance from the training data; instead, it learns a series of descent directions and re-scaling factors that produce a sequence of updates, by fitting a linear regression between Δx and ΔΦ, the difference between the SIFT values at the manually labeled and the extracted landmarks. In the testing phase, SDM estimates the landmarks using the descent directions and re-scaling factors learned in training. SDM learns the descent directions without computing either the Jacobian or the Hessian matrix, which are computationally expensive, and it is fast and accurate compared with shape-model approaches.

3.3 Classification of Facial Expression

The RBF kernel nonlinearly maps samples into a higher-dimensional space and, unlike the linear kernel, can handle the case where the relation between class labels and attributes is nonlinear. The linear kernel is a special case of the RBF kernel, and the polynomial kernel has more hyperparameters than the RBF kernel. Most of the computational overhead resides in the training phase; however, because the training set is interactively created by the user, and hence limited in size, and the individual training examples are of constant and small size, the overhead is low for typical training runs. Training on different individuals makes the model more person-independent. For training we used five features: the distance between the lip end points, the distance between the two eyebrows, the distance between the left eye and the left eyebrow, the distance between the right eye and the right eyebrow, and the distance between the inner lips. These are used to recognize four expressions: Neutral, Smile, Sad, and Surprise. Figure 2 shows the flow chart of the algorithm using facial landmarks and SVM classification.

3.3.1 Multi-Class SVM
LibSVM [12] is used to train the model on the dataset; an evaluator then classifies the emotions based on the learned model. The steps for LibSVM are:
1. Transform the data to the format of an SVM package.
2. Conduct simple scaling on the data.
3. Consider the RBF kernel K(x, y) = exp(-γ||x - y||²).
4. Use cross-validation to find the best parameters C and γ.
5. Use the best C and γ to train on the whole training set.

3.3.2 Algorithm for Recognizing the Facial Expression
Input: real-time input from a normal camera, or a video.
Output: the expression of the human standing in front of the camera or appearing in the video.
Steps to identify the expression of the human:
1. Identify the face using the OpenCV face detector function.
2.
Apply the SDM (Supervised Descent Method) to initialize and track the landmarks.
3. Normalize the landmarks to 50 or 100 pixels.
4. Calculate the distance metric using the Euclidean distance, D(p, q) = sqrt((p1 - q1)² + (p2 - q2)²), for the following:
   a. between the lip end points;
   b. between the inner lips;
   c. between the eyes and the eyebrows.
5. Apply SVM classification or rule-based classification to the distances calculated in Step 4 to classify the expression as Surprise, Smile, Sad, or Normal.

We used the multiclass LibSVM tool to classify the expressions (Normal, Sad, Smile, and Surprise), using distance as the metric: the distances between the landmarks of interest are calculated and used to train the SVM. The TFEID (Taiwanese Facial Expression Image Database) is used to train the SVM, and the calculated distances are then used to predict the expression. The following rules are applied to classify the expression using rule-based classification:

IF (distance between lip end points > 48) THEN Expression = Smile
IF (distance between eyes and eyebrows > 15 AND distance between inner lips > 20) THEN Expression = Surprise

IF (distance between eyes and eyebrows < 10 AND distance between inner lips < 4) THEN Expression = Sad

Figure 2: Flow chart for the Facial Expression Recognition System

4. Experimental Results

The face expression recognition system was written in Windows Presentation Foundation technology using VC++. The application code was written and compiled with Visual Studio Professional 2010, and development took place on a computer running Windows 7. Tables 1 and 2 show the expressions of the person recognized using facial landmarks and rule-based classification, and Tables 3 and 4 show the expressions recognized using facial landmarks and SVM classification. Table 5 shows the result analysis for the SVM classification, and Table 6 shows the result analysis for the rule-based classification.

Table 1: Set I Results of Rule-based Classifier

5. Conclusion

In this paper we presented a facial expression recognition system based on rule-based and SVM classification. The experimental results show that the SVM classifier gives satisfactory results for all expression types (Normal, Sad, Surprise, and Smile) compared to the rule-based classifier.
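For concreteness, the normalization step of Section 3.2.2 and the distance metric and rule set of Section 3.3.2 can be sketched as follows. The landmark representation as (x, y) tuples and the eye-index lists are illustrative assumptions, and the fall-through to "Normal" is also an assumption, since the text gives no explicit rule for the neutral case.

```python
import math

def normalize_landmarks(landmarks, left_eye_idx, right_eye_idx, target=50.0):
    # Section 3.2.2: eye centres as the mean of each eye's landmarks, then
    # rescale all landmarks so the inter-eye distance becomes `target` px.
    def centre(idx):
        return (sum(landmarks[i][0] for i in idx) / len(idx),
                sum(landmarks[i][1] for i in idx) / len(idx))
    (lx, ly), (rx, ry) = centre(left_eye_idx), centre(right_eye_idx)
    scale = target / math.hypot(rx - lx, ry - ly)
    return [(x * scale, y * scale) for x, y in landmarks]

def dist(p, q):
    # Step 4: Euclidean distance D(p, q) = sqrt((p1-q1)^2 + (p2-q2)^2).
    return math.sqrt((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)

def classify_expression(lip_ends, inner_lips, eyes_brows):
    # Step 5, rule-based branch, with the thresholds quoted in the text.
    # The distances are assumed to be in normalized (inter-eye = 50 px)
    # units; returning "Normal" when no rule fires is an assumption.
    if lip_ends > 48:
        return "Smile"
    if eyes_brows > 15 and inner_lips > 20:
        return "Surprise"
    if eyes_brows < 10 and inner_lips < 4:
        return "Sad"
    return "Normal"
```

Because all landmarks are scaled by the same factor 50/d, the pixel thresholds (48, 20, 15, 10, 4) remain meaningful across faces of different sizes in the frame.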

Table 2: Set II Results of Rule-based Classifier

Table 3: Set I Results of SVM Classifier

Table 4: Set II Results of SVM Classifier

Table 5: Result Analysis of SVM Classification

             Normal    Sad       Surprise   Smile
Normal       63.33%    16.66%    10.66%     9.35%
Sad          23%       66.33%    17.33%     -
Surprise     -         -         93.33%     6.6%
Smile        6.6%      -         10%        83.33%

Table 6: Result Analysis of Rule-Based Classification

             Normal    Sad       Surprise   Smile
Normal       56.25%    12.15%    31.6%      -
Sad          37.75%    62.5%     -          -
Surprise     -         -         87.75%     12.5%
Smile        -         -         18.75%     81.25%

References

[1] Xiong, X., De la Torre, F.: Supervised Descent Method and its Applications to Face Alignment. In: CVPR (2013)
[2] Liu, Z., Shan, Y., Zhang, Z.: Expressive expression mapping with ratio images. In: Proc. of Ann. Conf. on Computer Graphics and Interactive Techniques (2001)
[3] Zhang, Q., Liu, Z., Guo, B., Shum, H.: Geometry-driven photorealistic facial expression synthesis. IEEE Trans. Visualization and Computer Graphics 12, 48-60 (2006)
[4] Nguyen, M., Lalonde, J., Efros, A., De la Torre, F.: Image-based shaving. Computer Graphics Forum (Eurographics) 27, 627-635 (2008)
[5] Bettinger, F., Cootes, T., Taylor, C.: Modelling facial behaviours. In: BMVC, vol. 2 (2002)
[6] Zalewski, L., Gong, S.: Synthesis and recognition of facial expressions in virtual 3D views. In: AFGR (2004)
[7] Chang, Y., Hu, C., Feris, R., Turk, M.: Manifold based analysis of facial expression. Image and Vision Computing 24, 605-614 (2005)
[8] Kouadio, C., Poulin, P., Lachapelle, P.: Real-time facial animation based upon a bank of 3D facial expressions. In: Proc. of Computer Animation (1998)
[9] Pighin, F., Szeliski, R., Salesin, D.: Resynthesizing facial animation through 3D model-based tracking. In: ICCV (1999)
[10] Pyun, H., Kim, Y., Chae, W., Kang, H., Shin, S.: An example-based approach for facial expression cloning. In: SIGGRAPH/Eurographics SCA, pp. 167-176 (2003)
[11] Parke, F.I., Waters, K.: Computer Facial Animation. AK Peters, Wellesley (1996)
[12] Chang, C.-C., Lin, C.-J.: LIBSVM: a library for support vector machines (2001). Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm
[13] Masip, D., North, M.S., Todorov, A., Osherson, D.N.: Automated Prediction of Preferences Using Facial Expressions. PLOS ONE 9(2), e87434 (2014)
[14] Ekman, P., Friesen, W.V.: Facial Action Coding System: Investigator's Guide. Consulting Psychologists Press, Palo Alto, CA (1978)