Automatic Facial Expression Recognition Using Neural Network


Behrang Yousef Asr Langeroodi, Kaveh Kia Kojouri
Electrical Engineering Department, Guilan University, Rasht, Guilan, IRAN
Electronic Engineering Department, Islamic Azad University of Nowshahr, Nowshahr, Mazandaran, IRAN

Abstract - Computer intelligent reaction is a way for humans to use the computer as an assistant. For mutual interaction between computer and human, the computer should learn human skills. An important skill is expression recognition (Sadness, Happiness, Surprise and Anger). Automatic recognition of facial expressions can be an important component of human-machine interfaces; it may also be used in behavioral science. This paper presents an automated facial expression recognition system using neural network classifiers. First, all pictures are normalized manually; then we use vertical and horizontal projections, the Canny edge detector, mathematical morphology and a point contour detection method to extract 30 facial characteristic points. Finally, four facial expressions are classified with a back-propagation feed-forward network. The system accuracy rate is 92.5%.

Keywords: Facial Expression; Feature Extraction; Face feature localization; Neural Network; Template Matching; Face projection.

1 Introduction

Automatic facial expression recognition has many potential applications in areas such as human-computer interaction (HCI), emotion analysis, interactive video, indexing and retrieval of image and video data, image understanding, and synthetic face animation. Due to the increasing importance of computers in everyday life [1], HCI has become very important in today's society. Most current HCI techniques rely on modalities such as key press, mouse movement, or speech input, and therefore do not provide natural human-to-human-like communication. The information contained in facial expressions, eye movement, hand movement, etc., is usually ignored.
Developing a system which could detect the presence of humans (using face detection), determine their identity (using face, voice, or audio-visual person recognition), and understand their behavior (using facial expression analysis, audio-visual speech recognition, etc.) in order to respond to their needs or requests would significantly improve the performance of HCI systems. Automatic facial expression analysis is an important part of such a system. Human faces contain significant information about emotions and the mental state of a person that can be utilized to enable non-verbal communication with computers. Scientific study of facial expressions began with the team led by Ekman [2]. They analyzed six facial expressions: surprise, fear, disgust, anger, happiness, and sadness. Each expression was summarized by distinctive clues in the appearance of the eyebrows, eyes, mouth, jaw, etc. These facial expression clues were further investigated and encoded into the so-called Facial Action Coding System (FACS), which describes all visually distinguishable facial movements. FACS enumerates Action Units (AUs) of a face that cause facial movements; there are 46 AUs that account for changes in facial expression, and their combinations yield a large set of possible facial expressions. This paper proposes an automated facial expression recognition system using a neural network. The rest of this paper is organized as follows. In Section 2, the feature extraction modules for facial expression recognition are introduced: first, vertical and horizontal projections are used to obtain the boundaries of the facial features, including the eyebrows, eyes, and mouth; then the Canny and morphological edge detectors for precise local estimation of the eyes and the mouth are introduced, followed by the facial characteristic points of a face and their normalization.
Next, the facial expression classifier, implemented by a feed-forward back-propagation network, is proposed using these 30 points as inputs. The data is presented in Section 4. Finally, the conclusion is given in Section 5.

2 Feature extraction

Facial feature extraction is based on the observation that facial features differ from the rest of the face [3]. Therefore, facial features are located by searching for minima in the projection of pixel gray values. There are two kinds of projection, X-projection and Y-projection, computed by averaging the pixel values of the segmented face region along the vertical direction (columns) and along the horizontal direction (rows), respectively. It is assumed that each facial feature generates a minimum in the Y-projection and has particular X-projection characteristics. Fig. 1 shows the Y-projection of a sample image, in which each minimum is related to a facial feature, together with the corresponding X-projections. Note that the nose line is the line that passes through the nose; it acts as the Y-axis of the face. It is expected that the first significant minimum of the Y-projection corresponds to the eyebrows, the second to the eyes, the third to the nostrils, the fourth to the mouth, and the last to the chin [4]. The position of a minimum in the Y-projection, together with the shape of the corresponding X-projection, is used to extract each facial feature. In the Y-projection, the number of minima is usually greater than the number of features; thus, for robustness, we require a minimum distance between a minimum point and the previous maximum in the Y-projection. Furthermore, the relative positions of the facial features with respect to each other are used as well. After extensive experimentation with a training set of face images, a characteristic (typical) X-projection is derived for each facial feature. For the eyes and the mouth, however, the X-projection of the edge image (Canny detector) is used to obtain more accuracy; the distance between two maxima, with a constant margin, then determines the horizontal area of the eyes and the mouth. The result is shown in Fig. 2 [5].

Figure 1. Y- and X-projections for a sample image.

Figure 2. X-projection for mouth and eyes.

Fig. 3 shows the segmentation result for the eyebrows, eyes and mouth. White rectangles are drawn around the eyes, mouth and eyebrows; the vertical size of the rectangles is obtained experimentally. Next, 30 facial feature points are extracted from the face (8 points for each eye, 3 points for each eyebrow and 8 points for the mouth). Before performing feature extraction, an edge image is generated by a morphological technique using two operations, a dilation and an erosion. Eq. (1) shows the morphological edge detection:

Edge = Dilation(I) - Erosion(I)    (1)

Figure 3. Segmentation result for eyebrows, eyes and mouth.

The result of this operation on an image is shown in Fig. 4.

Figure 4. Morphological edge detection.

As mentioned above, 8 feature points are assumed for each eye. To extract the points of an eye, we choose four points on the upper side and four points on the lower side of the eye rectangle in the edge image. The intensity of each point is initialized to zero, i.e., a black pixel. Each of the four landmark points on the upper boundary then moves downward gradually to the position that gives the maximum intensity difference; the lower four landmark points are relocated in a similar manner. The threshold value is chosen as 75. We choose eight points for the mouth and three points for each eyebrow and proceed in a similar manner. The locations of the points are determined by the facial characteristic points shown in Fig. 5; these points are chosen to capture the movement of an expressive face and to achieve the best curve fit. a_i is a vector defining the coordinates of the i-th FCP (facial characteristic point), i.e.,

a_i = (a_xi, a_yi),  i = 1, 2, ..., 30    (2)
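The projection-based localization described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation (which was done in MATLAB and is not available); the helper names and the `min_drop` robustness threshold are assumptions.

```python
import numpy as np

def y_projection(gray):
    """Average pixel value of each row (profile along the vertical direction)."""
    return gray.mean(axis=1)

def x_projection(gray):
    """Average pixel value of each column (profile along the horizontal direction)."""
    return gray.mean(axis=0)

def local_minima(profile, min_drop=5.0):
    """Indices of local minima lying at least `min_drop` below the preceding
    local maximum, mirroring the minimum-distance robustness rule in the text."""
    minima = []
    prev_max = profile[0]
    for i in range(1, len(profile) - 1):
        if profile[i] > profile[i - 1]:
            prev_max = max(prev_max, profile[i])
        if (profile[i] < profile[i - 1] and profile[i] <= profile[i + 1]
                and prev_max - profile[i] >= min_drop):
            minima.append(i)
    return minima
```

On a frontal face image, the first minima of `local_minima(y_projection(gray))` would then be taken as the candidate rows of the eyebrows, eyes, nostrils, mouth and chin.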

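The morphological edge detector of Eq. (1) can likewise be sketched with NumPy alone; a 3x3 structuring element is assumed here, since the paper does not state which one it used.

```python
import numpy as np

def _shifted_stack(img):
    # All nine 3x3-neighborhood shifts of the image, edge-padded.
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    return np.stack([p[i:i + h, j:j + w] for i in range(3) for j in range(3)])

def dilate(img):
    """Grayscale dilation with a 3x3 structuring element (local maximum)."""
    return _shifted_stack(img).max(axis=0)

def erode(img):
    """Grayscale erosion with a 3x3 structuring element (local minimum)."""
    return _shifted_stack(img).min(axis=0)

def morph_edge(img):
    """Eq. (1): Edge = Dilation(I) - Erosion(I)."""
    return dilate(img) - erode(img)
```

The difference is large only where the local maximum and minimum disagree, i.e., along intensity transitions, which is why the result in Fig. 4 resembles an edge map.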
As shown in Fig. 5, the x-y coordinate system is the absolute coordinate system, with its origin chosen as the point lying a distance psi/2 below the midpoint between the left and right eyes, which is almost the same center used for image alignment in [6]. The parameter psi describes the distance between the two eyes and is given by:

psi = sqrt((a_x2 - a_x1)^2 + (a_y2 - a_y1)^2)    (3)

Figure 5. Location of feature points.

psi is a length normalization factor: each image is divided by psi so that every face instance has unit length between the two eyes. This normalization factor is also employed in [7][8]. Furthermore, to offset the 3-D pose of the head in a picture, we introduce a coordinate rotation angle theta, the inclination of the face, specified by the line through the two eyes with respect to the horizontal and defined by:

theta = tan^-1((a_y2 - a_y1) / (a_x2 - a_x1))    (4)

Let (x_0, y_0) be the coordinates of the midpoint between the left and right eyes, namely:

x_0 = (a_x1 + a_x2) / 2,  y_0 = (a_y1 + a_y2) / 2    (5)

From Fig. 6, the origin of the new coordinate system is then calculated as:

O_x = x_0 + (psi/2) sin(theta),  O_y = y_0 + (psi/2) cos(theta)    (6)

Figure 6. New coordinate system.

The coordinates (x, y) of an FCP are transformed into the (x'', y'') coordinate system, first by translating the origin to the coordinates of Eq. (6) and then by rotating through the angle of orientation. This affine transform is implemented by successively executing the following two sets of equations:

x' = x - O_x,  y' = y - O_y    (7)

x'' = x' cos(theta) + y' sin(theta),  y'' = -x' sin(theta) + y' cos(theta)    (8)

Finally, to normalize the input face image, dividing (x''_i, y''_i) by psi gives the normalized coordinates for i = 1, 2, ..., 30.

3 Classification of facial expressions by neural network

In this paper, a feed-forward back-propagation network is used to classify four expressions (angry, surprised, happy and neutral). The network consists of two layers: the hidden layer has 10 neurons and the output layer has 4 neurons. The sigmoid function of Eq. (9) is used for the hidden layer; Fig. 7 shows this function [9][10]:

out = 1 / (1 + e^(-NET))    (9)

Finally, the learning algorithm is Levenberg-Marquardt and the number of epochs is set to 1500.
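Eqs. (3)-(8) together define one normalization routine; a compact NumPy sketch follows. The function name and argument layout are assumptions, and `arctan2` is used in place of tan^-1 for numerical robustness; the sign convention of Eq. (6) is taken as reconstructed above.

```python
import numpy as np

def normalize_fcps(points, left_eye, right_eye):
    """Normalize FCPs: translate to the origin of Eq. (6), rotate by the
    eye-line inclination of Eq. (4), and scale by the inter-eye distance
    of Eq. (3) so the two eyes end up one unit apart."""
    a1 = np.asarray(left_eye, dtype=float)
    a2 = np.asarray(right_eye, dtype=float)
    d = a2 - a1
    psi = np.hypot(d[0], d[1])           # Eq. (3): inter-eye distance
    theta = np.arctan2(d[1], d[0])       # Eq. (4): inclination of the eye line
    mid = (a1 + a2) / 2.0                # Eq. (5): midpoint between the eyes
    # Eq. (6): origin at psi/2 below the midpoint, along the tilted face axis
    origin = mid + (psi / 2.0) * np.array([np.sin(theta), np.cos(theta)])
    t = np.asarray(points, dtype=float) - origin   # Eq. (7): translation
    rot = np.array([[np.cos(theta), np.sin(theta)],     # Eq. (8): rotation
                    [-np.sin(theta), np.cos(theta)]])
    return (t @ rot.T) / psi             # divide by psi: unit inter-eye length
```

After this transform, the same expression produces nearly the same 30 normalized coordinates regardless of the face's position, in-plane tilt, or scale in the input image.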

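The classifier stage can be mirrored as a plain NumPy forward pass: 60 inputs (the x and y coordinates of the 30 normalized FCPs), 10 sigmoid hidden units, and 4 output units, one per expression of Table 1. The weights here are placeholders to be learned by training (the paper trains with Levenberg-Marquardt in MATLAB), so this sketch only shows the network's shape, not its trained behavior.

```python
import numpy as np

EXPRESSIONS = ["Angry", "Happy", "Surprised", "Neutral"]  # Table 1 order

def sigmoid(net):
    """Eq. (9): out = 1 / (1 + e^(-NET))."""
    return 1.0 / (1.0 + np.exp(-net))

def classify(fcps, w_hidden, b_hidden, w_out, b_out):
    """Forward pass: 60-dim input -> 10 sigmoid units -> 4 output units.
    The output neuron with the largest activation names the expression."""
    x = np.asarray(fcps, dtype=float).reshape(-1)   # flatten 30 (x, y) pairs
    h = sigmoid(w_hidden @ x + b_hidden)            # hidden layer, 10 neurons
    out = sigmoid(w_out @ h + b_out)                # output layer, 4 neurons
    return EXPRESSIONS[int(np.argmax(out))]
```

With trained weights, each of the four output neurons would peak for its assigned expression, matching the bar charts of Fig. 9.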
Fig. 8 shows the network structure in MATLAB. The input vector has size 60, since there are 30 normalized feature points on the face, each with x and y coordinates. The output layer has four neurons, and each neuron indicates one expression; Table 1 shows the assignment.

Table 1. Expression assignment
Neuron  Expression
1       Angry
2       Happy
3       Surprised
4       Neutral

Figure 7. Sigmoid function.

Figure 8. Neural network.

The simulation result using MATLAB is shown in Fig. 9. The charts in Fig. 9 show the classifier output; the number of each bar corresponds to an expression according to Table 1.

Figure 9. Simulation using MATLAB.

4 Data

In this paper, the Japanese Female Facial Expression (JAFFE) database is used for expression recognition. The resolution of the images is 256 x 256 pixels. Our face image data contains 115 face images of ten different female subjects, each posing several neutral, anger, surprise and happiness expressions. Samples of the data are shown in Fig. 10.

Figure 10. Data sample.

5 Conclusion

First, we used vertical integral projection with a hierarchical algorithm to localize the vertical regions of the eyebrows, eyes and mouth; then, using horizontal projection and edge detection, we found the horizontal areas of the features. After that, with the aid of morphological edge detection, error minimization and interpolation, 30 feature points were extracted. Next, the feature points were rotated and normalized in the

new coordinate system. Finally, the four expressions were classified using the neural classifier. The accuracy of the system is shown in Table 2. In comparison, our system is faster than some systems that use Gabor wavelets as features, because we use principal points rather than feeding a large number of inputs to the network, and we avoid PCA.

Table 2. Accuracy rate
Expression  Accuracy rate
Angry       94.4%
Happy       94.4%
Surprised   86.6%
Neutral     94.4%

6 References

[1] B. Fasel and J. Luettin, "Automatic Facial Expression Analysis: A Survey", Pattern Recognition, vol. 36, no. 2, 2003, pp. 259-275.

[2] P. Ekman and W.V. Friesen, "The Facial Action Coding System", San Francisco: Consulting Psychologists Press, 1978.

[3] K. Sobottka and I. Pitas, "Extraction of facial regions and features using color and shape information", in: 13th International Conference on Pattern Recognition, Vienna, Austria, August 1996, pp. 421-425.

[4] S.V. Cerez, "Facial Feature Detection using a Geometric Face Model", CMSC 190 Special Problem, Institute of Computer Science, University of the Philippines Los Banos, 2007.

[5] R.C. Gonzalez and R.E. Woods, "Digital Image Processing", Addison-Wesley, Reading, MA, 1992.

[6] H. Kobayashi and F. Hara, "Recognition of Six Basic Facial Expressions and Their Strength by Neural Network", IEEE International Workshop on Robot and Human Communication, New York, NY, pp. 381-386, 1992.

[7] C.L. Huang and C.W. Chen, "Human Facial Feature Extraction for Face Interpretation and Recognition", Pattern Recognition, vol. 25, no. 12, pp. 1435-1444, 1992.

[8] Z. Zhang, "Feature-Based Facial Expression Recognition: Sensitivity Analysis and Experiments with a Multilayer Perceptron", International Journal of Pattern Recognition and Artificial Intelligence, vol. 13, no. 6, pp. 893-911, 1999.

[9] H. Kobayashi and F. Hara, "Analysis of the neural network recognition characteristics of six basic facial expressions", 3rd IEEE International Workshop on Robot and Human Communication, New York, NY, 1994.

[10] A.K. Jain, K.M. Mohiuddin and J. Mao, "Artificial Neural Networks: A Tutorial", IEEE Computer, vol. 29, no. 3, pp. 31-44, 1996.