FACIAL EXPRESSION DETECTION AND RECOGNITION SYSTEM


W.K. Teo 1, Liyanage C. De Silva 2 and Prahlad Vadakkepat 1

ABSTRACT

In this paper, the integration of face feature detection and extraction with facial expression recognition is discussed. We propose an algorithm that uses multi-stage integral projection to extract facial features. We also propose a statistical approach to processing the optical flow data to obtain an overall motion value for each feature region of the face; this approach eliminates the need to identify feature boundaries accurately. Optical flow computation is used to identify the direction and amount of the motion in image sequences caused by human facial expressions. The optical flow results are processed using Kalman filtering, and the filtered results are fed to a neural network to realize a mapping into the facial expression space. This technique is applied to a set of training and testing face images. Preliminary experiments indicate an accuracy of 60%-80% on the Kalman-filtered data when recognizing four types of expressions: anger, sadness, happiness and surprise. In an attempt to further improve the recognition results, we propose a technique that processes the optical flow results using a statistical approach instead of Kalman filtering. Preliminary experiments with this proposed approach produced accuracies of 70%-100% on the original optical flow results, better than the Kalman filter technique.

INTRODUCTION

Detection of face features such as the eyes and mouth has been a major issue in facial image processing, and is required in areas such as emotion recognition [1] and face identification [2]. Face feature detection determines the positions of facial features in images, which are later used as input for other functions such as face and emotion recognition.
Facial expression plays an important role in smooth communication among individuals. The extraction and recognition of facial expressions has been the topic of much research aimed at enabling smooth interaction between computers and their users. In this way, computers in the future will be able to offer advice in response to the mood of their users.

1 Department of Electrical and Computer Engineering, National University of Singapore, 4 Engineering Drive 3, Singapore
2 Institute of Information Sciences and Technology, Massey University, Palmerston North, Pvt. Bag, New Zealand

Computer-based recognition of facial expressions has a long history, and various methods have been proposed. These methods can be classified into two broad categories: probabilistic approaches and feature-based approaches. The feature-based methods use the Facial Action Coding System (FACS) designed by Ekman and Friesen [3]. In FACS, the motions of the face are divided into 44 action units (AUs), whose combinations can describe any facial expression; more than 7,000 combinations of AUs have been observed [4]. However, FACS itself is purely descriptive: it uses no emotional or other inferential labels, and it only provides the background needed to describe facial expressions. Probabilistic methods do not give preference to facial features such as the eyes and mouth. Instead, the feature vector can be a distribution of image intensities, and these vectors differ from emotion to emotion. The vectors are calculated per emotion, and classification algorithms such as hidden Markov models (HMMs), neural networks (NNs) or a hybrid of the two [5] are applied. In this work, we propose a method that combines feature detection, feature extraction and facial expression recognition into one integrated system, so that the recognition results are not influenced by subjective factors and the bounds of the feature areas remain invariant throughout the whole sequence. The proposed facial expression recognition method uses integral projection, statistical computation, a neural network and Kalman filtering. Face feature detection uses multi-stage integral projection. Optical flow computation [6] is applied to the detected features, namely the eyebrows and the lips, to extract their movement. One advantage of optical flow is that it can be extracted easily even at low-contrast edges. The extracted feature vectors are also preprocessed using Kalman filtering [7], [8] to smooth the data.
The original as well as the Kalman-filtered feature vectors are then fed separately into the neural network to realize a mapping into the expression space. The face image database created by Carnegie Mellon University (CMU) is used in this analysis. In this paper, firstly, we propose a robust face feature detection method that we call multi-stage integral projection. Existing approaches include those using color space [9] and deformable template matching [10], but their performance is affected by external conditions such as illumination and skin color. Our proposed method uses edges extracted from the facial image, which are treated as general features during the detection process. Multi-stage integral projection is then used to obtain the feature positions; after each stage of integral projection, we get closer to the position of each feature. Our detection algorithm uses a few parameters that we designed using a facial image database containing faces of 30 different people. The algorithm is robust and works well for people of different skin color. We make two assumptions: 1) the subject is in front of a background that contains only a few elements; 2) the subject wears neither glasses nor a hat. Secondly, we use a statistical approach and a Kalman filter approach to process the values obtained from the optical flow computation on the detected features. Kalman filtering is applied to the optical flow values of the image sequences in an attempt to improve the overall recognition rate of the system. The Kalman filter provides an efficient computational

(recursive) solution of the least-squares method. The filter is powerful in several respects: it supports estimation of past, present, and even future states, and it can do so even when the precise nature of the modeled system is unknown. With a statistical approach, there is no need to track the feature movement in subsequent image frames, as the range with the highest probability indicates the average movement of the feature. Figure 1 gives a complete overview of the facial feature detection, extraction and emotion recognition proposed in this paper. In Section 2, a feature detection method called multi-stage integral projection, which detects the positions of the features, is explained. In Section 3, facial feature extraction is discussed; the averaged output is further processed using a Kalman filter, and we also use a statistical approach to process the optical flow values in an attempt to improve the results. In Section 4, the structure of the neural network is discussed. In Section 5, the experimental results after feeding the feature vectors into the neural network are discussed. Finally, conclusions are presented.

Figure 1: Block diagram of the proposed facial expression recognition system (training path: image sequences for training -> feature detection -> feature extraction -> feature vectors -> Kalman filter/statistical approach -> neural network training; recognition path: image sequences to be recognized -> feature detection -> feature extraction -> feature vectors -> Kalman filter/statistical approach -> mapping to expression space -> recognition results)

FACIAL FEATURE DETECTION

In this section, edge detection is discussed first. Next, head location and size estimation, and feature location using integral projection, are discussed.

Edge detection

The image of a face formed on the retina does not include fine edge details when the person is far away; instead, we see a blurred replica of the face.
We can get the same effect when an image is blurred using a Gaussian filter with a large σ and window size. In the first step, the image is blurred using a Gaussian operator h:

g(x, y) = ∫_{-w}^{w} ∫_{-w}^{w} h(α, β) f(x - α, y - β) dα dβ   (1)

where

h(x, y) = exp(-(x² + y²) / (2σ²)),

w = window size of 7 x 7, and σ = 7 is the standard deviation of the Gaussian filter. To calculate the edges of the image, we convolve the blurred image with the vertical and horizontal Sobel edge operators:

g_x(x, y) = -g(x-1, y+1) + g(x+1, y+1) - 2g(x-1, y) + 2g(x+1, y) - g(x-1, y-1) + g(x+1, y-1)   (2)

g_y(x, y) = g(x-1, y+1) - g(x-1, y-1) + 2g(x, y+1) - 2g(x, y-1) + g(x+1, y+1) - g(x+1, y-1)   (3)

The edge image G(x, y) is given by

G(x, y) = 1 if t > T, 0 otherwise   (4)

where t = g_x(x, y)² + g_y(x, y)² and T is the root mean value of {g_x(x, y)² + g_y(x, y)²} over the total image area. This edge detection method uses a standard algorithm, the Sobel operator. This step is a necessary pre-process before using integral projection to obtain the feature locations. One advantage of this step is that the threshold is based on the statistics of the image data instead of a manually chosen value, and it is tolerant to small amounts of noise.

Head size and location estimation, and feature location using multi-stage integral projection

In this section, estimation of head size and location, and feature location using the proposed multi-stage integral projection, are discussed. The integral projection technique was originally proposed by Kanade [11] and has been modified by the authors to suit the problems of face position and size estimation and feature location.
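As a concrete illustration, the blur-then-Sobel edge step of Eqs. (1)-(4) can be sketched as follows. This is a minimal NumPy sketch, not code from the paper: the function names are ours, the Gaussian kernel is normalized for convenience, and "root mean value" in Eq. (4) is interpreted as the square root of the mean of t, which is an assumption.

```python
import numpy as np

def gaussian_kernel(size=7, sigma=7.0):
    # h(x, y) = exp(-(x^2 + y^2) / (2 sigma^2)), normalized to sum to 1
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    h = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return h / h.sum()

def convolve2d(img, k):
    # naive 'same'-size convolution with zero padding (kernel flipped)
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i+kh, j:j+kw] * k[::-1, ::-1])
    return out

def edge_image(img):
    # Eq. (1): Gaussian blur with a 7x7 window, sigma = 7
    g = convolve2d(img.astype(float), gaussian_kernel())
    # Eqs. (2)-(3): Sobel gradients of the blurred image
    sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    gx = convolve2d(g, sobel_x)
    gy = convolve2d(g, sobel_x.T)
    # Eq. (4): threshold on t = gx^2 + gy^2 using the image's own statistics
    t = gx**2 + gy**2
    T = np.sqrt(np.mean(t))  # "root mean value" of t: our interpretation
    return (t > T).astype(np.uint8)
```

Because T is derived from the image itself, the same code works across lighting conditions without hand-tuning a threshold.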

In the 1st stage, we implement a method proposed by De Silva [12] that estimates the head size and location, with the parameters changed to fit the statistics of our image database. The head size and location are estimated to separate the head area from the background and the other body parts. The edge-detected image is subjected to vertical integral projection. The algorithm detects the eye position, eyepos, of the face, starting from the top of the head. Note that eyepos is not precise but only an estimate; it will be used in the next stage, discussed later. The object width is taken as the width of the head. By observing the faces of 30 different people, we took 2.0 times eyepos as the head height. Instead of an elliptical area, we use a rectangular area with breadth equal to the head width (HW) and length equal to the head length (HL). See Figure 2 for the results of the head area estimation.

Figure 2: Results of head area estimation (HW, HL and eyepos marked on the detected head area)

In the 2nd stage, the head area is subjected to horizontal projection. The horizontal integral projection of an edge image is given by

H(y) = Σ_{x=1}^{HW} G_head-area(x, y)   (5)

We find the exact position of the eye plane by finding the maximum projection in a pre-defined search area. The search area is limited using eyepos: the lower limit is 0.5 times eyepos and the upper limit is 1.2 times eyepos. Next, the eyebrow position is located as the next-highest horizontal projection peak before the eye plane. In addition, a threshold condition is built into the algorithm: a threshold value is used to determine whether the eyebrow slit detected earlier is the true eyebrow slit and not the forehead. By performing horizontal projection in the detected eyebrow slit and comparing the maximum value with a threshold of 15 pixels, the algorithm decides whether the detected slit is the actual eyebrow.
We also find the lip slit by finding the maximum projection in a pre-defined search area. This search area is also limited using eyepos: the lower limit is a fixed multiple of eyepos and the upper limit is HL. Both the search area and

the threshold value are obtained from statistical data over 30 different faces. See Figure 3 for the horizontal projection indicating the positions of the eyebrows, lips and eyes. In the 3rd stage, the exact locations of the eyebrows are obtained by vertical projection of their edge image slit. A hierarchical searching process is adopted to find the locations of the left and right eyebrows. Searching from the left to the right side of the head, we observe that the projection is divided into ranges; the two largest ranges are bounded by the left and right extremes of the eyebrows. However, if an eyebrow is covered by hair, this condition may be violated.

Figure 3: Horizontal projection of head area

Next, the exact location of the lips is found by vertical projection of their edge image slit. A hierarchical searching process is again adopted. Searching from the left to the right side of the head, the projection is divided into several ranges, and the broadest range is bounded by the left and right extremes of the lip position. A horizontal projection is then performed in the window bounded by these extremes; the maximum horizontal projection is the center between the upper and lower lips. See Figure 4 for the vertical projection of the eyebrow and lip slits.

Figure 4: Vertical projection of eyebrows and lips
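The projection operations used in these stages reduce to row and column sums of the binary edge image, followed by a peak search in a window derived from eyepos. A minimal sketch of Eq. (5) and the 2nd-stage eye-plane search (the helper names are ours, and only this one search step is shown):

```python
import numpy as np

def horizontal_projection(G):
    # Eq. (5): H(y) = sum over x of G(x, y) -- one value per image row
    return G.sum(axis=1)

def vertical_projection(G):
    # V(x) = sum over y of G(x, y) -- one value per image column
    return G.sum(axis=0)

def find_eyeplane(G, eyepos):
    # 2nd stage: the eye plane is the row with the maximum horizontal
    # projection between 0.5 * eyepos and 1.2 * eyepos
    H = horizontal_projection(G)
    lo = int(0.5 * eyepos)
    hi = min(int(1.2 * eyepos), len(H))
    return lo + int(np.argmax(H[lo:hi]))
```

The eyebrow and lip searches follow the same pattern with different windows, using vertical_projection to bound the features horizontally.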

In Figure 5, the results of the multi-stage projection are shown. The main advantage is that this is a simple algorithm that is very fast. The detected feature locations are used for the facial feature extraction discussed next.

Figure 5: Final result of projection

FACE FEATURE EXTRACTION

In this section, facial feature extraction is discussed. We explain the proposed statistical approach to processing the optical flow values in an attempt to improve the results, and then discuss the use of the Kalman filter.

Optical flow computation

The optical flow algorithm proposed by P. Anandan [6] is implemented here. The optical flow computation is applied to the detected eyebrows. The lips are divided into 4 areas, as shown in Figure 6, and the optical flow computation is applied to each region. A statistical approach is used to compute the average optical flow of each facial feature: a statistic is created from the optical flow field of each window, and the most common motion is selected as the overall movement of the feature. The motion components x and y are each divided into a few possible ranges; the range containing the highest percentage of flow vectors is selected, and the average motion inside this range is taken as the average flow. This is illustrated in Figure 7.

Figure 6: Lips partition
Figure 7: Illustration of motion statistics (percentage of flow vectors per motion range between the minimum and maximum pixel motion)
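The range-selection step described above can be sketched with a simple histogram. The number of ranges and the bin-boundary handling below are our choices; the paper does not specify them.

```python
import numpy as np

def dominant_flow(component, n_bins=8):
    """Pick the motion range with the highest percentage of flow
    vectors and return the average motion inside that range.
    `component` is the x (or y) component of the flow field for
    one feature window, flattened to a 1-D array."""
    counts, edges = np.histogram(component, bins=n_bins)
    b = int(np.argmax(counts))                  # most populated range
    in_range = component[(component >= edges[b]) & (component <= edges[b + 1])]
    return float(in_range.mean())               # average flow in that range
```

Applied once to the x components and once to the y components of a window, this yields the window's overall motion without tracking individual pixels, which is the point of the statistical approach: outlier flow vectors at the feature boundary simply fall into sparsely populated ranges and are ignored.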

Kalman filter

The Kalman filter is a set of mathematical equations that provides an efficient computational (recursive) solution of the least-squares method. The filter is powerful in several respects: it supports estimation of past, present, and even future states, and it can do so even when the precise nature of the modeled system is unknown. It has been the subject of extensive research and application, particularly in the area of autonomous or assisted navigation. Here it is applied to smooth the optical flow values; we apply the Kalman filter-smoother to emotion recognition in an attempt to improve the recognition rate. A linear dynamical system is a partially observed stochastic process with linear dynamics and linear observations, both subject to Gaussian noise, where X(t) is the hidden state at time t and Y(t) is the observation. The conditional independence assumptions being made can be concisely modeled using a Bayesian network, as shown in Figure 8 (circles denote random variables with linear-Gaussian distributions; clear means hidden, shaded means observed). The Kalman filter is an algorithm for performing filtering on this model, i.e., computing P(X(t) | Y(1), ..., Y(t)).

Figure 8: Bayesian network

We filtered the optical flow velocity values (pixels/frame) for a sequence of images leading to the surprise emotion; Figure 9 shows the smoothed values together with the observed values obtained by statistical computation. The starting and ending points of the filtered estimate are now above the X2 = 0 line, whereas the observed values are below it. We know this is better, as the upper lip should move up rather than down in the surprise expression. However, the filter has also smoothed out the sudden changes in facial feature movement that form an important part of facial expression.
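A one-dimensional Kalman filter over a scalar flow sequence can be sketched as follows. A random-walk state model is assumed (x_t = x_{t-1} + w), and the process variance q and measurement variance r are illustrative values of ours, not parameters from the paper.

```python
import numpy as np

def kalman_smooth_1d(z, q=1e-3, r=0.5):
    """Recursive 1-D Kalman filter over a sequence of optical flow
    values z, under a random-walk state model with process variance
    q and measurement variance r."""
    xhat = np.zeros(len(z))   # a posteriori state estimates
    p = 1.0                   # a posteriori error variance
    xhat[0] = z[0]
    for t in range(1, len(z)):
        # time update (predict): state carries over, uncertainty grows by q
        x_prior, p_prior = xhat[t - 1], p + q
        # measurement update (correct): blend prediction and observation
        k = p_prior / (p_prior + r)          # Kalman gain
        xhat[t] = x_prior + k * (z[t] - x_prior)
        p = (1 - k) * p_prior
    return xhat
```

With q small relative to r, the gain k stays low and the filter smooths heavily, which illustrates the trade-off noted above: measurement noise is suppressed, but so are the sudden feature movements that carry expression information.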
The summation of the observed values and of the smoothed estimates for the lips and eyebrows is fed into the neural network, which is discussed in the next section.

Figure 9: Plots of observed and filtered values for (a) the left eyebrow, (b) the right eyebrow, (c) the left section, (d) the right section, (e) the upper section and (f) the lower section of the lip

STRUCTURE OF THE NEURAL NETWORK AND ITS TRAINING ALGORITHM

Neural network learning methods provide a robust approach to approximating real-valued, discrete-valued, and vector-valued target functions. For certain types of problems, such as learning to interpret complex real-world sensor data, artificial neural networks are among the most effective learning methods currently known. The backpropagation algorithm, which has proven successful in many practical problems, is implemented here. In this paper, a one-hidden-layer neural network is used. There are 6 input units: the left and right eyebrows (2 units), and the left, right, upper and lower sections of the lip (4 units). The output layer has four units corresponding to the four facial expression categories: anger, happiness, sadness and surprise. See Figure 10 for an illustration of the structure.

Figure 10: Neural network structure (input, hidden and output layers)
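The mapping from the six flow inputs to the four expression categories can be sketched as a single forward pass. The hidden-layer size of 8 and the weight shapes are our assumptions; the tan-sigmoid hidden units, linear outputs and winner-takes-all rule follow the paper's description, and the weights are assumed to have been trained already.

```python
import numpy as np

def predict_expression(flows, W1, b1, W2, b2):
    """Map a 6-element feature vector (left/right eyebrow plus the
    four lip sections) to one of the four expression labels."""
    labels = ["anger", "happiness", "sadness", "surprise"]
    h = np.tanh(W1 @ flows + b1)        # tan-sigmoid hidden units
    out = W2 @ h + b2                   # linear output units
    return labels[int(np.argmax(out))]  # winner takes all
```

Usage: with trained weights W1 (hidden x 6), b1, W2 (4 x hidden), b2, calling predict_expression on a preprocessed flow vector returns the category whose output unit fires most strongly.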

The Levenberg-Marquardt algorithm is used to update the network weights and biases. One iteration of this algorithm can be written as

x_{k+1} = x_k - [JᵀJ + µI]⁻¹ Jᵀ e   (7)

where x_k is the vector of current weights and biases, J is the Jacobian matrix containing the first derivatives of the network error with respect to the weights and biases, and e is the vector of network errors. This algorithm appears to be the fastest method for training moderate-sized feed-forward neural networks. The tan-sigmoid transfer function is used for the activation of the hidden units and the linear transfer function for the activation of the output units. The network identifies the facial expression by finding the output unit with the largest value among all output units and taking the order of that unit as the facial expression category (winner takes all).

EXPERIMENTS AND RESULTS

The experiments are done under the following conditions:

1. Only the frontal view of the facial image sequences is analyzed throughout the whole image sequence.
2. The head motion between two consecutive frames is considered small.
3. The subjects do not have facial hair and are not wearing glasses.

Our image database has 122 image sequences taken from 30 subjects: 30 of anger, 32 of happiness, 30 of sadness and 30 of surprise. We use 20 of each emotion to train the neural network. Our experiments include two parts: (i) using the Kalman-filtered resultant optical flow as the neural network input; (ii) using the statistically computed optical flow values as the neural network input. The recognition results using the Kalman filter and the statistically computed optical flow are shown in Tables 1 and 2 respectively. From the results, we can see that using the original resultant optical flow yields higher accuracy than using the Kalman filter output. With the original resultant optical flow, a recognition accuracy as high as 100% is achieved for the emotion of surprise.
Table 1: Recognition results using the Kalman filter

             Anger   Happiness   Sadness   Surprise
Accuracy      70%       75%        60%       70%

Table 2: Recognition results using statistically computed optical flow

             Anger   Happiness   Sadness   Surprise
Accuracy      70%       75%        70%       100%

CONCLUSION

In this paper we have proposed combining feature detection, feature extraction and recognition of facial expression into one system. We proposed a new method of feature detection, and a statistical approach was introduced in an attempt to improve the recognition rate. The proposed feature detection using multi-stage integral projection is simple, robust and efficient. Using integral projection, we were able to locate the eyebrows and lips. Then, using a statistical approach on the optical flow field, we found the overall movement of the features within the previously detected windows without needing to pinpoint the exact location of each feature. The main advantage of this approach is that it does not require any initial manual settings, such as the location of the head; the initial settings are predetermined using normalized coefficients obtained from a facial image database. Second, Kalman filtering was applied to the resultant optical flow values to calculate the recognition rate. From the recognition rates we can see that feeding the Kalman-filtered values directly to the neural network leads to a recognition rate of 70%, whereas applying the proposed statistical approach to the optical flow results and feeding them to the neural network improves this to 80%. This is because facial movement due to expressions consists of sudden changes, and it is difficult to model a facial expression system: in Kalman filtering the system model has to be defined beforehand, and in the case of expressions the amount of movement needs to be known. However, the amount of facial feature movement varies from person to person; for some people, for example, happiness involves only a slight movement of the lips, while for others the movement can be very large.
Therefore, the use of Kalman filtering leads to poor performance because of the difficulty of designing a suitable system model.

FUTURE DIRECTIONS

Future development could include using other types of Kalman filters, such as the extended Kalman filter and the unscented Kalman filter, which are designed for non-linear systems with Gaussian noise. Global motion of the head also affects the results obtained from the optical flow values; by taking global motion into account, the average optical flow movement of the facial features can be extracted more accurately. We could also extend the proposed method to a larger database consisting of people of different races and ages, and further analyze the selection of the network structure and modify the network training algorithm for improved recognition rates.

REFERENCES

[1] Y. Tian, T. Kanade and J. F. Cohn, "Recognizing action units for facial expression analysis," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 2, Feb.
[2] R. Brunelli and T. Poggio, "Face recognition: features versus templates," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, no. 10, Oct.
[3] P. Ekman and W. V. Friesen, Facial Action Coding System, Consulting Psychologists Press, 1977.
[4] P. Ekman, "Methods for measuring facial actions," in K. R. Scherer and P. Ekman (Eds.), Handbook of Methods in Nonverbal Behavior Research, Cambridge: Cambridge University Press, 1982.
[5] T. Hu, L. C. De Silva and K. Sengupta, "A hybrid approach of NN and HMM for facial emotion classification," Pattern Recognition Letters, vol. 23, no. 11, Nov. 2002.
[6] P. Anandan, "A computational framework and an algorithm for the measurement of visual motion," International Journal of Computer Vision, vol. 2.
[7] R. E. Kalman, "A new approach to linear filtering and prediction problems," Transactions of the ASME, Journal of Basic Engineering, vol. 82, series D, pp. 35-45, 1960.
[8] G. Welch and G. Bishop, "An introduction to the Kalman filter," Technical Report, UNC-Chapel Hill, Feb.
[9] J. Yang, W. Lu and A. Waibel, "Skin-color modeling and adaptation," Technical Report, School of Computer Science, Carnegie Mellon University.
[10] A. L. Yuille, P. W. Hallinan and D. S. Cohen, "Feature extraction from faces using deformable templates," International Journal of Computer Vision, vol. 8, no. 2.
[11] T. Kanade, "Picture processing system by computer complex and recognition of human faces," Ph.D. thesis, Kyoto University, Japan, Nov.
[12] L. C. De Silva, K. Aizawa and M. Hatori, "Detection and tracking of facial features by using a facial feature model and deformable circular templates," IEICE (Institute of Electronics, Information and Communication Engineers, Japan) Transactions on Information and Systems, vol. E78-D, no. 9, Sep.
[13] P. J. Burt, "Fast filter transforms for image processing," Computer Graphics and Image Processing, vol. 16, pp. 20-51.
[14] R. Y. Wong and E. L. Hall, "Sequential hierarchical scene matching," IEEE Transactions on Computers, vol. 27, no. 4.
[15] A. Rosenfeld and A. C. Kak, Digital Picture Processing, Academic Press: New York.
[16] P. J. Burt, T. H. Hong and A. Rosenfeld, "Image segmentation and region property computation by cooperative hierarchical computation," IEEE Transactions on Systems, Man, and Cybernetics, vol. 11.


More information

Facial expression recognition is a key element in human communication.

Facial expression recognition is a key element in human communication. Facial Expression Recognition using Artificial Neural Network Rashi Goyal and Tanushri Mittal rashigoyal03@yahoo.in Abstract Facial expression recognition is a key element in human communication. In order

More information

COMBINING NEURAL NETWORKS FOR SKIN DETECTION

COMBINING NEURAL NETWORKS FOR SKIN DETECTION COMBINING NEURAL NETWORKS FOR SKIN DETECTION Chelsia Amy Doukim 1, Jamal Ahmad Dargham 1, Ali Chekima 1 and Sigeru Omatu 2 1 School of Engineering and Information Technology, Universiti Malaysia Sabah,

More information

Speech Driven Synthesis of Talking Head Sequences

Speech Driven Synthesis of Talking Head Sequences 3D Image Analysis and Synthesis, pp. 5-56, Erlangen, November 997. Speech Driven Synthesis of Talking Head Sequences Peter Eisert, Subhasis Chaudhuri,andBerndGirod Telecommunications Laboratory, University

More information

Chapter 3 Image Registration. Chapter 3 Image Registration

Chapter 3 Image Registration. Chapter 3 Image Registration Chapter 3 Image Registration Distributed Algorithms for Introduction (1) Definition: Image Registration Input: 2 images of the same scene but taken from different perspectives Goal: Identify transformation

More information

IBM Research Report. Automatic Neutral Face Detection Using Location and Shape Features

IBM Research Report. Automatic Neutral Face Detection Using Location and Shape Features RC 22259 (W0111-073) November 27, 2001 Computer Science IBM Research Report Automatic Neutral Face Detection Using Location and Shape Features Ying-Li Tian, Rudolf M. Bolle IBM Research Division Thomas

More information

Mood detection of psychological and mentally disturbed patients using Machine Learning techniques

Mood detection of psychological and mentally disturbed patients using Machine Learning techniques IJCSNS International Journal of Computer Science and Network Security, VOL.16 No.8, August 2016 63 Mood detection of psychological and mentally disturbed patients using Machine Learning techniques Muhammad

More information

Face Recognition based Only on Eyes Information and Local Binary Pattern

Face Recognition based Only on Eyes Information and Local Binary Pattern Face Recognition based Only on Eyes Information and Local Binary Pattern Francisco Rosario-Verde, Joel Perez-Siles, Luis Aviles-Brito, Jesus Olivares-Mercado, Karina Toscano-Medina, and Hector Perez-Meana

More information

Facial expression recognition based on two-step feature histogram optimization Ling Gana, Sisi Sib

Facial expression recognition based on two-step feature histogram optimization Ling Gana, Sisi Sib 3rd International Conference on Materials Engineering, Manufacturing Technology and Control (ICMEMTC 201) Facial expression recognition based on two-step feature histogram optimization Ling Gana, Sisi

More information

Facial Processing Projects at the Intelligent Systems Lab

Facial Processing Projects at the Intelligent Systems Lab Facial Processing Projects at the Intelligent Systems Lab Qiang Ji Intelligent Systems Laboratory (ISL) Department of Electrical, Computer, and System Eng. Rensselaer Polytechnic Institute jiq@rpi.edu

More information

Recognition of facial expressions in presence of partial occlusion

Recognition of facial expressions in presence of partial occlusion Recognition of facial expressions in presence of partial occlusion Ioan Buciu, 1 Irene Kotsia 1 and Ioannis Pitas 1 AIIA Laboratory Computer Vision and Image Processing Group Department of Informatics

More information

Principal Component Analysis and Neural Network Based Face Recognition

Principal Component Analysis and Neural Network Based Face Recognition Principal Component Analysis and Neural Network Based Face Recognition Qing Jiang Mailbox Abstract People in computer vision and pattern recognition have been working on automatic recognition of human

More information

Evaluation of Expression Recognition Techniques

Evaluation of Expression Recognition Techniques Evaluation of Expression Recognition Techniques Ira Cohen 1, Nicu Sebe 2,3, Yafei Sun 3, Michael S. Lew 3, Thomas S. Huang 1 1 Beckman Institute, University of Illinois at Urbana-Champaign, USA 2 Faculty

More information

Visual Tracking (1) Feature Point Tracking and Block Matching

Visual Tracking (1) Feature Point Tracking and Block Matching Intelligent Control Systems Visual Tracking (1) Feature Point Tracking and Block Matching Shingo Kagami Graduate School of Information Sciences, Tohoku University swk(at)ic.is.tohoku.ac.jp http://www.ic.is.tohoku.ac.jp/ja/swk/

More information

Non-rigid body Object Tracking using Fuzzy Neural System based on Multiple ROIs and Adaptive Motion Frame Method

Non-rigid body Object Tracking using Fuzzy Neural System based on Multiple ROIs and Adaptive Motion Frame Method Proceedings of the 2009 IEEE International Conference on Systems, Man, and Cybernetics San Antonio, TX, USA - October 2009 Non-rigid body Object Tracking using Fuzzy Neural System based on Multiple ROIs

More information

Data Mining Final Project Francisco R. Ortega Professor: Dr. Tao Li

Data Mining Final Project Francisco R. Ortega Professor: Dr. Tao Li Data Mining Final Project Francisco R. Ortega Professor: Dr. Tao Li FALL 2009 1.Introduction In the data mining class one of the aspects of interest were classifications. For the final project, the decision

More information

Fast Optical Flow Using Cross Correlation and Shortest-Path Techniques

Fast Optical Flow Using Cross Correlation and Shortest-Path Techniques Digital Image Computing: Techniques and Applications. Perth, Australia, December 7-8, 1999, pp.143-148. Fast Optical Flow Using Cross Correlation and Shortest-Path Techniques Changming Sun CSIRO Mathematical

More information

EMOTIONAL BASED FACIAL EXPRESSION RECOGNITION USING SUPPORT VECTOR MACHINE

EMOTIONAL BASED FACIAL EXPRESSION RECOGNITION USING SUPPORT VECTOR MACHINE EMOTIONAL BASED FACIAL EXPRESSION RECOGNITION USING SUPPORT VECTOR MACHINE V. Sathya 1 T.Chakravarthy 2 1 Research Scholar, A.V.V.M.Sri Pushpam College,Poondi,Tamilnadu,India. 2 Associate Professor, Dept.of

More information

Detection and Tracking of Faces in Real-Time Environments

Detection and Tracking of Faces in Real-Time Environments Detection and Tracking of Faces in Real-Time Environments R.C.K Hua, L.C. De Silva and P. Vadakkepat Department of Electrical and Computer Engineering National University of Singapore 4 Engineering Drive

More information

Image Enhancement Techniques for Fingerprint Identification

Image Enhancement Techniques for Fingerprint Identification March 2013 1 Image Enhancement Techniques for Fingerprint Identification Pankaj Deshmukh, Siraj Pathan, Riyaz Pathan Abstract The aim of this paper is to propose a new method in fingerprint enhancement

More information

Image-based Fraud Detection in Automatic Teller Machine

Image-based Fraud Detection in Automatic Teller Machine IJCSNS International Journal of Computer Science and Network Security, VOL.6 No.11, November 2006 13 Image-based Fraud Detection in Automatic Teller Machine WenTao Dong and YoungSung Soh Department of

More information

Image Segmentation Using Iterated Graph Cuts Based on Multi-scale Smoothing

Image Segmentation Using Iterated Graph Cuts Based on Multi-scale Smoothing Image Segmentation Using Iterated Graph Cuts Based on Multi-scale Smoothing Tomoyuki Nagahashi 1, Hironobu Fujiyoshi 1, and Takeo Kanade 2 1 Dept. of Computer Science, Chubu University. Matsumoto 1200,

More information

Robust Lip Tracking by Combining Shape, Color and Motion

Robust Lip Tracking by Combining Shape, Color and Motion Robust Lip Tracking by Combining Shape, Color and Motion Ying-li Tian Robotics Institute, Carnegie Mellon University, Pittsburgh, PA 15213 yltian@cs.cmu.edu National Laboratory of Pattern Recognition Chinese

More information

Hierarchical Recognition Scheme for Human Facial Expression Recognition Systems

Hierarchical Recognition Scheme for Human Facial Expression Recognition Systems Sensors 213, 13, 16682-16713; doi:1.339/s131216682 Article OPEN ACCESS sensors ISSN 1424-822 www.mdpi.com/journal/sensors Hierarchical Recognition Scheme for Human Facial Expression Recognition Systems

More information

Research on Emotion Recognition for Facial Expression Images Based on Hidden Markov Model

Research on Emotion Recognition for Facial Expression Images Based on Hidden Markov Model e-issn: 2349-9745 p-issn: 2393-8161 Scientific Journal Impact Factor (SJIF): 1.711 International Journal of Modern Trends in Engineering and Research www.ijmter.com Research on Emotion Recognition for

More information

C.R VIMALCHAND ABSTRACT

C.R VIMALCHAND ABSTRACT International Journal of Scientific & Engineering Research, Volume 5, Issue 3, March-2014 1173 ANALYSIS OF FACE RECOGNITION SYSTEM WITH FACIAL EXPRESSION USING CONVOLUTIONAL NEURAL NETWORK AND EXTRACTED

More information

Object Detection System

Object Detection System A Trainable View-Based Object Detection System Thesis Proposal Henry A. Rowley Thesis Committee: Takeo Kanade, Chair Shumeet Baluja Dean Pomerleau Manuela Veloso Tomaso Poggio, MIT Motivation Object detection

More information

A Facial Expression Classification using Histogram Based Method

A Facial Expression Classification using Histogram Based Method 2012 4th International Conference on Signal Processing Systems (ICSPS 2012) IPCSIT vol. 58 (2012) (2012) IACSIT Press, Singapore DOI: 10.7763/IPCSIT.2012.V58.1 A Facial Expression Classification using

More information

Face Detection and Recognition in an Image Sequence using Eigenedginess

Face Detection and Recognition in an Image Sequence using Eigenedginess Face Detection and Recognition in an Image Sequence using Eigenedginess B S Venkatesh, S Palanivel and B Yegnanarayana Department of Computer Science and Engineering. Indian Institute of Technology, Madras

More information

Particle Filtering. CS6240 Multimedia Analysis. Leow Wee Kheng. Department of Computer Science School of Computing National University of Singapore

Particle Filtering. CS6240 Multimedia Analysis. Leow Wee Kheng. Department of Computer Science School of Computing National University of Singapore Particle Filtering CS6240 Multimedia Analysis Leow Wee Kheng Department of Computer Science School of Computing National University of Singapore (CS6240) Particle Filtering 1 / 28 Introduction Introduction

More information

A Non-linear Supervised ANN Algorithm for Face. Recognition Model Using Delphi Languages

A Non-linear Supervised ANN Algorithm for Face. Recognition Model Using Delphi Languages Contemporary Engineering Sciences, Vol. 4, 2011, no. 4, 177 186 A Non-linear Supervised ANN Algorithm for Face Recognition Model Using Delphi Languages Mahmood K. Jasim 1 DMPS, College of Arts & Sciences,

More information

Facial Feature Extraction Based On FPD and GLCM Algorithms

Facial Feature Extraction Based On FPD and GLCM Algorithms Facial Feature Extraction Based On FPD and GLCM Algorithms Dr. S. Vijayarani 1, S. Priyatharsini 2 Assistant Professor, Department of Computer Science, School of Computer Science and Engineering, Bharathiar

More information

CS6220: DATA MINING TECHNIQUES

CS6220: DATA MINING TECHNIQUES CS6220: DATA MINING TECHNIQUES Image Data: Classification via Neural Networks Instructor: Yizhou Sun yzsun@ccs.neu.edu November 19, 2015 Methods to Learn Classification Clustering Frequent Pattern Mining

More information

DATA EMBEDDING IN TEXT FOR A COPIER SYSTEM

DATA EMBEDDING IN TEXT FOR A COPIER SYSTEM DATA EMBEDDING IN TEXT FOR A COPIER SYSTEM Anoop K. Bhattacharjya and Hakan Ancin Epson Palo Alto Laboratory 3145 Porter Drive, Suite 104 Palo Alto, CA 94304 e-mail: {anoop, ancin}@erd.epson.com Abstract

More information

The Template Update Problem

The Template Update Problem The Template Update Problem Iain Matthews, Takahiro Ishikawa, and Simon Baker The Robotics Institute Carnegie Mellon University Abstract Template tracking dates back to the 1981 Lucas-Kanade algorithm.

More information

Image Processing Pipeline for Facial Expression Recognition under Variable Lighting

Image Processing Pipeline for Facial Expression Recognition under Variable Lighting Image Processing Pipeline for Facial Expression Recognition under Variable Lighting Ralph Ma, Amr Mohamed ralphma@stanford.edu, amr1@stanford.edu Abstract Much research has been done in the field of automated

More information

Leow Wee Kheng CS4243 Computer Vision and Pattern Recognition. Motion Tracking. CS4243 Motion Tracking 1

Leow Wee Kheng CS4243 Computer Vision and Pattern Recognition. Motion Tracking. CS4243 Motion Tracking 1 Leow Wee Kheng CS4243 Computer Vision and Pattern Recognition Motion Tracking CS4243 Motion Tracking 1 Changes are everywhere! CS4243 Motion Tracking 2 Illumination change CS4243 Motion Tracking 3 Shape

More information

Face Quality Assessment System in Video Sequences

Face Quality Assessment System in Video Sequences Face Quality Assessment System in Video Sequences Kamal Nasrollahi, Thomas B. Moeslund Laboratory of Computer Vision and Media Technology, Aalborg University Niels Jernes Vej 14, 9220 Aalborg Øst, Denmark

More information

Real-time Driver Affect Analysis and Tele-viewing System i

Real-time Driver Affect Analysis and Tele-viewing System i Appeared in Intelligent Vehicles Symposium, Proceedings. IEEE, June 9-11, 2003, 372-377 Real-time Driver Affect Analysis and Tele-viewing System i Joel C. McCall, Satya P. Mallick, and Mohan M. Trivedi

More information

Emotion Detection System using Facial Action Coding System

Emotion Detection System using Facial Action Coding System International Journal of Engineering and Technical Research (IJETR) Emotion Detection System using Facial Action Coding System Vedant Chauhan, Yash Agrawal, Vinay Bhutada Abstract Behaviors, poses, actions,

More information

Fully Automatic Methodology for Human Action Recognition Incorporating Dynamic Information

Fully Automatic Methodology for Human Action Recognition Incorporating Dynamic Information Fully Automatic Methodology for Human Action Recognition Incorporating Dynamic Information Ana González, Marcos Ortega Hortas, and Manuel G. Penedo University of A Coruña, VARPA group, A Coruña 15071,

More information

Human Hand Gesture Recognition Using Motion Orientation Histogram for Interaction of Handicapped Persons with Computer

Human Hand Gesture Recognition Using Motion Orientation Histogram for Interaction of Handicapped Persons with Computer Human Hand Gesture Recognition Using Motion Orientation Histogram for Interaction of Handicapped Persons with Computer Maryam Vafadar and Alireza Behrad Faculty of Engineering, Shahed University Tehran,

More information

Visual Tracking (1) Tracking of Feature Points and Planar Rigid Objects

Visual Tracking (1) Tracking of Feature Points and Planar Rigid Objects Intelligent Control Systems Visual Tracking (1) Tracking of Feature Points and Planar Rigid Objects Shingo Kagami Graduate School of Information Sciences, Tohoku University swk(at)ic.is.tohoku.ac.jp http://www.ic.is.tohoku.ac.jp/ja/swk/

More information

CS231A Course Project Final Report Sign Language Recognition with Unsupervised Feature Learning

CS231A Course Project Final Report Sign Language Recognition with Unsupervised Feature Learning CS231A Course Project Final Report Sign Language Recognition with Unsupervised Feature Learning Justin Chen Stanford University justinkchen@stanford.edu Abstract This paper focuses on experimenting with

More information

Evaluation of Face Resolution for Expression Analysis

Evaluation of Face Resolution for Expression Analysis Evaluation of Face Resolution for Expression Analysis Ying-li Tian IBM T. J. Watson Research Center, PO Box 704, Yorktown Heights, NY 10598 Email: yltian@us.ibm.com Abstract Most automatic facial expression

More information

Face Tracking. Synonyms. Definition. Main Body Text. Amit K. Roy-Chowdhury and Yilei Xu. Facial Motion Estimation

Face Tracking. Synonyms. Definition. Main Body Text. Amit K. Roy-Chowdhury and Yilei Xu. Facial Motion Estimation Face Tracking Amit K. Roy-Chowdhury and Yilei Xu Department of Electrical Engineering, University of California, Riverside, CA 92521, USA {amitrc,yxu}@ee.ucr.edu Synonyms Facial Motion Estimation Definition

More information

Robustness of Selective Desensitization Perceptron Against Irrelevant and Partially Relevant Features in Pattern Classification

Robustness of Selective Desensitization Perceptron Against Irrelevant and Partially Relevant Features in Pattern Classification Robustness of Selective Desensitization Perceptron Against Irrelevant and Partially Relevant Features in Pattern Classification Tomohiro Tanno, Kazumasa Horie, Jun Izawa, and Masahiko Morita University

More information

Edge Enhancement and Fine Feature Restoration of Segmented Objects using Pyramid Based Adaptive Filtering

Edge Enhancement and Fine Feature Restoration of Segmented Objects using Pyramid Based Adaptive Filtering Edge Enhancement and Fine Feature Restoration of Segmented Objects using Pyramid Based Adaptive Filtering A. E. Grace and M. Spann School of Electronic and Electrical Engineering, The University of Birmingham,

More information

Categorization by Learning and Combining Object Parts

Categorization by Learning and Combining Object Parts Categorization by Learning and Combining Object Parts Bernd Heisele yz Thomas Serre y Massimiliano Pontil x Thomas Vetter Λ Tomaso Poggio y y Center for Biological and Computational Learning, M.I.T., Cambridge,

More information

Eigenfaces versus Eigeneyes: First Steps Toward Performance Assessment of Representations for Face Recognition

Eigenfaces versus Eigeneyes: First Steps Toward Performance Assessment of Representations for Face Recognition Lecture Notes in Artificial Intelligence, vol. 1793, pp. 197-206, April 2000, Springer-Verlag press http://www.springer.de/comp/lncs/index.html (MICAI-2000, Acapulco) Eigenfaces versus Eigeneyes: First

More information

Eye detection, face detection, face recognition, line edge map, primary line segment Hausdorff distance.

Eye detection, face detection, face recognition, line edge map, primary line segment Hausdorff distance. Eye Detection using Line Edge Map Template Mihir Jain, Suman K. Mitra, Naresh D. Jotwani Dhirubhai Institute of Information and Communication Technology, Near Indroda Circle Gandhinagar,India mihir_jain@daiict.ac.in,

More information

Pupil Localization Algorithm based on Hough Transform and Harris Corner Detection

Pupil Localization Algorithm based on Hough Transform and Harris Corner Detection Pupil Localization Algorithm based on Hough Transform and Harris Corner Detection 1 Chongqing University of Technology Electronic Information and Automation College Chongqing, 400054, China E-mail: zh_lian@cqut.edu.cn

More information

Region Segmentation for Facial Image Compression

Region Segmentation for Facial Image Compression Region Segmentation for Facial Image Compression Alexander Tropf and Douglas Chai Visual Information Processing Research Group School of Engineering and Mathematics, Edith Cowan University Perth, Australia

More information

Vision-based Frontal Vehicle Detection and Tracking

Vision-based Frontal Vehicle Detection and Tracking Vision-based Frontal and Tracking King Hann LIM, Kah Phooi SENG, Li-Minn ANG and Siew Wen CHIN School of Electrical and Electronic Engineering The University of Nottingham Malaysia campus, Jalan Broga,

More information

Auto-Digitizer for Fast Graph-to-Data Conversion

Auto-Digitizer for Fast Graph-to-Data Conversion Auto-Digitizer for Fast Graph-to-Data Conversion EE 368 Final Project Report, Winter 2018 Deepti Sanjay Mahajan dmahaj@stanford.edu Sarah Pao Radzihovsky sradzi13@stanford.edu Ching-Hua (Fiona) Wang chwang9@stanford.edu

More information

Applications Video Surveillance (On-line or off-line)

Applications Video Surveillance (On-line or off-line) Face Face Recognition: Dimensionality Reduction Biometrics CSE 190-a Lecture 12 CSE190a Fall 06 CSE190a Fall 06 Face Recognition Face is the most common biometric used by humans Applications range from

More information

Application of the Fourier-wavelet transform to moving images in an interview scene

Application of the Fourier-wavelet transform to moving images in an interview scene International Journal of Applied Electromagnetics and Mechanics 15 (2001/2002) 359 364 359 IOS Press Application of the Fourier-wavelet transform to moving images in an interview scene Chieko Kato a,,

More information

LOCAL-GLOBAL OPTICAL FLOW FOR IMAGE REGISTRATION

LOCAL-GLOBAL OPTICAL FLOW FOR IMAGE REGISTRATION LOCAL-GLOBAL OPTICAL FLOW FOR IMAGE REGISTRATION Ammar Zayouna Richard Comley Daming Shi Middlesex University School of Engineering and Information Sciences Middlesex University, London NW4 4BT, UK A.Zayouna@mdx.ac.uk

More information

Deep Tracking: Biologically Inspired Tracking with Deep Convolutional Networks

Deep Tracking: Biologically Inspired Tracking with Deep Convolutional Networks Deep Tracking: Biologically Inspired Tracking with Deep Convolutional Networks Si Chen The George Washington University sichen@gwmail.gwu.edu Meera Hahn Emory University mhahn7@emory.edu Mentor: Afshin

More information

3D Facial Action Units Recognition for Emotional Expression

3D Facial Action Units Recognition for Emotional Expression 3D Facial Action Units Recognition for Emotional Expression Norhaida Hussain 1, Hamimah Ujir, Irwandi Hipiny and Jacey-Lynn Minoi 1 Department of Information Technology and Communication, Politeknik Kuching,

More information

Motion Detection Algorithm

Motion Detection Algorithm Volume 1, No. 12, February 2013 ISSN 2278-1080 The International Journal of Computer Science & Applications (TIJCSA) RESEARCH PAPER Available Online at http://www.journalofcomputerscience.com/ Motion Detection

More information

A Real Time Facial Expression Classification System Using Local Binary Patterns

A Real Time Facial Expression Classification System Using Local Binary Patterns A Real Time Facial Expression Classification System Using Local Binary Patterns S L Happy, Anjith George, and Aurobinda Routray Department of Electrical Engineering, IIT Kharagpur, India Abstract Facial

More information

Predictive Interpolation for Registration

Predictive Interpolation for Registration Predictive Interpolation for Registration D.G. Bailey Institute of Information Sciences and Technology, Massey University, Private bag 11222, Palmerston North D.G.Bailey@massey.ac.nz Abstract Predictive

More information

Tracking facial features using low resolution and low fps cameras under variable light conditions

Tracking facial features using low resolution and low fps cameras under variable light conditions Tracking facial features using low resolution and low fps cameras under variable light conditions Peter Kubíni * Department of Computer Graphics Comenius University Bratislava / Slovakia Abstract We are

More information

Spatio-Temporal Stereo Disparity Integration

Spatio-Temporal Stereo Disparity Integration Spatio-Temporal Stereo Disparity Integration Sandino Morales and Reinhard Klette The.enpeda.. Project, The University of Auckland Tamaki Innovation Campus, Auckland, New Zealand pmor085@aucklanduni.ac.nz

More information

From Gaze to Focus of Attention

From Gaze to Focus of Attention From Gaze to Focus of Attention Rainer Stiefelhagen, Michael Finke, Jie Yang, Alex Waibel stiefel@ira.uka.de, finkem@cs.cmu.edu, yang+@cs.cmu.edu, ahw@cs.cmu.edu Interactive Systems Laboratories University

More information

International Journal of Advance Engineering and Research Development

International Journal of Advance Engineering and Research Development Scientific Journal of Impact Factor (SJIF): 4.72 International Journal of Advance Engineering and Research Development Volume 4, Issue 11, November -2017 e-issn (O): 2348-4470 p-issn (P): 2348-6406 Comparative

More information

A Robust Facial Feature Point Tracker using Graphical Models

A Robust Facial Feature Point Tracker using Graphical Models A Robust Facial Feature Point Tracker using Graphical Models Serhan Coşar, Müjdat Çetin, Aytül Erçil Sabancı University Faculty of Engineering and Natural Sciences Orhanlı- Tuzla, 34956 İstanbul, TURKEY

More information

A Visualization Tool to Improve the Performance of a Classifier Based on Hidden Markov Models

A Visualization Tool to Improve the Performance of a Classifier Based on Hidden Markov Models A Visualization Tool to Improve the Performance of a Classifier Based on Hidden Markov Models Gleidson Pegoretti da Silva, Masaki Nakagawa Department of Computer and Information Sciences Tokyo University

More information

Face Tracking : An implementation of the Kanade-Lucas-Tomasi Tracking algorithm

Face Tracking : An implementation of the Kanade-Lucas-Tomasi Tracking algorithm Face Tracking : An implementation of the Kanade-Lucas-Tomasi Tracking algorithm Dirk W. Wagener, Ben Herbst Department of Applied Mathematics, University of Stellenbosch, Private Bag X1, Matieland 762,

More information

A GENERIC FACE REPRESENTATION APPROACH FOR LOCAL APPEARANCE BASED FACE VERIFICATION

A GENERIC FACE REPRESENTATION APPROACH FOR LOCAL APPEARANCE BASED FACE VERIFICATION A GENERIC FACE REPRESENTATION APPROACH FOR LOCAL APPEARANCE BASED FACE VERIFICATION Hazim Kemal Ekenel, Rainer Stiefelhagen Interactive Systems Labs, Universität Karlsruhe (TH) 76131 Karlsruhe, Germany

More information

Convolutional Neural Networks for Facial Expression Recognition

Convolutional Neural Networks for Facial Expression Recognition Convolutional Neural Networks for Facial Expression Recognition Shima Alizadeh Stanford University shima86@stanford.edu Azar Fazel Stanford University azarf@stanford.edu Abstract In this project, we have

More information

A Modular Approach to Facial Expression Recognition

A Modular Approach to Facial Expression Recognition A Modular Approach to Facial Expression Recognition Michal Sindlar Cognitive Artificial Intelligence, Utrecht University, Heidelberglaan 6, 3584 CD, Utrecht Marco Wiering Intelligent Systems Group, Utrecht

More information

CS6670: Computer Vision

CS6670: Computer Vision CS6670: Computer Vision Noah Snavely Lecture 19: Optical flow http://en.wikipedia.org/wiki/barberpole_illusion Readings Szeliski, Chapter 8.4-8.5 Announcements Project 2b due Tuesday, Nov 2 Please sign

More information

LOCAL APPEARANCE BASED FACE RECOGNITION USING DISCRETE COSINE TRANSFORM

LOCAL APPEARANCE BASED FACE RECOGNITION USING DISCRETE COSINE TRANSFORM LOCAL APPEARANCE BASED FACE RECOGNITION USING DISCRETE COSINE TRANSFORM Hazim Kemal Ekenel, Rainer Stiefelhagen Interactive Systems Labs, University of Karlsruhe Am Fasanengarten 5, 76131, Karlsruhe, Germany

More information

Image Quality Assessment Techniques: An Overview

Image Quality Assessment Techniques: An Overview Image Quality Assessment Techniques: An Overview Shruti Sonawane A. M. Deshpande Department of E&TC Department of E&TC TSSM s BSCOER, Pune, TSSM s BSCOER, Pune, Pune University, Maharashtra, India Pune

More information

Image Segmentation Using Iterated Graph Cuts BasedonMulti-scaleSmoothing

Image Segmentation Using Iterated Graph Cuts BasedonMulti-scaleSmoothing Image Segmentation Using Iterated Graph Cuts BasedonMulti-scaleSmoothing Tomoyuki Nagahashi 1, Hironobu Fujiyoshi 1, and Takeo Kanade 2 1 Dept. of Computer Science, Chubu University. Matsumoto 1200, Kasugai,

More information