FACIAL EXPRESSION USING 3D ANIMATION TECHNIQUE


Vishal Bal, Assistant Professor, Pyramid College of Business & Technology, Phagwara, Punjab (India)

ABSTRACT

Traditionally, human facial expressions have been studied using either 2D still images or 2D video sequences. 2D-based analysis has difficulty handling large pose variations and subtle facial behavior. The analysis of 3D facial expressions facilitates the examination of the fine structural changes inherent in spontaneous expressions. The aim is to achieve a high rate of accuracy in identifying a wide range of facial expressions, with the ultimate goal of increasing the general understanding of facial behavior, and of the 3D structure of facial expressions, at a detailed level. Facial expression provides cues about emotion, regulates interpersonal behavior, and communicates psychopathology. Human-observer-based methods for measuring facial expression are labor intensive, qualitative, and difficult to standardize.

I INTRODUCTION

A facial expression results from one or more motions or positions of the muscles of the face. These movements convey the emotional state of the individual to observers. Facial expressions are a form of nonverbal communication. They are a primary means of conveying social information among humans, but they also occur in most other mammals and in some other animal species. Humans can adopt a facial expression as a voluntary action. However, because expressions are closely tied to emotion, they are more often involuntary. It can be nearly impossible to suppress expressions for certain emotions, even when it would be strongly desirable to do so; a person who is trying to avoid insulting an individual he or she finds highly unattractive might nevertheless show a brief expression of disgust before being able to reassume a neutral expression.
The close link between emotion and expression can also work in the other direction; it has been observed that voluntarily assuming an expression can actually cause the associated emotion. Some expressions can be accurately interpreted even between members of different species, anger and extreme contentment being the primary examples. Others, however, are difficult to interpret even in familiar individuals. For instance, disgust and fear can be tough to tell apart. Because faces have only a limited range of movement, expressions rely upon fairly minuscule differences in the proportion and relative position of facial features, and reading them requires considerable sensitivity to those differences. Some faces are often falsely read as expressing an emotion even when they are neutral, because their proportions naturally resemble those that another face would temporarily assume when emoting.

An expression implies a disclosure about the characteristics of a person, a message about something internal to the expresser. In the context of the face and nonverbal communication, an expression usually implies a change of a visual pattern over time; but just as a static painting can express a mood or capture a sentiment, so too the face can express relatively static characteristics. The concept of facial expression therefore includes:

1. a characteristic of a person that is represented, i.e., the signified;
2. a visual configuration that represents this characteristic, i.e., the signifier;
3. the physical basis of this appearance, or sign vehicle, e.g., the skin, muscle movements, fat, wrinkles, lines, blemishes, etc.; and
4. typically, some person or other perceiver that perceives and interprets the signs.

Facial Action Coding System

FACS is anatomically based and allows the reliable coding of any facial action in terms of the smallest visible unit of muscular activity. These smallest visible units are called Action Units (AUs), each referred to by a numerical code. With FACS, data collection is independent of data interpretation: there is no relation between the code and the meaning of the facial action. As a consequence, coding is independent of prior assumptions about prototypical emotion expressions.

II FACIAL EXPRESSIONS

Some examples of feelings that can be expressed are: anger, concentration, confusion, contempt, desire, disgust, excitement, fear, frustration, glare, happiness,

sadness, snarl, and surprise.

The study of human facial expressions has many aspects, from computer simulation and analysis to understanding their role in art, nonverbal communication, and the emotional process. Many questions about facial expressions remain unanswered, and some areas are largely unexplored. For a broad picture of the kinds of questions that have been asked, answers to some of these questions, and the scientific research about the face that needs to be done to answer them, see the online report Understanding the Face: Report to the National Science Foundation. Facial expressions, and the ability to understand them, are vital for successful interpersonal relations, so enhancing these abilities is often sought; see the guide How to Read Faces for tips on improving these skills.

III DEVELOP A SYSTEM FOR GENERATING FACIAL ANIMATION OF ANY GIVEN 2D OR 3D MODEL GIVEN NOVEL TEXT OR SPEECH INPUT

Step 1: Capture data. For realism, motion-capture points of a human speaking must be captured that encompass the entire phonetic alphabet as well as facial gestures and expressions (e.g., in 3ds Max).

Step 2: Choose feature points from the data. A subset of the 3ds Max mocap points must be selected for use as the feature points; only the motion of these feature points is needed for animating a model.

Step 3: Select the same feature points on the model. The user must tell the program which points on the model correspond to the selected 3ds Max mocap points.

Step 4: Compute feature-point regions and weights. The program searches outward from each feature point, computing a weight for each vertex in the surrounding neighborhood until the weight is close to 0.
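A minimal sketch of the Step 4 weight computation, assuming a simple linear distance falloff within a fixed radius (the falloff choice, the radius, and all function names here are illustrative assumptions, not the paper's specification):

```python
import numpy as np

def region_weights(vertices, feature_point, radius=1.0, eps=1e-3):
    """Weight each mesh vertex by its distance from one feature point.

    Weights fall off linearly from 1 at the feature point to 0 at
    `radius`; vertices whose weight drops below `eps` are treated as
    outside the feature point's region (weight 0).
    """
    dists = np.linalg.norm(vertices - feature_point, axis=1)
    weights = np.clip(1.0 - dists / radius, 0.0, 1.0)
    weights[weights < eps] = 0.0
    return weights

# Example: a tiny "mesh" of four vertices near a feature point at the origin.
verts = np.array([[0.0, 0.0, 0.0],
                  [0.5, 0.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [2.0, 0.0, 0.0]])
w = region_weights(verts, np.array([0.0, 0.0, 0.0]))
# → [1.0, 0.5, 0.0, 0.0]
```

In Step 5, each vertex would then be displaced by its weight times the displacement of the corresponding feature point.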

Step 5: Animate the model. Given the vector movement of the motion-capture points, each feature point is moved, and then its neighboring vertices are moved according to their weights.

IV TOOLS FOR STUDYING FACIAL EXPRESSION PRODUCED BY MUSCULAR ACTION

The Facial Action Coding System (FACS) is used to measure facial expressions by identifying the muscular activity underlying transient changes in facial appearance. Researchers use FACS in facial analysis to determine the elementary behaviors that pictures of facial expressions portray. The FACS Affect Interpretation Database (FACSAID) is a tool for understanding what the muscular actions that FACS measures mean in terms of psychological concepts; FACSAID interprets facial expressions in terms of meaningful scientific concepts.

3D DATABASE

Although 3D facial models have been extensively used for 3D face recognition and 3D face animation, the usefulness of such data for 3D facial expression recognition is unknown. To foster research in this field, we created a 3D facial expression database that includes 100 subjects with 2,500 facial expression models. The database presently contains 100 subjects (56% female, 44% male), ranging in age from 18 to 70 years, with a variety of ethnic/racial ancestries, including White, Black, East Asian, Middle Eastern, Indian, and Hispanic/Latino. Each subject performed seven expressions in front of the 3D face scanner. With the exception of the neutral expression, each of the six prototypic expressions (happiness, disgust, fear, anger, surprise, and sadness) includes four levels of intensity. Therefore, there are 25 instantaneous 3D expression models for each subject, resulting in a total of 2,500 3D facial expression models in the database. Associated with each expression shape model is a corresponding facial texture image captured at two views (about +45° and −45°). As a result, the

database consists of 2,500 two-view texture images and 2,500 geometric shape models.

V CONCLUSIONS

The expressions of an actor can be virtually built and coded in a small set of real numbers. We can measure facial expressions with two cameras in order to study their correlation with spoken language. Recognition software takes two images of a 3D object and compares them to a reference (the neutral face); a dynamic structural analysis then interprets the comparison (the digitized expression) in the modal basis of the reference. This modal basis can be built by a mathematical process (a natural-modes basis) or be modified by introducing specific expressions (sadness, happiness, anger, surprise), which are semantic modes. This process can be used in a human-machine interface as a dynamic input (3D reading of an actor's intention) or as a dynamic output (a 3D dynamic avatar).
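As a rough illustration of the modal analysis described in the conclusions, a digitized expression (the displacement field between a captured face and the neutral reference) can be decomposed in a small modal basis by least squares. The basis below is random placeholder data, and all names and sizes are assumptions for the sketch, not the paper's actual system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Neutral reference: N vertices in 3D, flattened into a 3N vector.
n_vertices = 200
neutral = rng.normal(size=3 * n_vertices)

# Modal basis: a few mode shapes as columns, e.g. natural modes or
# semantic modes such as "sadness" or "surprise".
modes = rng.normal(size=(3 * n_vertices, 4))

# A digitized expression = neutral face + a known mixture of modes.
true_coeffs = np.array([0.8, -0.3, 0.0, 0.5])
expression = neutral + modes @ true_coeffs

# Compare the expression to the reference and analyze the difference
# in the modal basis: a least-squares projection of the displacement.
displacement = expression - neutral
coeffs, *_ = np.linalg.lstsq(modes, displacement, rcond=None)
# coeffs ≈ [0.8, -0.3, 0.0, 0.5]
```

The recovered coefficients are the "small set of real numbers" coding the expression; a semantic mode's coefficient can then be read directly as the strength of that expression.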