FACS Based Generating of Facial Expressions
A. Wojdeł, L.J.M. Rothkrantz
Knowledge Based Systems Group, Faculty of Information Technology and Systems, Delft University of Technology, Zuidplantsoen BZ Delft, The Netherlands
A.Wojdel@cs.tudelft.nl, L.J.M.Rothkrantz@cs.tudelft.nl

Keywords: facial animation, 3D graphics, non-verbal communication, user interfaces

Abstract

In this paper we describe a semiautomatic system for facial animation based on the Facial Action Coding System. The overview of the system highlights its modular structure and the distribution of the knowledge about facial expressions that it uses. Further in this article we describe the last two modules in the expression processing part of the system: both the Facial Model and the AUs Blender directly influence the visual outcome of the rendered expression.

1 Introduction

A human face is an extremely interesting object. At first sight, all faces are the same: a pair of eyes, a nose in the middle, a mouth in the lower part of the face, and so on. Yet it is the face that gives us the primary information about a person's identity. The face provides information about the sex and age of its owner. Moreover, it is the face that communicates the emotions which are an integral part of our daily life. The human face very rarely remains still. Children learn how to communicate with facial expressions long before they grasp language in its verbal form. The face is therefore a very important component of communication between humans. It provides background information about the mood of the other participant in the conversation. It shows how that person perceives the form and the content of our words. Facial expressions can also complement the verbal part of speech: we shake the head as a sign of confirmation, and we use gaze direction to stress or specify the verbal description of spatial dependencies.
On the other hand, facial expressions provide a flexible means of controlling the dialogue. Without interfering with the acoustic part of the conversation, we use our face to draw the attention of the other person, to signal our readiness to respond, or to show that we are waiting for a response. People feel much better if they can observe the other person during the course of a conversation. The importance of the face in human-oriented communication has been noticed by the designers of interactive multi-modal interfaces. Since the early seventies there has been ongoing research into developing realistic and expressive models of the face. The first one, proposed by F. Parke [1] in 1972, was based on key-frame modeling. In this approach, a new wire-frame has to be generated and stored in a library of facial expressions for each emotion/expression, and an animation is performed by linear interpolation between two specific masks from the library. This approach, although still used in some cases, is tedious and data-expensive. The need to reduce the amount of stored data and to reproduce more subtle facial movements was the driving force in the development of new facial animation models. At first the so-called parametric models appeared [2, 3, 4, 5]; later, as the computational capabilities of computers grew, models based on facial anatomy and on the structure and functionality of facial muscles followed [6, 7, 8]. Both types of facial models are used in different application fields. Physically based modeling is used when the precision and realism of the generated face are crucial (e.g. in a medical environment), while the simpler modeling is used for real-time facial animation. Apart from generating a visually appropriate face image, it is also important to have a system that generates a psychologically proper facial animation in a given context. That means a system
with a human face that could be a substitute for a real person in the conversation. Such a system must for example decide which facial expression should be shown, with what intensity, and for how long it should last.

Figure 1: Design of the system for generating facial expressions (pipeline: Text -> Text Processing -> text with signs -> Facial Expressions Generator, supported by a Dictionary of Facial Expressions and its knowledge base -> Expressions Synchronizator -> Sign(t) -> AUs Translator (sign -> set of AUs) -> AU(t) -> Action Units Blender -> combined AUs -> Model of the Face / Face Animator -> 3D animated face; the Lips Synchronizator contributes a separate AU(t) stream to the blender)

Most of the systems developed with such a task in mind are rule-based [9, 10]. The sets of rules used in these systems were developed on the basis of psychological research describing the relationships between textual content, intonation and the accompanying facial expressions. Such rule sets are generic in nature: they describe the average responses of a large group of people and disregard person-specific variations. Each of us has very characteristic facial movements that we use (usually subconsciously) in different situations. Such personification of facial animation is next to impossible to obtain with fully automatic generation of facial expressions. For that reason we decided to concentrate our research on semiautomatic generation of facial animation. Our goal is to provide the user with a simple tool for designing facial animations. Such animations, when designed by a human, can incorporate person-specific behavior. It is up to the user which facial expression is appropriate in a given context. The system will, however, support the user at various levels of the design process, so that the obtained animation comes as close to being realistic as possible.

2 Overview of the system

When designing the generic form of the system, we assumed that it should be based on already established standards instead of introducing new ones.
We decided to use the Facial Action Coding System (FACS) developed by Ekman and Friesen [11], which is a standard way of describing facial expressions in both psychology and computer animation. FACS is based on 44 Action Units (AUs) that represent facial movements which cannot be decomposed into smaller ones. Ekman and Friesen argue that all possible facial expressions can be described as combinations of those AUs. Our facial animation system is directly based on them. The user of our system does not, however, have to be an expert in FACS in order to use the system itself. For the user we designed a facial expression script language that wraps up the AUs in more intuitive terms. It allows the user to choose from predefined facial expressions that are accompanied by a verbal description and usage examples. The dictionary of the proposed language also contains information about, for example, when people use a given expression and what they usually communicate through it. More on the script language itself can be found in [12]. In this way, the user can generate a facial animation by placing the predefined emblems at freely chosen places in the text. Those places determine the times at which the user wishes the given expressions to be shown at full intensity. The schematic diagram of the system is presented in Fig. 1. The system consists of two independent parts, the first of which concerns text processing. The second one deals solely with generating facial expressions. In our system, the text processing part is relatively simple. It contains only one module (the Facial Expressions Generator) that bases its output on interaction with the user. It is here that the user applies the facial expression script language and decides what kind of facial expressions should appear at which times. The output of this module, consisting of the text accompanied by the emblematic facial expressions, forms the input for the second part of the system.
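For illustration only, such emblem-annotated input might look like the sketch below. The tag syntax and the parser are purely hypothetical assumptions, not the actual script language of [12]; the sketch only shows the idea of attaching a named expression, at a chosen intensity, to a chosen place in the text.

```python
import re

# Hypothetical emblem syntax: <expression:intensity> placed inside the text.
# The marker's position stands for the moment of full chosen intensity.
SAMPLE = "Nice to see you <smile:0.8> again. I was <surprise:0.5> not expecting you."

def parse_emblems(text):
    """Split annotated text into plain words and emblem markers.

    Returns (plain_text, emblems), where each emblem is a tuple
    (word_index, expression_name, peak_intensity)."""
    words, emblems = [], []
    for token in text.split():
        m = re.fullmatch(r"<(\w+):([0-9.]+)>", token)
        if m:
            # The emblem peaks right after the preceding word.
            emblems.append((len(words), m.group(1), float(m.group(2))))
        else:
            words.append(token)
    return " ".join(words), emblems

plain, emblems = parse_emblems(SAMPLE)
print(plain)    # Nice to see you again. I was not expecting you.
print(emblems)  # [(4, 'smile', 0.8), (7, 'surprise', 0.5)]
```

A downstream module would then turn each word index into a timestamp (e.g. from speech timing) before handing the expressions on for synchronization.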
The expression processing part contains six modules that automatically process the incoming data. The first step in facial expression processing
deals with the time synchronization of the expressions. It is handled by two modules: the Expressions Synchronizator and the Lips Synchronizator. The Lips Synchronizator is responsible for producing the appropriate lip movements according to the text. The Expressions Synchronizator determines the time characteristics of the expressions provided by the user: it decides when a given expression should start, how it should intensify, and when it should cease to influence the face. In the next step, facial expressions have to be translated into sets of AUs with their appropriate activations. The task of the AUs Translator is to produce the set of AU activations for each incoming facial expression at a given expression intensity. The output of this module is defined as a set of AUs with their timing and intensities. The same type of output is generated by the Lips Synchronizator, and both must be passed on to the AUs Blender. It is worth noting that at this stage the information about the AU activations may contain redundant or conflicting information. For example, it may happen that the Lips Synchronizator calls for a face with closed lips at the same time at which the user chose to show a facial expression that describes the mouth as open. It is the task of the AUs Blender to combine those AUs in such a way that they can be shown on the synthetic face. The AUs Blender may decide to appropriately change the intensity, the timing, or even the occurrence of a specific AU in order to preserve the consistency of the expression (see section 4). At the end of the processing, the prepared AUs are sent to the Facial Model module, where they are interpreted and finally transformed into the 3D animation of the synthetic face. It is worth noticing that in our design, each of the modules contains some independent fragment of the knowledge about the relationships between facial expressions and AUs.
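The AU(t) stream that the AUs Translator and the Lips Synchronizator both hand to the AUs Blender can be pictured as a list of simple records. The field names and the onset/apex/offset split are illustrative assumptions; the text only specifies that each entry carries an AU, its intensity and its timing.

```python
from dataclasses import dataclass

@dataclass
class AUActivation:
    """One entry of the AU(t) stream passed between the modules.

    Field names are illustrative assumptions, not the paper's notation."""
    au: int            # FACS Action Unit number, e.g. 25 for Lips Part
    intensity: float   # peak activation in [0, 1]
    onset: float       # time (s) at which the AU starts to intensify
    apex: float        # time (s) at which it reaches its full chosen intensity
    offset: float      # time (s) at which it ceases to influence the face

# Example of the kind of conflicting input the AUs Blender must resolve:
# one channel presses the lips while the other parts them over the
# same interval.
stream = [
    AUActivation(au=25, intensity=0.7, onset=0.2, apex=0.6, offset=1.0),  # Lips Part
    AUActivation(au=24, intensity=0.5, onset=0.4, apex=0.6, offset=0.9),  # Lip Presser
]
```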
Such a structure allows for an incremental approach to constructing the system, and at each stage it gives the user increasing support in designing the animation. For example, the Facial Model itself takes the burden of modifying the wire-frame at the vertex level off the user's shoulders. Going one step earlier, the AUs Blender module takes care of resolving conflicts and inconsistencies at the AU level. The AUs Translator introduces a higher level of representation and provides the means of using a more intuitive language of facial expressions instead of AUs. This scheme can be extended even to the text processing part of the system, in which case we would have a fully automated system for generating facial animation. Moreover, the modularity of the system and the independence of the knowledge used in different modules allow specific aspects of the system to be improved or modified easily.

3 Facial Model

Our facial model is performance-based and at the same time parametric. It is based on AUs, which means that for each AU we have defined a set of parameters describing the way in which the wire-frame vertices must be displaced on the synthetic face so that it shows the appropriate AU. Moreover, as a typical facial expression incorporates multiple AUs, we have also defined procedures for accumulating the displacements. We can divide the AUs into three categories based on their area of influence and on which facial objects they act: Single Object AUs, Sub-object AUs and Multiple Object AUs. Those categories differ not only in definition but also in implementation details. We will further describe the implementation of the first type of AUs and highlight the differences between this generic model and the other categories.

3.1 Single Object AUs

The biggest group of AUs are the Single Object AUs. Their implementation therefore provides the basic framework for implementing facial movements.
Each of the AUs is represented by an intensity value and two functions that together form the description of the displacement. For each AU we must therefore define:

φ(v): the density function. It represents the area in which the AU influences the face and the amount of displacement when the AU is activated at 100%.

d(v): the direction function. It defines the direction of the displacement caused by the given AU.

a ∈ [0, 1]: the intensity value. When the AU is not activated at all, a = 0; in the case of full activation, a = 1.

For most of the AUs we can then define the displacement of the face at a given point v as:

Δv = a · d(v) · φ(v)    (1)

The above equation can be used whenever we can assume a linear dependency between the AU intensity and the displacement. There are however some AUs that do not behave in this way, for example the rotations of the eyes or the movements of the head as a whole. For those AUs we use a more generic displacement calculation:

Δv = d(v, a · φ(v))    (2)

where the direction of the displacement depends not only on the initial position of the point but also on the intensity of the activated AU and the value of the density function at this point.

Figure 2: Landmark points in a neutral face (a) and a face showing AU15 - Lip Corner Depressor (b)

The implementation of this generic model consists of finding the appropriate forms of both functions for each of the AUs. The functions are customized to a specific person in two stages. As stated at the beginning of this section, our model is performance-based; the first stage therefore consists of measuring the facial movements of the person being modeled. We need 3D measurements of a real face deformed by showing the pure and fully activated AUs. The measurements are not constrained by the choice of the wire-frame that will later be modeled. The only constraint is that they describe the facial movements in a complete way: all of the significant changes on the face must be reflected in the measured data. In our experiments we used 36 landmark points placed on one side of the face, assuming that both sides are equivalent (see Fig. 2), and we additionally used the visible contours of the eyes, eyebrows and lips as natural landmarks. The second stage of implementing an AU relies on fitting the parameters of the density and direction functions to the measured data. Thanks to the form in which the displacement is calculated (Eq. 1), for most of the AUs both functions can be optimized independently, which simplifies the fitting process greatly. A more in-depth description of this optimization, together with a validation of the fitting results, can be found in [13].

3.2 Sub-Object AUs

AUs from this group are characterized by the fact that their activation results in a separation of the upper and lower lip. Separation of the lips induces rapid changes in the density as well as in the direction of the movement in a relatively small area of the face.
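Returning for a moment to the single-object case, the linear model of Eq. (1) can be sketched as follows. The radial Gaussian density, the constant direction function and all vertex coordinates are illustrative assumptions; the actual function forms are fitted to measured landmark data as described above.

```python
import numpy as np

def make_linear_au(center, sigma, direction):
    """Build a single-object AU in the linear form of Eq. (1):
    delta_v = a * d(v) * phi(v).

    Illustrative assumption: a radial Gaussian density around `center`
    and a constant direction function."""
    center = np.asarray(center, dtype=float)
    direction = np.asarray(direction, dtype=float)

    def phi(v):
        # Density: area of influence and displacement magnitude at a = 1.
        return np.exp(-np.sum((v - center) ** 2, axis=-1) / (2.0 * sigma ** 2))

    def displace(vertices, a):
        # Eq. (1): the displacement scales linearly with the intensity a.
        v = np.asarray(vertices, dtype=float)
        return v + a * phi(v)[..., None] * direction

    return displace

# A tiny patch of wire-frame vertices near a hypothetical lip corner.
verts = np.array([[1.0, 0.0, 0.0],    # at the AU's center
                  [1.2, -0.1, 0.0],   # close by: displaced slightly less
                  [3.0, 2.0, 0.0]])   # far away: essentially unaffected
au15 = make_linear_au(center=[1.0, 0.0, 0.0], sigma=0.5,
                      direction=[0.0, -1.0, 0.0])  # pull the corner down

moved = au15(verts, a=1.0)  # full activation
```

At a = 0 the vertices are untouched, and halving a halves every displacement; it is exactly this linearity that Eq. (2) relaxes for AUs such as head and eye rotations.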
Therefore, in order to obtain a better optimization, we decided to divide the wireframe representing the surface of the face into two parts. This division is defined by the topology of the facial surface and can therefore be determined independently of the wireframe configuration. For the sake of simplicity, in our implementation we use a single plane that intersects the face at positions corresponding to the mouth corners (see Figure 3).

Figure 3: Division of the wireframe into two parts for sub-object AUs

When fitting a sub-object AU to the measured data, we need to take into account that there are now four functions in total: two independent density functions and two independent direction functions. Fortunately, they act in pairs on the separate parts of the wireframe and do not influence each other. Obviously, there is still only one intensity value, which is used together with either pair of functions depending on the initial position of the point being displaced.

3.3 Multiple Object AUs

A model of the face can be built from a few objects, e.g. the facial surface, the eyes and the teeth. Usually a specific AU modifies only one object. For example, moving the eyes influences only the eyes and does not change the face around them. On the other hand, closing the eyes acts only on the face and does not have any influence on the eyes: although closing the eyelids obscures the eyeballs, their geometry is not affected by this movement. However, the activation of some AUs can result in the deformation of more than one object. Such AUs include e.g. all AUs related to the movement of the whole head; when we rotate the head, the rotation acts on the facial surface as well as on the eyes and the teeth, even though the eyes and teeth are not necessarily visible. Another example is AU26 Jaw Drop. Although when showing this single AU the mouth remains closed, we should remember about the teeth.
When we, for example, combine AU26 with AU25 Lips Part, the teeth can become visible and should then be appropriately moved. Therefore, while implementing multiple object
AUs, we have to remember to define appropriate AU components for all of the objects in a facial model that a given AU can affect.

Figure 4: Displacement vector calculation in an additive mixer (a) and a successive mixer (b)

3.4 Combining AUs

As we know, the facial expressions used in real life rarely contain only a single AU activation. A typical facial expression consists of three or more AUs. Therefore the definition of interactions between the AUs at the geometrical level must be an integral part of our model. We use two different types of AU mixers: additive and successive. In an additive mixer, the component vectors of the movement are calculated separately for each of the AUs; the resulting movement vector is the summation of the component vectors and can be applied to the original model. In this way the result of the rendering does not depend on the order in which the AUs are modeled (see Figure 4a). In the case of successive mixing, the original wireframe is adapted through the successive AU modifiers in a specific order: the wireframe vertices change their positions while applying one AU after another (see Figure 4b). Which of the two mixers is used in a given combination depends on the types of the AUs that take part in the expression. For example, most of the single object AUs can be combined using additive mixing. The only exceptions are the nonlinear AUs (described by Eq. 2), which by their nature must be combined in a successive way. The sub-object AUs are also usually better combined with the additive mixer: this kind of AU involves rapid changes in small areas of the face, and successive mixing may therefore produce unrealistic and unexpected facial expressions (see Figure 5). The multiple object AUs present a special case. Generally we must apply a successive mixer with regard to all of the secondary objects.
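The difference between the two mixers can be sketched with two toy AU displacement functions; both functions and all numbers below are illustrative assumptions, not actual AUs from our model.

```python
import numpy as np

def additive_mix(vertices, aus, intensities):
    """Additive mixer: every AU's displacement is computed from the
    neutral positions and the component vectors are summed, so the
    result does not depend on the order of the AUs."""
    total = np.zeros_like(vertices)
    for displace, a in zip(aus, intensities):
        total += displace(vertices, a) - vertices
    return vertices + total

def successive_mix(vertices, aus, intensities):
    """Successive mixer: each AU is applied to the output of the
    previous one, so the result depends on the order of application."""
    out = vertices
    for displace, a in zip(aus, intensities):
        out = displace(out, a)
    return out

# Two toy "AUs": a vertical shift and a scaling about the origin.
shift = lambda v, a: v + a * np.array([0.0, 1.0, 0.0])
scale = lambda v, a: v * (1.0 + a)

verts = np.array([[1.0, 0.0, 0.0]])
print(additive_mix(verts, [shift, scale], [1.0, 1.0]))    # [[2. 1. 0.]]
print(successive_mix(verts, [shift, scale], [1.0, 1.0]))  # [[2. 2. 0.]]
```

Swapping the order of `shift` and `scale` leaves the additive result unchanged but alters the successive one, which is why ordering only has to be specified for the successive mixer.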
That means that at first we have to modify a secondary object according to those AUs from the combination that act only on it, and only later take into consideration the influence of the multiple object AUs. For example, in order to move the head we first have to activate all of the AUs for the face, the eyes and the teeth, and then apply the AUs responsible for the movement of the whole head to all of the modified objects.

Figure 5: Combination of AU12 and AU25 using an additive mixer (a) and a successive mixer (b)

4 AUs Blender

The facial model itself does not contain any information about the dependencies between specific AUs; it contains only a geometrical description of the way in which AUs are combined. Whether or not there is physiological sense in combining those AUs is of no concern to that module. It is the task of the AUs Blender to prepare the set of activated AUs in such a way that it can be directly rendered by the facial model. There are several restrictions on the possible combinations of AUs. Some AUs are contradictory to each other (such as opening and closing the eyes), and some cannot be activated at the same time even though they do not seem contradictory at first sight. According to Ekman there are five distinct ways in which AUs combine and influence each other. First of all, the combination of AUs can be additive: in such a case they operate as if they were activated separately, and the resulting facial movement is a plain summation of the separate displacements. AUs can also combine in such a way that one dominates over the other, diminishing the result of the activation of the dominated AU. When AUs are contradictory to each other, they combine in an alternative way. There is also the possibility of substitution, when the occurrence of two AUs at the same time is equivalent to the activation of a third AU alone.
Finally, all of the exceptions that cannot be modeled in the above-mentioned ways fall into the group of different ways of combining AUs. The AUs Blender does not implement the above description directly. Its task is not only to make sure that the AU activations are appropriate; it also has to support the user in using the AUs for animation. Therefore, instead of blocking the activation of contradictory AUs, the blender will try to infer an activation value for just one of them that corresponds best to the intent of the
user. For example, a 100% activation of AU51 (Head Turn Left) combined with a 60% activation of AU52 (Head Turn Right) results in a 40% activation of AU51 alone. Similarly, the domination of one AU over another cannot be handled by simply removing the activation of the dominated AU: we need to make sure that the smoothness of the changes in the face is preserved. Therefore the resulting AU activations have to be recalculated so that both the smoothness and the dominating role of one of the AUs are properly represented. Moreover, there are also constraints on the activation intensities even when the AUs do not influence each other directly. As an example, AU51 (Head Turn Left) and AU53 (Head Up) are simply additive. This is entirely true, however, only when scoring their appearance on the binary scale of the FACS system: both of them cannot be activated to the full extent at the same time. In this case the AUs Blender normalizes their activations so that their sum does not exceed 100%. All of the AUs Blender's actions can be encapsulated in a set of simple if-then rules and some arithmetic. The end result, however, is a complex model of AU combinations that is compatible with Ekman's definitions of facial movements.

5 Conclusions

The proposed system design, in its current highly modular form, allows for high flexibility on both the user and the developer side. For the user, the system provides varying levels of automation. Depending on their experience and abilities, there are ways to influence the facial animation at all levels of the facial expression processing. On the other hand, the knowledge about facial animation and facial expressions is neatly encapsulated in small independent chunks, so that it can easily be extended and/or modified for a specific application. Our facial model and the AUs Blender are based on the well-established work of P. Ekman.
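As a minimal sketch of the blender's if-then arithmetic, the two worked examples from Section 4 (the contradictory head turns, and the AU51/AU53 normalization) could be written as follows; the dictionary representation and function names are assumptions, covering only these two rules rather than the blender's full rule set.

```python
def resolve_contradictory(acts, left, right):
    """Contradictory pair (e.g. AU51 Head Turn Left vs AU52 Head Turn
    Right): keep only the stronger AU, at the difference of the two
    activations, so the user's net intent is preserved."""
    a, b = acts.pop(left, 0.0), acts.pop(right, 0.0)
    if a > b:
        acts[left] = a - b
    elif b > a:
        acts[right] = b - a
    return acts

def normalize_pair(acts, au1, au2):
    """Jointly constrained pair (e.g. AU51 Head Turn Left and AU53
    Head Up): rescale so the sum of activations does not exceed 100%."""
    total = acts.get(au1, 0.0) + acts.get(au2, 0.0)
    if total > 1.0:
        for au in (au1, au2):
            if au in acts:
                acts[au] /= total
    return acts

# The worked example from the text: AU51 at 100% with AU52 at 60%
# reduces to AU51 at 40% alone.
acts = resolve_contradictory({51: 1.0, 52: 0.6}, 51, 52)
print(acts)  # {51: 0.4}
```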
This allows us to use the existing expertise and data in developing the system further.

References

[1] Frederic I. Parke. Computer generated animation of faces. In Proceedings of the ACM National Conference, 1972.

[2] Frederic I. Parke. Parametrized models for facial animation. IEEE Computer Graphics, 2(9):61-68.

[3] Keith Waters. A muscle model for animating three-dimensional facial expressions. Computer Graphics (SIGGRAPH 87), 21(4):17-24, July 1987.

[4] Nadia Magnenat-Thalmann, E. Primeau, and Daniel Thalmann. Abstract muscle action procedures for human face animation. The Visual Computer, 3(5).

[5] Prem Kalra, Angelo Mangili, Nadia Magnenat-Thalmann, and Daniel Thalmann. Simulation of facial muscle actions using rational free-form deformations. In A. Kilgour and L. Kjelldahl, editors, Proceedings of Eurographics 92, Computer Graphics Forum, volume 11, pages 59-69, Oxford, UK, 1992. NCC Blackwell.

[6] Demetri Terzopoulos and Keith Waters. Analysis and synthesis of facial image sequences using physical and anatomical models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 15(6).

[7] Yuencheng Lee, Demetri Terzopoulos, and Keith Waters. Realistic modeling for facial animation. In Computer Graphics Proceedings, Annual Conference Series, pages 55-61.

[8] Rolf M. Koch, Markus Hans Gross, and Albert A. Bosshard. Emotion editing using finite elements. Technical Report 281, ETH Zürich, Institute of Scientific Computing, January.

[9] Justine Cassell, Catherine Pelachaud, Norman I. Badler, Mark Steedman, Brett Achorn, Tripp Becket, Brett Douville, Scott Prevost, and Matthew Stone. Animated conversation: Rule-based generation of facial expression, gesture and spoken intonation for multiple conversational agents. In Proceedings of ACM SIGGRAPH, Orlando (FL).

[10] Catherine Pelachaud, Norman I. Badler, and Mark Steedman. Generating facial expressions for speech. Cognitive Science, 20(1):1-46.

[11] Paul Ekman and Wallace V. Friesen. Unmasking the Face. Prentice-Hall, Inc., Englewood Cliffs, New Jersey, USA.

[12] Anna Wojdeł and Leon J. M. Rothkrantz. Intelligent system for semiautomatic facial animation. In Proceedings of Euromedia 2000, May 2000.

[13] Anna Wojdeł and Leon J. M. Rothkrantz. A performance based parametric model for facial animation. In Proceedings of the IEEE International Conference on Multimedia and Expo 2000, New York, NY, USA, July-August 2000.
More informationREAL TIME FACIAL INTERACTION
MIRALab Copyright Information 1998 REAL TIME FACIAL INTERACTION Igor Sunday Pandzic, Prem Kalra, Nadia Magnenat Thalmann MIRALab, University of Geneva ABSTRACT Human interface for computer graphics systems
More informationINTERNATIONAL JOURNAL OF GRAPHICS AND MULTIMEDIA (IJGM)
INTERNATIONAL JOURNAL OF GRAPHICS AND MULTIMEDIA (IJGM) International Journal of Graphics and Multimedia (IJGM), ISSN: 0976 6448 (Print) ISSN: 0976 ISSN : 0976 6448 (Print) ISSN : 0976 6456 (Online) Volume
More informationFacial Motion Cloning 1
Facial Motion Cloning 1 Abstract IGOR S. PANDZIC Department of Electrical Engineering Linköping University, SE-581 83 Linköping igor@isy.liu.se We propose a method for automatically copying facial motion
More informationHuman Character Animation in 3D-Graphics: The EMOTE System as a Plug-in for Maya
Hartmann - 1 Bjoern Hartman Advisor: Dr. Norm Badler Applied Senior Design Project - Final Report Human Character Animation in 3D-Graphics: The EMOTE System as a Plug-in for Maya Introduction Realistic
More informationRecently, research in creating friendly human
For Duplication Expression and Impression Shigeo Morishima Recently, research in creating friendly human interfaces has flourished. Such interfaces provide smooth communication between a computer and a
More informationVolumetric Deformable Models for Simulation of Laparoscopic Surgery
Volumetric Deformable Models for Simulation of Laparoscopic Surgery S. Cotin y, H. Delingette y, J.M. Clément z V. Tassetti z, J. Marescaux z, N. Ayache y y INRIA, Epidaure Project 2004, route des Lucioles,
More informationClassification of Upper and Lower Face Action Units and Facial Expressions using Hybrid Tracking System and Probabilistic Neural Networks
Classification of Upper and Lower Face Action Units and Facial Expressions using Hybrid Tracking System and Probabilistic Neural Networks HADI SEYEDARABI*, WON-SOOK LEE**, ALI AGHAGOLZADEH* AND SOHRAB
More informationKNOWLEDGE DRIVEN FACIAL MODELLING
KNOWLEDGE DRIVEN FACIAL MODELLING PROEFSCHRIFT ter verkrijging van de graad van doctor aan de Technische Universiteit Delft, op gezag van de Rector Magnificus prof. dr. ir. J.T. Fokkema, voorzitter van
More informationMuscle Based facial Modeling. Wei Xu
Muscle Based facial Modeling Wei Xu Facial Modeling Techniques Facial modeling/animation Geometry manipulations Interpolation Parameterizations finite element methods muscle based modeling visual simulation
More informationFace analysis : identity vs. expressions
Face analysis : identity vs. expressions Hugo Mercier 1,2 Patrice Dalle 1 1 IRIT - Université Paul Sabatier 118 Route de Narbonne, F-31062 Toulouse Cedex 9, France 2 Websourd 3, passage André Maurois -
More informationAnalysis and Synthesis of Facial Expressions with Hand-Generated Muscle Actuation Basis
Proceedings of Computer Animation 2001, pages 12 19, November 2001 Analysis and Synthesis of Facial Expressions with Hand-Generated Muscle Actuation Basis Byoungwon Choe Hyeong-Seok Ko School of Electrical
More informationFACIAL EXPRESSION USING 3D ANIMATION
Volume 1 Issue 1 May 2010 pp. 1 7 http://iaeme.com/ijcet.html I J C E T IAEME FACIAL EXPRESSION USING 3D ANIMATION Mr. K. Gnanamuthu Prakash 1, Dr. S. Balasubramanian 2 ABSTRACT Traditionally, human facial
More informationExpert system for automatic analysis of facial expressions
Image and Vision Computing 18 (2000) 881 905 www.elsevier.com/locate/imavis Expert system for automatic analysis of facial expressions M. Pantic*, L.J.M. Rothkrantz Faculty of Information Technology and
More informationAnimation & AR Modeling Guide. version 3.0
Animation & AR Modeling Guide version 3.0 Contents 1. Introduction... 3 2. Face animation modeling guide...4 2.1. Creating blendshapes...4 2.1.1. Blendshape suggestions...5 2.2. Binding configuration...6
More informationCloth Simulation. Tanja Munz. Master of Science Computer Animation and Visual Effects. CGI Techniques Report
Cloth Simulation CGI Techniques Report Tanja Munz Master of Science Computer Animation and Visual Effects 21st November, 2014 Abstract Cloth simulation is a wide and popular area of research. First papers
More information3D FACIAL EXPRESSION TRACKING AND REGENERATION FROM SINGLE CAMERA IMAGE BASED ON MUSCLE CONSTRAINT FACE MODEL
International Archives of Photogrammetry and Remote Sensing. Vol. XXXII, Part 5. Hakodate 1998 3D FACIAL EXPRESSION TRACKING AND REGENERATION FROM SINGLE CAMERA IMAGE BASED ON MUSCLE CONSTRAINT FACE MODEL
More information3D Facial Action Units Recognition for Emotional Expression
3D Facial Action Units Recognition for Emotional Expression Norhaida Hussain 1, Hamimah Ujir, Irwandi Hipiny and Jacey-Lynn Minoi 1 Department of Information Technology and Communication, Politeknik Kuching,
More informationAn Interactive Interface for Directing Virtual Humans
An Interactive Interface for Directing Virtual Humans Gael Sannier 1, Selim Balcisoy 2, Nadia Magnenat-Thalmann 1, Daniel Thalmann 2 1) MIRALab, University of Geneva, 24 rue du Général Dufour CH 1211 Geneva,
More informationAnimation of 3D surfaces.
Animation of 3D surfaces Motivations When character animation is controlled by skeleton set of hierarchical joints joints oriented by rotations the character shape still needs to be visible: visible =
More informationFinal Report to NSF of the Standards for Facial Animation Workshop
University of Pennsylvania ScholarlyCommons Technical Reports (CIS) Department of Computer & Information Science January 1994 Final Report to NSF of the Standards for Facial Animation Workshop Catherine
More informationTopics for thesis. Automatic Speech-based Emotion Recognition
Topics for thesis Bachelor: Automatic Speech-based Emotion Recognition Emotion recognition is an important part of Human-Computer Interaction (HCI). It has various applications in industrial and commercial
More informationIFACE: A 3D SYNTHETIC TALKING FACE
IFACE: A 3D SYNTHETIC TALKING FACE PENGYU HONG *, ZHEN WEN, THOMAS S. HUANG Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign Urbana, IL 61801, USA We present
More informationFace Synthesis in the VIDAS project
Face Synthesis in the VIDAS project Marc Escher 1, Igor Pandzic 1, Nadia Magnenat Thalmann 1, Daniel Thalmann 2, Frank Bossen 3 Abstract 1 MIRALab - CUI University of Geneva 24 rue du Général-Dufour CH1211
More informationFast Facial Motion Cloning in MPEG-4
Fast Facial Motion Cloning in MPEG-4 Marco Fratarcangeli and Marco Schaerf Department of Computer and Systems Science University of Rome La Sapienza frat,schaerf@dis.uniroma1.it Abstract Facial Motion
More informationMulti-Modal Human- Computer Interaction
Multi-Modal Human- Computer Interaction Attila Fazekas University of Debrecen, Hungary Road Map Multi-modal interactions and systems (main categories, examples, benefits) Face detection, facial gestures
More informationMaster s Thesis. Cloning Facial Expressions with User-defined Example Models
Master s Thesis Cloning Facial Expressions with User-defined Example Models ( Kim, Yejin) Department of Electrical Engineering and Computer Science Division of Computer Science Korea Advanced Institute
More informationdoi: / The Application of Polygon Modeling Method in the Maya Persona Model Shaping
doi:10.21311/001.39.12.37 The Application of Polygon Modeling Method in the Maya Persona Model Shaping Qinggang Sun Harbin University of Science and Technology RongCheng Campus, RongCheng Shandong, 264300
More informationCombination of facial movements on a 3D talking head
Combination of facial movements on a 3D talking head The Duy Bui Dirk Heylen University of Twente Department of Computer Science The Netherlands {theduy,heylen,anijholt}@cs.utwente.nl Anton Nijholt Abstract
More informationHigh-Fidelity Facial and Speech Animation for VR HMDs
High-Fidelity Facial and Speech Animation for VR HMDs Institute of Computer Graphics and Algorithms Vienna University of Technology Forecast facial recognition with Head-Mounted Display (HMD) two small
More informationCaricaturing Buildings for Effective Visualization
Caricaturing Buildings for Effective Visualization Grant G. Rice III, Ergun Akleman, Ozan Önder Özener and Asma Naz Visualization Sciences Program, Department of Architecture, Texas A&M University, USA
More informationApplication of the Fourier-wavelet transform to moving images in an interview scene
International Journal of Applied Electromagnetics and Mechanics 15 (2001/2002) 359 364 359 IOS Press Application of the Fourier-wavelet transform to moving images in an interview scene Chieko Kato a,,
More informationFACIAL EXPRESSION USING 3D ANIMATION TECHNIQUE
FACIAL EXPRESSION USING 3D ANIMATION TECHNIQUE Vishal Bal Assistant Prof., Pyramid College of Business & Technology, Phagwara, Punjab, (India) ABSTRACT Traditionally, human facial language has been studied
More informationSpeech Driven Synthesis of Talking Head Sequences
3D Image Analysis and Synthesis, pp. 5-56, Erlangen, November 997. Speech Driven Synthesis of Talking Head Sequences Peter Eisert, Subhasis Chaudhuri,andBerndGirod Telecommunications Laboratory, University
More informationWe present a method to accelerate global illumination computation in pre-rendered animations
Attention for Computer Graphics Rendering Hector Yee PDI / DreamWorks Sumanta Pattanaik University of Central Florida Corresponding Author: Hector Yee Research and Development PDI / DreamWorks 1800 Seaport
More informationThe Simulation of a Virtual TV Presentor
MIRALab Copyright Information 1998 The Simulation of a Virtual TV Presentor Abstract Nadia Magnenat Thalmann, Prem Kalra MIRALab, University of Geneva This paper presents the making of six short sequences
More informationData-Driven Face Modeling and Animation
1. Research Team Data-Driven Face Modeling and Animation Project Leader: Post Doc(s): Graduate Students: Undergraduate Students: Prof. Ulrich Neumann, IMSC and Computer Science John P. Lewis Zhigang Deng,
More informationGetting Started with Crazy Talk 6
Getting Started with Crazy Talk 6 Crazy Talk 6 is an application that generates talking characters from an image or photo, as well as facial animation for video. Importing an Image Launch Crazy Talk and
More informationFacial Animation. Joakim Königsson
Facial Animation Joakim Königsson June 30, 2005 Master s Thesis in Computing Science, 20 credits Supervisor at CS-UmU: Berit Kvernes Examiner: Per Lindström Umeå University Department of Computing Science
More informationFACE ANALYSIS AND SYNTHESIS FOR INTERACTIVE ENTERTAINMENT
FACE ANALYSIS AND SYNTHESIS FOR INTERACTIVE ENTERTAINMENT Shoichiro IWASAWA*I, Tatsuo YOTSUKURA*2, Shigeo MORISHIMA*2 */ Telecommunication Advancement Organization *2Facu!ty of Engineering, Seikei University
More informationPersonal style & NMF-based Exaggerative Expressions of Face. Seongah Chin, Chung-yeon Lee, Jaedong Lee Multimedia Department of Sungkyul University
Personal style & NMF-based Exaggerative Expressions of Face Seongah Chin, Chung-yeon Lee, Jaedong Lee Multimedia Department of Sungkyul University Outline Introduction Related Works Methodology Personal
More informationHuman body animation. Computer Animation. Human Body Animation. Skeletal Animation
Computer Animation Aitor Rovira March 2010 Human body animation Based on slides by Marco Gillies Human Body Animation Skeletal Animation Skeletal Animation (FK, IK) Motion Capture Motion Editing (retargeting,
More informationTopology Optimization of an Engine Bracket Under Harmonic Loads
Topology Optimization of an Engine Bracket Under Harmonic Loads R. Helfrich 1, A. Schünemann 1 1: INTES GmbH, Schulze-Delitzsch-Str. 16, 70565 Stuttgart, Germany, www.intes.de, info@intes.de Abstract:
More informationnetwork and image warping. In IEEE International Conference on Neural Networks, volume III,
Mary YY Leung, Hung Yen Hui, and Irwin King Facial expression synthesis by radial basis function network and image warping In IEEE International Conference on Neural Networks, volume III, pages 1{15, Washington
More informationEvaluation of Gabor-Wavelet-Based Facial Action Unit Recognition in Image Sequences of Increasing Complexity
Evaluation of Gabor-Wavelet-Based Facial Action Unit Recognition in Image Sequences of Increasing Complexity Ying-li Tian 1 Takeo Kanade 2 and Jeffrey F. Cohn 2,3 1 IBM T. J. Watson Research Center, PO
More informationEvaluation of Expression Recognition Techniques
Evaluation of Expression Recognition Techniques Ira Cohen 1, Nicu Sebe 2,3, Yafei Sun 3, Michael S. Lew 3, Thomas S. Huang 1 1 Beckman Institute, University of Illinois at Urbana-Champaign, USA 2 Faculty
More informationFundamentals of STEP Implementation
Fundamentals of STEP Implementation David Loffredo loffredo@steptools.com STEP Tools, Inc., Rensselaer Technology Park, Troy, New York 12180 A) Introduction The STEP standard documents contain such a large
More informationFacial Motion Capture Editing by Automated Orthogonal Blendshape Construction and Weight Propagation
Facial Motion Capture Editing by Automated Orthogonal Blendshape Construction and Weight Propagation Qing Li and Zhigang Deng Department of Computer Science University of Houston Houston, TX, 77204, USA
More informationTony Kobayashi. B.Sc., Carnegie Mellon University, 1994 THE FACULTY OF GRADUATE STUDIES. I accept this essay as conforming
Using Recorded Motion for Facial Animation by Tony Kobayashi B.Sc., Carnegie Mellon University, 1994 AN ESSAY SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF Master of Science in
More informationFACIAL MOVEMENT BASED PERSON AUTHENTICATION
FACIAL MOVEMENT BASED PERSON AUTHENTICATION Pengqing Xie Yang Liu (Presenter) Yong Guan Iowa State University Department of Electrical and Computer Engineering OUTLINE Introduction Literature Review Methodology
More information2D rendering takes a photo of the 2D scene with a virtual camera that selects an axis aligned rectangle from the scene. The photograph is placed into
2D rendering takes a photo of the 2D scene with a virtual camera that selects an axis aligned rectangle from the scene. The photograph is placed into the viewport of the current application window. A pixel
More informationCharacter Modeling COPYRIGHTED MATERIAL
38 Character Modeling p a r t _ 1 COPYRIGHTED MATERIAL 39 Character Modeling Character Modeling 40 1Subdivision & Polygon Modeling Many of Maya's features have seen great improvements in recent updates
More informationNOWADAYS, more and more robots are designed not
1 Mimic Expression System for icub Ana Cláudia Sarmento Ramos Marques Institute for Systems and Robotics Institute Superior Técnico Av. Rovisco Pais, 1; Lisbon, Portugal claudiamarques@tecnico.ulisboa.pt
More informationHUMAN S FACIAL PARTS EXTRACTION TO RECOGNIZE FACIAL EXPRESSION
HUMAN S FACIAL PARTS EXTRACTION TO RECOGNIZE FACIAL EXPRESSION Dipankar Das Department of Information and Communication Engineering, University of Rajshahi, Rajshahi-6205, Bangladesh ABSTRACT Real-time
More informationSurgical Cutting on a Multimodal Object Representation
Surgical Cutting on a Multimodal Object Representation Lenka Jeřábková and Torsten Kuhlen Virtual Reality Group, RWTH Aachen University, 52074 Aachen Email: jerabkova@rz.rwth-aachen.de Abstract. In this
More informationA. Egges, X. Zhang, S. Kshirsagar, N. M. Thalmann. Emotional Communication with Virtual Humans. Multimedia Modelling, Taiwan
A. Egges, X. Zhang, S. Kshirsagar, N. M. Thalmann. Emotional Communication with Virtual Humans. Multimedia Modelling, Taiwan. 2003. Emotional communication with virtual humans Arjan Egges, Xuan Zhang,
More informationEdge Detection for Facial Expression Recognition
Edge Detection for Facial Expression Recognition Jesús García-Ramírez, Ivan Olmos-Pineda, J. Arturo Olvera-López, Manuel Martín Ortíz Faculty of Computer Science, Benemérita Universidad Autónoma de Puebla,
More informationSocially Communicative Characters for Interactive Applications
Socially Communicative Characters for Interactive Applications Ali Arya imediatek, Inc., Vancouver, BC, Canada aarya@sfu.ca Steve DiPaola Simon Fraser University, Surrey, BC, Canada sdipaola@sfu.ca Lisa
More informationRichard Williams Study Circle Handout: Disney 12 Principles of Animation. Frank Thomas & Ollie Johnston: The Illusion of Life
Frank Thomas & Ollie Johnston: The Illusion of Life 1 1. Squash and Stretch The principle is based on observation that only stiff objects remain inert during motion, while objects that are not stiff, although
More informationRENDERING AND ANALYSIS OF FACES USING MULTIPLE IMAGES WITH 3D GEOMETRY. Peter Eisert and Jürgen Rurainsky
RENDERING AND ANALYSIS OF FACES USING MULTIPLE IMAGES WITH 3D GEOMETRY Peter Eisert and Jürgen Rurainsky Fraunhofer Institute for Telecommunications, Heinrich-Hertz-Institute Image Processing Department
More informationMeticulously Detailed Eye Model and Its Application to Analysis of Facial Image
Meticulously Detailed Eye Model and Its Application to Analysis of Facial Image Tsuyoshi Moriyama Keio University moriyama@ozawa.ics.keio.ac.jp Jing Xiao Carnegie Mellon University jxiao@cs.cmu.edu Takeo
More informationAutomatic Detecting Neutral Face for Face Authentication and Facial Expression Analysis
From: AAAI Technical Report SS-03-08. Compilation copyright 2003, AAAI (www.aaai.org). All rights reserved. Automatic Detecting Neutral Face for Face Authentication and Facial Expression Analysis Ying-li
More informationADVANCED DIRECT MANIPULATION OF FEATURE MODELS
ADVANCED DIRECT MANIPULATION OF FEATURE MODELS Rafael Bidarra, Alex Noort Faculty of Electrical Engineering, Mathematics and Computer Science, Delft University of Technology, The Netherlands A.R.Bidarra@tudelft.nl,
More informationA Sketch Interpreter System with Shading and Cross Section Lines
Journal for Geometry and Graphics Volume 9 (2005), No. 2, 177 189. A Sketch Interpreter System with Shading and Cross Section Lines Kunio Kondo 1, Haruki Shizuka 1, Weizhong Liu 1, Koichi Matsuda 2 1 Dept.
More informationAccelerated Ambient Occlusion Using Spatial Subdivision Structures
Abstract Ambient Occlusion is a relatively new method that gives global illumination like results. This paper presents a method to accelerate ambient occlusion using the form factor method in Bunnel [2005]
More informationD DAVID PUBLISHING. 3D Modelling, Simulation and Prediction of Facial Wrinkles. 1. Introduction
Journal of Communication and Computer 11 (2014) 365-370 doi: 10.17265/1548-7709/2014.04 005 D DAVID PUBLISHING 3D Modelling, Simulation and Prediction of Facial Wrinkles Sokyna Alqatawneh 1, Ali Mehdi
More informationDynamic Editing Methods for Interactively Adapting Cinematographic Styles
Dynamic Editing Methods for Interactively Adapting Cinematographic Styles Martin Rougvie Culture Lab School of Computing Science Newcastle University Newcastle upon Tyne NE1 7RU m.g.rougvie@ncl.ac.uk Patrick
More informationFacial Animation. Chapter 7
Chapter 7 Facial Animation Although you can go a long way toward completing a scene simply by animating the character s body, animating the character s face adds greatly to the expressiveness of a sequence.
More informationFACIAL FEATURE EXTRACTION BASED ON THE SMALLEST UNIVALUE SEGMENT ASSIMILATING NUCLEUS (SUSAN) ALGORITHM. Mauricio Hess 1 Geovanni Martinez 2
FACIAL FEATURE EXTRACTION BASED ON THE SMALLEST UNIVALUE SEGMENT ASSIMILATING NUCLEUS (SUSAN) ALGORITHM Mauricio Hess 1 Geovanni Martinez 2 Image Processing and Computer Vision Research Lab (IPCV-LAB)
More informationHEFES: an Hybrid Engine for Facial Expressions Synthesis to control human-like androids and avatars
HEFES: an Hybrid Engine for Facial Expressions Synthesis to control human-like androids and avatars Daniele Mazzei, Nicole Lazzeri, David Hanson and Danilo De Rossi Abstract Nowadays advances in robotics
More informationMotion Synthesis and Editing. Yisheng Chen
Motion Synthesis and Editing Yisheng Chen Overview Data driven motion synthesis automatically generate motion from a motion capture database, offline or interactive User inputs Large, high-dimensional
More informationImage-Based Deformation of Objects in Real Scenes
Image-Based Deformation of Objects in Real Scenes Han-Vit Chung and In-Kwon Lee Dept. of Computer Science, Yonsei University sharpguy@cs.yonsei.ac.kr, iklee@yonsei.ac.kr Abstract. We present a new method
More information