Facial Animation. Joakim Königsson


Facial Animation
Joakim Königsson

June 30, 2005

Master's Thesis in Computing Science, 20 credits
Supervisor at CS-UmU: Berit Kvernes
Examiner: Per Lindström

Umeå University
Department of Computing Science
SE UMEÅ SWEDEN


Abstract

Facial animation has been an area within computer graphics since the beginning of the 1970s, and there is a wide range of application areas such as the entertainment industry, medicine, social avatars and telepresence. What makes it so intriguing is the importance of both technical and aesthetic aspects in producing appealing animations. The purpose of this thesis is to investigate the field of facial animation, examining some of the most common methods, and to cover an implementation aimed towards telepresence: a real-time facial animation viewer with an MPEG-4 FA interface.

Keywords: Facial animation, key-frame interpolation, parameterization, muscle models, hierarchical skeletons, inverse kinematics, MPEG-4 FA standard.


Acknowledgements

Carrying through with this thesis would not have been possible without the help and patience of the people listed below, in alphabetical order by first name. Thank you!

Berit Kvernes
Daniel Andrén
Hans Rönnbäck
Klas Markström
Marcus Jonsson
Family and friends


Contents

1 Introduction
    Background
    Objective
    Structure of thesis
2 Facial animation techniques
    Properties of a face model
    Polygon surfaces
    Parametric surfaces
    Subdivision surfaces
    Volumetric models
    Key-frames
    Key-frame interpolation
    Linear interpolation
    Quadratic interpolation
    Cubic Spline interpolation
    Key-frame interpolation in Maya
    Parameterization
        Facial action coding system (FACS)
        Minimal perceptible actions (MPA)
        MPEG-4 FA standard background
    Muscle models
        Muscle mechanics
        Linear muscles
        Sphincter muscles
        Sheet muscles
        Skin elasticity
        Limitations
    Pseudo muscle models
3 Background Theory
    Kinematics in hierarchical skeletons
        Skeletons in Maya
        Kinematics control in Maya
        Forward Kinematics (FK)
    Inverse kinematics
        Cyclic coordinate descent (CCD)
        Localized joint rotation
        IK solver based on CCD
        The Jacobian
        Jacobian matrix: Inverse, Pseudo-Inverse, Transpose
        IK solvers based on Jacobian matrices
        CCD or Jacobian matrices
    Rotation using Quaternions
        Quaternion operations
        Properties
        Interpolated rotations
    Vertex blending
    Animation blending
    Animation control
    MPEG-4 Facial Animation standard
        Facial Definition Parameters
        Facial Animation Parameters
4 Facial animation viewer (FAV)
    Overview
    Maya
        Basics of Maya
        Dependency graph
        Programmability
        MEL exporter
    Setting up a rig
        Placing the skeleton
        Placing the skeleton kinematics
        Creating poses
    Implementation
        Control
        GUI
        MPEG-4 FAP interface
        Animation engine
        Blender
        Animation Layer
        IK solver
        Transformation
        Render
5 Software analysis
    Development information
    Complexity
        Traversing the geometry
        Skeleton rotation
    Timing issues
    Test results
    Gallery
6 Concluding remarks
A Abbreviations
B FACS AUs table
C MPAs table
D MPEG-4 FA tables
References

List of Figures

2.1 Polygon surfaces
2.2 NURBS surfaces
2.3 Three level triangular subdivision
2.4 Subdivision surfaces
2.5 Volume based model, conceptual image
2.6 Interpolation between two key frames
2.7 Transition between two key frames
2.8 Possible candidate for an in-between frame
2.9 Adjusting deformation with control points
2.10 Linear interpolation animation curve
2.11 Quadratic interpolation animation curve
2.12 Bézier curve segment
2.13 Tangent types in Maya
2.14 Parameterization principle
2.15 Action unit
2.16 MPA defined pose
2.17 Layered facial anatomy model
2.18 Skull platform for muscle placement
2.19 Linear muscle vector
2.20 Linear muscles area of effect
2.21 Sphincter muscle
2.22 Sphincter muscles area of effect
2.23 Muscle placement in the forehead
2.24 Sheet muscle
2.25 Sheet muscles area of effect
2.26 A tension net
2.27 Basic spring configurations
2.28 Screenshot from the Tintoy movie
3.1 A joint in Maya
3.2 Two joints and the bone vector
3.3 Bone driven animation
3.4 Example of a skeleton hand in Maya
3.5 Joint chain with effector and locator
3.6 Raised IK problem
3.7 Proposed solution by the IK solver
3.8 Constructing a joint rotation
3.9 CCD algorithm iterations
3.10 Interpolation between quaternion rotations
3.11 Linear versus spherical interpolation
3.12 Generated in-between poses by interpolated rotations
3.13 Rigid versus smooth binding in Maya
3.14 General animation blending method
3.15 Blending blocks network
3.16 Add blender
3.17 MPEG-4 FA Face and body animation structure
3.18 MPEG-4 FA Feature points placement
3.19 MPEG-4 FA FAP frame definition
3.20 MPEG-4 FA FAP file format
4.1 Nodes in Maya
4.2 The hierarchy of a basic face rig in Maya
4.3 Exported data
4.4 Placement of the jaw, Maya screenshot
4.5 Placing the jaw, Maya screenshot
4.6 Setting up the jaw, Maya screenshot
4.7 A jaw opens, Maya screenshot
4.8 Neutral and sad poses, FAV screenshot
4.9 Overview of FAV
4.10 GUI and main window in FAV
4.11 Execution flow in FAV
4.12 Add and clamp blender in FAV
4.13 Local frame of reference
4.14 Skeleton rotation in FAV
4.15 Emotions screenshots

List of Tables

3.1 FAPU definitions
Test results
B.1 AU list
C.1 MPA list
D.1 FAP(1) visemes list
D.2 FAP(2) expression list
D.3 FAP(3) list
D.4 FAP(3) list
D.5 FAP(3) list

Chapter 1 Introduction

This chapter serves as a relatively gentle introduction to the thesis. It consists of three sections: first a background explaining the reasons for selecting facial animation as the subject, then a brief introduction to the area itself, and finally a description of the structure of the thesis. All abbreviations used in this thesis are listed in appendix A.

1.1 Background

The word animation comes from the Latin word anima, which means the breath of life; in a more common sense, it simply means creating an illusion of movement. This is done by producing an ordered sequence of pictures called frames and visualizing them at a sufficient and constant rate of frames per second. The most common use of this technique is, obviously, regular cartoons, such as Disney productions.

Creating animations using computer graphics (CG) has been done since computers gained enough computational power for the job, and compared to other technologies the development of CG has been, and still is, evolving at a very high rate. As the hardware evolves, so do the software technologies used to create animations, which in turn may require higher-performing hardware. This scenario can be described as two racing cars taking turns in the lead, constantly pushing each other to go faster. It is a challenging area because it touches both technical and aesthetic issues. Without the proper software and hardware technologies an animator is limited to fewer possibilities, and if there is no aesthetic quality the results will be poor no matter what techniques and hardware are used. An animation is therefore always a compromise between the possibilities allowed by current technologies and what the animators are able to do with them.
The human face and its universal ability to communicate through expressions has been a target for investigation for a very long time; Charles Darwin did some work on the subject, and companies like Disney have long used and developed techniques to create characters that express their emotions in a convincing way. Naturally, animating facial expressions has become a rather distinct area within computer animation, and this thesis starts precisely from this point. Where is facial animation (FA) used today? The application areas are many and

here is a list of examples:

- Medicine: Reconstruction of faces and plastic surgery; FA is used to simulate what the effects of a certain surgery will be.
- Game industry: Adding unique face attributes to the characters and allowing them to express feelings and visemes certainly contributes to a realistic feeling in the game.
- Telepresence: When streamed video is not an option, a sender can break down facial movements according to some coding system and send them as parameters to a receiving end. The parameters are then used to recreate the facial movements on a model representation of the sending end.
- Movie industry: In order to create convincing animated persons, proper animation techniques for the characters' faces must be used. Good examples of movies using FA techniques are Toy Story 2 and the animated character Gollum from the film trilogy The Lord of the Rings. Movies such as Monsters Inc, Ice Age and Finding Nemo are, like Toy Story 2, fully animated movies, and FA is a crucial element in such productions.
- Social avatars: This is quite similar to the game industry; a social avatar is basically an artificial intelligence (AI) agent process. FA is used to give these avatars realistic characteristics.
- Commercials: Animation, including FA, is heavily used in commercials these days.
- Anthropology: Reconstruction of faces, filling out the gaps when pieces are missing in order to see what the original face and its motions might have looked like.
- Scientific visualization: FA is a research area within scientific visualization.

This thesis covers some of the most common methods, or paradigms, within FA. It is aimed towards telepresence by including an interface to a FA standard from the Moving Picture Experts Group (MPEG), known as the MPEG-4 FA standard, thereby allowing the animation to be controlled by an incoming parameter stream.
1.2 Objective

The purpose of this thesis is to investigate the area of FA by examining some common FA techniques and implementing a facial animation viewer (FAV). The thesis deals with geometric deformable models; it does not cover image manipulation. FAV's main purpose is to apply some of the techniques described in the theoretical parts and to use the MPEG-4 FA standard, so the animations can be controlled by an incoming stream of facial animation parameters (FAP). The techniques used are currently popular ones found in graphical environments such as Maya[4], a program mainly used for modeling, animation and rendering.

1.3 Structure of thesis

This thesis consists of two parts. The first part deals with theory and the second with the implementation. In the theoretical part, common methods used in FA are examined and additional background knowledge used in the implementation is presented; the implementation touches other research areas as well, such as inverse kinematics, hierarchical skeletons and interpolated rotations using quaternions. Knowledge of mathematics, such as linear algebra and analysis, will be helpful to the reader. The practical part covers the implementation of FAV, using some of the techniques discussed in the theory. At the end of the implementation part there is an analysis and some discussion, and the thesis closes with a chapter briefly reflecting over the entire work, both theory and implementation.


Chapter 2 Facial animation techniques

In general, as computers have evolved in computational power, methods for FA have evolved with them. Since the 70s there have been many techniques, implementations and various kinds of model representations, all adapted to what the technology of the time could offer. Pinpointing the exact date when FA was first used is not easy; most papers claim that it started in the early 1970s[12]. Since then several techniques for FA have been developed, and this chapter and its classifications rely to some extent on the ones made by Keith Waters. Although the main purpose certainly is not to classify or to enter into FA method classification debates, it is difficult to avoid classification to some degree in order to maintain some form of structure. It is possible to distinguish two main branches within FA, as explained by J. Noh and U. Neumann[18]: one that concerns manipulation of digital images and one that deals with deformations of geometric model representations. Since this thesis belongs in the area of 3D animation, the focus lies on geometric model representations. There are several methods used when deforming geometric models, and the ones selected for investigation in this thesis are key-frame interpolation, parameterization, muscle and pseudo muscle models. As will be seen in the implementation part, FAV uses elements and ideas from the methods described in this chapter. But first it is necessary to examine the properties of a geometric face model.

2.1 Properties of a face model

The representation of a face can be done in many ways. There are two main categories for geometric model representations: volumes or surfaces. Typically a volume is represented by volume entities known as voxels (short for volume pixels), and surfaces are represented using a polygon mesh or a parametric surface with control points.
As Waters[12] points out, FA demands models that can support animation of all the complex movements in a human face. Needless to say, the more detailed and flexible a face model is, the better; the result is a trade-off between how fast the model is rendered on the application's target systems and the graphical quality of the animation. Waters also made a list suggesting the attributes a face model representation should include:

- Skin: A visible 3D surface, representing the visible skin. This should be made so

that it can be reshaped into convincing expressions. The remaining items below are the details added to the face model, making it more realistic.

- Hair: There is plenty of research on how to create hair; the goal is hair that responds dynamically to the character's movement and to surrounding forces, such as currents in air or water.
- Eyes: These are very important for creating convincing expressions, and they should be very detailed and dynamic.
- Ears: Demand a detailed surface, due to cavities, skin folds and wrinkles. They are also highly individual from model to model.
- Lips: The surface around the mouth must be made so that the lips can be stretched along with all the mouth movements; they need to be highly flexible.
- Teeth: Naturally, the more detailed the better, but teeth move rigidly along with the jaw; they are not dynamic and demand no flexibility.
- Tongue: This will help the character do more convincing visemes.
- Asymmetric details: No face is in reality completely symmetric, and to save some work it is common to create just one half and then use it to create the other. This makes the two halves totally symmetric, which is not realistic, so one should consider adding asymmetric details that make the halves differ from each other.

It is a necessity to specify the model's attributes in advance; such a list assures that the model representation is constructed in a way that allows it to transform into the desired poses. There are several ways in which geometric model representations can be implemented. The data can be retrieved by scanning and digitizing a real human or a sculpture of a human, or by sculpting the model in a 3D graphical environment such as Maya or 3D Studio. There are a couple of techniques one can use to implement the actual geometry model.
Surfaces are typically made with polygon meshes or parametric surfaces, and volumes are typically constructed from voxels. The process of developing models for FA is thoroughly described by Jason Osipa in his book[22].

2.2 Polygon surfaces

Polygon meshes are by far the most commonly used way to implement surfaces in geometric model representations; the meshes are formed into shapes building up a human face, as shown in figure 2.1. The most important thing when using polygon surfaces in FA is to create the polygons so that the resulting mesh forms a facial surface that is easily deformed between various poses. The surface must be allowed to stretch in a way that looks natural compared to the movements of a real human face; the area around the mouth in particular requires a flexible surface.
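As a minimal sketch of how such a mesh is commonly stored (the names and coordinates are illustrative assumptions, not data from the thesis): a vertex table holds positions, and a polygon table holds indices into it, so moving a vertex deforms every polygon that references it.

```python
# Assumed, minimal table-based polygon mesh: vertex positions plus
# index triples. Deforming the face means moving vertices; the polygons
# follow automatically because they only store indices.

vertices = [
    (0.0, 0.0, 0.0),   # 0: left mouth corner
    (1.0, 0.0, 0.0),   # 1: right mouth corner
    (0.5, 0.5, 0.0),   # 2: upper lip centre
    (0.5, -0.5, 0.0),  # 3: lower lip centre
]
polygons = [(0, 2, 1), (0, 1, 3)]  # two triangles sharing the mouth corners

def deform(vertex_table, index, offset):
    """Move one vertex; every polygon referencing it deforms with it."""
    x, y, z = vertex_table[index]
    dx, dy, dz = offset
    vertex_table[index] = (x + dx, y + dy, z + dz)

deform(vertices, 0, (0.0, 0.2, 0.0))  # raise the left mouth corner
```

Shared indices are what make the mouth region stretch coherently: both triangles above follow the moved corner without any per-polygon bookkeeping.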

Figure 2.1: Polygon surfaces

The level of detail is determined by the number of polygons used, and naturally the model should be as detailed as possible, considering the speed of the target system. Polygon meshes are typically implemented using tables containing vertex, edge and polygon information and their relations. FAV uses models with shapes constructed from polygon meshes in the implementation part.

Advantages: Using a polygon mesh with an adapted level of detail is a simple way to create a flexible surface, compared to using parametric surfaces.

Disadvantages: The graphical quality stands in proportion to the rendered objects and their level of detail; highly detailed models require a huge amount of calculations, and for real-time applications the number of polygons used definitely limits the overall performance.

2.3 Parametric surfaces

Parametric surfaces are an analytic way of creating surfaces used in a model representation. There are several types of parametric surfaces; they are built like patchwork quilts, constructed from parametric curves. In Maya[4] these curves are implemented with a type called non-uniform rational B-spline (NURBS)[2], and they can be combined to form surfaces. The folded NURBS surface shown in figure 2.2 was done in Maya. These surfaces are flexible and can be connected with other surfaces. This modelling technique is often called trimming, where the animator is sewing the surface patches together. In FA this technique is often used while sculpting the various parts of the face.

Figure 2.2: NURBS surfaces

A NURBS curve is defined as follows, where $p$ is the order, $B_{i,p}$ is the B-spline basis function, $P_0, \ldots, P_n$ are the control points and $w_i$ are the weights:

$$C(t) = \frac{\sum_{i=0}^{n} B_{i,p}(t)\, w_i\, P_i}{\sum_{i=0}^{n} B_{i,p}(t)\, w_i} \qquad (2.1)$$

A NURBS surface is defined analogously, with orders $p, q$, control points $P_{i,j}$ and weights $w_{i,j}$:

$$S(u, v) = \frac{\sum_{i=0}^{m} \sum_{j=0}^{n} B_{i,p}(u)\, B_{j,q}(v)\, w_{i,j}\, P_{i,j}}{\sum_{i=0}^{m} \sum_{j=0}^{n} B_{i,p}(u)\, B_{j,q}(v)\, w_{i,j}} \qquad (2.2)$$

More detailed information on how NURBS surfaces are generated can be found in sources [21][4][5].

Advantages: NURBS describes curves more accurately than polygons since it is based on analytic methods. The surfaces are flexible to work with and can be rendered at a sufficient speed by current hardware.

Disadvantages: In some cases NURBS requires large amounts of memory. The surfaces are not as intuitive, and therefore not as easy to understand, as polygon meshes. In FA they have the downside of not being able to crease and wrinkle well[12]; complex regions around the nose and eyes are examples of areas where these problems occur, and polygon meshes are the better choice there[22].

2.4 Subdivision surfaces

Subdivision[31] is a method where surfaces based on polygon meshes are divided into finer detail levels. This method adds the possibility of smooth and accurate curves to

polygon meshes, thereby combining the advantages of polygon meshes and NURBS. A subdivided surface can have both smooth curved shapes and still be able to crease and wrinkle. Subdivision surfaces in Maya[4] allow an animator to toggle between various levels of detail, choosing a suitable level while sculpting parts of the object. A subdivision method is based on approximation or interpolation, and the methods further differ depending on the type of polygon mesh used: triangular, quadrilateral or other polygon formations. Figure 2.3 shows a basic conceptual example of a three-level triangular subdivision of a 2D triangle, and figure 2.4 shows an example of subdivision in Maya. The article[31] contains more examples and references on subdivision methods. The purpose here was to mention subdivision, since it is a useful tool when creating surfaces used in FA.

Figure 2.3: Three level triangular subdivision

Figure 2.4: Subdivision surfaces

Advantages: Combines the best features from polygons and NURBS.

Disadvantages: Subdivision surfaces are computationally complex.

2.5 Volumetric models

Volumetric models are usually constructed from 3D building blocks known as voxels. A voxel is a volume element that basically works as a 3D pixel, composed of four or more

vertices and a colour value attributed to the subvolume they constitute. These models are usually found within scientific areas such as medicine, where computer model representations of human heads have been built from data gathered with techniques such as magnetic resonance imaging (MRI) or computer tomography (CT). Conceptually, a volumetric model is constructed as shown in figure 2.5.

Figure 2.5: Volume based model, conceptual image

Advantages: Volumetric model representations offer an accurate way of representing human heads, which basically makes them the only choice for medical or scientific CA applications where just visualizing the surfaces would be insufficient.

Disadvantages: Due to heavy requirements on computational power, these models are not well suited as model representations in real-time computer animation (CA) applications, even if it is possible on the newest hardware; this may become more common in the next couple of years. Detailed parametric surfaces or polygon meshes are better choices if the purpose of the animation is just to visualize a facial surface and its contents are irrelevant. A reader interested in CA using volumetric methods to emulate biological tissue will find a good start in[33].

2.6 Key-frames

In CA using key-frame interpolation methods, the model has to contain information sets that describe its appearance for each individual key-frame. Interpolation methods use the information in these sets as arguments and generate intermediate frames as output. In this thesis these frames are referred to as in-between frames.
The actual implementation of a key-frame system can be done in different ways, depending on how the models in use have been implemented, as previously seen in this chapter. What is important in a key-frame interpolation system is that the interpolation method needs the current set of model information and the set describing the target pose; the information used has to be of the same type and correspond between the key-frames in use.

A quick example: a method which interpolates between vertices cannot interpolate from a key-frame whose information set consists of control points into a key-frame whose information set consists of vertex positions. The information sets used by an interpolation method which transforms a model from one pose into another have to use corresponding information between the key-frames. This is shown in figure 2.6.

Figure 2.6: Interpolation between two key frames

Since the type of information used in a model representation's information sets differs between projects, it is necessary to keep this at an abstract level while describing key-frame interpolation. Assuming there is some model representation and a proper interpolation method for it has been selected, the procedure of interpolating in-between frames from the key-frames is the same. Defining the key-frames is typically done in the same animation software environment used when building the model, such as Maya or 3D Studio. Typically, key-frames in FA are captures of a model representation's extreme positions while expressing emotions and visemes, such as the ones described by the MPEG-4 FA standard in tables D.1-D.5.

2.7 Key-frame interpolation

Interpolating between key-frames is one of the oldest methods in CA, and it is still very useful and popular. The underlying problem is storage capacity: an animation usually requires many frames. For an animation running for one minute at 25 frames per second (FPS), storing an information set for each individual frame results in 1500 unique sets. Clearly this approach does not qualify in many cases, and the animation in this example is certainly not extensive or unrealistic. As mentioned at the end of the previous section, key-frames capturing the extreme positions in the model's motion are used instead; the in-between frames are calculated at runtime by the computer.
A nice analogy presented in[27] states that the procedure of key-frame interpolation is analogous to the production process of a company making cartoons: the more experienced animators draw the key-frames, and less experienced or contracted animators draw the missing in-between frames.

The idea is simple and straightforward: the animator fully defines the key-frames using some graphical tool, and the computer, using an interpolation method with a time coefficient t as argument, generates the missing in-between frames. A simple example of this procedure can be seen in figure 2.7. Here we have two information sets used in a key-framing system; each set describes a deformation with the visual meaning of an emotion, joy or sadness.

Figure 2.7: Transition between two key frames

A possible candidate for a generated in-between frame x might look like figure 2.8.

Figure 2.8: Possible candidate for an in-between frame

Continuing with this example, imagine that the shape of the mouth is determined by three control points positioned along the character's mouth. The interpolation method performing the transition uses these control points as interpolation data, as shown in figure 2.9.

Figure 2.9: Adjusting deformation with control points

If the control points located in the mouth corners are moved up, the character will look happy; if they are moved down, it will look sad; and in the middle the control points are neutrally positioned. The information sets 1 and 2, describing two deformations of the model, consist of these three control points. A general function header for an interpolation method performing the transition between the key-frames' corresponding control points looks like this:

P_new = interpolate(P_1, P_2, t)

The time coefficient t determines the interpolated positions of the control points, which define the interpolated mouth shape displayed by the in-between frame x. Values of t closer to 0.0 result in shapes more similar to the one in key-frame 1, and when t is set to 0.0 the resulting frame is the same as key-frame 1. As t increases, the resulting in-between frames look more like key-frame 2, and when t is set to 1.0 the result is the same as key-frame 2. There are several types of interpolation methods used for the same purpose of generating the missing in-between frames; however, different interpolation methods result in different properties in the motion of the animated character. Some interpolation methods cause acceleration in the motion, while linear interpolation results in constant motion. It is therefore important to select a method that offers the best result considering the sought properties of the animation. Common interpolation methods such as linear, quadratic and cubic spline interpolation are explained next.

Advantages: Interpolating the intermediate in-between frames saves memory. Key-framing allows a high level of control over the result, since the animator decides the appearance of the model in the key-frames by specifying the information sets used. It is therefore used in the movie industry[29], even if there is a lot more to it than explained here.

Disadvantages: Key-frame interpolation is limited to the available information sets for the model representations in use; if an emotion has not been explicitly defined by an animator, it cannot be expressed by the representation. The high level of control also demands more work from the animator.
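The frame-generation procedure of section 2.7 can be sketched as follows. This is a hypothetical minimal version (the function names are assumptions): an information set is modelled as a list of control points, and the concrete interpolation method is passed in as a function, so any of the methods described next can be plugged in.

```python
# Sketch of a key-frame system: an in-between frame is produced by
# interpolating each pair of corresponding control points.

def in_between(set1, set2, t, interpolate):
    """Generate one in-between frame from two corresponding information sets."""
    if len(set1) != len(set2):
        raise ValueError("key-frame information sets must correspond")
    return [interpolate(p1, p2, t) for p1, p2 in zip(set1, set2)]

def frames(set1, set2, count, interpolate):
    """All frames of a transition, t running from 0.0 to 1.0 inclusive."""
    return [in_between(set1, set2, i / (count - 1), interpolate)
            for i in range(count)]
```

The correspondence check mirrors the requirement above: the two sets must contain the same kind and number of entries, or no meaningful in-between frame exists.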
2.8 Linear interpolation

Linear interpolation interpolates along the straight line between two points in space, and it is the easiest form of interpolation that can be used when creating in-between frames in a key-frame interpolation application. The linear interpolation formula is:

$$p_{new} = p_1 (1.0 - t) + p_2\, t, \qquad 0 \le t \le 1. \qquad (2.3)$$

This tells us that the intermediate in-between frames generated during the transition between key-frame 1 and key-frame 2 result in a transition with constant rate; there is no acceleration in the animated objects. An in-between frame generated with a time coefficient of 0.75 is a blended frame made up of 25% of key-frame 1 and 75% of key-frame 2. The graphical result of linear interpolation is shown in figure 2.10: interpolated positions for a control point are placed along the straight animation curve between the specified points p_1 and p_2.

Advantages: Linear interpolation is easy to implement, and it offers a constant change when that property is sought.

Disadvantages: The animation suffers from sudden changes in direction due to discontinuity at the control points, which results in a jerky animation.
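Equation (2.3) applied componentwise to a control point can be sketched as (an illustrative helper, not code from the thesis):

```python
def lerp(p1, p2, t):
    """p_new = p1*(1 - t) + p2*t, 0 <= t <= 1: constant-rate motion."""
    return tuple(a * (1.0 - t) + b * t for a, b in zip(p1, p2))

# t = 0.75 blends 25% of key-frame 1 with 75% of key-frame 2:
lerp((0.0, 0.0), (4.0, 8.0), 0.75)  # -> (3.0, 6.0)
```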

Figure 2.10: Linear interpolation animation curve

2.9 Quadratic interpolation

This method adds acceleration to the transition between key-frame 1 and key-frame 2. The effect is caused by using the quadratic value of the time coefficient t: the resulting transition has a slow start and a quadratic increase in velocity, so towards the end it moves fast. A typical formula for quadratic interpolation is:

$$p_{new} = p_1 (1.0 - t^2) + p_2\, t^2, \qquad 0 \le t \le 1. \qquad (2.4)$$

A quadratic interpolation animation curve between points p_1 and p_2 looks something like the curve shown in figure 2.11.

Figure 2.11: Quadratic interpolation animation curve

Compared to linear interpolation, the generated in-between frames start slowly for low values of the time coefficient t, and as t increases the transition accelerates, in the end moving much faster than linear interpolation and causing accelerated motion in the animated objects.

Advantages: Quadratic interpolation is an easy way to implement an interpolation method that causes acceleration in the transition between key-frames.

Disadvantages: As with linear interpolation there is discontinuity at the control points, which causes sudden changes in the animated object.
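Equation (2.4) as code (a sketch with an assumed function name): the blend weight is t squared, so early t values change the pose slowly and later values increasingly fast.

```python
def quadratic_interp(p1, p2, t):
    """p_new = p1*(1 - t**2) + p2*t**2: slow start, accelerating finish."""
    t2 = t * t
    return tuple(a * (1.0 - t2) + b * t2 for a, b in zip(p1, p2))

# Halfway through the transition (t = 0.5) the pose has only moved
# 25% of the way toward the target, illustrating the slow start:
quadratic_interp((0.0,), (8.0,), 0.5)  # -> (2.0,)
```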

2.10 Cubic Spline interpolation

The problem of sharp directional changes in transitions from one key-frame to another in linear and quadratic interpolation can be avoided using cubic polynomial interpolation methods. These methods fit an animation curve through the deforming information sets in the key-frames, maintaining smooth transitions. They do require extra calculations compared to linear and quadratic interpolation and are therefore clearly slower, but the results are better; compared to methods using higher-order polynomials, they are clearly faster. Due to an oscillation problem known as Runge's phenomenon[26], piecewise interpolation has to be used, which means interpolations are done using a few control points from each deformation set at a time. As described in[11], interpolation using cubic polynomials in general requires n curve segments and n + 1 control points. At each control point where two curve segments join, certain boundary conditions must be defined so that the coefficients a, b, c and d in equation 2.5 below can be found; exactly how these boundaries are implemented is what separates the methods from each other. Using two key-frames, or control points, the idea is to find positions along the animation curve between them using this equation system:

$$x(t) = a_x t^3 + b_x t^2 + c_x t + d_x$$
$$y(t) = a_y t^3 + b_y t^2 + c_y t + d_y \qquad 0 \le t \le 1 \qquad (2.5)$$
$$z(t) = a_z t^3 + b_z t^2 + c_z t + d_z$$

Advantages: Avoids the jerky motion found in linear and quadratic interpolation, and compared to interpolation using higher-order polynomials these methods are fast.

Disadvantages: Suffer from Runge's phenomenon, requiring piecewise interpolation with some sort of boundary conditions. They are clearly more difficult to understand and more complex than linear and quadratic interpolation. There are several cubic polynomial interpolation methods that can be used; examples are NURBS, Hermite and Bézier splines.
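As one concrete instance of equation (2.5), here is a sketch of a cubic Hermite segment. The boundary conditions in this variant are the positions and tangents shared at each joint between segments; choosing Hermite form here is an illustrative assumption, not necessarily the thesis' choice.

```python
def hermite(p0, p1, m0, m1, t):
    """One cubic segment: endpoints p0, p1 with tangents m0, m1 at t=0, t=1."""
    h00 = 2*t**3 - 3*t**2 + 1      # basis weight of p0
    h10 = t**3 - 2*t**2 + t        # basis weight of m0
    h01 = -2*t**3 + 3*t**2         # basis weight of p1
    h11 = t**3 - t**2              # basis weight of m1
    return tuple(h00*a + h10*u + h01*b + h11*v
                 for a, b, u, v in zip(p0, p1, m0, m1))
```

Giving consecutive segments the same tangent at their shared control point is what produces the smooth joins that linear and quadratic interpolation lack.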
An interested reader will find a good start in this book[11].

2.11 Key-frame interpolation in Maya

This chapter is based on the Maya documentation[4]. Animation curves between key-frames, no matter their polynomial degree, are internally implemented in Maya with a restricted set of cubic two-dimensional Bézier curves. The shape of the curve is determined by user defined tangent vectors, specified by either control points or by a tuple (weight, angle). This is an instance of an information set and it contains information about the animated object's attributes at a certain point in time. In more detail, there are four points, of which two are control points and the other two form the start and end positions of the curve segment. A tangent vector is a vector between a point and a control point. The animation curve is calculated in a piecewise manner, one segment at a time. Here is an example of the above discussion: the two points P_1 and P_2 are connected by a curve segment, and the curve's shape is determined by the two tangent vectors P_1 C_1 and P_2 C_2. Note that a point's out-tangent is formed between itself and the closest control

point in future time and the in-tangent is formed between itself and the most recent control point, as shown in figure 2.12.

Figure 2.12: Bézier curve segment

The shape of the animation curve is determined by the tangent vectors, which can be weighted or non-weighted. If they are weighted, an influence is added to the tangent, determining to which degree the animation curve will be affected by the tangent vector. There are several tangent types in Maya; the ones available are linear, flat, step, spline (smooth) and clamped. These tangent types and their resulting animation curves are shown in figure 2.13. All key-frame interpolations are implemented as special cases of cubic Bézier curves in the Maya animation engine; there is however one exception, when stepped tangents are used. Although the procedure is filled with additional mathematical properties and optimizations, it is not necessary for a user to be concerned with that. A linear curve segment between two points is generated by directing the out-tangent from one point towards the next point, and the in-tangent of the next point back towards the previous point. Typically the curve changes direction very suddenly, which might result in jerky behaviour. When a flat tangent is used, the in- and out-tangents are both horizontal, which results in an animation curve with less jerky behaviour. With stepped tangent types, the target information is ignored and the animation curve is horizontal. A value is constant during the period between the current position and the next, causing a square wave shaped animation curve; the transition between key-frames is done by sudden changes. Spline tangents have the property of being co-linear in the positions specified by the key-frames. The curve segments are connected to each other so the curve smoothly passes through the various positions along the curve.
This results in a smooth animation curve, and spline tangents are therefore often referred to as smooth tangents. Clamped tangents have the combined effect of linear and spline tangents. When the corresponding values in two key-frames are sufficiently close, linear tangents are used to shape the animation curve between them; otherwise spline tangents are used. Clamping is applied to values that do not change much between key-frames, and it solves the problem of sliding, an effect caused by a spline curve that might reach higher or lower values than specified by the nearby key-frames.
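The cubic Bézier evaluation that, per the text, underlies Maya's animation curves can be sketched in its polynomial form (this is a generic Bézier evaluator, not Maya's actual code; p0/p1 are the key-frame values and c0/c1 the tangent control points):

```python
def bezier(p0, c0, c1, p1, t):
    """Cubic Bezier value at parameter t in [0, 1]."""
    u = 1.0 - t
    return u**3 * p0 + 3 * u**2 * t * c0 + 3 * u * t**2 * c1 + t**3 * p1
```

Placing c0 and c1 at the one-third points of the straight line between p0 and p1 reproduces linear interpolation, matching the linear tangent type described above.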

Figure 2.13: Tangent types in Maya

Advantages: Bézier curves are less complex in comparison with other cubic interpolation methods and they offer sufficient flexibility. A Bézier curve always passes through its first and last control point, and the curve is guaranteed to stay within the convex hull spanned by the control points. Various types of animation curves can be implemented with Bézier curves, as seen previously in this chapter.

Disadvantages: A relatively large number of control points is required, and the control points affect the entire curve to some degree, requiring segmentation of the curve.

2.12 Parameterization

Parameterization offers a possible solution to the problems with key-frame interpolation. Instead of storing every bit of deformation data in different key-frames, this approach uses parameter sets, where each parameter corresponds to some part of the face. By using different values for a certain parameter, a part of the model representation is deformed. The term ideal parameterization in FA denotes a set of parameters with a range that would allow a model representation to express everything a normal human is capable of. It is also commonly referred to as complete or universal parameterization. Obviously, such a set does not exist, but there are sets supporting plenty of expressive and facial details, and these sets can give satisfying results[16]. Parameterization involves two steps: defining a set of parameters, and the development of models compliant with the defined parameter set. The defining process has been done in several ways: some by examining the facial surfaces of human heads, others by focusing on the structures causing motions to the overlaying skin, and there have been sets that relate to facial actions taken by the face. The common purpose of the defined sets is to provide instructions determining how the model representation is to be deformed.
Typically, parameter sets are built according to geometric attributes of the

face such as height and width of the eyes, mouth, nose and eyebrows etc. By observing the example in figure 2.14 it is easy to realize how parameterization works. To the left there is a relaxed version of the model representation, defined by the parameter set {P_1, P_2, P_3} which corresponds to the heights of eye, nose and mouth. By changing this set of parameters into {P′_1, P′_2, P′_3}, redefining the geometric details for the model, another expression of the model is generated.

Figure 2.14: Parameterization principle

The second step is creating a compliant model that can perform the actual transitions between expressions, as specified by the parameters. Exactly how this is done is highly implementation specific, but there has to exist some parameterization algorithm that updates the model according to the data specified by the current set of parameters. Parameterization is used in the implementation part of this thesis: FAV uses a parameterization corresponding to a skeleton structure, which in turn corresponds to the surfaces of the model. There have been several parameter systems developed over the years, and the ones examined in this thesis have been selected according to their relevance to the MPEG-4 FA standard. The selected systems are the facial action coding system (FACS), minimal perceptible actions (MPA) and the MPEG-4 FA standard itself.

Facial action coding system (FACS)

FACS[13] was developed by Ekman & Friesen and was first published in 1978. FACS is an attempt to map facial muscle movements to facial actions; FA was not the target area for FACS. It was designed for categorization purposes of facial behaviours within psychology. However, people involved with FA have successfully used FACS, since it can easily be adapted to control FAs. In FACS, a specific muscle movement is referred to as an action unit (AU). These action units are usually combined with logic operations in order to build a pose describing an expression.
An example of an action unit is AU1, which raises the inner eyebrows; AU45 is a blink of the eyes. In table B.1 these action units are listed along with their corresponding region of the face. Logical combination of action units generates facial behaviours. The pose describing fear is built by action units as follows: AU1 + AU4 + AU15 + AU23
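As a toy illustration (the pose representation as a set is my own assumption, not part of FACS), combining action units into the fear pose from the text could look like:

```python
def combine(*action_units):
    """Build a pose as the union of its action units (e.g. AU1 + AU4 + ...)."""
    pose = set()
    for au in action_units:
        pose.add(au)
    return pose

fear = combine("AU1", "AU4", "AU15", "AU23")
```

A compliant model would then translate each action unit in the pose into the corresponding muscle movement.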

Figure 2.15: Action unit

A pose defined by AUs is schematically shown in figure 2.15. By combining these action units it is possible to specify expressions on a face model, and this is how FACS is used to control FA.

Advantages: The system works well, has been designed solely for describing facial expressions, and there is plenty of research behind it. The system is easily adapted to FA.

Disadvantages: A problem with FACS is that it does not describe complex movements well, such as the ones appearing in the mouth area. Therefore, FACS is good at describing expressions, but not speech. It is designed for psychologists studying facial actions and not as a standard for FA; it is also much less detailed, and values are not normalized according to standard spatial references, so FACS is highly model specific.

Note: FACS was revised in 2002[32] with new features added, such as intensity scores for the AUs; information about how well the revision suits FA has not been found during this thesis.

Minimal perceptible actions (MPA)

MPA parameters were presented by Prem Kalra at Eurographics '92 and published in the article[23]. They were used as part of a layered approach, describing the animation of the face using different layers of abstraction[12]. This work had an influence on a group of people that later designed the MPEG-4 FA standard. It introduced a normalization method based on intensity values applied to the parameters. MPAs are based on muscle movements, and in a sense they are similar to the action units described in FACS, with an additional normalization feature. The MPAs are listed in table C.1. An MPA starts in its relaxed position, and when it is relaxed its intensity value is 0.0. There are two types of directional behaviour in an MPA: single direction and dual direction. A single direction MPA uses intensity values in the interval 0.0 to 1.0 and a dual direction MPA uses the interval -1.0 to 1.0.
An MPA describing a jaw that opens and closes can only move down from its relaxed position, which makes it an example of an MPA using single direction. MPAs describing eyebrows that can move both up and down from their relaxed position are examples of MPAs using dual direction. Poses defined by MPAs are schematically shown in figure 2.16. Poses are built in a similar manner as parameterizations using FACS, where multiple action units are combined into a pose describing an expression. In an MPA parameterization, it is the MPAs with their intensity values that are used to describe a pose.
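The intensity intervals described above can be sketched as a small clamping helper (the function name is mine): a single direction MPA is clamped to [0, 1], a dual direction MPA to [-1, 1]:

```python
def clamp_mpa_intensity(value, dual_direction=False):
    """Clamp an MPA intensity to its legal interval."""
    low = -1.0 if dual_direction else 0.0
    return max(low, min(1.0, value))
```

A jaw MPA would use the single direction form, an eyebrow MPA the dual direction form.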

Figure 2.16: MPA defined pose

Advantages: It offers an improvement over FACS in detail and normalized movements, and it can successfully be used to describe expressions.

Disadvantages: MPA parameterization does not entirely qualify as a system to use in network standards. It works quite well on specific model representations, but it offers no standardization of facial movements between different models. MPA values for one model may not be suited for another model and vice versa. This downside prevents MPA parameterization from being suitable for use in a network standard.

MPEG-4 FA standard background

There is a more detailed explanation of the MPEG-4 FA standard in a later chapter. This text briefly summarizes the reasons for the standard, tying things up with the earlier discussions about FACS and MPA, which both suffer from the downside of being model specific, making them less useful as network standards. Such standards have to work for any model adapted to them, but since values will have different meanings in terms of movement and spatial references in both FACS and MPA, they cannot be used for this. The main feature of the MPEG-4 FA standard is a high-quality model independent parameterization. It is similar to both FACS and MPA, but more detailed. Incoming parameter values are scaled to better fit the target model. The scaling, usually referred to as normalization of the parameters, is done by multiplying the incoming parameter values with a reference factor that is derived from spatial references in the target model's facial structure. This means that the same parameter values can be used for various types of model representations and still cause the same effects, since they are being scaled to fit by the receiving end.
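The scaling idea can be sketched as follows. This assumes the MPEG-4 convention that a facial animation parameter unit (FAPU) is a facial distance, measured on the target model, divided by 1024; the function name is mine:

```python
def fap_to_model_units(fap_value, face_distance):
    """Scale a model-independent FAP value into model space.
    face_distance is a spatial reference measured on the target model
    (e.g. eye separation); the FAPU is that distance divided by 1024."""
    fapu = face_distance / 1024.0
    return fap_value * fapu
```

The same FAP value applied to two models with different facial proportions yields proportionally different displacements, which is the model independence the text describes.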
This makes the standard more useful compared to FACS and MPA; it allows things such as speech synthesis between two different types of models.

2.13 Muscle models

Muscle models work by emulating facial anatomy, biological tissues and bone structures. The term full scale muscle model implies that every single part of the human head that directly or indirectly contributes to some sort of movement of the overlaying skin surface is being simulated. Naturally no full scale muscle models have been seen yet, although there are several simplified ones, such as the one proposed by Waters, the person who first announced the muscle model approach, described thoroughly in the book[12]. Muscle model is a general term for any approach based on the anatomy and mechanics of the face. Exactly how this is implemented is not defined.

As a subject for investigation Waters' muscle model has been selected, since it is the first one announced and also one of the more thorough works using this method. Thereby this section is kept close to the origins of the muscle model. This implementation uses a layered approach, where the mechanics of a human face are simulated in layers, as shown in figure 2.17.

Figure 2.17: Layered facial anatomy model

The skeleton forms the foundation to which one end of each muscle is attached. The other end is attached to the muscle layer. Forces created by muscles are distributed in the muscle layer, which in turn affects the next layer, the overlaying skin. The elastic behaviour of the skin is implemented with tension nets. The emulation of facial anatomy and its mechanics is described next in a bottom-up manner according to figure 2.17.

2.14 Muscle mechanics

This chapter is about the actual muscle mechanics working between the bones and their collective effects in the muscle layer above. The muscle itself is connected to the bone by one end while the other end is free to move. This allows contraction and relaxation in the muscle, which results in forces in the end tied to the layer called the muscle layer.

Figure 2.18: Skull platform for muscle placement

Waters[12] defined three types of muscles: linear, sphincter and sheet muscles. Linear muscles are implemented as vectors, sphincter muscles with ellipsoids and sheet muscles

are implemented with specially defined square shaped areas. Like real muscles they cause movements by contractions and relaxations.

2.15 Linear muscles

Linear muscles are implemented as vectors that contract and relax. Their movement affects selected parts of the layer above. These muscles are also often referred to as parallel muscles.

Figure 2.19: Linear muscle vector

The displacement of a point p inside an area affected by a linear muscle vector is calculated as follows:

p′ = p + cos(θ_1) k (p j_1)/|p j_1| (2.6)

where p j_1 is the vector between p and the muscle attachment point j_1, k is the contraction constant and θ_1 the angular falloff term. In figure 2.20 it is shown schematically how a muscle vector affects the connected parts of the above layer when it contracts.

Figure 2.20: Linear muscles area of effect

2.16 Sphincter muscles

Sphincter muscles affect ellipsoid shaped areas in the layer above; an example of a sphincter muscle is the one around the mouth. The displacement of a point in the affected area is calculated using the formula for parametric ellipsoids.
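The linear muscle displacement of equation 2.6 above can be sketched in 2-D. This is my own reading of the formula, with the point pulled toward the attachment so that the muscle contracts; the names are assumptions:

```python
import math

def linear_muscle_displace(p, j1, k, theta):
    """Displace p along the normalized vector between p and the muscle
    attachment j1, scaled by the contraction constant k and the angular
    falloff cos(theta) (cf. equation 2.6)."""
    dx, dy = j1[0] - p[0], j1[1] - p[1]
    dist = math.hypot(dx, dy)
    if dist == 0.0:
        return p
    s = math.cos(theta) * k
    return (p[0] + s * dx / dist, p[1] + s * dy / dist)
```

A real implementation would also taper the effect toward the edge of the muscle's cone shaped zone of influence.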

Figure 2.21: Sphincter muscle

The displacement of a point in the affected area is determined by first calculating the muscle force f, which determines how the point is pushed or pulled towards the centre of the ellipsoid shaped area of effect. The force f applied to a point p affected by a sphincter muscle is calculated with the formula for parametric ellipsoids:

f = 1 − √(y² p_x² + x² p_y²) / (x y) (2.7)

where x and y are the semi-axes of the ellipse and (p_x, p_y) are the coordinates of p relative to its centre. When f is known, the actual displacement is calculated as follows:

p′ = f p (2.8)

Figure 2.22 shows schematically how a sphincter muscle affects the layer above when it is contracted.

Figure 2.22: Sphincter muscles area of effect

2.17 Sheet muscles

This type of muscle has the appearance of a carpet of muscle fibres covering a larger area. In FA we find this muscle type placed in the forehead. The easiest approach to implementing sheet muscles is to spread out several linear muscle vectors in parallel, which together give the combined contraction or relaxation effect of a sheet muscle. Other approaches implement the sheet muscle as a unique muscle type. Sheet muscles are

connected to the layer above in the same manner as linear and sphincter muscles. In figure 2.23 the placement of sheet muscles is shown.

Figure 2.23: Muscle placement in the forehead

Continuing with Waters' muscle model as an example of how to implement sheet muscles, the displacement of a point inside a sheet muscle's area of effect is calculated in the following way.

Figure 2.24: Sheet muscle

The displacement d of a point p affected by Waters' sheet muscle is calculated as follows:

d = cos(1 − L_t/R_f)                       if p is inside square ABDC
d = cos(1 − L_t/R_f (V_i/(V_t + V_f)))     if p is inside square CDFE    (2.9)
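The sphincter displacement of equations 2.7 and 2.8 from the previous section can be sketched in 2-D. This reads equation 2.8 as scaling the point, given relative to the ellipse centre, by the force f; that reading and the names are my assumptions:

```python
import math

def sphincter_displace(px, py, semi_x, semi_y):
    """Equation 2.7: f = 1 at the ellipse centre, 0 on its boundary;
    equation 2.8: the point (px, py) is scaled by f."""
    f = 1.0 - math.sqrt(semi_y**2 * px**2 + semi_x**2 * py**2) / (semi_x * semi_y)
    return (f * px, f * py)
```

Points near the centre of the ellipse move little, while points near the boundary are pulled strongly toward the centre, consistent with f decreasing from 1 to 0.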

The effect made by a sheet muscle contraction is schematically shown in figure 2.25.

Figure 2.25: Sheet muscles area of effect

The drawn line shows the up direction of the sheet muscle; it is not to be confused with a muscle vector, since they have different properties. Linear muscles have a cone shaped area of influence that responds to the contraction of the muscle vector, whereas sheet muscles are square shaped, and points located within these squares are displaced using entirely different formulas. This muscle type is rarely used, since real muscles, applied to the curved surface of the skull, are not flat.

2.18 Skin elasticity

Skin has a natural elasticity that must be mimicked for a convincing result. This can be done in several ways, and Waters used a method derived from soft body deformation, where underlying forces caused by muscle movements are transferred through the muscle layer to a skin surface layer and distributed in the skin using tension nets, as shown in figure 2.26.

Figure 2.26: A tension net

The tension network is implemented as a network of nodes. Forces from the underlying muscle layer are distributed to a set of nodes in the surface layer; each node has a set of adjacent neighbours, and these connections are implemented as elastic vectors, or springs. In this procedure a node receives forces from the muscle layer beneath and distributes some amount of them among its neighbours through these springs, which in turn send a portion of the incoming force to their neighbours. At each node the force passes, its magnitude fades, and at some point it vanishes completely. The combined effect is an elastic property in the overlaying skin surface. Three primitive configurations that Waters used to create a skin lattice are shown in figure 2.27. More composite spring configurations can be found in [12].

Figure 2.27: Basic spring configurations

A node contains attributes such as position, mass, friction, velocity and acceleration, and formally it is defined like this:

node = (pos, m, f_fr, v, a), where v = δpos/δt and a = δ²pos/δt².

A spring is implemented as an elastic vector between two nodes, and it is defined by its relaxed length l_rl and a constant k determining the stiffness of the spring:

spring = (l_rl, k)

Calculating the net force on a certain node is done by calculating the extension of each connected spring, determining the combined force from all connected springs, and finally adding the net force from the neighbour nodes. Let d be the distance vector between two neighbour nodes, e = |d| − l_rl the spring deformation, and f_t = Σ f_s the total force from the connected springs applied to a certain node. The force from one connected spring is calculated as follows:

f_s = k e d/|d| (2.10)

Once the forces on each node are calculated, the entire net force on the nodes can be determined by integrating the motion equations obtained when inserting the node and spring attributes, along with the calculated forces, into a formula known as the discrete Lagrange equation of motion. The net force f_net applied to each node is calculated as follows:

f_net = m δ²pos/δt² + f_fr δpos/δt + f_t (2.11)

Eventually the net force f_net is used to determine the displacement of the node positions in the surface layer. These net forces have to be calculated for each timestep, each iteration requiring new values for positions, velocity and acceleration.
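Equation 2.10, the force one spring exerts on a node, can be sketched in 2-D (names are mine):

```python
import math

def spring_force(node, neighbour, rest_length, k):
    """f_s = k * e * d/|d|: d is the vector from the node to its neighbour,
    e = |d| - rest_length is the spring deformation."""
    dx, dy = neighbour[0] - node[0], neighbour[1] - node[1]
    dist = math.hypot(dx, dy)
    e = dist - rest_length
    return (k * e * dx / dist, k * e * dy / dist)
```

A stretched spring (e > 0) pulls the node toward its neighbour, and a compressed one (e < 0) pushes it away; summing this over all connected springs gives the f_t term of equation 2.11.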

Summarizing this method: facial movements are created from forces originating in muscle movements, and these forces propagate through the muscle layer into the attached skin layer. There they are distributed among the nodes of the overlaying skin surface through a network of springs, causing an effect of elasticity. An example often referred to when discussing muscle models is the baby in the short movie Tin Toy, made by Pixar[25] in 1988, where muscle vectors were used to generate facial expressions. Tin Toy is available at [25].

Figure 2.28: Screenshot from the Tin Toy movie

2.19 Limitations

Muscle models have limitations. The most obvious is that it is very tedious to provide a sufficiently detailed anatomy for a convincing FA, and doing so requires a large amount of computation compared to other FA techniques. Secondly, muscles, and in particular linear muscles, cause forces in the layers above in the direction they are placed. The skin has a tendency to slide along the surface of the skull, and muscle forces normally do not follow the surface well. Waters[12] solved this by calculating a force tangent from the original muscle force and the slope of the surface above, and used that instead in the calculations. Another problem is rotations. None of the muscles mentioned is suitable for rotational behaviour, such as eyeballs, neck and jaw. Rotational behaviour has to be implemented by some other means than using muscles in the described way.

2.20 Pseudo muscle models

In this method the focus does not lie on emulating facial anatomy; instead these methods focus on the resulting FA, rather than being anatomically correct. Pseudo muscle models can be implemented in numerous ways, using various geometric methods that eventually cause the model representation, usually just a surface, to be deformed in ways similar to how real faces are deformed by muscle movements. Simply put, the goal is to simulate the effects of real muscles, rather than simulating the anatomy itself.
Compared to muscle models, approaches using pseudo muscles are typically less computationally complex, although it is difficult to generalize here since the results depend on how the FA has been implemented. But emulating anatomy requires lots of details which are difficult to capture, and this results in extensive calculations, while pseudo muscle models are free to use less expensive methods which might give equivalent visual results. Pseudo muscle techniques are a very common way to do realistic FA.


Chapter 3

Background Theory

In this chapter the additional theories used in the implementation are presented. Since the method chosen for the implementation is based on a hierarchical skeleton, implemented with rotating joints, there is a need to examine the theory behind this skeleton: how the rotations are done and how frames are generated. As for control, the MPEG-4 FA standard is described at the end of this chapter. The reader will find that much of the theory in this chapter has its origin in robotics.

3.1 Kinematics in hierarchical skeletons

Realistic character animation can be successfully implemented by using skeletons as structures causing motions to the surfaces of the model representation in use. A typical skeleton used in animation is hierarchical in nature and consists of rotational entities, joints or bones. Since Maya refers to these entities as joints, this terminology is used. A skeleton is constructed as a hierarchical tree structure formed by these joints. In practice, if one joint rotates, all its successor joints rotate with the same rotation. Joints can be associated with parts of the surfaces, which causes their rotation to affect parts of the surfaces as well, by some specified influence value. This introduces kinematics theory, which is a research area of its own, primarily used in robotics but heavily used within animation as well. The fundamental problem to solve using kinematics theory is the positioning of these skeletons, allowing some of their parts, referred to as effectors, to reach their goals, or at least reach positions as close to their goals as possible, considering the properties of the skeleton. This positioning can be done in two ways: either an animator specifies the rotation for each joint, and thereby the skeleton's positioning, or these rotations are unknown and have to be calculated by a computer algorithm, which is known as inverse kinematics (IK). Both methods are explained in detail in separate chapters.
This chapter has been divided into parts describing the skeleton itself, the elements involved in the skeleton kinematics process, and kinematics.

3.2 Skeletons in Maya

There exist several types of joints, but the ones used in FA are joints describing a rotation and a translation. It should be pointed out that robotics, being an example of an

area using kinematics, uses more joint types than just rotational ones. Joints in Maya are essentially just a position and a rotation in space, and they are thoroughly described in the Maya documentation[1] that is included with the software product. There are three types of rotational joints, and their classification refers to the joint's degrees of freedom (DOF). Joints with the ability to rotate about all three of their own local axes are called ball joints, since they rotate as a ball. In a typical face rig, ball joints can be used for the neck and eyes. The second type of joint is called a universal joint; this type can rotate about two of its own local axes, allowing the joint to rotate within two DOFs. A universal joint is suitable when creating the jaw: it opens and closes by rotating about a vector perpendicular to the outshooting jaw and the axis pointing upwards inside the model. By allowing some rotation around the local axis pointing up, the jaw can slide sideways too. The last type, called a hinge joint, has only one DOF, and even if a kneecap certainly is not located in the face, it is the best example of how a hinge joint works. There are no obvious examples of how to use hinge joints in the face rig, but if the jaw is supposed to be incapable of moving sideways, a hinge joint is used. In figure 3.1 a joint in Maya can be seen; it basically consists of a local frame of reference, the three local axes with their origin in the joint's position in space, and the arcs provide visual feedback on the joint's rotational possibilities.

Figure 3.1: A joint in Maya

The joints used in this thesis rotate relative to their own local axes and they can have from zero up to three rotational DOFs. Additionally they can be restricted to rotate within a certain angular interval within each allowed DOF.
Skeletons are built from these rotational entities by placing the joints at desired positions in space; bones are only drawn between joints for visual feedback, showing the skeleton structure and the hierarchy. In Maya a bone is just the vector between two joints, as shown in figure 3.2.

Figure 3.2: Two joints and the bone vector

Skeletons are composed of joints in the shape of a tree: there are no cycles, and there is an internal hierarchy starting with a root joint and continuing with succeeding child joints, which in turn can have children of their own. The important thing to remember with these skeletons is that when one of the joints rotates, the subtree formed by the

rotating joint's successors rotates by the same amount. Updating successor joints in a skeleton is a very common operation. Kinematics theory is applied in order to rotate skeletons into desired poses. This is done by assigning various paths in the skeleton to react to stimuli from a translated control point. These paths consist of joints forming a chain within the skeleton tree structure. At the end of such a path there is an effector, and the purpose of the path is to re-position itself so that the distance between its effector and the goal position for the effector is minimized. These paths of joint chains are solved using IK, and the solving procedure determining the rotation of the joints in the path is known as an iksolver. This procedure is explained further in later chapters. For practical reasons a joint can only be part of one single path; there cannot be multiple paths through the same joint, and this is basically how a skeleton moves, or re-positions itself. A good analogy for this functionality is a marionette puppet: when a string is pulled, the puppet moves. A model representation includes both skeleton and surfaces, but for the moment they will be treated independently. Considering a representation, its surfaces have to be affected by the movements of the skeleton; this is how skeleton driven animation works. It is implemented so that certain parts of the skeleton are bound to certain parts of the model. This binding is done by an animator. The purpose of the binding procedure is to transfer skeleton movements to the surfaces of the model by some influence. Figure 3.3 schematically illustrates the above discussion: a joint in the skeleton rotates, affecting some points in the surface.

Figure 3.3: Bone driven animation

The procedure of composing skeletons and binding parts of the model to them can be very tedious, especially when animating faces.
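The rule that rotating a joint rotates its whole subtree can be sketched with a minimal 2-D joint tree (this is illustrative only, not Maya's implementation):

```python
import math

class Joint:
    def __init__(self, position, children=()):
        self.position = position
        self.children = list(children)

    def rotate(self, angle):
        """Rotate every successor joint by `angle` (radians) about this
        joint's position -- the subtree follows the rotating joint."""
        pivot = self.position
        c, s = math.cos(angle), math.sin(angle)
        for joint in self._subtree():
            x = joint.position[0] - pivot[0]
            y = joint.position[1] - pivot[1]
            joint.position = (pivot[0] + c * x - s * y,
                              pivot[1] + s * x + c * y)

    def _subtree(self):
        yield self
        for child in self.children:
            yield from child._subtree()
```

Rotating a root joint at the origin by 90 degrees moves a child at (1, 0) to (0, 1), while the root itself stays in place.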
The main reason why FA is such hard work is that humans are extremely sensitive to facial expressions, and even slight mistakes directly result in something unnatural about the animation. Even with state of the art graphics technology and professional animators making animated characters for movies, such as The Lord of the Rings, it is not difficult to see which characters are animated and which are not, especially if one looks at an animated character's face and its movements. Since the skeletons created in Maya and used in FA do not quite resemble a human face, it suits the purpose better to show a skeleton made for other uses. Figure 3.4 is a screenshot taken in Maya, showing a skeleton that could be used in a human hand.

Figure 3.4: Example of a skeleton hand in Maya

The following subchapter treats subjects like control elements and kinematics algorithms. This chapter about skeletons relies on these sources of information[1][24].

3.3 Kinematics control in Maya

In previous chapters, the subject of control has been mentioned. In this chapter we take a look at how Maya has been used in this thesis to solve this matter. Objects in a Maya scene are treated as nodes with attributes. Nodes can be connected to each other and placed in a hierarchical relationship. A type of object known as a locator, basically just a point in space and nothing more, has been used. The locator has been assigned a parent relationship to another entity known as an ikhandle. An ikhandle contains information such as the chain of joints in the paths mentioned in the previous chapter. It also contains a goal and has effectors related to it. When a parented locator moves, the ikhandle moves by the same amount and its goal is changed. The distance between its effector and its goal is the error vector e, which is no longer minimized. In order to minimize e the skeleton has to be re-positioned. A solving method known as an iksolver is used to calculate the required rotations in the ikhandle's chain of joints in order to minimize the error vector e.

Figure 3.5: Joint chain with effector and locator

In figure 3.5, the leftmost dot represents the locator. The next dot represents the ikhandle's goal position, currently at the same position as its effector. The remaining dots represent the joints. The line between the locator and the ikhandle is their parent-to-child relation. The other lines are bones. When the locator is re-positioned, a caption of the structure at this moment looks like figure 3.6.
Figure 3.6: Raised IK problem

The distance between the moved locator and the end effector is the raised error vector e, and it is now the job of an ikSolver to calculate the rotations needed in the ikHandle's

joint chain, allowing the effector to reach a position which minimizes e. If the joint chain cannot be rotated so that the effector reaches the goal, due to rotational constraints in the joints, an ikSolver might propose a final solution such as the one shown in figure 3.7.

Figure 3.7: Proposed solution by the ikSolver

The two joints have been rotated as much as possible with rotations R_1 and R_2, repositioning part of the skeleton to minimize the error vector between the goal and the effector. The information in this chapter is based on the Maya documentation[1], and in the following chapters methods to solve these kinds of kinematics problems are presented.

3.4 Forward Kinematics (FK)

Compared to IK, this procedure is quite easy and straightforward. The rotation of each individual joint needed for the effector to minimize its distance to the goal is specified in advance by an animator. FK is defined as follows:

p_eff = f(X) = f(x_1, ..., x_n) (3.1)

The resulting effector position p_eff is determined by the independent joint rotations made in the joint chain, and these rotations are stored in a vector X. As described earlier in chapter 3.2, a rotating joint affects all of its succeeding joints with its rotation. Forward kinematics starts with the innermost joint in the joint chain and performs its specified rotation. The rotation is typically added to a rotation vector which is passed down the skeleton hierarchy and used for updating the positions of the joint's successors. Whenever a joint rotates it adds its rotation to the vector, so this vector accumulates the rotations as it is being sent down the hierarchy of the joint chain. Each successor joint has its position updated by the rotations stored in the vector and adds its own rotation if one is specified. This is how the procedure works, and naturally it stops when a joint with no successors is reached.
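The accumulation of rotations down a joint chain can be sketched in a few lines of Python, here reduced to a planar chain where each joint stores one rotation angle and one bone length (the function name and the 2D simplification are illustrative, not taken from the thesis):

```python
import math

def forward_kinematics(angles, lengths):
    """Return the effector position of a planar joint chain.

    Each joint's rotation is accumulated and passed down the chain,
    as in equations 3.1/3.2: successors inherit their parents' rotations.
    """
    total_angle = 0.0          # accumulated rotation (one angle in 2D)
    x, y = 0.0, 0.0            # position of the current joint
    for angle, length in zip(angles, lengths):
        total_angle += angle   # joint adds its own rotation to the accumulator
        x += length * math.cos(total_angle)
        y += length * math.sin(total_angle)
    return x, y                # position of the effector (tip of the chain)

# Two bones of length 1: rotating the root by 90 degrees affects both bones,
# so the whole chain points up and the effector lands at roughly (0, 2).
print(forward_kinematics([math.pi / 2, 0.0], [1.0, 1.0]))
```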
A joint is rotated by its parents' rotations:

R = R_1 · R_2 ⋯ R_n (3.2)

Note that these rotations are relative to the joint that caused them and not to some common point

of reference; therefore a corresponding translation is usually also passed down with a rotation, or coordinate frame conversion routines are required.

3.5 Inverse kinematics

The main difference between IK and FK is that an animator specifies the rotations of the joints in FK, while in IK they must be calculated using an IK algorithm. As the name reveals, this is the backward procedure of forward kinematics in terms of defining rotations. Once these rotations are known, the same procedure of actually rotating the joint chain described in FK is applied. There are several ways to calculate these rotations, given the current state of the joint chain and the goal and effector positions. The IK algorithms used to solve these problems can be divided into analytical and numerical algorithms. In the following two chapters two numerical methods are described, Jacobian matrices and cyclic coordinate descent (CCD). Analytical methods are not covered since they are not suitable in animation, due to large solution spaces and problems occurring when the goal cannot be reached. They are also computationally intense. IK is defined like this:

X = f^(-1)(p_eff) (3.3)

The goal position p_eff is known, but the rotational elements of the vector containing the accumulated independent joint rotations are not, and therefore they have to be calculated by the inverse kinematics algorithm. After the IK problem is solved, the rest of the procedure is as described previously in the FK chapter.

3.6 Cyclic coordinate descent (CCD)

Cyclic coordinate descent (CCD) is an iterative algorithm using a simple approach to a complex problem. Without using lots of mathematics it will always find a solution, which makes it cheap in terms of CPU, robust and stable; another advantage is that it is relatively easy to implement. The method was first developed for robotics[19].
The CCD algorithm takes a joint chain and starts with the outermost joint. It constructs a vector between the goal and the joint, called the goal vector, and a vector between the joint and the effector, which can be called the focus vector. The idea is then to calculate the required rotation for the joint: the rotation of the current focus vector into the desired goal vector. When the outermost joint has a calculated rotation, the algorithm moves upwards in the joint chain to the next joint and repeats the described procedure for each joint as it passes them. One downside of this method is that it seems to favour rotations in the innermost joints, which can result in a bad distribution of rotations over the joint chain. Joints closer to the effector are not rotated as much as the ones closer to the beginning of the chain, which might result in unnatural poses of the joint chain. Another downside is a phenomenon where the algorithm ends up with a solution that is theoretically correct according to the rules of the algorithm, but where the chain is posed incorrectly. This phenomenon requires extra rules that constrain the algorithm and force it to choose suitable rotations for the entire movement; examples of such counter measures are smaller step sizes and constraining

the possible rotations for the joints. Compared to other IK algorithms, CCD will always find a solution without extensive calculations, which makes it suitable in animation. Another convenient feature is that it localizes the problem of posing the joint chain by treating one joint at a time, rather than the entire joint chain at once, which simplifies the problem of posing the chain greatly.

Localized joint rotation

A CCD algorithm is able to work with several types of joints, but only rotational ones are described here. The procedure was briefly explained in the last section, but here is a more detailed explanation. As described, the first thing to do when calculating the rotation in a joint is to create two vectors, here called the focus vector and the goal vector. The purpose is to rotate the focus vector onto the goal vector. In theory one creates a focus vector f in the current direction of a specific bone shooting out from the joint, and the goal vector g is formed between the joint and the target. The idea is to rotate the joint so that f lines up with g. This is done by determining a rotation for the joint by some angle α around the vector v, which is perpendicular to both f and g. The concept is shown in figure 3.8. A useful way of calculating the rotation from one vector to another is equation 3.34, presented in chapter 3.9.

Figure 3.8: Constructing a joint rotation

IK solver based on CCD

As the solver traverses the joint chain from the outermost joint inwards towards the first joint of the chain, it becomes clear how this method works, as the effector converges towards the goal. An example of the procedure is shown in figure 3.9.

Figure 3.9: CCD algorithm iterations

During iteration, an IK solver based on a CCD algorithm traverses the joint chain and calculates the joint rotations as described. Iterating in this manner results in the effector converging towards the goal.
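The localized joint rotation above can be sketched for a planar chain, where the rotation from the focus vector to the goal vector reduces to a single angle obtained with atan2 (a minimal illustration under that 2D assumption, not the FAV implementation):

```python
import math

def ccd_step(joints, goal):
    """One CCD pass over a planar chain.

    joints: list of [x, y] joint positions; the last entry is the effector.
    For each joint, from the outermost inwards, the chain beyond the joint
    is rotated so the focus vector (joint -> effector) lines up with the
    goal vector (joint -> goal).
    """
    for i in range(len(joints) - 2, -1, -1):
        jx, jy = joints[i]
        ex, ey = joints[-1]
        # angle from the focus vector to the goal vector
        a = math.atan2(goal[1] - jy, goal[0] - jx) - math.atan2(ey - jy, ex - jx)
        c, s = math.cos(a), math.sin(a)
        # rotate every successor joint around joint i by that angle
        for k in range(i + 1, len(joints)):
            dx, dy = joints[k][0] - jx, joints[k][1] - jy
            joints[k] = [jx + c * dx - s * dy, jy + s * dx + c * dy]
    return joints

# Straight two-bone chain reaching for a goal within range.
chain = [[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]]
for _ in range(20):                # repeat the pass until convergence
    ccd_step(chain, (1.0, 1.0))
print(chain[-1])                   # effector ends up close to (1.0, 1.0)
```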
In order to retrieve a good solution, the

solver has to perform the procedure several times. Naturally there have to be some stop conditions for the algorithm, otherwise it would go on for all eternity. In a CCD algorithm the stop conditions usually are: a maximum of N iterations has been performed, the convergence of the joint chain towards the goal is not sufficient, or the goal has been reached. The rotations are accumulative; each individual joint accumulates the rotations it made during execution of the CCD algorithm into a final rotation R, which is used during the actual rotation of the skeleton. FAV uses a CCD algorithm such as the one described above in its IK solver. This chapter is based on the following references[17][8].

3.7 The Jacobian

The Jacobian matrix is constructed from first order partial derivatives. Each of these derivatives describes how the effector responds to a rotational change in some particular joint. The entire matrix describes a mapping from the current set of rotational changes in the joints to a local displacement of the effector. The Jacobian matrix can be used to solve non-linear IK problems by approximating a linear solution to the problem. It is used to test various configurations of the joints and examine the resulting displacement of the effector. An IK algorithm based on the Jacobian matrix tests various joint rotation combinations to see which configuration minimizes the error vector e between the effector position and the goal. Mathematically, consider equation 3.1 describing FK. A method using the Jacobian matrix linearly approximates f for each of the joints, although f itself is not linear at all, due to the use of cos and sin in the expression, then tests the configuration and keeps the one that minimizes e. There are at least three ways in which the Jacobian matrix can be used to find a solution to an IK problem: the inverse, the pseudo-inverse and the Jacobian transpose.
Information in this section is based on these three references[30][10][8].

Jacobian matrix

The matrix is an m × n matrix, where m is the dimension of the space. Naturally, this is either two or three, since animations are either in 2D or 3D. But n is the number of DOFs, which means that the matrix is not necessarily a square matrix; this problem is the reason behind the pseudo-inverse method, which is a simple way of guaranteeing a square matrix. With p_eff the effector position and R the rotation:

                | δ(p_eff)_x/δR_1   δ(p_eff)_x/δR_2   δ(p_eff)_x/δR_3 |
J(p_eff, R) =   | δ(p_eff)_y/δR_1   δ(p_eff)_y/δR_2   δ(p_eff)_y/δR_3 |   (3.4)
                | δ(p_eff)_z/δR_1   δ(p_eff)_z/δR_2   δ(p_eff)_z/δR_3 |

This is what the Jacobian matrix looks like for a ball joint with 3 DOFs. This example turns out as a square matrix, and the joint configuration is of the kind making the

matrix invertible. The displacement of the effector is obtained by using the following equation:

δp_eff = J · δR (3.5)

Equation 3.5 describes the relationship between the change in position of the effector and the change of rotations in the joints. The Jacobian matrix maps the rotational change in the joints to a displacement vector for the effector position. But keep in mind that δp_eff is the answer to the approximated linear problem; it is not the exact answer.

Inverse

The inverse is useful when solving IK problems, since the current position of the effector is known and the goal is known; in other words, the change in the effector position δp_eff is known. The part that remains unknown is the change in the rotation configuration of the joint chain, δR, and it can be found by taking the inverse of the Jacobian matrix:

δR = J^(-1) · δp_eff (3.6)

But what if the Jacobian matrix contains singularities and is not invertible? Or if it is a non-square matrix? Then this method fails, so using the Jacobian matrix in this way is not stable; the pseudo-inverse and Jacobian transpose methods are alternatives that do not suffer from these problems. As with all systems of linear equations there can be no solution, one solution, or an infinite amount of solutions.

- If the amount of DOFs in the set of joints in the Jacobian matrix exceeds the amount of DOFs in the vector describing the effector position, there are more columns in the Jacobian matrix than rows in the vector describing the effector position; the system is said to be under-constrained, which means there is an infinite amount of solutions. In this case the method does not guarantee that it will find the best solution, since the algorithm has no means of judging the results and determining which solution is best suited for an animation.
- If there are more DOFs in the vector containing the effector position than DOFs among the joints in the Jacobian matrix, the system is called over-constrained and there are no solutions. Then this method has to be augmented to at least find the best possible solution, using other methods.

- If there are exactly the same amount of DOFs in both the effector and the joints in the Jacobian matrix, the system is well-defined and it is possible to find a solution using the inverse as described above.

A mismatch in DOFs, or problems with singularities or near-singularities in the Jacobian matrix, is not so uncommon that it can be ignored. This causes problems for this method that have to be solved by other means, and therefore it has been augmented.
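The well-defined case of equation 3.6 can be sketched with an explicit 2×2 inverse: two joint DOFs driving a 2D effector position. The Jacobian values below are arbitrary illustration data, not taken from the thesis:

```python
def inverse_step(J, dp):
    """delta_R = J^(-1) * delta_p_eff (equation 3.6) for a well-defined
    2 x 2 system: two joint DOFs driving a 2D effector position."""
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    if abs(det) < 1e-9:
        # a (near) singular Jacobian is exactly where this method fails
        raise ValueError("Jacobian is (near) singular; inverse method fails")
    inv = [[ J[1][1] / det, -J[0][1] / det],
           [-J[1][0] / det,  J[0][0] / det]]
    return [inv[0][0] * dp[0] + inv[0][1] * dp[1],
            inv[1][0] * dp[0] + inv[1][1] * dp[1]]

J = [[0.0, -1.0],
     [2.0,  1.0]]
dR = inverse_step(J, [0.5, 0.0])   # desired effector displacement (0.5, 0)
print(dR)                          # → [0.25, -0.5]
```

Multiplying J by the returned δR reproduces the requested effector displacement, which is a quick way to sanity-check the solve.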

Pseudo-Inverse

This solves the problem with non-square matrices by constructing a pseudo-inverse of the Jacobian matrix and using it instead of the inverse. This is how the problems with under- and over-constrained systems are solved, and to find the pseudo-inverse one simply uses the following formula:

J⁺ = J^T (J J^T)^(-1) (3.7)

By inserting the pseudo-inverse J⁺ into equation 3.6, using the pseudo-inverse instead of the non-existing inverse, we get:

δR = J⁺ · δp_eff (3.8)

However, Jacobian matrices still cause problems, such as instability whenever a singularity in the Jacobian matrix is close; this occurs when the joints are rotated so that the vector derivatives in the Jacobian matrix line up[30]. In general, calculating the inverse or pseudo-inverse is also a relatively costly operation.

Transpose

This method works by simply taking the transpose of the Jacobian matrix and using it instead of an inverse or pseudo-inverse. The results are not as good as with the pseudo-inverse method. It converges slower, since the answer is no longer the approximated solution retrieved by solving the linear system of equations as earlier, and it suffers from scaling problems causing the joints close to the effector to rotate more than they should. But for applications where this method's accuracy is acceptable it should be used instead of the Jacobian inverse or pseudo-inverse methods; it is much cheaper to calculate since it does not require the inverse to be computed each time the algorithm iterates. The transpose of the Jacobian matrix is much less complex to compose. This method also uses less memory and avoids singularity problems, which makes it comparatively stable.

δR_i = J_i^T · δp_eff (3.9)

Each column i, representing one of the joints in the chain, is evaluated one at a time, revealing the rotation needed for that joint.
It may seem odd that this method works, using just the transpose, but it still converges towards the goal. However, the result should be scaled to fit the equation better. Exactly how this scaling is calculated varies between implementations. The article[10] gives a description of this method along with a possible solution for compensating for the additional error in the final solution, compared to linear approximations, and it is done as follows. The reasoning is concentrated around this theorem: for all Jacobians J and error vectors e, the following holds true:

⟨J J^T e, e⟩ ≥ 0 (3.10)

Proof:

⟨J J^T e, e⟩ = ⟨J^T e, J^T e⟩ = ‖J^T e‖² ≥ 0

The theorem is used to prove that e can be reduced with a properly valued scalar a. Using the Jacobian transpose introduces additional errors compared to a linearly approximated solution, noted as the error vector e, which is defined as the difference between the goal and the actual calculated position retrieved from the iteration. A scalar value a is chosen to compensate for this, so the formula for rotational change has to be rewritten:

δR = a · J^T · δp_eff (3.11)

Note: for practical purposes, since each joint rotation is evaluated one by one, the scaling factor is usually implemented as a diagonal matrix K, consisting of one scalar per joint:

δR = K · J^T · δp_eff (3.12)

Returning to the original equation and using J^T in place of the inverse, we have:

δR = J^T · δp_eff (3.13)

When the rotation is updated with R = R + δR during iteration, and δR = a · J^T · e, the resulting change in effector position is:

δp_eff = J (a · J^T e) (3.14)

δp_eff = a J J^T e (3.15)

The question is how to select a in order to reduce the error e. This can be done in many ways. Buss[10] assumes the change in effector position δp_eff will always be a J J^T e and selects a by using the formula presented in equation 3.10 of the above theorem:

a = ⟨e, J J^T e⟩ / ⟨J J^T e, J J^T e⟩ (3.16)

A new value of a is calculated during each update of the algorithm.
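Buss's choice of a translates directly into code. The sketch below performs one transpose update δR = a·J^T·e for an arbitrary Jacobian stored as a list of rows, computing a from the inner-product formula of equation 3.16 (a generic illustration, not the FAV source):

```python
def transpose_step(J, e):
    """One Jacobian-transpose update: returns delta_R = a * J^T * e.

    a is chosen as <e, JJ^T e> / <JJ^T e, JJ^T e> (equation 3.16),
    which guarantees the error e is not increased.
    """
    m, n = len(J), len(J[0])
    Jt_e = [sum(J[r][c] * e[r] for r in range(m)) for c in range(n)]      # J^T e
    JJt_e = [sum(J[r][c] * Jt_e[c] for c in range(n)) for r in range(m)]  # J J^T e
    num = sum(e[r] * JJt_e[r] for r in range(m))   # <e, JJ^T e>
    den = sum(v * v for v in JJt_e)                # <JJ^T e, JJ^T e>
    a = num / den if den else 0.0
    return [a * v for v in Jt_e]

J = [[0.0, 1.0, 1.0],
     [2.0, 1.0, 0.0]]
e = [0.5, 0.0]                 # error vector between goal and effector
dR = transpose_step(J, e)
print(dR)                      # the scaled joint rotation changes
```

Applying the predicted displacement a·J·J^T·e to the effector shrinks the error norm, as the theorem promises.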

IK solvers based on Jacobian matrices

So far the procedure during a single iteration has been described. These algorithms are of the gradient descent type, an iterative process where the effector converges towards its goal. The algorithm is initialized with some start values and continues until it has found a combination of rotational changes in the joints that moves the effector to a position minimizing the distance between the effector and the given goal. As long as the goal is not considered reached, and some safety condition stopping the algorithm from iterating forever remains false, the algorithm will step by step add a rotational change to the joints and evaluate the resulting effector location. The evaluating function can be of the three types described above: inverse, pseudo-inverse or transpose.

3.8 CCD or Jacobian matrices

Compared to CCD, the Jacobian methods converge slower towards the goal; the Jacobian inverse can also suffer from singularity problems and from getting stuck in a local minimum. On the other hand, a Jacobian method is more likely to produce a better distribution of rotations over the entire joint chain than CCD, which seems to favour rotations in the innermost joints and might result in a less smooth bend over the joint chain. CCD is clearly simpler to understand and implement, and it is also more stable and cheaper in terms of required calculations and memory use.

3.9 Rotation using Quaternions

Quaternions are a result of the work by the mathematician Sir William Rowan Hamilton, who lived and was active during the first half of the 19th century and extended the theory of complex numbers to higher dimensions. The most useful application area of quaternions is rotations, and using complex notation they can be written like this:

w + ix + jy + kz (3.17)

Vector notation looks like this:

[scalar, imaginary] = [w, (x, y, z)] = [w, v] (3.18)

The imaginary units have the following property:
i² = j² = k² = ijk = -1 (3.19)

The most common notation seems to be the vector notation, and therefore it is used from now on in this thesis as well. Quaternions can be used to represent rotations, where the imaginary (vector) part encodes the axis of rotation and the scalar part encodes the angle of rotation around that axis.

In [20] the reader will find excellent work on quaternions and how they can be used within animation, along with references to previous works on quaternions such as those of Ken Shoemake. This section is based on [20][9][7].

Quaternion operations

The math behind quaternions is quite complex. For purposes within animation these deeper theories are not what matters; it is the practical use contributed by quaternions that is of interest. Most of the time they are best viewed as rotational entities describing orientations or rotations, together with a set of basic operations that can be performed on them. Naturally some of the theory behind them has to be covered, otherwise some unexpected behaviours would be hard to explain. The first important thing when dealing with quaternions is that they should be of unit length if they represent rotations. According to Baker[7] they do not necessarily have to be of unit length, as many articles claim, but unit quaternions result in less complex calculations. The quaternions describing rotations in this thesis are unit quaternions. Here follow the basic operations that can be performed on quaternions; in the examples two quaternions q_1 and q_2 are used.

Addition (q_1 + q_2)

q_1 + q_2 = [w_1, v_1] + [w_2, v_2] = [w_1 + w_2, v_1 + v_2] = [w_1 + w_2, (x_1 + x_2, y_1 + y_2, z_1 + z_2)] (3.20)

The corresponding elements of the two quaternions are simply added to each other.
Subtraction is a special case of addition, q_1 + (-q_2).

Multiplication (q_1 q_2)

q_1 q_2 = [w_1 w_2 - v_1 · v_2, v_1 × v_2 + w_1 v_2 + w_2 v_1] (3.21)

The important thing to notice is the cross product in the formula for the imaginary part of the product: because of it, quaternion multiplication is not commutative; q_1 q_2 is not the same as q_2 q_1.

Conjugate (q*)

q* = [w, -v] (3.22)

The conjugate is defined as the negation of the quaternion's imaginary part.

Norm (‖q‖)

‖q‖ = √(w² + x² + y² + z²) (3.23)

The norm is calculated in the same way vector norms are calculated.

Inverse (q^(-1))

q^(-1) = q* / ‖q‖² (3.24)

If we are dealing with unit quaternions, the inverse is the same as the conjugate of the quaternion:

‖q‖ = 1 ⟹ q^(-1) = q*

This is a strong reason to strictly use unit quaternions, since rotation involves using the inverse, as will be seen. Rotation is an operation that is performed very often, and using unit quaternions eliminates calculating the norm and dividing the conjugate by that result. Instead one can use the conjugate directly, which makes rotation a far less costly operation.

Normalizing (q / ‖q‖)

q_unit = q / ‖q‖ = [w/‖q‖, (x/‖q‖, y/‖q‖, z/‖q‖)] (3.25)

This procedure is the same as when normalizing vectors.

Identity (q_I)

(q_I)_additive = [w, v] = [0, (0, 0, 0)] (3.26)

(q_I)_multiplicative = [w, v] = [1, (0, 0, 0)] (3.27)

Matrices have an identity matrix which multiplied with a matrix M yields M. Quaternions have a corresponding entity which works the same way, but with one difference: it is defined differently under addition and multiplication. Under addition, the scalar part cannot be allowed to add up, so it is set to zero.

Rotation (q_r)

Representing rotations is one of the major tasks of quaternions. A rotation by the angle θ around the axis (x, y, z) is defined like this:

q_r = [cos(θ/2), (x sin(θ/2), y sin(θ/2), z sin(θ/2))] (3.28)

An orientation is described like this:

q_o = [0, (x, y, z)] (3.29)

The orientation is the quaternion representation of the point or vector p. In order to

rotate p into p′ one must set up the rotation by defining the axis and angle as shown in equation 3.28, and create a quaternion orientation representing the vector to be rotated as shown in equation 3.29. Quaternion rotation is then performed with the following formula:

q_p′ = q_r q_p q_r^(-1) (3.30)

And in the case of unit quaternions:

q_p′ = q_r q_p q_r* (3.31)

The resulting orientation q_p′ holds the rotated point position and can easily be converted back into p′ by using a quaternion-to-vector routine. The procedure of rotating a vector can thus be quickly summarized like this: calculate the unit quaternion describing the rotation and its conjugate, convert the vector into a quaternion describing an orientation, perform the rotation and convert the orientation back into a vector. As with matrices, rotations can be multiplied with each other into a combined rotation:

q_n q_(n-1) ⋯ q_0 = R(q_0) ⋯ R(q_(n-1)) R(q_n) (3.32)

The product to the left in the above formula equals applying the rotation R(q_0) to the orientation first, then applying the following rotations in ascending order one by one until the final rotation R(q_n). Constructing the rotation from a vector s into a vector t is a useful operation in bone driven animation and is frequently used in this thesis. It is described for both quaternions and matrices in this book[15]. With e denoting the dot product of the unit vectors s and t, the quaternion q describing the rotation from s into t is calculated as follows:

e = s · t = cos(2θ) (3.33)

q = [√(2(1 + e)) / 2, (s × t) / √(2(1 + e))] (3.34)

When used, for sufficiently small angles between s and t, rotate using the identity, and as the angle nears 180 degrees, making the two vectors s and t point in opposite directions, use any vector perpendicular to s to rotate into t.

Properties

There are both advantages and disadvantages with quaternions. The purpose of this thesis is not to enter into any debates on the subject.
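The rotation procedure summarized above (build the unit rotation quaternion, apply the sandwich product with its conjugate, convert back to a vector) can be sketched as follows; the function names are illustrative, not taken from the thesis:

```python
import math

def quat_mul(q1, q2):
    """Quaternion product in [w, v] form, equation 3.21."""
    w1, (x1, y1, z1) = q1
    w2, (x2, y2, z2) = q2
    return (w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
            (w1 * x2 + w2 * x1 + y1 * z2 - z1 * y2,
             w1 * y2 + w2 * y1 + z1 * x2 - x1 * z2,
             w1 * z2 + w2 * z1 + x1 * y2 - y1 * x2))

def rotate(p, axis, theta):
    """Rotate vector p by angle theta around a unit axis (eqs. 3.28-3.31)."""
    s = math.sin(theta / 2.0)
    q_r = (math.cos(theta / 2.0), (axis[0] * s, axis[1] * s, axis[2] * s))
    q_conj = (q_r[0], (-q_r[1][0], -q_r[1][1], -q_r[1][2]))  # conjugate = inverse
    q_p = (0.0, tuple(p))                  # orientation quaternion for p
    return quat_mul(quat_mul(q_r, q_p), q_conj)[1]  # back to a plain vector

# Rotating the x axis 90 degrees around z yields the y axis
# (up to floating point noise).
print(rotate((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), math.pi / 2))
```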
One unquestionable advantage of quaternions, however, is that they make interpolated rotations extremely easy.

This invaluable feature is used thoroughly in FAV, and it is exclusively the reason they have been selected instead of matrices in this thesis. Quaternions also do not exclude the use of matrices, since it is possible to convert back and forth between quaternions and matrices whenever desired. Another feature is that quaternions do not suffer from Gimbal lock[20], which makes them suitable when rotating joint chains consisting of many joints. A clear disadvantage is the complex and unintuitive math, making them harder to understand compared to traditional rotation matrices. Quaternions are only useful for rotation purposes; they cannot be used for other transformations in the same neat way as matrices.

Interpolated rotations

As stated in the previous chapter, the major reason for using quaternions in this thesis instead of matrices is that quaternions can be interpolated very easily. All that is required are two quaternions, describing the start and goal orientations. Spherical linear interpolation (SLERP) works along the curve between these two orientations; the curve runs along the surface of the unit hypersphere. Visualizing this in a correct manner will not result in something intuitive for the human eye, since hyperspheres are in 4D. For that reason they are usually described as interpolations along an ordinary sphere in 3D[20][9], as in figure 3.10.

Figure 3.10: Interpolation between quaternion rotations

Interpolations are always performed between two quaternions describing the start and goal orientations. Performing an interpolation from q1 to q2 and from there to q3 is most likely not the same as interpolating directly from q1 to q3. As the name indicates, this interpolation method uses both linear and spherical interpolation. The reason for this is that spherical interpolation does not work well when the angle between the two orientations is sufficiently small.
The spherical interpolation

will start to wander off from its path and not continue to follow the shortest path along the surface of the unit hypersphere. In such cases linear interpolation works much better, even if it follows a straight line between the orientations rather than the surface of the hypersphere. The interpolation concept works like this, using a slice of the unit hypersphere and two orientations for demonstration.

Figure 3.11: Linear versus spherical interpolation

As mentioned, spherical linear interpolation consists of two interpolation methods, spherical and linear. As shown in figure 3.11, spherical interpolation follows the arc between two quaternions representing orientations on the surface of the hypersphere, whereas linear interpolation follows the vector between these two orientations, straight through the hypersphere instead. By inspecting the formula for spherical interpolation we see that small angles lead to an expression approaching a division by zero; this causes unstable behaviour and is the reason why spherical interpolation starts to wander off its path when the angle between the orientations is sufficiently small. Therefore spherical interpolation cannot be used with sufficiently small angular values.

SLERP(q_from, q_to, t) = q_from · sin((1 - t)θ)/sin(θ) + q_to · sin(tθ)/sin(θ) (3.35)

Instead, linear interpolation is used when the two orientations are too close to each other. The method was described in chapter 2.8, and applied to quaternions it looks like equation 3.36:

LERP(q_from, q_to, t) = q_from · (1 - t) + q_to · t (3.36)

Combined, linear and spherical interpolation complement each other well and are very useful within animation. Spherical linear interpolation can be used to calculate fractions of a defined rotation and then retrieve in-between poses of a rotating skeleton. These in-between poses are

displayed, and their amount is controlled using the time coefficient t, usually an increasing value. This is fundamental to the selected skeleton approach in FAV, and it works like this. In figure 3.12 we see a simple skeleton structure with two rotating joints (the lower two). Their initial rotation is specified as a start orientation and their final rotation is the goal orientation. The start and goal poses are marked black, and between them are the skeleton's in-between poses, marked grey, which are used in the two generated in-between frames. These in-between poses are created by using spherical linear interpolation to interpolate between the two joints' start and goal orientations, and using the interpolated result to construct an in-between pose for the skeleton.

Figure 3.12: Generated in-between poses by interpolated rotations

3.10 Vertex blending

A surface used to mimic skin must be able to flex and slide along the skull, which is not an easy task to do convincingly. Vertex blending offers a comparatively easy solution, although the results depend a lot on the animator and the complexity of the model, and there are limitations to what is possible to do with this method as well. Vertex blending allows the surface to perform these wanted flexing and skin-sliding effects, caused by the skeleton's movements. A point on the surface that is rigidly bound to a joint is affected by that joint's rotation with the same amount; vertex blending instead allows a vertex to be influenced by several joints. For each joint, the amount of influence is specified. The purpose is to distribute the total effect of the skeleton on a certain vertex over several joints, allowing a more elastic and realistic behaviour of the surface. This is implemented using weights, or influence values, in the interval [0, 1]. Basically the animator binds the skeleton to a mesh and determines each joint's influence on the vertices of the mesh.
The important thing to know is that a vertex is 100% bound to the skeleton, but the binding is divided over several joints instead of just one, so the total sum of the influences made by the skeleton on a particular vertex has to sum up to 1.0. The new position of a vertex affected by the bound skeleton's movement is calculated as follows:

p = vertex
p_new = relocated vertex
w = influence weight
T = transformation (rotation)

p_new = Σ_i w_i · T_i · p (3.37)

The difference between rigid binding and smooth binding in Maya is shown in figure 3.13 below. Rigid binding has been used in the picture to the right; notice how the cylinder suffers from sharp corners and deformations that look unnatural, while in the picture to the left the deformation has been distributed in a better way. Of course this is not the most scientific example, but it visualizes the binding properties discussed above.

Figure 3.13: Rigid versus smooth binding in Maya

3.11 Animation blending

Simultaneous facial expressions are needed to express more than one emotion at a time, which is a necessity for convincing FA. There are several existing types of blenders, and the concept of a blender is easy to grasp by considering figure 3.14 below, describing the general blending method. Generally a pose can be viewed as a set of information describing various deformations of the model. The purpose of the blender is to generate an output pose using one or more poses as input; optionally the input is combined with some sort of parameters providing additional data to the blending method. As an example, a pose can be vertex positions, control points or some other geometric information that describes a model's deformation. Usually this blending method is implemented with an interpolation technique. Consider the following equation describing a cross-dissolving blender, using linear interpolation:

P(u) = F_0(u) · P_0 + F_1(u) · P_1 (3.38)
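The cross-dissolving blender of equation 3.38 amounts to a per-element linear interpolation between two poses. A minimal sketch, treating a pose as a flat list of vertex coordinates (the names and the example values are illustrative):

```python
def cross_dissolve(pose0, pose1, u):
    """Blend two poses with F0(u) = 1 - u and F1(u) = u (equation 3.38)."""
    f0, f1 = 1.0 - u, u
    return [f0 * a + f1 * b for a, b in zip(pose0, pose1)]

neutral = [0.0, 0.0, 0.0]   # e.g. three vertex coordinates of a neutral face
smile   = [0.0, 2.0, 4.0]   # the same vertices in a smiling pose
print(cross_dissolve(neutral, smile, 0.5))  # → [0.0, 1.0, 2.0]
```

Sweeping u from 0 to 1 dissolves the neutral pose into the smile, which is exactly the in-between-frame use case described above.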

Figure 3.14: General animation blending method

The final result is the blended pose and it is calculated by adding the products of the two blending functions F_0 and F_1 and their corresponding poses P_0 and P_1. This method is a reduced variant of the general blending method, capable of dealing with two arguments. The limitation is obvious: what if there is a need to blend more than two poses? In such cases, the concept of cross dissolving between two poses can be extended. The concept is easily extended by adding more blending blocks, allowing more poses to be mixed as input, as shown in figure 3.15. The idea is very similar to an electronic circuit of logical gates. To create a bilinear blender, one uses three blending blocks. The first two blocks take the four poses as input and generate an output; these two outputs are used as input to the third blending block, which yields the final pose as output. It is not uncommon to use bilinear or even trilinear blending methods.

Figure 3.15: Blending blocks network

In this manner an arbitrary amount of poses can be blended. However, figure 3.15 describes a general relationship over how the concept is extended; for practical reasons there are usually no more than two or three rows of blocks (squares) used, bilinear and

trilinear blending.

Another group of blending methods are math blenders, which add, subtract and scale their argument poses. These types are often combined with a clamp method, which clamps any result from a math blender that violates the model's constraints to a minimum or maximum result, depending on the violated constraint's type. From an abstract view this type of blender adds the deformations specified by the argument poses, but on the implementation level other operations than addition might be used. Figure 3.16 shows an add blender from a schematic view.

Figure 3.16: Add blender

There are examples where adding the information provided by the argument poses does not give the expected result; one such example is rotations with quaternions. This matter is addressed in chapter 3.9. It is not possible to simply add two rotational quaternions and hope to get a quaternion describing the combined result of the two individual rotations. Combining rotations requires a multiplication of the two rotational quaternions. As an example of this blending type, consider a pose that deforms a face model into an expression, say happiness, and another pose that deforms the model into a surprised look. The effects caused by these two poses are added to each other, yielding an output pose that deforms the face model according to both happiness and surprise. A subtraction blender takes two poses and subtracts the deformation information in the second pose from the first. Again it is important to consider the information type, so that taking the difference really gives the expected result. Continuing with the surprised and happy example, subtraction could be used to deform the model into just looking happy, by subtracting the pose describing surprise from the pose describing both happy and surprised. Scale blenders are used to maintain reasonable values as they accumulate from the various poses used in a math blender.
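As noted above, rotations stored as quaternions are combined by multiplication, not addition. A minimal sketch of the standard Hamilton product, with quaternions as (w, x, y, z) tuples (illustrative only, not tied to any particular library):

```python
def quat_multiply(q1, q2):
    # Hamilton product: the rotation q2 followed by the rotation q1.
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)
```

Multiplying two 45-degree rotations about the same axis yields the quaternion of a 90-degree rotation, whereas adding the two quaternions componentwise would not even produce a unit quaternion.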
If the pose generating a happy expression requires the eyebrows to be lifted by some amount, and the pose describing surprise also requires the same eyebrows to be lifted, the job of a scale blender is to adjust the final output from the add blender to a reasonable value; perhaps a wise choice is raising the eyebrows by some amount between the two suggestions from the poses. Exactly how this scaling is done is implementation specific, but it takes as input a pose and a scaling factor. The final output is the input pose scaled by the factor.
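The add, clamp and scale behaviour described above can be sketched as follows, with a pose represented as a mapping from control point names to offset values (a simplification of the pose frames described in this chapter; all names hypothetical):

```python
def add_poses(pose_a, pose_b):
    # Add blender: sum the corresponding offsets of the two poses.
    return {cp: pose_a.get(cp, 0.0) + pose_b.get(cp, 0.0)
            for cp in set(pose_a) | set(pose_b)}

def clamp_pose(pose, limits):
    # Clamp method: keep each offset within its control point's
    # (minimum, maximum) limits.
    return {cp: max(limits[cp][0], min(limits[cp][1], off))
            for cp, off in pose.items()}

def scale_pose(pose, factor):
    # Scale blender: scale every offset by a common factor.
    return {cp: off * factor for cp, off in pose.items()}
```

Adding a happy pose (brow offset 0.6) to a surprised pose (brow offset 0.8) yields 1.4; clamping against limits of [-1.0, 1.0] gives 1.0, while scaling the sum by 0.5 gives the in-between value 0.7 suggested above.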

These math blenders are also connected to each other as described earlier, and they add some arithmetic functionality to the blending mechanism, where animations can be added, subtracted and scaled.

3.12 Animation control

Animations in general require a triggering function. It goes without saying that this is the case for FA as well. A stimulus of some sort has to be provided to the geometric model, causing it to deform itself accordingly. This stimulus is typically generated externally with input devices such as keyboards, mice, joysticks or cameras. A timed program might generate this stimulus as well. The subject has been touched upon earlier in chapter 2.12, and in the next section the MPEG-4 FA standard is covered, since there is an MPEG-4 FA interface implemented in FAV.

3.13 MPEG-4 Facial Animation standard

The MPEG-4 ISO/IEC standard is a huge standard, covering many aspects of multimedia transfer over networks. MPEG stands for Moving Picture Experts Group, the body developing the standard. The standard is made for transferring audiovisual data over networks, and this content is referred to in the standard as media objects. One of the standard's target areas is coding standards for graphical applications, which basically are different parameter sets dedicated to animated characters, and it is structured like this. At the top of the structure we have the MPEG-4 face and body animation (FBA), which consists of two different objects, representing different types of data streams. The first object, called system, is a definition stream and the second object, called visual, is the parameter stream. In the application, these objects are used to define and control a geometric model representation in a neutral pose. A nice feature for graphic applications is that these parameters are not model specific, so as long as the model in use is fully defined for MPEG-4, it can be used.
In other words, the model representations do not have to be similar to each other. FBA describes definitions and parameters for both faces and bodies; naturally, since this thesis is about FA, the focus will be on faces. The definition parameters used for defining faces are called facial definition parameters (FDP), and the parameters for animation control are called facial animation parameters (FAP). Figure 3.17 shows what has been described in this text.

Figure 3.17: MPEG-4 FA Face and body animation structure

The body definition parameters and body animation parameters have been added within

parentheses, merely to show their place in the FBA structure, but the parameters covered by this thesis are obviously the ones concerning FA.

Facial Definition Parameters

These parameters adapt the model representation to the FAP stream. They are usually sent just once, and their purpose is to define facial attributes determining things like how much incoming parameters from the control stream will affect the various parts of the model. This is done by defining distances in the model, which has been positioned[16] according to the following guidelines:

- The model uses a neutral pose.
- Gaze is in the direction of the z-axis.
- Eyelids are tangent to the iris.
- The pupil is one third of the diameter of the iris.
- Mouth is closed and the line of the lips is straight between the mouth corners.
- Upper and lower teeth touch each other.
- The tongue is flat and horizontal, and its tip touches the line between the upper and lower teeth.

Figure 3.18 shows the mentioned distances, and table 3.1 shows how the scalar values, facial animation parameter units (FAPU), are defined. These FAPUs are calculated for the model and then used to normalize incoming FAP values.

FAPU   Spatial Reference       Scalar
IRISD  Iris Diameter           IRISD0/1024
ES     Eye Separation          ES0/1024
ENS    Eye Nose Separation     ENS0/1024
MNS    Mouth Nose Separation   MNS0/1024
MW     Mouth Width             MW0/1024
AU     Angle Unit              10^-5 radians

Table 3.1: FAPU definitions

Exactly how these distances and units are used by FAPs is described in the next chapter. Another thing specified is something called feature points (FP), which are grouped points in the model. There are 84 FPs in an MPEG-4 FA compliant model. They are used to define where facial attributes are located in the model. Incoming FAPs use these points as references in movement calculations, and some of the points are the actual control points in the face that are moved by the incoming FAPs.
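Table 3.1 can be turned into a small computation: measure the neutral-face distances on the model once, divide by 1024 to obtain the FAPUs, and scale every incoming FAP offset by the appropriate unit. A sketch with hypothetical function names and example distances:

```python
def compute_fapus(irisd0, es0, ens0, mns0, mw0):
    # The neutral-face distances (IRISD0, ES0, ...) are measured on the
    # model; dividing by 1024 yields the units used to scale FAP values.
    return {
        "IRISD": irisd0 / 1024.0,  # iris diameter
        "ES":    es0    / 1024.0,  # eye separation
        "ENS":   ens0   / 1024.0,  # eye-nose separation
        "MNS":   mns0   / 1024.0,  # mouth-nose separation
        "MW":    mw0    / 1024.0,  # mouth width
        "AU":    1e-5,             # angle unit, in radians
    }

def denormalize(fap_value, fapu):
    # An incoming FAP value is an integer offset expressed in FAPUs;
    # scaling it by the model's FAPU gives a model-space displacement.
    return fap_value * fapu
```

Because the units are derived from the receiving model's own proportions, the same FAP value produces a proportionally equal displacement on differently shaped models, which is the point of the normalization.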
In figure 3.18, shown below, all FPs have been placed at their positions in a human face.

Figure 3.18: MPEG-4 FA Feature points placement

Facial Animation Parameters

There are 68 different parameters defined in the MPEG-4 FA standard, see tables D.1–D.5, and they correspond to some of the FPs seen in the previous chapter. The idea of FAPs was to create a set of parameters powerful enough to describe every movement that can be perceived in a human face, and although people within FA are influenced

by various new techniques, the one that FAP has been based upon is the idea of MPAs presented by Kalra[23], explained earlier in this thesis. The purpose of MPA is to simulate the effects of perceptible muscle actions in a human face, and FAPs and MPA are very similar. However, MPA values are specified for a unique model, and it is impossible to use the same parameter values between different models and be guaranteed an expected result. This makes them practically useless for the purposes of the parameterizations offered in standards such as the MPEG-4 FA standard. FAP is an extension of the MPA concept, where the parameters are supposed to work on any MPEG-4 FA compliant model. This is done by using normalization methods. During the normalization procedure, the FAPUs mentioned in the previous chapter are used. FAPUs are derived from spatial references in the model, and their purpose is to scale the incoming FAP values, adapting them to the model in use at the receiving end. A FAP value is an offset value, which determines how much the corresponding FP in the model's surfaces is displaced compared to its neutral position. In its non-normalized form the parameter value would not be useful; it has to be normalized first. The direction of the movement specified by a FAP, and which FAPU to use, are listed in the FAP tables, see table D.3. There are three groups of FAPs. First there are two high-level FAPs describing visemes and expressions. Visemes describe the mouth's appearance during pronunciation, and expressions describe the current emotion. Visemes are classified as FAP1 and expressions as FAP2. Low-level FAPs, or FAP3, are the parameters discussed above, the ones corresponding to FPs in the model's surface. Visemes are listed in table D.1 and expressions in table D.2. The last group, FAP3, is listed in tables D.3–D.5.
Figure 3.19: MPEG-4 FA FAP frame definition

The transmission of FAPs is done by sending FAP frames with a corresponding bitmask. Each frame contains offsets for every FAP, and the mask consists of Boolean values determining whether the corresponding FAP value is going to be used. The short example shown in figure 3.20 explains how the FAP format works.

Figure 3.20: MPEG-4 FA FAP file format

Header: The first line is the header. The number 2.1 is the FAP version number. Joy is the base name for the animation, allowing a FAP loader to attempt loading a FAP file named Joy in the current directory. It is also possible to use a search path in front of the filename, specifying some other location than the current directory. 25 is the frame rate, which means that the FAP frames will be sent

at a rate of 25 FAP frames per second. Last in the header is a sequence number telling how many frames there are in the FAP file; in this simple example there is only one.

Bitmask: The bitmask contains Boolean values determining which FAPs are used in the corresponding FAP frame. There are always 68 bits present in the bitmask, representing the entire FAP frame. An occurrence of a 1 at position x in the frame implies that FAP x is active, and the value that will be used is retrieved from the FAP values listed immediately below the bitmask. In this example the set of used FAPs would be {FAP2, FAP4, FAP6}.

FAP values: The first number is a sequence number, identifying the current FAP frame. The rest are offset values used in the current FAP frame. The mapping is trivial: the first occurrence of a 1 in the bitmask corresponds to the first offset value, the second occurrence to the second value, and so on.
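The header, bitmask and value lines just described can be read with a few lines of code. This is a hypothetical minimal reader for the textual format as explained above, ignoring search paths and high-level FAP semantics:

```python
def parse_fap(text):
    """Parse a textual FAP sequence: a header line
    <version> <base name> <frame rate> <frame count>, then per frame
    a line of 68 mask bits and a line with a sequence number followed
    by one offset per set mask bit."""
    lines = [ln.split() for ln in text.strip().splitlines() if ln.split()]
    version, name = lines[0][0], lines[0][1]
    fps, nframes = int(lines[0][2]), int(lines[0][3])
    frames = []
    for f in range(nframes):
        mask = [int(b) for b in lines[1 + 2 * f]]
        values = lines[2 + 2 * f]
        seq = int(values[0])
        offsets = iter(int(v) for v in values[1:])
        # Map the i-th set bit to the i-th offset; FAPs are 1-indexed.
        frame = {i + 1: next(offsets) for i, bit in enumerate(mask) if bit}
        frames.append((seq, frame))
    return version, name, fps, frames
```

For the example in figure 3.20, with bits set at positions 2, 4 and 6, this yields the active set {FAP2, FAP4, FAP6} together with their offsets.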

Chapter 4

Facial animation viewer (FAV)

This chapter describes the practical part of this thesis, the implementation of FAV, which is aimed towards telepresence, more specifically the receiving ends in a teleconference. The application is based on some of the theories described in chapter 3.

4.1 Overview

FAV is basically made up of two major parts, a control part and its animation engine. The models used in FAV consist of textured polygon surfaces, a hierarchical skeleton and the additional objects that control the skeleton's movements. The skeleton is bound to the model's surfaces through weightmaps. A vertex blending technique, also called skinning in Maya, is used during the deformation of the surfaces. Finally the model is exported from Maya into the file format that FAV reads. The following sections describe how Maya has been used for preparing and exporting the models. Poses that are applied to the model are generated from the control part, which consists of a GUI and an MPEG-4 FAP interface. These poses, along with some auxiliary data, are then used as arguments to the animation engine, which processes them and uses the result to deform the model geometry. Chapter 4.5 describes the implementation.

4.2 Maya

Maya is a relatively advanced computer graphics environment. The company behind Maya is Alias Wavefront, and since the launch of Maya in 1998 their product has become very popular. Covering Maya is not within the boundaries of this thesis; however, touching on some of the basics about Maya cannot be avoided in order to explain how it has been used. Maya is tremendously powerful and customizable. It offers satisfying programming abilities by including extensively documented programming interfaces, and the data in a Maya scene is efficiently organized. This makes querying for data in a scene simple, and that is the main reason why Maya was selected in this thesis, even if there are other environments that also would work well for this sort of task.
The information about Maya in this thesis relies on these two sources[4][14]. A reader interested in learning more about Maya should visit Alias Wavefront's homepage[4].

Basics of Maya

In a Maya scene all sorts of existing data is stored as nodes; a node consists purely of its attributes and a computing function. The concept is object oriented in the sense that attributes in these nodes are shielded from the computing part of Maya, which in this thesis is referred to as the Maya engine. Internally, nodes have their own attributes and a computing function that operates on the input attributes and stores the result in the output attributes. The Maya engine is not involved in the nodes' internal operations; it considers nodes to be black boxes and works with the network formed when nodes are connected to each other. However, there is a slight discernment made by the Maya engine among the nodes: connected nodes hold either input or output values. This determines the direction of the dataflow between the nodes in the network. As an example, output attributes from a previous node are connected to the input attributes of the current node. The computation function in the current node works with values stored in its input attributes and produces values stored in its output attributes. This concept is shown in figure 4.1.

Figure 4.1: Nodes in Maya

Dependency graph

At the core of the Maya engine is a patented technology called the Dependency graph, which basically describes relations between the attributes of all nodes in a scene. For example, in a network of connected nodes, the value in an output attribute of node j is used as the value of an input attribute of node (j + 1). Maintaining the connections defined by the various nodes, such as the one previously described, is basically what the Dependency graph does. Typical attributes might be coordinates, geometric details of the object, surface information and timing values. A specific type of node that needs to be presented is DAG (directed acyclic graph) nodes.
Technically a DAG node is an ordinary node, but it contributes the ability to create parent-child relations among the other nodes in the scene. As will be seen, parenting nodes is very useful for some purposes in this thesis. In Maya every 3D object automatically has two DAG nodes, a shape node and a transform node. By parenting objects to each other, child nodes are affected by the transformations of their parents. In figure 4.2 a simplified hierarchy of a facial rig is shown, where the neck is the parent of the eyes and the jaw. If the neck is transformed by some rotation, both eyes and the jaw are also rotated. Two locator objects have been parented to the ikhandles in the eyes and the jaw. When the locators are transformed, the goal positions in the ikhandles are also transformed. This causes the ikhandles to solve the raised IK problem. This is how the skeleton is re-positioned, which in turn causes deformations in the bound surfaces.
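The parent-child propagation just described can be sketched as follows: each transform node stores a local rotation and translation, and a child's world transform is obtained by composing its local transform with its parent's. This is a 2D sketch with hypothetical names, not Maya's actual API:

```python
import math

def world_transform(name, nodes):
    # nodes maps a node name to (parent name or None, local rotation
    # angle, local translation). A child's world transform is its local
    # transform carried through the parent's world transform.
    parent, angle, (ox, oy) = nodes[name]
    if parent is None:
        return angle, (ox, oy)
    p_angle, (px, py) = world_transform(parent, nodes)
    c, s = math.cos(p_angle), math.sin(p_angle)
    return p_angle + angle, (px + c * ox - s * oy, py + s * ox + c * oy)
```

Rotating a neck node therefore moves everything parented beneath it: with the neck rotated 90 degrees, a jaw placed one unit in front of the neck ends up one unit above it.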

Figure 4.2: The hierarchy of a basic face rig in Maya

The idea of connected nodes in a Dependency graph is very flexible, since nodes can easily be added, removed and reconnected with other nodes. Among other things, this also helps a lot when retrieving information from the scene and sorting it into useful data structures, since it is easy to traverse the current nodes of interest and retrieve the values of their attributes.

Programmability

In Maya there are two possible programming interfaces, a C++ API and a script language called Maya embedded language (MEL). They are not mutually exclusive, since MEL commands can be called from the C++ API using inline MEL commands in the code. Nodes can be accessed through these two interfaces, and that offers great possibilities to create user-specific features in Maya. MEL is an interpreted language, so there is no need for compilation and linking, but overall it is quite slow compared to the C++ API and it is not suitable for large-scale projects. In such cases it is best to create a plug-in with the C++ API; as mentioned, it is possible to use MEL from the C++ API. The GUI used in Maya interacts with the Dependency graph using MEL. The user actions performed in the GUI are translated into MEL commands, and it is also possible to switch into batch mode in Maya and run everything directly using MEL commands only. However, the most common method is to use Maya's script editor. This possibility proved to be very useful, since the results and feedback of the performed MEL commands are instant.

4.3 MEL exporter

It was necessary to create an exporter, which collects the required data in a scene and exports it into a custom-made file format. This made the development of the load module in FAV easier and less sensitive to which version of Maya is being used, since some formats in Maya tend to differ between versions and the current export settings.

The exporter is written entirely in MEL and it exports the required model data from the scene. The specified format is simple and straightforward, and figure 4.3 shows a more detailed view of the exported data.

Figure 4.3: Exported data

The script uses the abilities described in chapter 4.2 to query the objects for their attributes. It basically traverses the scene in a specified order and exports the information to an ASCII file, which FAV can read. The exporter turned out to be quite slow, due to the huge amount of data it traverses and because MEL is interpreted and slow. But since there are no specific time limits for the exporter, optimization of this part has been left out.

4.4 Setting up a rig

The term rig refers in this thesis to a model whose attributes, such as skeleton, influence weights on surfaces, control instances and pre-made positioning, have been set up properly. How well a rig works for a particular model representation is to a large degree an aesthetic matter and cannot be explained in a scientific way. Additionally, in the average case it takes years and lots of practice to become a good animator. The model representation used in FAV consists of textured polygon surfaces, and these were included on a CD which came with a book[22]. It covers FA from a more aesthetic view and is strongly recommended to anyone interested in creating models used in FA. The following sections show the procedure of rigging using a simple example, constructing a rig that can open its mouth; the concept is then extended.

Placing the skeleton

Assume there is a complete geometric representation of a face and it has been implemented with a set of polygon meshes. The task is to generate expressions with it. A basic property of the skin in FA is that it should slide along the surface, so a skeleton must cause the skin to slide along the surfaces of the model in a way that looks as natural as possible.
There is no given set of rules on how to place skeletons to achieve good results, except that the face must behave in a convincing way. It is a matter of trial and error until the expressions appear acceptable.

Continuing with the example, consider a jaw joint, shown below in figure 4.4. It has been placed so that when it is rotated by an angle v, the jaw drops down and pulls back a little bit. Even if this example is very simple, it is basically this skeleton positioning technique that is used all the way.

Figure 4.4: Placement of the jaw, Maya screenshot

How the above reasoning looks in Maya is shown in figure 4.5. The leftmost picture shows the surfaces of the model, and there are no other attributes of concern. In the rightmost picture, a simple skeleton has been added. The skeleton consists of five joints, with the root joint at the bottom, close to where the shoulders should be. The jaw joint, the one that is going to be rotated in this example, is placed in front of and a little below the ear.

Figure 4.5: Placing the jaw, Maya screenshot

Placing the skeleton kinematics

Movements in the face are caused by rotating joints, as discussed earlier. The motion is initiated by a change in a control point (a parented locator in Maya), which causes equivalent displacements of the goal positions in some ikhandles, raising IK problems. When the required rotations have been calculated, they are applied to the

affected joints, and the skeleton repositions itself and causes deformations in the model's surfaces. Continuing with the example, the things left to do once the skeleton is in place are adding a control point and an ikhandle to the skeleton, then parenting them, so that when the control point is transformed, the goal of the ikhandle is transformed by the same amount. The next step is to add the jaw joint's influences on the vertices of the bound meshes. The skeleton and the mesh are bound using smooth bind in Maya, discussed earlier. Influence weights are applied with a paint tool for weights, or by selecting vertices and manually typing in the weights. The middle picture below shows the result; the area affected by the jaw joint's transformations is marked white. Now let us open the jaw. By translating the control point down a few units, the goal for the ikhandle is changed and starts to rotate its joint chain, which currently consists only of the jaw joint. The affected part of the mesh will rotate around the jaw joint, using the influence values to scale the rotation of the vertices; the result is that the mouth opens, as shown to the right in figure 4.6.

Figure 4.6: Setting up the jaw, Maya screenshot

A more appealing graphical result is shown in figure 4.7, first the neutral relaxed pose and second when the mouth has opened.

Figure 4.7: A jaw opens, Maya screenshot

It must be stated that this example is heavily simplified, and looking at the final result, the mouth has a clearly box-shaped look. In reality, this painting procedure usually means that one has to select each vertex, at least in the trickier areas such as the area around the mouth, and manually type in the value for each joint bound to the surface. The paint tool is a convenient solution, but it is difficult to precisely

predict the result when the influence distribution is done this way.

Creating poses

The model must be able to support both pre-made and dynamic poses. The advantage of having a dynamic face is also a drawback, since it is quite difficult to generate expressions on the model with the same visual quality as pre-made poses. The solution is to use pre-made poses as background expressions and still keep the ability to apply dynamic poses, generated from FAP streams. These pre-made poses of the model are created in Maya and their appearance is therefore much easier to control. Furthermore, the parameterization in the MPEG-4 standard assumes the feature of background expressions, the high-level FAP1 and FAP2 parameters, see chapter 3.13. FAV is capable of an arbitrarily sized set of pre-made poses and of blending between them, as described in chapter 3.11. However, MPEG-4 restricts the amount of visemes to the fourteen listed in table D.1, and the available expressions are the six listed in table D.2. As an example, a pre-made pose for sadness is shown below in figure 4.8.

Figure 4.8: Neutral and sad poses, FAV screenshot

The first image shows the model in its neutral position; the model is totally relaxed. In the second image, the pose sadness has been applied to the skeleton. Setting up a rig is an extremely tedious task; for the model used in FAV, the described task took about three days to complete, and the visual quality of the resulting animations can definitely be discussed. In most cases professional animators use considerably more time completing their models, so there is an excuse.

4.5 Implementation

Figure 4.9 shows an overview of the application. Poses are generated in its control part and they are fed into the animation engine, which generates expressions on the model. The animation control is done by user settings in the GUI or through an incoming FAP stream, generating poses for the skeleton.
The second part consists of a blender, which takes poses from the control part as arguments and blends them into a resulting composite pose. The blender's output pose is divided up into smaller animation objects. Each of these objects works with some part of the model's skeleton, collectively causing

the skeleton to reposition itself and thereby deforming the model into an expression.

Figure 4.9: Overview of FAV

Each part of the implementation is explained in more detail in the following sections.

Control

The control part generates poses that are passed to the animation engine. As previously mentioned, poses are created in two ways: either by using the GUI or through an MPEG-4 FAP interface.

GUI

The GUI is implemented with GLUI, which is a C++ library that provides graphical user controls to OpenGL applications. GLUI is simple to use and includes the most common features of GUIs. It offers a quick and functional way of relieving the keyboard from bound keys by supplying buttons, checkboxes and so on. It is probably not the first choice for a software project of a larger scale, since it does not offer much else and it appears to have some bugs. But for research purposes, where the main purpose is not a proper GUI, it is well suited. Figure 4.10 shows a screenshot of FAV. The GUI works directly above the blender, allowing the user to run the pre-made poses of the model at some intensity or to customize new expressions.

Selecting a pose is done by clicking the checkbox next to it and then choosing some intensity between 0.0 and 1.0. Several poses can be used; the blender will mix these poses into a single composite pose and feed it into the solver.

Figure 4.10: GUI and main window in FAV

MPEG-4 FAP interface

The MPEG-4 interface in FAV translates FAP frames into poses and uses them to update the model as done with any other pose. The actual FAP parameters are currently just read in from disk into a queue, using a timing functionality which adapts the rate of the parameter stream to the frame rate value specified in the FAP file's header. The actual translation of low-level FAPs of type 3 is done in the model part, since it holds the animator-defined information, such as which FAP value corresponds to which control point. High-level FAPs, of type 1 or 2, correspond to pre-made poses that have to be supported by the model. The translations of type 1 or 2 FAPs are solved by table lookups, retrieving the pre-made poses, while the FAP3 values are converted into poses and then applied to the model.

4.7 Animation engine

The animation engine consists of the blender, an animation layer, an IK solver and transforming functionality, which is used when deforming the model. Poses from the control part, described in the previous section, are used as argument poses in the blender. The resulting output composite pose from the blender is the one used when deforming the model. FAV uses an approach where a pose consists of parameters, essentially offset values, which correspond to the model's control points. These parameters are applied

to the neutral position of the model's control points. This yields new positions for the control points and results in a repositioning of the skeleton, as discussed earlier. The actual execution flow in the animation engine is shown in figure 4.11.

Figure 4.11: Execution flow in FAV

The following sections explain this in more detail.

Blender

The function of the blender is to receive a set of poses from the control part and combine them into a resulting pose R, which is applied to the model. This is shown in figure 4.12.

Figure 4.12: Add and clamp blender in FAV

Inspired by how Maya treats blend shapes, the blender is of an additive kind. Poses are implemented as frames of translation offsets, and the blender adds the corresponding offsets in the involved poses into the final composite pose. If the resulting displacement in a control point violates its user-defined limits, the translation is clamped to the control point's minimum or maximum translation value. The method of constraining the control points in their translation instead of constraining the joints in their rotation seems to work well. Additionally, it does not require much calculation. When the corresponding offsets have been added, they are clamped according to the translation limits of the control point; the restriction has thus been moved out of the skeleton and into the blending part.

Animation Layer

The animation layer is a vector containing animation objects. The reason for the layer is maintenance of the animation objects specified by a pose. Once an output pose from the blender has been calculated, it is divided up into a set of animation objects, which are inserted into the animation layer. An animation object exists in the layer as long as the layer is not cleared from a higher level or until the animation object has been fully executed. The purpose of an animation object is to cause motion in the model's surfaces by moving some parts of the skeleton. They maintain the required data for this task. The data consist of time periods, time steps and the required start and stop orientations for the rotation of the affected joints. The animation objects use their data when calculating the arguments needed in the model's transform methods. In more detail, each animation object updates the position of a control point in the model by adding the provided offset from the blender's output pose. This causes IK problems to occur in the model's corresponding ikhandles. An IK solving method in the model calculates the required start and stop orientations for the rotation of the concerned joints and returns this information to the animation object, which then combines this result with its timing data and begins to animate the affected skeleton parts. Collectively, the animation objects determine the in-between poses of the skeleton and thereby also the appearance of the model's surfaces during the animation. These in-between poses are generated by interpolation with an increasing time coefficient t, using a spherical linear interpolation method. An animation object initialises itself by gathering the orientations for each ikhandle related to the displaced control point. This is done by querying the IK solver, retrieving start and goal orientations, see chapter 3.6.
The time coefficient t determines the velocity of the animation. The value of t is based on the remaining part of the rotational distance and the given time period, where the rotational distance is the distance between the start and stop orientations. Short distances and plenty of time left lead to small increments in t and thereby more in-between poses; this results in a smoother animation, because more frames are displayed. Long distances and lack of time result in larger increments in t and fewer in-between poses, which leads to a faster but jerkier animation.

These are the three main methods of an animation object, besides the additional initialisation methods, which set up things like timers, time periods and distances:

Express(): Transforms the skeleton from its current pose, which is the neutral pose unless an ongoing animation is interrupted, into the desired output pose from the blender.

Hold(): Holds an expression during a time period.

Relax(): Relaxes the model from its current pose back towards its neutral pose. If the relaxation is not interrupted by a new output pose from the blender and the animation objects are allowed to complete their relaxation, the model resets its geometry. This is needed because values used in the calculations tend to degenerate due to floating-point round-off errors and must be reset every once in a while.
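A minimal sketch of how such a time-coefficient update could look, assuming a per-frame update and an illustrative scaling constant; neither the function name nor the constant is taken from the FAV sources:

```cpp
#include <algorithm>
#include <cassert>

// The per-frame increment of t shrinks when little rotational distance
// remains relative to the time left (many small steps, smooth animation)
// and grows when a long distance must be covered in little time (few
// large steps, faster but jerkier). The 0.01 factor is illustrative.
double nextT(double t, double remainingAngle, double remainingTime) {
    if (remainingTime <= 0.0)
        return 1.0;                                   // out of time: jump to the goal pose
    double dt = (remainingAngle / remainingTime) * 0.01;
    return std::min(1.0, t + dt);
}
```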

IK solver

The IK solver is integrated in the model part of the animation engine. An animation object delivers its offset to the model part, updating the position of a control point. This raises IK problems in the corresponding ikHandles. The IK solving functionality calculates the solutions to these problems and returns them to the animation object that caused them. A solution consists of the required orientations for the rotation in the ikHandle's joint chain, which minimize the distance between the ikHandle's effector and its goal position, as explained earlier. The calculation is performed by a CCD IK solving algorithm, as explained in chapter 3.6. The procedure does not update any geometric values in the model; instead the algorithm works virtually with the affected ikHandles in the skeleton. A more detailed explanation of the implementation follows.

Virtual calculations

The model part always keeps copies of some of its geometric values: one copy contains its neutral values, never to be changed, and another copy contains the values used during rendering. In the most common case, the calculation is based upon neutral values. Occasionally, however, an ongoing animation is interrupted. This happens when the user triggers another expression from the GUI before the ongoing animation is completed. In this case, the model's current rendering values are used as neutral values in the calculations. The same applies during the relaxing procedure if the current expression is not derived from neutral values. When the animations are controlled by an MPEG-4 FAP stream, all calculations are based on the model's neutral values, since this is required by the MPEG-4 FA standard. In every case, the results are applied to the values used in the rendering procedure.
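The CCD solving algorithm named above (chapter 3.6) can be illustrated with a simplified two-dimensional sketch. This is a hypothetical toy version, not the quaternion-based FAV solver: the joint chain is reduced to point positions and single rotation angles.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec2 { double x, y; };

double angleOf(Vec2 a) { return std::atan2(a.y, a.x); }

// Rotate point p around pivot by angle th.
void rotateAbout(Vec2& p, Vec2 pivot, double th) {
    double dx = p.x - pivot.x, dy = p.y - pivot.y;
    p = { pivot.x + dx * std::cos(th) - dy * std::sin(th),
          pivot.y + dx * std::sin(th) + dy * std::cos(th) };
}

// CCD: per pass, rotate each joint (outermost first) so that the effector,
// the last point of the chain, swings onto the line from the joint to the
// goal. Stop after maxIter passes or once the effector is within eps.
void ccdSolve(std::vector<Vec2>& chain, Vec2 goal, int maxIter, double eps) {
    for (int it = 0; it < maxIter; ++it) {
        Vec2& eff = chain.back();
        if (std::hypot(eff.x - goal.x, eff.y - goal.y) < eps) return;
        for (int j = (int)chain.size() - 2; j >= 0; --j) {
            double dth = angleOf({goal.x - chain[j].x, goal.y - chain[j].y})
                       - angleOf({eff.x - chain[j].x, eff.y - chain[j].y});
            for (int k = j + 1; k < (int)chain.size(); ++k)
                rotateAbout(chain[k], chain[j], dth);  // rotation propagates downstream
        }
    }
}
```

Each pass applies an adjusting rotational change to every joint, and these changes accumulate over the passes, which is the R = R + δθ behaviour described in the following section.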
When the IK solver iterates the CCD algorithm, it needs to base its calculations on values that affect neither the model's neutral values nor its current rendering values; therefore the method operates virtually on the skeleton, using temporary values. Hence, the IK solver method makes no updates to the model.

CCD

As described in chapter 3.6, this is an iterative approach. Each iteration works from the outermost joint to the innermost joint. At each joint, the method attempts a rotation such that the effector position of the current ikHandle intersects the vector between the joint and the goal position. The IK solver stops when the convergence of the joint chain towards the goal is insufficient or the maximum limit of N iterations has been reached. As the algorithm iterates, the current rotation in each affected joint receives an adjusting rotational change δθ, causing the joint chain to converge towards the goal. These rotational changes are accumulated in each individual joint into a final rotation R, as shown in equation 4.1.

    R = R + δθ        (4.1)

Joints have their own local frames of reference. A joint considers itself the origin in space, around which the affected geometry rotates. The spatial reference of a joint's local frame is implemented with three points that maintain the same relative distance vector between themselves and the joint. Every geometric entity affected by the joint's rotations must

have its position in the Cartesian frame converted into a position relative to the joint's local frame, where it is rotated. The resulting position is then converted back into the Cartesian frame. The IK solver only works with the skeleton and temporary values, but conversions between the joints' local frames and the Cartesian frame are also used during the actual transformation of the model. The conversion formulas are shown in equations 4.2 and 4.3; their proofs are presented in the corresponding references listed next to them.

The local frame of reference is spanned by an orthonormal basis: three unit vectors with the joint as their origin. The frame is updated with the same rotations as the joint it belongs to, in case some of the joint's preceding parent joints have rotated. These rotations do not affect the joint's own local rotation. Figure 4.13 shows a joint and its local frame in the Cartesian frame.

Figure 4.13: Local frame of reference

The following formula converts from the Cartesian frame into an arbitrary joint's local frame of reference [6]. We seek the local position p_l of a point p_w given in the Cartesian frame. The local frame is defined as F(u, v, w, O), where (u, v, w) are linearly independent unit vectors forming a basis for the local frame and O is the local frame's position in the Cartesian frame. p_l = (x_l, y_l, z_l) is retrieved as follows:

    D   = 1
    D_1 = <p_w - O> · u
    D_2 = <p_w - O> · v
    D_3 = <p_w - O> · w
    x_l = D_1 / D,   y_l = D_2 / D,   z_l = D_3 / D        (4.2)

Equation 4.3 converts from an arbitrary joint's local frame of reference into the Cartesian frame [3]. We seek the world position p_w = (x, y, z) in the Cartesian frame of a point p_l = (x_l, y_l, z_l) given in the joint's local frame F(u, v, w, O), where (u, v, w) are linearly independent unit vectors forming a basis for the local frame and O is the local frame's position in the Cartesian frame.

p_w = (x, y, z) is retrieved as follows:

    (x  y  z  1) = (x_l  y_l  z_l  1) | u_x  u_y  u_z  0 |
                                      | v_x  v_y  v_z  0 |
                                      | w_x  w_y  w_z  0 |
                                      | O_x  O_y  O_z  1 |        (4.3)

Constraints

As mentioned before, FAV has two methods that constrain joints locally, although they have been left out of the final version of the implementation. There is no real need for them, since constraints are handled at a higher level by limiting the translation of the control points. Additionally, extreme situations that trouble the IK solver are rare: the joint chains mostly consist of no more than one or two joints, and these joints generally do not rotate much. The IK solver finds the best solution sufficiently often. This justifies excluding local joint constraints, given the accuracy and speed problems of the two constraining methods. Both methods are nevertheless briefly described in this section.

The first method clamps the rotations around the local axes. Consider a ball joint with three rotational DOFs, limited to the range −π/2 to π/2 around each of its three local axes x, y and z. The IK solver converts the effector and goal positions into the joint's local frame and calculates the joint's required rotation quaternion. The joint is virtually rotated by the suggested rotation, yielding an unconstrained rotation. The rotation is then converted into an Euler-angle representation and compared to the joint's rotational limits. If any of the three angles is out of bounds, that angular value is simply clamped to the violated boundary value. The constrained rotation is then converted back from the Euler-angle representation into a quaternion and used when rotating the joint. The problem with clamping the angular values like this is that the IK solver may suggest a rotation that is not necessarily optimal.
The algorithm will probably still converge, but depending on how the angular intervals for the joint have been set, the results may vary.

The other method searches the angular intervals in which the joint is free to rotate. It divides the three intervals into a number of segments and searches through all possible x, y and z combinations. The segment combination that yields the closest distance between the goal and the effector is kept. Next, the method calls itself recursively, this time dividing the angular intervals of the previously kept segment combination into smaller segments and searching them for a new best combination in the same manner. The method continues like this for M iterations or until the distance between the effector and the goal is sufficiently small, narrowing down the angular intervals and keeping the best combination. This method works quite well on small angular intervals, but it requires many calculations that could be spent more wisely on other parts of the application.

IK solver stop conditions

Although the stop conditions for the IK solver have been partially mentioned, they are summarized here. The method stops iterating after N iterations or when one of the goal conditions becomes true. The goal conditions for the IK solver used in FAV are:

- The distance between effector and goal is smaller than a threshold value.
- The convergence, i.e. the improvement of the current solution compared to the previous one, is not fast enough. This is also determined by a threshold value.

When the solutions to the IK problems have been calculated, the resulting quaternions representing the start and stop orientations for the joints' rotations are reported back to the animation object that caused the IK problem.

Transformation

The transformation of the model is controlled by the animation objects, which maintain the data used during the transformation. However, the actual interpolations are performed in the model's own transformation methods, which receive the quaternions representing the start and stop orientations for the rotation, together with the time coefficient t, which determines the fraction of the total rotation to apply. This generates in-between positions of the skeleton and its bound surfaces, as described earlier. The actual transformation of the model is done in two steps: first the skeleton is repositioned, and then its bound surfaces are deformed accordingly. All transformations in the skeleton are rotations, using quaternions as described in chapter 3.9.

Skeleton

When a joint rotates, all its succeeding child joints are rotated around that joint by the same amount. As mentioned in chapter 3.2, a skeleton is structured as a tree, and the rotations are passed down the tree using a recursive depth-first method, starting at the root joint. Technically, this is solved by storing the joints' rotations in a vector; a copy of this vector is passed as an argument in the recursive calls, pushing the joints' rotations down towards the leaf joints.
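The recursive depth-first propagation of rotations can be sketched as follows. This is a hypothetical simplification where a single rotation angle stands in for the quaternion used in FAV, and the names are illustrative:

```cpp
#include <cassert>
#include <vector>

struct Joint {
    double localRotation = 0.0;
    double worldRotation = 0.0;          // filled in by the traversal
    std::vector<Joint> children;
};

// The accumulated-rotation vector is passed by value, so each recursive
// call receives its own copy, exactly as the text describes.
void propagate(Joint& joint, std::vector<double> accumulated) {
    accumulated.push_back(joint.localRotation);
    double total = 0.0;
    for (double r : accumulated) total += r;       // compose all ancestor rotations
    joint.worldRotation = total;
    for (Joint& child : joint.children)
        propagate(child, accumulated);             // depth first, root towards leaves
}
```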
Figure 4.14 illustrates this: the first image shows how the depth-first search traverses a skeleton, and the second shows how the rotations are sent from parent joints to their children. The rotating joints are highlighted with a square shape.

Figure 4.14: Skeleton rotation in FAV
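The frame conversions of equations 4.2 and 4.3, which are used during this transformation, can be sketched with the following hypothetical helpers (the names are illustrative):

```cpp
#include <cassert>

struct Vec3 { double x, y, z; };

double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Local frame F(u, v, w, O): an orthonormal basis (u, v, w) with origin O.
struct Frame { Vec3 u, v, w, O; };

// Equation 4.2, world to local: with an orthonormal basis the determinant
// D is 1, so each local coordinate reduces to a projection onto a basis vector.
Vec3 worldToLocal(const Frame& f, Vec3 pw) {
    Vec3 d{pw.x - f.O.x, pw.y - f.O.y, pw.z - f.O.z};
    return { dot(d, f.u), dot(d, f.v), dot(d, f.w) };
}

// Equation 4.3, local to world: the row-vector times matrix product written out.
Vec3 localToWorld(const Frame& f, Vec3 pl) {
    return { pl.x * f.u.x + pl.y * f.v.x + pl.z * f.w.x + f.O.x,
             pl.x * f.u.y + pl.y * f.v.y + pl.z * f.w.y + f.O.y,
             pl.x * f.u.z + pl.y * f.v.z + pl.z * f.w.z + f.O.z };
}
```

The two functions are inverses of each other whenever (u, v, w) is orthonormal, which is exactly the round trip described in the text: convert into the joint's frame, rotate there, convert back.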

Surfaces

When the skeleton has been updated, its bound surfaces are next. The vertices of the surfaces are bound to a set of specified joints with some influence, as described earlier. This is implemented by maintaining, for each vertex, a relation telling which joints it is bound to, along with the corresponding influence values. The new position of a vertex in a surface is calculated according to the equation given earlier.

Render

OpenGL has been used in the visualization part of FAV; the book [28] is the main OpenGL information source used in this thesis. Additionally, the OpenGL utility toolkit (GLUT) has been used for setting up the rendering window. The render function traverses the model's geometric rendering values, which hold the current positioning of the model. Depending on which rendering mode is active, different geometric details are rendered. FAV has three rendering modes:

Skeleton mode: Renders the joints, the additional frames of reference and the bone vectors connecting the joints.

Wireframe mode: The same as skeleton mode, but also renders the wireframe representation of the surfaces.

Textured mode: The default mode; renders a textured version of the model.
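The vertex blending referenced above can be sketched like this. The names are hypothetical, and the position each joint's motion alone would give the vertex is assumed to be precomputed:

```cpp
#include <cassert>
#include <vector>

struct Vec3 { double x, y, z; };

// One binding per influencing joint: the joint index and its influence weight.
struct Binding { int joint; double weight; };

// The vertex's new position is the influence-weighted sum of the positions
// produced by each joint it is bound to.
Vec3 blendVertex(const std::vector<Binding>& bindings,
                 const std::vector<Vec3>& positionPerJoint) {
    Vec3 out{0.0, 0.0, 0.0};
    for (const Binding& b : bindings) {
        const Vec3& p = positionPerJoint[b.joint];  // vertex as moved by this joint
        out.x += b.weight * p.x;
        out.y += b.weight * p.y;
        out.z += b.weight * p.z;
    }
    return out;
}
```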

Chapter 5

Software analysis

This chapter contains development information about FAV, along with an analysis of the result.

5.1 Development information

This section summarizes the development information of the implementation. The textured surfaces used were originally created in Maya; they can be found on the CD included with the book [22]. The models have been rigged using Alias Wavefront Maya, versions 5.0 and 6.0. The exporter is written in MEL and is usable in these two versions of Maya. FAV has been developed under Windows XP and Visual Studio 2003, using C++ and OpenGL. GLUT provides a programming interface with relatively simple methods for setting up window systems in OpenGL applications, and it is used in FAV. Additionally, the GUI is based on the interface library GLUI, as mentioned earlier. GLUI is based on GLUT and offers common user controls. The versions used in FAV are GLUT and GLUI.

5.2 Complexity

This section deals with the complexity of the main parts of the implementation. In a realistic scenario, traversing and performing operations on the model's data consumes most of the time; this happens both during rendering and when applying poses. The traversal of the skeleton, where the rotations are passed down the hierarchy, is also discussed.

Traversing the geometry

The model's data, both representation and skeleton data, are stored in vector types and accessed through index keys. In the general case, the methods that traverse large amounts of this data are clearly the most time-consuming parts of FAV. This occurs when applying poses to the model and during rendering, and occasionally also when resetting the values in the model's data. Obviously, it is desirable that a high percentage of the execution time is spent on rendering.

Traversing a vector takes O(n) time, and due to the amount of data in a model, this is, in the general case, the most time-consuming part of FAV. The application has been tested in terms of how the execution time is distributed among the main parts of the program; the results can be found in table 5.1. Continuing with the general case, applying a pose to the model takes the longest time of the mentioned parts. Due to vertex blending, vertices may be affected by the rotations made in several of the skeleton's joints. Each individual vertex must be checked, and if it is affected by some set of rotated joints, these rotations have to be interpolated according to a unique influence weight. This usually requires many operations, although it depends to a large degree on how much of the skeleton has been repositioned and which parts of the surfaces are affected. The blending is done according to the equation given earlier.

Skeleton rotation

The algorithm used when traversing the hierarchic skeleton is a recursive depth-first search operating in O(j + b) time, where j is the number of joints and b the number of bones (vectors) between the joints. The time required for the actual traversal of the skeleton is negligible in comparison to the time spent in methods traversing and operating on the model's data.

5.3 Timing issues

The velocity of the animations depends on the rotational distances left in the joints and the given time periods for the animation. This causes the number of generated in-between frames to vary, as described earlier. The timer used in FAV is implemented with clock(), a function included in the ANSI standard. The timing part returns the elapsed time, based on the clock ticks of the CPU. The accuracy of the retrieved time values is sufficient for this task, since the time periods typically require an accuracy down to tenths of a second.
Timing is used in the FAP interface and in the animation objects, which maintain the time coefficients.

5.4 Test results

The application was tested in terms of average FPS on two different system configurations. The benchmarks were captured while running an MPEG-4 FAP stream in FAV. Additionally, a profiling test was performed with the profiler tool DevPartner on one of the systems (System 1), showing how the execution time is distributed among the main parts of FAV. Table 5.1 summarizes the test results.
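A clock()-based elapsed-time helper of the kind described in section 5.3 might look like this; the helper name is illustrative:

```cpp
#include <cassert>
#include <ctime>

// clock() returns processor ticks; CLOCKS_PER_SEC converts a tick
// difference to seconds. Tenth-of-a-second accuracy, as the text notes,
// is ample for the animation time periods.
double elapsedSeconds(std::clock_t start, std::clock_t now) {
    return static_cast<double>(now - start) / CLOCKS_PER_SEC;
}
```

In use, `start` would be captured once with `std::clock()` and `now` polled each frame.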

Configurations       System 1               System 2
CPU                  AMD Athlon 1.44GHz     AMD Athlon
RAM                  512Mb SDRAM            512Mb DDR
GFX                  GeForce3 64Mb          ATI Radeon 9800 pro 128Mb
OS                   WinXP pro              WinXP pro

Average FPS          System 1               System 2
640x...              ...                    ...

Model statistics
Vertices             5696
Edges                ...
Faces                6059
UV                   6650
Joints               63
Locators             29
IkHandles            29

FAP statistics
Frames               330
Rate                 15

Profile              % of total execution time
Transform part       37.7%
Rendering part       30.5%
FAP interface        21.8%
...
IK solver            0.1%

Table 5.1: Test results

Not surprisingly, the worst bottleneck in FAV is its transformation part, in which the model geometry is traversed and operations are made on the data. During the development of FAV there was an unofficial goal of keeping the frame rate at a minimum of 15 FPS, and this was achieved on both systems.

5.5 Gallery

The screenshots in figure 5.1 were taken in FAV and show some examples of premade and blended poses applied to the model.

Figure 5.1: Emotions screenshots


Beginners Guide Maya. To be used next to Learning Maya 5 Foundation. 15 juni 2005 Clara Coepijn Raoul Franker Beginners Guide Maya To be used next to Learning Maya 5 Foundation 15 juni 2005 Clara Coepijn 0928283 Raoul Franker 1202596 Index Index 1 Introduction 2 The Interface 3 Main Shortcuts 4 Building a Character

More information

CSE 167: Introduction to Computer Graphics Lecture #11: Bezier Curves. Jürgen P. Schulze, Ph.D. University of California, San Diego Fall Quarter 2016

CSE 167: Introduction to Computer Graphics Lecture #11: Bezier Curves. Jürgen P. Schulze, Ph.D. University of California, San Diego Fall Quarter 2016 CSE 167: Introduction to Computer Graphics Lecture #11: Bezier Curves Jürgen P. Schulze, Ph.D. University of California, San Diego Fall Quarter 2016 Announcements Project 3 due tomorrow Midterm 2 next

More information

Intro to Curves Week 1, Lecture 2

Intro to Curves Week 1, Lecture 2 CS 536 Computer Graphics Intro to Curves Week 1, Lecture 2 David Breen, William Regli and Maxim Peysakhov Department of Computer Science Drexel University Outline Math review Introduction to 2D curves

More information

Robots are built to accomplish complex and difficult tasks that require highly non-linear motions.

Robots are built to accomplish complex and difficult tasks that require highly non-linear motions. Path and Trajectory specification Robots are built to accomplish complex and difficult tasks that require highly non-linear motions. Specifying the desired motion to achieve a specified goal is often a

More information

2D Spline Curves. CS 4620 Lecture 13

2D Spline Curves. CS 4620 Lecture 13 2D Spline Curves CS 4620 Lecture 13 2008 Steve Marschner 1 Motivation: smoothness In many applications we need smooth shapes [Boeing] that is, without discontinuities So far we can make things with corners

More information

Facial Expression Analysis for Model-Based Coding of Video Sequences

Facial Expression Analysis for Model-Based Coding of Video Sequences Picture Coding Symposium, pp. 33-38, Berlin, September 1997. Facial Expression Analysis for Model-Based Coding of Video Sequences Peter Eisert and Bernd Girod Telecommunications Institute, University of

More information

Curves and Surfaces Computer Graphics I Lecture 9

Curves and Surfaces Computer Graphics I Lecture 9 15-462 Computer Graphics I Lecture 9 Curves and Surfaces Parametric Representations Cubic Polynomial Forms Hermite Curves Bezier Curves and Surfaces [Angel 10.1-10.6] February 19, 2002 Frank Pfenning Carnegie

More information

Overview. Animation is a big topic We will concentrate on character animation as is used in many games today. humans, animals, monsters, robots, etc.

Overview. Animation is a big topic We will concentrate on character animation as is used in many games today. humans, animals, monsters, robots, etc. ANIMATION Overview Animation is a big topic We will concentrate on character animation as is used in many games today humans, animals, monsters, robots, etc. Character Representation A character is represented

More information

Parametric Curves. University of Texas at Austin CS384G - Computer Graphics

Parametric Curves. University of Texas at Austin CS384G - Computer Graphics Parametric Curves University of Texas at Austin CS384G - Computer Graphics Fall 2010 Don Fussell Parametric Representations 3 basic representation strategies: Explicit: y = mx + b Implicit: ax + by + c

More information

Chapter 9 Animation System

Chapter 9 Animation System Chapter 9 Animation System 9.1 Types of Character Animation Cel Animation Cel animation is a specific type of traditional animation. A cel is a transparent sheet of plastic on which images can be painted

More information

Facial Animation System Design based on Image Processing DU Xueyan1, a

Facial Animation System Design based on Image Processing DU Xueyan1, a 4th International Conference on Machinery, Materials and Computing Technology (ICMMCT 206) Facial Animation System Design based on Image Processing DU Xueyan, a Foreign Language School, Wuhan Polytechnic,

More information

CS 536 Computer Graphics Intro to Curves Week 1, Lecture 2

CS 536 Computer Graphics Intro to Curves Week 1, Lecture 2 CS 536 Computer Graphics Intro to Curves Week 1, Lecture 2 David Breen, William Regli and Maxim Peysakhov Department of Computer Science Drexel University 1 Outline Math review Introduction to 2D curves

More information

An introduction to interpolation and splines

An introduction to interpolation and splines An introduction to interpolation and splines Kenneth H. Carpenter, EECE KSU November 22, 1999 revised November 20, 2001, April 24, 2002, April 14, 2004 1 Introduction Suppose one wishes to draw a curve

More information

Information Coding / Computer Graphics, ISY, LiTH. Splines

Information Coding / Computer Graphics, ISY, LiTH. Splines 28(69) Splines Originally a drafting tool to create a smooth curve In computer graphics: a curve built from sections, each described by a 2nd or 3rd degree polynomial. Very common in non-real-time graphics,

More information

Topic 0. Introduction: What Is Computer Graphics? CSC 418/2504: Computer Graphics EF432. Today s Topics. What is Computer Graphics?

Topic 0. Introduction: What Is Computer Graphics? CSC 418/2504: Computer Graphics EF432. Today s Topics. What is Computer Graphics? EF432 Introduction to spagetti and meatballs CSC 418/2504: Computer Graphics Course web site (includes course information sheet): http://www.dgp.toronto.edu/~karan/courses/418/ Instructors: L0101, W 12-2pm

More information

CSE452 Computer Graphics

CSE452 Computer Graphics CSE452 Computer Graphics Lecture 19: From Morphing To Animation Capturing and Animating Skin Deformation in Human Motion, Park and Hodgins, SIGGRAPH 2006 CSE452 Lecture 19: From Morphing to Animation 1

More information

3D Modeling techniques

3D Modeling techniques 3D Modeling techniques 0. Reconstruction From real data (not covered) 1. Procedural modeling Automatic modeling of a self-similar objects or scenes 2. Interactive modeling Provide tools to computer artists

More information

B-spline Curves. Smoother than other curve forms

B-spline Curves. Smoother than other curve forms Curves and Surfaces B-spline Curves These curves are approximating rather than interpolating curves. The curves come close to, but may not actually pass through, the control points. Usually used as multiple,

More information

Computer Graphics Curves and Surfaces. Matthias Teschner

Computer Graphics Curves and Surfaces. Matthias Teschner Computer Graphics Curves and Surfaces Matthias Teschner Outline Introduction Polynomial curves Bézier curves Matrix notation Curve subdivision Differential curve properties Piecewise polynomial curves

More information

Images from 3D Creative Magazine. 3D Modelling Systems

Images from 3D Creative Magazine. 3D Modelling Systems Images from 3D Creative Magazine 3D Modelling Systems Contents Reference & Accuracy 3D Primitives Transforms Move (Translate) Rotate Scale Mirror Align 3D Booleans Deforms Bend Taper Skew Twist Squash

More information

Introduction to Computer Graphics

Introduction to Computer Graphics Introduction to Computer Graphics 2016 Spring National Cheng Kung University Instructors: Min-Chun Hu 胡敏君 Shih-Chin Weng 翁士欽 ( 西基電腦動畫 ) Data Representation Curves and Surfaces Limitations of Polygons Inherently

More information

Character Animation 1

Character Animation 1 Character Animation 1 Overview Animation is a big topic We will concentrate on character animation as is used in many games today humans, animals, monsters, robots, etc. Character Representation A character

More information

CHAPTER 1 Graphics Systems and Models 3

CHAPTER 1 Graphics Systems and Models 3 ?????? 1 CHAPTER 1 Graphics Systems and Models 3 1.1 Applications of Computer Graphics 4 1.1.1 Display of Information............. 4 1.1.2 Design.................... 5 1.1.3 Simulation and Animation...........

More information

Intro to Modeling Modeling in 3D

Intro to Modeling Modeling in 3D Intro to Modeling Modeling in 3D Polygon sets can approximate more complex shapes as discretized surfaces 2 1 2 3 Curve surfaces in 3D Sphere, ellipsoids, etc Curved Surfaces Modeling in 3D ) ( 2 2 2 2

More information

Parametric Curves. University of Texas at Austin CS384G - Computer Graphics Fall 2010 Don Fussell

Parametric Curves. University of Texas at Austin CS384G - Computer Graphics Fall 2010 Don Fussell Parametric Curves University of Texas at Austin CS384G - Computer Graphics Fall 2010 Don Fussell Parametric Representations 3 basic representation strategies: Explicit: y = mx + b Implicit: ax + by + c

More information

CS559 Computer Graphics Fall 2015

CS559 Computer Graphics Fall 2015 CS559 Computer Graphics Fall 2015 Practice Final Exam Time: 2 hrs 1. [XX Y Y % = ZZ%] MULTIPLE CHOICE SECTION. Circle or underline the correct answer (or answers). You do not need to provide a justification

More information

Synthesizing Realistic Facial Expressions from Photographs

Synthesizing Realistic Facial Expressions from Photographs Synthesizing Realistic Facial Expressions from Photographs 1998 F. Pighin, J Hecker, D. Lischinskiy, R. Szeliskiz and D. H. Salesin University of Washington, The Hebrew University Microsoft Research 1

More information

Introduction to the Mathematical Concepts of CATIA V5

Introduction to the Mathematical Concepts of CATIA V5 CATIA V5 Training Foils Introduction to the Mathematical Concepts of CATIA V5 Version 5 Release 19 January 2009 EDU_CAT_EN_MTH_FI_V5R19 1 About this course Objectives of the course Upon completion of this

More information

Parametric curves. Brian Curless CSE 457 Spring 2016

Parametric curves. Brian Curless CSE 457 Spring 2016 Parametric curves Brian Curless CSE 457 Spring 2016 1 Reading Required: Angel 10.1-10.3, 10.5.2, 10.6-10.7, 10.9 Optional Bartels, Beatty, and Barsky. An Introduction to Splines for use in Computer Graphics

More information

doi: / The Application of Polygon Modeling Method in the Maya Persona Model Shaping

doi: / The Application of Polygon Modeling Method in the Maya Persona Model Shaping doi:10.21311/001.39.12.37 The Application of Polygon Modeling Method in the Maya Persona Model Shaping Qinggang Sun Harbin University of Science and Technology RongCheng Campus, RongCheng Shandong, 264300

More information

Intro to Curves Week 4, Lecture 7

Intro to Curves Week 4, Lecture 7 CS 430/536 Computer Graphics I Intro to Curves Week 4, Lecture 7 David Breen, William Regli and Maxim Peysakhov Geometric and Intelligent Computing Laboratory Department of Computer Science Drexel University

More information

The Free-form Surface Modelling System

The Free-form Surface Modelling System 1. Introduction The Free-form Surface Modelling System Smooth curves and surfaces must be generated in many computer graphics applications. Many real-world objects are inherently smooth (fig.1), and much

More information

Lecture IV Bézier Curves

Lecture IV Bézier Curves Lecture IV Bézier Curves Why Curves? Why Curves? Why Curves? Why Curves? Why Curves? Linear (flat) Curved Easier More pieces Looks ugly Complicated Fewer pieces Looks smooth What is a curve? Intuitively:

More information

Approximation of 3D-Parametric Functions by Bicubic B-spline Functions

Approximation of 3D-Parametric Functions by Bicubic B-spline Functions International Journal of Mathematical Modelling & Computations Vol. 02, No. 03, 2012, 211-220 Approximation of 3D-Parametric Functions by Bicubic B-spline Functions M. Amirfakhrian a, a Department of Mathematics,

More information

Animation Lecture 10 Slide Fall 2003

Animation Lecture 10 Slide Fall 2003 Animation Lecture 10 Slide 1 6.837 Fall 2003 Conventional Animation Draw each frame of the animation great control tedious Reduce burden with cel animation layer keyframe inbetween cel panoramas (Disney

More information

Shape Representation Basic problem We make pictures of things How do we describe those things? Many of those things are shapes Other things include

Shape Representation Basic problem We make pictures of things How do we describe those things? Many of those things are shapes Other things include Shape Representation Basic problem We make pictures of things How do we describe those things? Many of those things are shapes Other things include motion, behavior Graphics is a form of simulation and

More information

COMP 175 COMPUTER GRAPHICS. Lecture 10: Animation. COMP 175: Computer Graphics March 12, Erik Anderson 08 Animation

COMP 175 COMPUTER GRAPHICS. Lecture 10: Animation. COMP 175: Computer Graphics March 12, Erik Anderson 08 Animation Lecture 10: Animation COMP 175: Computer Graphics March 12, 2018 1/37 Recap on Camera and the GL Matrix Stack } Go over the GL Matrix Stack 2/37 Topics in Animation } Physics (dynamics, simulation, mechanics)

More information

There we are; that's got the 3D screen and mouse sorted out.

There we are; that's got the 3D screen and mouse sorted out. Introduction to 3D To all intents and purposes, the world we live in is three dimensional. Therefore, if we want to construct a realistic computer model of it, the model should be three dimensional as

More information

COMP3421. Global Lighting Part 2: Radiosity

COMP3421. Global Lighting Part 2: Radiosity COMP3421 Global Lighting Part 2: Radiosity Recap: Global Lighting The lighting equation we looked at earlier only handled direct lighting from sources: We added an ambient fudge term to account for all

More information

Computer Animation. Algorithms and Techniques. z< MORGAN KAUFMANN PUBLISHERS. Rick Parent Ohio State University AN IMPRINT OF ELSEVIER SCIENCE

Computer Animation. Algorithms and Techniques. z< MORGAN KAUFMANN PUBLISHERS. Rick Parent Ohio State University AN IMPRINT OF ELSEVIER SCIENCE Computer Animation Algorithms and Techniques Rick Parent Ohio State University z< MORGAN KAUFMANN PUBLISHERS AN IMPRINT OF ELSEVIER SCIENCE AMSTERDAM BOSTON LONDON NEW YORK OXFORD PARIS SAN DIEGO SAN FRANCISCO

More information

Computer Graphics. Spring Feb Ghada Ahmed, PhD Dept. of Computer Science Helwan University

Computer Graphics. Spring Feb Ghada Ahmed, PhD Dept. of Computer Science Helwan University Spring 2018 13 Feb 2018, PhD ghada@fcih.net Agenda today s video 2 Starting video: Video 1 Video 2 What is Animation? Animation is the rapid display of a sequence of images to create an illusion of movement

More information

Curves D.A. Forsyth, with slides from John Hart

Curves D.A. Forsyth, with slides from John Hart Curves D.A. Forsyth, with slides from John Hart Central issues in modelling Construct families of curves, surfaces and volumes that can represent common objects usefully; are easy to interact with; interaction

More information

Speech Driven Synthesis of Talking Head Sequences

Speech Driven Synthesis of Talking Head Sequences 3D Image Analysis and Synthesis, pp. 5-56, Erlangen, November 997. Speech Driven Synthesis of Talking Head Sequences Peter Eisert, Subhasis Chaudhuri,andBerndGirod Telecommunications Laboratory, University

More information

EF432. Introduction to spagetti and meatballs

EF432. Introduction to spagetti and meatballs EF432 Introduction to spagetti and meatballs CSC 418/2504: Computer Graphics Course web site (includes course information sheet): http://www.dgp.toronto.edu/~karan/courses/418/fall2015 Instructor: Karan

More information

Parameterization of triangular meshes

Parameterization of triangular meshes Parameterization of triangular meshes Michael S. Floater November 10, 2009 Triangular meshes are often used to represent surfaces, at least initially, one reason being that meshes are relatively easy to

More information

Case Study: The Pixar Story. By Connor Molde Comptuer Games & Interactive Media Year 1

Case Study: The Pixar Story. By Connor Molde Comptuer Games & Interactive Media Year 1 Case Study: The Pixar Story By Connor Molde Comptuer Games & Interactive Media Year 1 Contents Section One: Introduction Page 1 Section Two: About Pixar Page 2 Section Three: Drawing Page 3 Section Four:

More information

INF3320 Computer Graphics and Discrete Geometry

INF3320 Computer Graphics and Discrete Geometry INF3320 Computer Graphics and Discrete Geometry More smooth Curves and Surfaces Christopher Dyken, Michael Floater and Martin Reimers 10.11.2010 Page 1 More smooth Curves and Surfaces Akenine-Möller, Haines

More information

Human Character Animation in 3D-Graphics: The EMOTE System as a Plug-in for Maya

Human Character Animation in 3D-Graphics: The EMOTE System as a Plug-in for Maya Hartmann - 1 Bjoern Hartman Advisor: Dr. Norm Badler Applied Senior Design Project - Final Report Human Character Animation in 3D-Graphics: The EMOTE System as a Plug-in for Maya Introduction Realistic

More information

Computer Animation Fundamentals. Animation Methods Keyframing Interpolation Kinematics Inverse Kinematics

Computer Animation Fundamentals. Animation Methods Keyframing Interpolation Kinematics Inverse Kinematics Computer Animation Fundamentals Animation Methods Keyframing Interpolation Kinematics Inverse Kinematics Lecture 21 6.837 Fall 2001 Conventional Animation Draw each frame of the animation great control

More information

The goal is the definition of points with numbers and primitives with equations or functions. The definition of points with numbers requires a

The goal is the definition of points with numbers and primitives with equations or functions. The definition of points with numbers requires a The goal is the definition of points with numbers and primitives with equations or functions. The definition of points with numbers requires a coordinate system and then the measuring of the point with

More information

Computer Graphics Fundamentals. Jon Macey

Computer Graphics Fundamentals. Jon Macey Computer Graphics Fundamentals Jon Macey jmacey@bournemouth.ac.uk http://nccastaff.bournemouth.ac.uk/jmacey/ 1 1 What is CG Fundamentals Looking at how Images (and Animations) are actually produced in

More information

Lesson 01 Polygon Basics 17. Lesson 02 Modeling a Body 27. Lesson 03 Modeling a Head 63. Lesson 04 Polygon Texturing 87. Lesson 05 NURBS Basics 117

Lesson 01 Polygon Basics 17. Lesson 02 Modeling a Body 27. Lesson 03 Modeling a Head 63. Lesson 04 Polygon Texturing 87. Lesson 05 NURBS Basics 117 Table of Contents Project 01 Lesson 01 Polygon Basics 17 Lesson 02 Modeling a Body 27 Lesson 03 Modeling a Head 63 Lesson 04 Polygon Texturing 87 Project 02 Lesson 05 NURBS Basics 117 Lesson 06 Modeling

More information

Animation of 3D surfaces.

Animation of 3D surfaces. Animation of 3D surfaces Motivations When character animation is controlled by skeleton set of hierarchical joints joints oriented by rotations the character shape still needs to be visible: visible =

More information

L1 - Introduction. Contents. Introduction of CAD/CAM system Components of CAD/CAM systems Basic concepts of graphics programming

L1 - Introduction. Contents. Introduction of CAD/CAM system Components of CAD/CAM systems Basic concepts of graphics programming L1 - Introduction Contents Introduction of CAD/CAM system Components of CAD/CAM systems Basic concepts of graphics programming 1 Definitions Computer-Aided Design (CAD) The technology concerned with the

More information

Advanced Texture-Mapping Curves and Curved Surfaces. Pre-Lecture Business. Texture Modes. Texture Modes. Review quiz

Advanced Texture-Mapping Curves and Curved Surfaces. Pre-Lecture Business. Texture Modes. Texture Modes. Review quiz Advanced Texture-Mapping Curves and Curved Surfaces Pre-ecture Business loadtexture example midterm handed bac, code posted (still) get going on pp3! more on texturing review quiz CS148: Intro to CG Instructor:

More information

Automatic Rigging/Skinning Script. Maya Python Scripting Master Thesis

Automatic Rigging/Skinning Script. Maya Python Scripting Master Thesis Automatic Rigging/Skinning Script Maya Python Scripting Master Thesis Rahul Lakakwar i7834921 MSc CAVE, Bournemouth University 21-Aug-2009 Thanks to: Jon Macey Adam Vanner NCCA Bournemouth & All students

More information

CS 465 Program 4: Modeller

CS 465 Program 4: Modeller CS 465 Program 4: Modeller out: 30 October 2004 due: 16 November 2004 1 Introduction In this assignment you will work on a simple 3D modelling system that uses simple primitives and curved surfaces organized

More information