Animation of 3D Avatars Progress Report


May 28, 2015

Abstract

A system is developed, based on the seminal Motion Graphs paper by Lucas Kovar et al.[1], which automatically generates a directed graph from a library of motion capture sequences. The arcs of the graph represent sequences of original motion and the nodes represent transition points, which are determined automatically by searching for frames in the library that are similar enough to be interpolated with good results. Novel motion sequences can be extracted from this graph by finding a graph walk that minimises a total cost function. By selecting a cost function based on a user-level constraint, motion that would be useful to an animator can be generated in a manner far less laborious than other techniques. The steps taken to create this system are detailed and relevant issues are discussed, such as the impact of certain design choices and the direction of future work.

Contents

1 Introduction
  1.1 Generalising Motion Capture
  1.2 Articulated Figures in Animation
  1.3 Project Goals
  1.4 Outline of Following Chapters
2 Implementation
  2.1 Reading and Displaying Motion Sequences
  2.2 Point Clouds and the Difference Matrix
  2.3 Finding Transition Points
  2.4 Aside on Motion Graphs
  2.5 Tarjan's SCC Algorithm
  2.6 Extracting Motion
    Application of Branch and Bound and choice of cost function
    Constructing novel motion sequences
3 Evaluation
  Effect of error threshold
  Length of Graph Walk
4 Discussion
  4.1 Future Work

Chapter 1

Introduction

Ever since animation matured as an art form in the mid 20th century, more and more advanced techniques have been developed and applied to solve the problem of making animated motions look realistic. Before the era of computer animation, animators had developed some quite elaborate techniques. Many of these drew inspiration from science, such as approximating the effects of inertia to give animated objects the illusion of being weighty and modelling the deformation of non-rigid objects as they accelerate to emphasise the impact of speed and force[2]. Computer animation began in the 1970s, when computer programmers began to develop tools, based on traditional cel animation, that allowed animators to animate figures by specifying poses and positions at particular frames that the computer was then able to interpolate between, in much the same way as a traditional lead animator would personally create a subset of the frames in an animation and have subordinate animators fill in the gaps[3][4]. This process is called key-framing. The major disadvantage of such techniques is that the key-frames must be manually specified by the animator, which is a painstaking process. In the early 1980s, motion capture techniques, which had previously been applied in a limited way in medicine and industry, began to be used in animation[5]. Motion capture allows for human figure animation of unparalleled realism but is even more painstaking to produce than key-framed sequences, since a human subject is required to produce each movement sequence. An animator may be able to combine motion capture sequences into new ones by carefully matching frames in the sequences and possibly using key-framing to smooth the transition. This is a laborious process and generally results in diminished realism at the transition between sequences.
Thus, the question naturally emerged: is it possible to create arbitrary animation sequences from a library of motion capture data that will conform well enough to an animator's specifications without noticeably diminishing the realism of the original motion sequences? This question was, for a while, thought of as one of the major unsolved problems in the field of computer graphics.

1.1 Generalising Motion Capture

In the late 1990s and early 2000s, several techniques were proposed with the goal of generalising motion capture data without sacrificing realism. Among these were motion warping[6], which allowed small, smooth changes to be made to motion sequences; retargetting[7], which generalised sequences to multiple character models (i.e. figures of different proportions to the original motion capture subject); and various other techniques that allowed different motion sequences to be blended together to create new ones[8][9]. All of these techniques rely on making small changes to the original data and do not explicitly seek to preserve realism or focus on conforming to an animator's specifications, though they do give the animator more options when choosing what sequences to work with. More elaborate methods based on statistical machine learning were also proposed[10][11]. These methods involve learning a model of realistic human motion and applying that model to create novel sequences. Around the same time, data-based approaches to generating motion sequences were being developed in the games industry that involved the manual creation of a tree of motion sequences. More reliable techniques for the interpolation of motion capture data were also developed at this time. Building on this prior work, Kovar et al.[1] published a paper in 2002 detailing a technique that automatically generates a directed graph of motion capture data from a library of clips by searching for points in the input sequences that are similar enough to one another to be interpolated with good results. These points are represented by nodes on the graph and are connected by arcs representing sequences of original data. Motion sequences can then be extracted from the graph by finding a graph walk that minimises a total cost function. The form of the cost function can be selected to represent user-level constraints.
In the following sections, I will detail the steps undertaken to reproduce this technique. Since the paper was published, several other techniques have been proposed, such as finding a reduced feature space of realistic motion types through principal component analysis[12], using dynamical models to inform the matching and blending of sequences[13][14], using clustering techniques to develop elaborate representations of human motion[14] and precomputing certain aspects of potential motion sequences in order to improve such systems' on-line performance[15][14]. These tend to be concerned with solving specific problems, such as realistic interaction between characters and real-time performance issues. The motion graph technique is still regarded as important as a general solution to the problem of realistic human animation.

1.2 Articulated Figures in Animation

Kinematic models have long been used in the field of robotics to calculate the position of the end of an articulated limb as a function of the angles of its joints, or vice versa, and can readily be applied to computer animation by representing figures as a hierarchical set of bones connected at joints with specified rotational

degrees of freedom[16]. This is known as a skeletal model. At the top of the hierarchy is a root node, which uniquely has an absolute position and orientation associated with it. The position of every bone is then defined by a set of joint angles, which specify its position in the frame of its parent, such that the total rotation of a bone is the sum of the corresponding angles of the bones above it in the hierarchy and the root node. Angles specified in the frame of the parent are confusingly referred to as Euler angles, even though they are extrinsic and hence not Euler angles in the usual sense of the term, which specifies the orientation of an object in terms of rotations about its intrinsic axes. The advantage of this representation is that the angles of the joints are independent of one another, meaning that each joint can be changed arbitrarily within its own set of constraints without causing inconsistencies elsewhere in the skeleton. For example, the change in position of the hand when the shoulder is raised is implicit and only one variable need be altered to make this change. Other models, in which the positions of points on the skeleton are represented directly, would require that all points on the arm be updated when the shoulder is raised. This is time-consuming and could potentially lead to inconsistencies, as such a representation typically contains more parameters than the skeleton has degrees of freedom. For example, if one or more parameters failed to be updated, the arm might become distorted. This is not possible in the kinematic skeletal model, as realistic constraints on the skeleton's position are, to an extent, built in. Animators commonly use articulated figures for character animation because they allow the position of certain joints to be changed and the corresponding changes in the joints higher up in the hierarchy to be determined automatically through the use of inverse kinematics.
This is very useful for key-framing, as it means the position of bones can be changed and corresponding changes to other bones can be carried out procedurally. Articulated figures are also a good fit for motion capture data, since motion capture techniques involve inferring a subject's pose from the position of sensors placed at specific points on the body and, in order to do so, must assume that the body is constrained in exactly the same way that kinematic models do.

1.3 Project Goals

In order to reproduce the results of Kovar et al.[1], certain goals had to be achieved. The method for achieving some of these goals is given in detail in the paper; for others a more general approach is given; and some are not mentioned at all, though it is obvious that they must have been achieved one way or another. In this section, I will give an overview of the major steps involved in completing the project and what guidance there was for doing so in the original paper. All of these points will be revisited in the next chapter and described more thoroughly.

1. Interpret and display motion capture sequences

Kovar et al. mention that 2400 frames' worth of motion capture data was

donated to them for the purposes of completing their research. As may be expected, they do not describe the format of the data or any of the details of how it was stored in memory or how the rendering and animation was achieved. Clearly, this was the first goal of the project, since little else can be achieved before the data can be properly interpreted and displayed.

2. Construct motion graph

As mentioned above, the motion graph consists of transition sequences represented by nodes that are connected by arcs representing sequences of original data. In order to construct the graph, transition points must be found. Transition points are pairs of short sequences of original data that are similar enough to be interpolated with good results. The paper goes into some detail on how transition points should be found.

a. Generate point clouds

In order to find transition points, a metric must be established for deciding how similar two frames are to one another. This is done by generating a set of points around the figure that move with it as a solid body, referred to as a point cloud. The paper mentions that this should be done but does not go into any detail about how the point cloud should be generated or what its form should be, only that it should ideally be "a downsampling of the mesh defining the character"[1], referring to the fact that skeletal models are generally used to drive polygonal meshes.

b. Calculate difference matrix

With point clouds in place, two frames can be compared by considering the point clouds from a short window of frames following the first frame and preceding the second. A reasonable comparison of these frames must take into account that the absolute position and orientation of the figure relative to the horizontal plane are arbitrary as far as the naturalness of the motion is concerned.
Hence, the sum of square distances between corresponding points in the clouds is computed after an optimising transform, consisting of a rotation about the vertical axis and a translation in the horizontal plane, is applied. The paper goes into a lot of detail on this point, even providing the closed-form solution of the optimisation. This value is the distance metric by which the interpolability of two frames is judged. It is calculated for each pair of frames in the database.

c. Find transition points

The transition points are the pairs of frames in the difference matrix that are locally minimal and fall beneath a user-defined error threshold. The paper does not describe exactly how the local minima are found, but it is implied that they were found by exhaustively comparing each point to its immediate neighbours, as is reasonable for a 2-dimensional search problem.

With the transition points identified, the motion graph is essentially defined. The paper does not describe exactly how the graph should be represented in memory.

3. Prune graph

The paper argues that problems will be encountered if the graph contains dead-ends, which are nodes with no outgoing arcs, or sinks, which are comparatively small regions of the graph that cannot be exited once entered, and suggests a method for eliminating them by finding the graph's largest strongly connected component and deleting all nodes that are not part of it.

4. Extract motion

With the graph constructed and pruned, it becomes possible to extract novel motion sequences from it. The first step towards doing so is to select a graph walk. The paper suggests that this should be done by searching the graph, using branch and bound, for a walk that connects two arbitrarily chosen nodes and minimises some total cost function. The choice of cost function is an issue the paper spends a large amount of time discussing, concluding that the cost function should ultimately be selected to reflect the high-level specifications of the user. Once a graph walk has been selected, the corresponding motion can be constructed by lining up the relevant clips of original data and connecting them with transition sequences created through linear interpolation.

5. Path synthesis

The final section of the paper describes a method for generating motion sequences in which the avatar follows an arbitrary line on the ground. The paper explains that this was achieved by defining a cost function that punishes deviation away from the line and rewards forward motion along it.

1.4 Outline of Following Chapters

The following chapters are organised as follows. Chapter two will describe the steps I went through to implement the motion graphs technique and replicate the results of Kovar et al.[1].
Chapter three will describe efforts to evaluate the performance of the system in both qualitative and quantitative terms and to investigate the impact of user-specified parameters. Chapter four will discuss the progress made on the project so far and what I hope to achieve in the time remaining.

Chapter 2

Implementation

2.1 Reading and Displaying Motion Sequences

The CMU Graphics Lab Motion Capture Database[17] is a freely available library of motion capture data, from which all of the data used in this project was taken. The CMU database represents its subjects using a kinematic skeleton model of the kind described in section 1.2. The root node of the skeletal model used in these data files is located in the lower back and the skeleton consists of twenty-eight bones in total. The database consists of two kinds of file: one specifying the anthropometric information for each subject, such as the length and rotational degrees of freedom (footnote 1) of each bone in the model and the structure of the skeletal hierarchy, and another specifying, for each frame in a motion clip, the position and orientation of the root node and the Euler angles of the joints. These are referred to as ASF (Acclaim Skeleton Format, named after the games company that developed the format) and AMC (Acclaim Motion Capture) files respectively. Functions were written to parse ASF/AMC formatted files into data structures. These structures were designed to resemble the structure of the files themselves. One structure, named Skeleton, held the anthropometric data and structural information. This structure contained a member Root, containing the default position and orientation of the root node, and an array of structures of Bone type, which contained the name of each bone, the name of its parent in the hierarchy, and its length and degrees of freedom. Another structure was defined to represent the state of each bone at each frame and contained the bone's name and relevant joint angles. These structures were then stored in an array of arrays. This meant that, in order to find a particular bone, the element of the inner array with the correct name had to be searched for.
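As a rough sketch, the structures described above might look as follows in C++. The member and function names here are illustrative, not necessarily the ones used in the actual implementation:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Anthropometric data parsed from an ASF file (names illustrative).
struct Bone {
    std::string name;
    std::string parent;            // name of the parent bone in the hierarchy
    double length;
    std::vector<std::string> dof;  // rotational degrees of freedom, e.g. {"rx", "ry", "rz"}
};

struct Root {
    double position[3];
    double orientation[3];
};

struct Skeleton {
    Root root;                     // default position and orientation of the root node
    std::vector<Bone> bones;
};

// Per-frame joint state parsed from an AMC file.
struct BoneState {
    std::string name;
    std::vector<double> angles;    // one value per degree of freedom
};

// Outer array indexes frames; the inner array holds one entry per bone.
using Motion = std::vector<std::vector<BoneState>>;

// Finding a bone within a frame requires a linear search on its name.
const BoneState* findBone(const std::vector<BoneState>& frame,
                          const std::string& name) {
    for (const BoneState& b : frame)
        if (b.name == name) return &b;
    return nullptr;
}
```

With only a few dozen bones per frame, this linear search is cheap, which is why a hash-table offers little advantage here.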
A hash-table could have been used instead but, since the skeletons only consist of a few dozen bones, the advantage to this would be slight at best.

[Footnote 1: A translational degree of freedom that would allow the bone to extend or contract is also possible but never used.]

If the system

was ever to be used on models with much larger numbers of bones, then this change could be made. The CMU database contains many different kinds of motion. Figure 2.1 shows a few examples of frames from the database that were interpreted and rendered using OpenGL[18], a commonly used graphics library. All of the graphics shown were produced in OpenGL, as it is a relatively low-level library and hence allowed for maximum flexibility in the way things are displayed. It also allowed the rest of the program to be written from scratch in C++, which is an appropriate choice of language for data-driven applications such as this. The rendering of the skeleton was achieved by creating a queue of strings representing the names of the bones in the skeleton, with complementary queues for the positions of their hierarchically lower ends and their orientations. To begin, the root node is added to the queue and all of the bones that are children of the root node are found. Their positions and orientations are determined from the motion capture data, which contains each bone's length and joint angles in the frame of its parent, by first looking up the bone's length and assuming that it is aligned with the origin. The bone can then be rotated according to its degrees of freedom and the corresponding values in the motion capture data for the current frame. The bone is then rotated according to its default orientation and rotated and translated into the frame of its parent. The absolute position and orientation of the bone are now known and, so, a line can be drawn representing the bone, and the values and the name of the bone can be added to the back of the appropriate queues. When all of the children have been dealt with in this way, the head of the queue is deleted and the process repeated, using whichever bone is at the head of the queue, until the queue is empty.

Figure 2.1: Frames in which the figure is walking (left), running (centre) and jumping (right).
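The queue-based traversal just described can be sketched as follows. For brevity this sketch stores each bone's offset vector already rotated into world space and uses parent indices rather than name lookups; the full implementation must also apply the per-frame joint rotations. The types and names here are hypothetical:

```cpp
#include <array>
#include <cassert>
#include <cstddef>
#include <queue>
#include <vector>

// A bone with its offset already rotated into the world frame (a hypothetical
// simplification; the real code derives this from the joint angles).
struct PosedBone {
    int parent;                    // index of the parent bone, -1 for the root
    std::array<double, 3> offset;  // bone vector in world space
};

// Breadth-first traversal: a bone's endpoint is its parent's endpoint plus
// its own offset vector, mirroring the queue-based rendering loop.
std::vector<std::array<double, 3>>
bonePositions(const std::vector<PosedBone>& bones) {
    std::vector<std::array<double, 3>> pos(bones.size(), {0.0, 0.0, 0.0});
    std::queue<int> q;
    for (std::size_t i = 0; i < bones.size(); ++i)
        if (bones[i].parent < 0) q.push(static_cast<int>(i));  // start at the root
    while (!q.empty()) {
        int b = q.front(); q.pop();
        std::array<double, 3> base{0.0, 0.0, 0.0};
        if (bones[b].parent >= 0) base = pos[bones[b].parent];
        for (int k = 0; k < 3; ++k) pos[b][k] = base[k] + bones[b].offset[k];
        // ...at this point a line from base to pos[b] would be drawn...
        for (std::size_t c = 0; c < bones.size(); ++c)
            if (bones[c].parent == b) q.push(static_cast<int>(c));  // enqueue children
    }
    return pos;
}
```

The actual renderer keeps names, positions and orientations in parallel queues; the control flow is the same.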
2.2 Point Clouds and the Difference Matrix

As mentioned in the introduction, one of the main ideas behind the motion graph technique is to find pairs of frames in the database that are similar enough to one another to be interpolated easily. In order to find such pairs of frames, it was

first necessary to represent quantitatively the similarity between frames. The obvious way to do this would be to take a weighted sum of the square differences in the joint angles. The sum would have to be weighted to account for the fact that joints higher up in the skeletal hierarchy have more influence on the pose than those lower down. The problem with this approach is that the joint angles are not directly related to the avatar's pose. Differences at one joint may act to correct differences at another, or they may act to make the poses even less similar. A metric based on the comparison of joint angles would not be able to distinguish between these situations. A better idea would be to compare the relative positions of the joints, though this would fail to account for rotations in the plane normal to the direction of the bone. Kovar et al.[1] suggest a slightly different method. They argue that, since the skeletal model will ultimately be used to drive a polygonal mesh, it is more appropriate to compare the positions of a cloud of points that surround the skeleton and move with it as a solid body. The point cloud can be thought of as a downsampling of the polygonal mesh, or simply as a succinct way to account for all of the skeleton's degrees of freedom while retaining a concept of similarity that is based on the actual Euclidean proximity of skeletal features rather than on joint angles. In order to create the cloud, each bone was considered independently. First, the direction of the bone was calculated and crossed with a constant vector that had been rotated into the frame of the bone. This rotation was done by summing the joint angles for each coordinate axis up the skeletal hierarchy. The result was then normalised, multiplied by a spacing constant r and added to the position of the upper end of the bone to give the position of the first point.
The subsequent points are given by rotating this vector by r radians about the bone until a full circle has been traced, and by adding r times the unit vector in the direction of the bone until the total length of the added vectors exceeds the length of the bone. This can be expressed mathematically as follows,

\[
\mathbf{p}_{ij} = R_{ir,\,\hat{e}_{\mathrm{bone}}}\!\left(r\,\hat{c}\right) + \mathbf{r}_{\mathrm{bone}} + jr\,\hat{e}_{\mathrm{bone}},
\qquad 0 < i < \frac{2\pi}{r}, \quad 0 < j < \frac{\ell_{\mathrm{bone}}}{r},
\]

where \(\hat{c}\) is the unit vector along \(\hat{e}_{\mathrm{bone}} \times R_{\theta_z,\hat{e}_z} R_{\theta_y,\hat{e}_y} R_{\theta_x,\hat{e}_x}\,\hat{e}_{\mathrm{const}}\); \(R_{\theta,\hat{e}}\) is the operator that rotates a vector by angle \(\theta\) about the vector \(\hat{e}\); \(\theta_x\), etc. are the sums over the joint angles corresponding to the relevant axis in the bone's branch of the skeletal hierarchy; \(\hat{e}_{\mathrm{bone}}\) and \(\hat{e}_{\mathrm{const}}\) are unit vectors in the direction of the bone and in a constant direction; \(\ell_{\mathrm{bone}}\) is the length of the bone; and \(\mathbf{r}_{\mathrm{bone}}\) is the position vector of the base of the bone. This set of points was computed for each bone and aggregated to give the point cloud for each frame, which consists of roughly evenly spaced points that move with the skeleton as a rigid body. An example point cloud is shown in figure 2.2. This point cloud contains all the relevant information about the avatar's position at a particular point in time. However, the interpolability of two frames depends not only on position but also on higher order kinematics, i.e. how the position changes over time. To account for this, when comparing

frames, the point clouds of the frames within a certain window following the first frame (i.e. the one to be transitioned from) and preceding the second frame (i.e. the one to be transitioned to) are merged into two super-clouds. These super-clouds were used to calculate the distance metric. It should be noted that each frame in the database has two clouds associated with it: an anterior cloud that is used when it is the frame being transitioned from and a posterior cloud that is used when it is the frame being transitioned to. It should also be noted that, since the window is of a fixed number of frames, transitions to frames too close to the start of a clip or from frames too close to the end of a clip are forbidden. Two frames are shown with their anterior and posterior super-clouds in figure 2.3. A disadvantage of this method is that, unlike joint angles, the proximity of corresponding points in the clouds is dependent on the absolute position and orientation of the avatar. This means that an optimisation must be performed before two frames can be meaningfully compared. Specifically, we must find the rotation about the vertical y-axis and the translation in the horizontal plane that together minimise the square difference in position of corresponding points in the point clouds. This is because human motions are fundamentally unchanged under such transformations. In the case of motion capture, these transformations correspond to arbitrary parameters of measurement. The other transformations, i.e. translations with a component out of the horizontal plane and rotations about any non-vertical axis, do not preserve natural human motion. This is because motion is constrained by the level of the floor and the direction of gravity.
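This alignment step can be computed directly from weighted sums over the two clouds, using the closed-form optimum of Kovar et al.[1] stated in full just below. The following C++ sketch uses hypothetical function and variable names; the clouds are passed as parallel coordinate arrays:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Optimal ground-plane alignment of two weighted point clouds: a rotation
// theta about the vertical axis plus a translation (x0, z0).
struct GroundAlignment {
    double theta, x0, z0;
};

GroundAlignment optimalAlignment(const std::vector<double>& w,
                                 const std::vector<double>& x,
                                 const std::vector<double>& z,    // first cloud
                                 const std::vector<double>& xp,
                                 const std::vector<double>& zp) { // second cloud
    double W = 0, xb = 0, zb = 0, xpb = 0, zpb = 0, num = 0, den = 0;
    for (std::size_t i = 0; i < w.size(); ++i) {
        W   += w[i];
        xb  += w[i] * x[i];   zb  += w[i] * z[i];    // weighted sums (barred terms)
        xpb += w[i] * xp[i];  zpb += w[i] * zp[i];
        num += w[i] * (x[i] * zp[i] - xp[i] * z[i]);
        den += w[i] * (x[i] * xp[i] + z[i] * zp[i]);
    }
    // atan2 resolves the quadrant ambiguity left open by a plain arctan.
    double theta = std::atan2(num - (xb * zpb - xpb * zb) / W,
                              den - (xb * xpb + zb * zpb) / W);
    double x0 = (xb - xpb * std::cos(theta) - zpb * std::sin(theta)) / W;
    double z0 = (zb + xpb * std::sin(theta) - zpb * std::cos(theta)) / W;
    return {theta, x0, z0};
}
```

Given two clouds that differ only by a ground-plane transform, this recovers that transform exactly; for genuinely different poses it returns the least-squares best fit.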
The optimisation can be expressed as follows,

\[
\min_{\theta, x_0, z_0} \sum_i w_i \left\lVert \mathbf{p}_i - T_{\theta, x_0, z_0}\, \mathbf{p}'_i \right\rVert^2,
\]

where \(T_{\theta,x_0,z_0}\) is the transformation that rotates a point by angle \(\theta\) about the y-axis before translating it by \(x_0\) in the x-direction and by \(z_0\) in the z-direction, and \(\mathbf{p}_i\) and \(\mathbf{p}'_i\) are corresponding points in the two point clouds. The closed-form solution to this optimisation, given by Kovar et al.[1], is as follows,

\[
\theta = \arctan\!\left(
\frac{\sum_i w_i (x_i z'_i - x'_i z_i) - \frac{1}{\sum_i w_i}\left(\bar{x}\bar{z}' - \bar{x}'\bar{z}\right)}
     {\sum_i w_i (x_i x'_i + z_i z'_i) - \frac{1}{\sum_i w_i}\left(\bar{x}\bar{x}' + \bar{z}\bar{z}'\right)}
\right),
\]
\[
x_0 = \frac{1}{\sum_i w_i}\left(\bar{x} - \bar{x}'\cos\theta - \bar{z}'\sin\theta\right),
\qquad
z_0 = \frac{1}{\sum_i w_i}\left(\bar{z} + \bar{x}'\sin\theta - \bar{z}'\cos\theta\right),
\]

where barred terms are weighted sums over \(i\), e.g. \(\bar{x} = \sum_i w_i x_i\). This optimal transform was calculated for each pair of frames in the database. The sum of square differences between the points in the first cloud and the points in the second cloud after transformation was computed and stored in a matrix. An example of the resulting distance matrix is shown in figure 2.4. The figure shows the matrix for a database consisting of four clips: one of walking, one of

running, one in which the subject jumps up and one in which the subject jumps forwards. The black horizontal and vertical bars are the forbidden regions at the start and end of each clip. It can be seen that the frames of the walking clip generally compare well with each other, particularly when the walking pattern is in synchronisation, and fairly well with the frames of both jumping clips in which the figure is on the ground. It can also be seen that the running clip does not compare well with any of the others. The calculation of the matrix is quite computationally expensive. For a database of a few hundred frames, it can take anywhere from several minutes to over an hour, depending on the number of points in the super-cloud. For this reason, the calculation was parallelised using OpenMP. This was a straightforward process, since the calculation is data parallel, i.e. each value in the matrix can be calculated independently of the others. Functionality was later added to store the matrix corresponding to a particular series of clips in a file.

Figure 2.2: Frame shown without and with point cloud.

2.3 Finding Transition Points

Having calculated the difference matrix, it became possible to find the transition points. This was done by simply finding the local minima of the 2D error function represented by the difference matrix. The local minima whose values fell below a particular error threshold became the transition points (see figure 2.5). Predictably, most of the transitions operate between in-sync walking frames, i.e. ones in which the figure is at the same point in the walking cycle. A comparatively small number of transitions exist between the walking and jumping frames and none exist between the running frames and any of the others. Figures 2.6 to 2.9 show both frames in a selection of transitions marked in figure 2.5. With the exception of figure 2.6, these all correspond to relatively low

Figure 2.3: Anterior and posterior super-clouds of the avatar while running (top) and jumping (bottom).

quality transitions and would be among the first to be culled if the threshold were lowered. It should also be noted that these poses correspond to frames at opposite ends of a transition sequence lasting roughly a third of a second. Nevertheless, they all look similar enough to be interpolable, though the transition shown in figure 2.7 would be problematic, as the frames are not in sync with respect to the figure's walking cycle. Extracting motion using this transition would cause the avatar to perform a skipping motion and would likely also cause planted feet to slide along the ground. In addition to the error threshold, it was found that imposing a locality threshold, i.e. throwing away transitions that were within a certain number of frames of a higher quality transition, was useful in that it reduced the complexity

of the graph by eliminating practically redundant transitions.

Figure 2.4: Example difference matrix for clips in which the figure walks, runs, jumps up and jumps forward.

2.4 Aside on Motion Graphs

Motion graphs are a somewhat counter-intuitive idea. The reason for this is that it is natural to think of the transitions as arcs when they are, in fact, nodes. To illustrate this, consider two motion sequences that have been converted into a motion graph, with a transition connecting the final frame of each sequence to the first frame of its counterpart and two more transitions mutually connecting two frames somewhere in the middle of each sequence. This is shown in an intuitive way in figure 2.10. In the motion graph for this sequence, the transitions are represented by nodes and arcs represent motion data sequences, as in figure 2.11.

Figure 2.5: Matrix with transitions marked with red x's. Poses corresponding to the labelled transitions are shown side by side in figures 2.6 to 2.9.

The structure of the graph is not intuitively obvious from the relationship of the clips and transitions. It should be noted that arcs exist where a sequence of original motion connects the head frame of one transition to the tail frame of another. Hence, the nodes that are adjacent to a particular node can be found by searching for the transitions whose tail comes after the head of the first node in a particular clip. This observation is important, as it allows the representation of the graph to consist only of nodes. The arcs, of which there are many more in a large, well-connected graph (roughly n²/2, where n is the number of nodes), are implicit and no information about them need be stored in memory.
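This implicit-arc representation can be sketched as follows, assuming each node records which clip and frame its transition leaves from and lands in. The field and function names are hypothetical:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// A transition node: playback leaves clip `fromClip` at `fromFrame` and
// resumes in clip `toClip` at `toFrame` (names illustrative).
struct TransitionNode {
    int fromClip, fromFrame;
    int toClip, toFrame;
};

// The arcs are implicit: after taking transition n the avatar is in n.toClip
// at n.toFrame, so its successors are exactly the transitions that leave that
// clip at or after that frame. No arc list needs to be stored.
std::vector<std::size_t> successors(const std::vector<TransitionNode>& nodes,
                                    std::size_t n) {
    std::vector<std::size_t> out;
    for (std::size_t i = 0; i < nodes.size(); ++i)
        if (nodes[i].fromClip == nodes[n].toClip &&
            nodes[i].fromFrame >= nodes[n].toFrame)
            out.push_back(i);
    return out;
}
```

A linear scan per adjacency query is shown for clarity; sorting the nodes by clip and frame would allow the successors to be found with a binary search instead.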

Figure 2.6: Comparison of poses corresponding to the transition labelled 'a' in figure 2.5.

Figure 2.7: Comparison of poses corresponding to the transition labelled 'b' in figure 2.5.

2.5 Tarjan's SCC Algorithm

A potential problem with extracting motion from the graph is that it may contain dead ends, which are nodes with no outgoing edges from which the avatar cannot advance, and sinks, which are regions of the graph that cannot be exited once entered. Both of these would cause problems when extracting motion, as they would make certain pairs of nodes impossible to connect through a graph walk. Depending on the technique used to extract motion, it may be possible to work around dead-ends and sinks. For example,

Figure 2.8: Comparison of poses corresponding to the transition labelled 'c' in figure 2.5.

Figure 2.9: Comparison of poses corresponding to the transition labelled 'd' in figure 2.5.

if graph walks were to be found using a depth-first search, as they will be in the next section, a careful choice of start and end nodes for the walk would allow the user to avoid dead-ends and sinks, though this is exactly the kind of low-level user involvement the system was designed to eliminate in the first place. If graph walks were generated on the fly or in a piecemeal way according to complex high-level constraints, dead-ends and sinks would always be a potential problem. In order to ensure that the graph contained no sinks or dead-ends, the graph was pruned in the manner suggested by Kovar et al.[1], by computing the largest

Figure 2.10: Intuitive representation of a situation in which two motion sequences, represented by grey arrows, are connected by transitions, shown in red. Frames that are connected by transitions are numbered. Transitions are labelled with letters.

strongly connected component (SCC), i.e. the largest subgraph of mutually accessible nodes, and deleting any nodes that were not part of this component (footnote 2). This was done using an algorithm invented by Robert Tarjan[19]. The basic idea of the algorithm is to discover nodes in a depth-first manner, starting from an arbitrarily chosen node and recursively branching out to adjacent nodes to form a tree structure. Note that all nodes that are discovered must be reachable from the start node. The nodes are indexed in the order in which they are found. The algorithm keeps track of the lowest-indexed node that has been discovered in each node's sub-tree and is, hence, reachable from that node. When there are no undiscovered nodes adjacent to any of the discovered ones, all nodes from which the starting node is reachable must form an SCC. The other components can then be found by repeating the process with a starting node that has not been identified as being part of an SCC. Pseudocode for this algorithm is shown below in code 6.1. In this manner, the graph's SCCs were found and all but the largest were removed. This guaranteed that every node in the graph was accessible from

[Footnote 2: The actual implementation allows for the fact that the original sequences may have tags specifying their motion type and that all nodes of a particular type should be strongly connected. This feature has not been used so far, though it may be required in the future (see section 4.1).]

every other node and that the avatar would never get stuck at a dead end or in a cycle.

Figure 2.11: The motion graph corresponding to the situation shown in figure 2.10. Transitions are now shown as nodes. Arcs represent sequences of motion data and are labelled according to the frames they connect, as numbered in figure 2.10.

Kovar et al.[1] suggest that the choice of threshold may have to be revised here if the original graph turns out not to be well connected enough for a significant number of nodes to survive pruning. It was found, however, that the largest SCC generally comprises most of the graph and that its relative size is stable across a range of threshold values (see figure 2.12). It is possible that certain input sequences are more sensitive to the threshold in this regard than others.

Figure 2.12: The size of the original graph (blue) and the largest strongly connected component (red) shown against error threshold on a logarithmic scale.

Code 6.1: Pseudocode for the SCC algorithm, as given by Tarjan[19].

    BEGIN
      INTEGER i;
      PROCEDURE STRONGCONNECT(v);
      BEGIN
        LOWLINK(v) := NUMBER(v) := i := i + 1;
        put v on stack of points;
        FOR w in the adjacency list of v DO
        BEGIN
          IF w is not yet numbered THEN
          BEGIN
            comment (v, w) is a tree arc;
            STRONGCONNECT(w);
            LOWLINK(v) := min(LOWLINK(v), LOWLINK(w));
          END
          ELSE IF NUMBER(w) < NUMBER(v) THEN
          BEGIN
            comment (v, w) is a frond or cross-link;
            IF w is on stack of points THEN
              LOWLINK(v) := min(LOWLINK(v), NUMBER(w));
          END;
        END;
        IF LOWLINK(v) = NUMBER(v) THEN
        BEGIN
          comment v is the root of a component;
          start new strongly connected component;
          WHILE w on top of point stack satisfies NUMBER(w) >= NUMBER(v) DO
            delete w from point stack and put w in current component;
        END
      END;
      i := 0;

      empty stack of points;
      FOR w a vertex DO
        IF w is not yet numbered THEN STRONGCONNECT(w);
    END;

2.6 Extracting Motion

2.6.1 Application of Branch and Bound and choice of cost function

In order to extract motion from the graph, a graph walk must first be chosen. To this end, a branch and bound algorithm was implemented. The algorithm used the standard technique of converting the graph into a tree structure with an arbitrarily chosen starting node at the root. A cost is associated with each arc and, once the goal has been found along one branch, all paths with a total cost greater than the path to the goal are disregarded. The algorithm also used an extended list, which guarantees that each node will only be extended by the lowest-cost path to it.

A major issue discussed by Kovar et al.[1] is the form that the cost function used in the branch and bound algorithm should take. The cost function plays a vital role in path synthesis (see section 4.1). The impact of using different cost functions is discussed further in section 3.2.

2.6.2 Constructing novel motion sequences

With a graph walk chosen, it became possible to extract novel motion sequences from the graph. This was done by applying the relevant 2D rigid body transformations (see section 2.2) to the various clips. Since the clips form a sequence, the transformations must be applied successively, such that each new clip is transformed into the frame of the old one. Essentially, this means the total transformation has to be tracked as the sequence progresses. As well as lining up the original motion clips, the transition sequences themselves must be constructed by interpolating the root position and joint angles of the frames that fall in the short window at the end of one clip and the start of the subsequent one.
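The branch and bound search with an extended list, as described in section 2.6.1, can be sketched as below. The arc table and the cost-function interface are hypothetical stand-ins for the actual implementation; the incumbent-cost bound is what discards paths that cannot beat the best goal path found so far:

```python
import heapq


def branch_and_bound(arcs, cost_fn, start, goal):
    """Find a minimum-cost graph walk from `start` to `goal`.

    `arcs` maps each node to its outgoing arc targets;
    `cost_fn(v, w)` gives the cost of the arc from v to w.
    An extended list ensures each node is only extended by the
    lowest-cost path found to it.
    """
    frontier = [(0.0, [start])]          # (total cost, path) min-heap
    extended = set()
    best_cost, best_path = float("inf"), None
    while frontier:
        cost, path = heapq.heappop(frontier)
        if cost >= best_cost:            # bound: cannot beat the incumbent
            break
        node = path[-1]
        if node == goal:                 # record the goal path, keep bounding
            best_cost, best_path = cost, path
            continue
        if node in extended:             # already extended by a cheaper path
            continue
        extended.add(node)
        for w in arcs.get(node, []):
            heapq.heappush(frontier, (cost + cost_fn(node, w), path + [w]))
    return best_path, best_cost
```

With non-negative arc costs the frontier is popped in order of total cost, so the first goal path found is already optimal and the bound terminates the search shortly afterwards.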
Examples of extracted motion sequences are shown in figures 2.13 to 2.15. It cannot be seen in the images but, in the vast majority of cases, it is impossible to judge by eye whether or not the avatar is in a transition sequence.
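Assuming the 2D rigid body transformations are represented as a rotation about the vertical axis plus a ground-plane translation (an assumption made for illustration; the actual representation may differ), the successive composition described in section 2.6.2, and the per-frame blending over the transition window, might be sketched as:

```python
import math


def compose(t1, t2):
    """Compose 2D rigid transforms t = (theta, tx, tz): apply t2 in t1's frame."""
    th1, x1, z1 = t1
    th2, x2, z2 = t2
    c, s = math.cos(th1), math.sin(th1)
    return (th1 + th2, x1 + c * x2 - s * z2, z1 + s * x2 + c * z2)


def align_walk(clip_transforms):
    """Accumulate per-clip alignment transforms along a graph walk."""
    total = (0.0, 0.0, 0.0)              # identity: no rotation, no translation
    out = []
    for t in clip_transforms:
        total = compose(total, t)        # each clip is placed in the frame of the last
        out.append(total)
    return out


def blend(a, b, u):
    """Linearly interpolate corresponding root/joint values, u in [0, 1]."""
    return [(1 - u) * x + u * y for x, y in zip(a, b)]
```

In practice joint angles would be interpolated with quaternion or angle-aware blending rather than plain linear interpolation; the sketch only shows the bookkeeping of the running total transform.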

Figure 2.13: Example of an extracted motion sequence.

Figure 2.14: Example of an extracted motion sequence.

Figure 2.15: Example of an extracted motion sequence.

Chapter 3

Evaluation

3.1 Effect of error threshold

The value of the error threshold can have a large impact on the complexity of the motion graph. We have already seen this in figure 2.12, which shows the number of nodes in a graph plotted against the error threshold on a logarithmic scale and resembles a sigmoid function. The number of nodes tends to zero when the threshold is low and to some maximal value when the threshold is high, as would be expected. In this case, the practically useful range of threshold values is roughly between 10³ and 10⁴, as the value of the distance metric for most of the transitions falls in this range.

The error threshold also has a qualitative effect on the kinds of motion that can be extracted. In the case of the database with four different motion types, transitions existed between the walking, jumping-up and jumping-forward frames when the threshold was above a certain value. Below this value, only walking was extractable from the database. This indicates that it may be appropriate to apply different thresholds to transitions between different motion types, as well as highlighting the advantage of tagging frames in the manner described by Kovar et al.[1]. Figure 3.1 shows the transition points marked on the difference matrix after the application of decreasing error thresholds. It can be seen that transitions gradually die out in all but the bottom-left region, which corresponds to walking-to-walking transitions.

In order to examine the effect of the error threshold in more quantitative terms, the difference between the avatar's direction of motion and the orientation of its root node while walking was measured. The reason for measuring this difference is that it is expected that, when the avatar is walking forwards, the orientation of the root node should be roughly aligned with the direction of motion.
If it is not, then the reason is likely to be that the avatar is in a low-quality transition in which the interpolation is causing inconsistencies in the motion.

The measurements were made by first averaging the vector difference between the position of the root node on a particular frame and its position on each of the ten subsequent frames, to give a stable approximation of the direction of motion. The angle this average vector made with the x-axis was then calculated. The RMS difference between this angle and the angle the root node made with the x-axis on the frame was then computed over a large number of frames. This was done for over 460 frames of an original walking sequence and for walking transition sequences generated with a range of threshold values. The results are shown in figure 3.2. It can be seen that the metric does indeed increase as the error threshold is raised, albeit slightly. Interestingly, the values are generally lower for the transition sequences than they are for the original data. This could be because the interpolation, which is essentially similar to averaging, has the effect of reducing the variation in root orientation.

Figure 3.1: Transitions marked on the difference matrix for decreasing error thresholds.

3.2 Length of Graph Walk

As mentioned in section 2.5, the form of the cost function is vitally important to the behaviour of the system. To illustrate this point, a selection of very simple cost functions was implemented and their effect on the complexity of the resulting graph walks examined.

Probably the most straightforward form of cost function is one that penalises all arcs equally. This will lead to a graph walk containing as few transitions as possible, which may be desirable if we wish to preserve the original motion clips as much as possible. Another candidate is the distance metric of the target node. Using this metric means that the graph walk will tend to consist of higher-quality transitions. The cost function can also be randomly generated. This leads to graph walks that are essentially random and non-repeating. The probability of such a graph walk is inversely proportional to its length. This allows for some longer and more interesting sequences.
Another option is to use a hybrid of these, where a random number is multiplied by the distance metric to give the cost of each arc.
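The four forms of cost function above can be written as interchangeable callables. The `Arc` object with a `distance` attribute is a hypothetical stand-in for however the implementation stores a transition's distance metric:

```python
import random


def constant_cost(arc):
    """Penalise every arc equally: favours walks with few transitions."""
    return 1.0


def distance_cost(arc):
    """Use the transition's distance metric: favours high-quality transitions."""
    return arc.distance


def random_cost(arc):
    """Random cost: yields essentially random, non-repeating walks."""
    return random.random()


def hybrid_cost(arc):
    """Random weight times the distance metric, as in the hybrid scheme."""
    return random.random() * arc.distance
```

Because the graph search only ever evaluates the cost function on arcs, swapping one callable for another changes the character of the extracted walks without touching the search itself.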

Figure 3.2: Self-consistency metric of transition sequences shown against error threshold on a logarithmic scale. The dotted line shows the value of the metric for the original motion sequence.

The distribution of the lengths of one hundred graph walks produced using each of these cost functions is shown in figure 3.3. It can be seen that the random cost function does indeed lead to a roughly inverse relationship between frequency and length. The constant cost function simply peaks somewhere between two and three nodes and falls to zero at five nodes, suggesting that every pair of nodes on the graph can be connected by three or fewer arcs (the number of arcs in a graph walk being one less than the number of nodes), with most pairs of nodes being connected directly. Most interestingly, the hybrid cost function tends to produce graph walks that are a few nodes longer than those of the other forms of cost function. The likely explanation for this is that the randomness tends to force the search algorithm into areas of the graph it would not normally enter, while the additional imperative to avoid low-quality transitions causes the graph walks to meander slightly more than the purely random ones.

Figure 3.3: Distribution of graph walk lengths for each of the four discussed cost functions.

Chapter 4

Discussion

To summarise, a program was written, based on the seminal work by Kovar et al.[1], that reads motion capture data in ASF/AMC format, displays its contents graphically, finds specific frames that can be interpolated, procedurally constructs a motion graph from a subset of these frames, finds graph walks that minimise a certain cost function and outputs novel motion sequences by blending together the clips that make up the graph walk. At the start of the project, the ability to extract novel motion sequences in this way was deemed to be the baseline amount of progress to be achieved in order for the project to be considered a success.

Additionally, some insight has been gained into the factors that determine the qualities of the extracted motion. Firstly, we have seen that the choice of error threshold has an impact both on the quality of the transition sequences, in terms of preserving natural and realistic motion, and on the size of the graph and, hence, the diversity of the extractable motion sequences. The error threshold must be carefully chosen in order to balance these concerns. Secondly, we observe that the form of the cost function used in the graph search is critical. We have seen that the complexity of the extracted motions can be affected significantly by switching between fairly simple forms of cost function. Engineering a cost function that causes the motion sequences to conform to arbitrary user-level constraints is the key to making the system useful to an animator.

4.1 Future Work

In the coming months, there are several goals that I would like to achieve. Firstly, there is one final result in the Kovar et al. paper that has not yet been reproduced. In the final section, a cost function is devised that punishes deviation from a predefined path and rewards forward motion along it. The result of this is that the avatar roughly follows a path specified by the user.
This is referred to as path synthesis and replicating it will be my next goal.

With the paper's final result replicated, I will continue to move forward by taking some inspiration from the part of the paper that discusses potential applications of path synthesis. Specifically, I will aim to create a system whereby the user can interactively specify the avatar's direction of travel and, ideally, the type of motion the avatar is performing (e.g. walking, running, jumping, etc.) using motion tags. This will be done by continuously extending the graph walk according to a constantly changing set of user constraints, and will require that the graph search algorithm be modified such that it returns a reasonably low-cost path in a short amount of time, rather than a globally optimal path as it does now.

References

[1] L. Kovar et al., Motion Graphs, July 2002, ACM Transactions on Graphics (TOG), Proceedings of ACM SIGGRAPH 2002, vol. 21, iss. 3.

[2] J. Lasseter, Principles of Traditional Animation Applied to 3D Computer Animation, July 1987, Computer Graphics, vol. 21, num. 4.

[3] E. Catmull, A system for computer generated movies, 1972, ACM 72 Proceedings of the ACM annual conference, vol. 1.

[4] N. Burtnyk and M. Wein, Interactive Skeleton Techniques for Enhancing Motion Dynamics in Key Frame Animation, Oct 1976, Communications of the ACM, vol. 19, iss. 10.

[5] T.W. Calvert, J. Chapman, A. Patla, The integration of subjective and objective data in the animation of human movement, 1980, SIGGRAPH 80 Proceedings of the 7th annual conference on Computer graphics and interactive techniques.

[6] A. Witkin and Z. Popovic, Motion Warping, 1995, SIGGRAPH 95 Proceedings of the 22nd annual conference on Computer graphics and interactive techniques.

[7] M. Gleicher, Retargetting motion to new characters, 1998, SIGGRAPH 98 Proceedings of the 25th annual conference on Computer graphics and interactive techniques.

[8] D.J. Wiley and J.K. Hahn, Interpolation synthesis of articulated figure motion, Nov 1997, Computer Graphics and Applications, IEEE, vol. 17, iss. 6.

[9] C. Rose, M.F. Cohen, B. Bodenheimer, Verbs and adverbs: multidimensional motion interpolation, Sept 1998, Computer Graphics and Applications, IEEE, vol. 18, iss. 5.

[10] K. Pullen and C. Bregler, Animating by multi-level sampling, 2000, Computer Animation Proceedings.

[11] A. Galata, A. Cohn, D. Magee, D. Hogg, Modeling interaction using learnt qualitative spatio-temporal relations and variable length markov models, 2001, Computer Vision and Image Understanding, vol. 3.

[12] P. Glardon, R. Boulic, D. Thalmann, PCA-based Walking Engine using Motion Capture Data, June 2004, Computer Graphics International, Proceedings.

[13] V.B. Zordan, A. Majkowska, B. Chiu, M. Fast, Dynamic response for motion capture animation, 2005, SIGGRAPH 05 ACM SIGGRAPH 2005 Papers.

[14] J. Lee, K.H. Lee, Precomputing avatar behavior from human motion data, March 2006, Graphical Models, vol. 68, iss. 2.

[15] M. Gleicher, H.J. Shin, L. Kovar, A. Jepsen, Snap-together motion: assembling run-time animations, 2008, SIGGRAPH 08 ACM SIGGRAPH 2008 classes, no. 52.

[16] W.W. Armstrong, M.W. Green, A. Jepsen, The dynamics of articulated rigid bodies for purposes of animation, Dec 1985, The Visual Computer, vol. 1, iss. 4.

[17] CMU Graphics Lab Motion Capture Database, retrieved 12/05/2015.

[18] OpenGL, retrieved 12/05/2015.

[19] R. Tarjan, Depth-first search and linear graph algorithms, June 1972, SIAM Journal on Computing, vol. 1, iss. 2.


More information

Character Animation Seminar Report: Complementing Physics with Motion Capture

Character Animation Seminar Report: Complementing Physics with Motion Capture Character Animation Seminar Report: Complementing Physics with Motion Capture Stefan John 1, and Alexis Heloir 2 1 Saarland University, Computer Graphics Lab, Im Stadtwald Campus E 1 1, 66123 Saarbrücken,

More information

Measuring the Steps: Generating Action Transitions Between Locomotion Behaviours

Measuring the Steps: Generating Action Transitions Between Locomotion Behaviours Measuring the Steps: Generating Action Transitions Between Locomotion Behaviours Christos Mousas Paul Newbury Department of Informatics University of Sussex East Sussex, Brighton BN1 9QH Email: {c.mousas,

More information

Thiruvarangan Ramaraj CS525 Graphics & Scientific Visualization Spring 2007, Presentation I, February 28 th 2007, 14:10 15:00. Topic (Research Paper):

Thiruvarangan Ramaraj CS525 Graphics & Scientific Visualization Spring 2007, Presentation I, February 28 th 2007, 14:10 15:00. Topic (Research Paper): Thiruvarangan Ramaraj CS525 Graphics & Scientific Visualization Spring 2007, Presentation I, February 28 th 2007, 14:10 15:00 Topic (Research Paper): Jinxian Chai and Jessica K. Hodgins, Performance Animation

More information

Articulated Characters

Articulated Characters Articulated Characters Skeleton A skeleton is a framework of rigid body bones connected by articulated joints Used as an (invisible?) armature to position and orient geometry (usually surface triangles)

More information

TYPES OF PARAMETRIC MODELLING

TYPES OF PARAMETRIC MODELLING Y. Ikeda, C. M. Herr, D. Holzer, S. Kaijima, M. J. J. Kim. M, A, A, Schnabel (eds.), Emerging Experiences of in Past, the Past, Present Present and and Future Future of Digital of Digital Architecture,

More information

Kinematics: Intro. Kinematics is study of motion

Kinematics: Intro. Kinematics is study of motion Kinematics is study of motion Kinematics: Intro Concerned with mechanisms and how they transfer and transform motion Mechanisms can be machines, skeletons, etc. Important for CG since need to animate complex

More information

Collision Detection with Bounding Volume Hierarchies

Collision Detection with Bounding Volume Hierarchies Simulation in Computer Graphics Collision Detection with Bounding Volume Hierarchies Matthias Teschner Computer Science Department University of Freiburg Outline introduction bounding volumes BV hierarchies

More information

Breathing life into your applications: Animation with Qt 3D. Dr Sean Harmer Managing Director, KDAB (UK)

Breathing life into your applications: Animation with Qt 3D. Dr Sean Harmer Managing Director, KDAB (UK) Breathing life into your applications: Animation with Qt 3D Dr Sean Harmer Managing Director, KDAB (UK) sean.harmer@kdab.com Contents Overview of Animations in Qt 3D Simple Animations Skeletal Animations

More information

Directable Motion Texture Synthesis

Directable Motion Texture Synthesis Directable Motion Texture Synthesis A Thesis presented by Ashley Michelle Eden to Computer Science in partial fulfillment of the honors requirements for the degree of Bachelor of Arts Harvard College Cambridge,

More information

Doyle Spiral Circle Packings Animated

Doyle Spiral Circle Packings Animated Doyle Spiral Circle Packings Animated Alan Sutcliffe 4 Binfield Road Wokingham RG40 1SL, UK E-mail: nsutcliffe@ntlworld.com Abstract Doyle spiral circle packings are described. Two such packings illustrate

More information

A Responsiveness Metric for Controllable Characters Technical Report CS

A Responsiveness Metric for Controllable Characters Technical Report CS A Responsiveness Metric for Controllable Characters Technical Report CS05-50-0 Madhusudhanan Srinivasan Ronald A. Metoyer School of Electrical Engineering and Computer Science Oregon State University ρ

More information

Animation. CS 465 Lecture 22

Animation. CS 465 Lecture 22 Animation CS 465 Lecture 22 Animation Industry production process leading up to animation What animation is How animation works (very generally) Artistic process of animation Further topics in how it works

More information

Accurate 3D Face and Body Modeling from a Single Fixed Kinect

Accurate 3D Face and Body Modeling from a Single Fixed Kinect Accurate 3D Face and Body Modeling from a Single Fixed Kinect Ruizhe Wang*, Matthias Hernandez*, Jongmoo Choi, Gérard Medioni Computer Vision Lab, IRIS University of Southern California Abstract In this

More information

Kinematics and Orientations

Kinematics and Orientations Kinematics and Orientations Hierarchies Forward Kinematics Transformations (review) Euler angles Quaternions Yaw and evaluation function for assignment 2 Building a character Just translate, rotate, and

More information

Data-driven Approaches to Simulation (Motion Capture)

Data-driven Approaches to Simulation (Motion Capture) 1 Data-driven Approaches to Simulation (Motion Capture) Ting-Chun Sun tingchun.sun@usc.edu Preface The lecture slides [1] are made by Jessica Hodgins [2], who is a professor in Computer Science Department

More information

CSC 2529F Computer Animation Graduate Project -Collaborative Motion Graphs. Alex & Philipp Hertel

CSC 2529F Computer Animation Graduate Project -Collaborative Motion Graphs. Alex & Philipp Hertel CSC 2529F Computer Animation Graduate Project -Collaborative Motion Graphs Alex & Philipp Hertel April 15th, 2003 Introduction There has recently been much interest in using motion graphs as a means of

More information

Motion Editing with Data Glove

Motion Editing with Data Glove Motion Editing with Data Glove Wai-Chun Lam City University of Hong Kong 83 Tat Chee Ave Kowloon, Hong Kong email:jerrylam@cityu.edu.hk Feng Zou City University of Hong Kong 83 Tat Chee Ave Kowloon, Hong

More information

Combining PGMs and Discriminative Models for Upper Body Pose Detection

Combining PGMs and Discriminative Models for Upper Body Pose Detection Combining PGMs and Discriminative Models for Upper Body Pose Detection Gedas Bertasius May 30, 2014 1 Introduction In this project, I utilized probabilistic graphical models together with discriminative

More information

CS-184: Computer Graphics

CS-184: Computer Graphics CS-184: Computer Graphics Lecture #19: Motion Capture!!! Prof. James O Brien! University of California, Berkeley!! V2015-S-18-1.0 Today 1 18-MoCap.key - April 8, 2015 Motion Capture 2 2 18-MoCap.key -

More information

Let s start with occluding contours (or interior and exterior silhouettes), and look at image-space algorithms. A very simple technique is to render

Let s start with occluding contours (or interior and exterior silhouettes), and look at image-space algorithms. A very simple technique is to render 1 There are two major classes of algorithms for extracting most kinds of lines from 3D meshes. First, there are image-space algorithms that render something (such as a depth map or cosine-shaded model),

More information

Generating Different Realistic Humanoid Motion

Generating Different Realistic Humanoid Motion Generating Different Realistic Humanoid Motion Zhenbo Li,2,3, Yu Deng,2,3, and Hua Li,2,3 Key Lab. of Computer System and Architecture, Institute of Computing Technology, Chinese Academy of Sciences, Beijing

More information

This week. CENG 732 Computer Animation. Warping an Object. Warping an Object. 2D Grid Deformation. Warping an Object.

This week. CENG 732 Computer Animation. Warping an Object. Warping an Object. 2D Grid Deformation. Warping an Object. CENG 732 Computer Animation Spring 2006-2007 Week 4 Shape Deformation Animating Articulated Structures: Forward Kinematics/Inverse Kinematics This week Shape Deformation FFD: Free Form Deformation Hierarchical

More information

Skeletal similarity based automatic joint mapping for performance animation

Skeletal similarity based automatic joint mapping for performance animation Skeletal similarity based automatic joint mapping for performance animation Author: Steven Weijden, 3683591 Supervisor: Dr. Nicolas Pronost March 2014 MSc Game and Media Technology Utrecht University Abstract

More information

Modeling of Humanoid Systems Using Deductive Approach

Modeling of Humanoid Systems Using Deductive Approach INFOTEH-JAHORINA Vol. 12, March 2013. Modeling of Humanoid Systems Using Deductive Approach Miloš D Jovanović Robotics laboratory Mihailo Pupin Institute Belgrade, Serbia milos.jovanovic@pupin.rs Veljko

More information

(Refer Slide Time: 1:40)

(Refer Slide Time: 1:40) Computer Architecture Prof. Anshul Kumar Department of Computer Science and Engineering, Indian Institute of Technology, Delhi Lecture - 3 Instruction Set Architecture - 1 Today I will start discussion

More information

Synthesizing Realistic Facial Expressions from Photographs

Synthesizing Realistic Facial Expressions from Photographs Synthesizing Realistic Facial Expressions from Photographs 1998 F. Pighin, J Hecker, D. Lischinskiy, R. Szeliskiz and D. H. Salesin University of Washington, The Hebrew University Microsoft Research 1

More information

Lecture VI: Constraints and Controllers. Parts Based on Erin Catto s Box2D Tutorial

Lecture VI: Constraints and Controllers. Parts Based on Erin Catto s Box2D Tutorial Lecture VI: Constraints and Controllers Parts Based on Erin Catto s Box2D Tutorial Motion Constraints In practice, no rigid body is free to move around on its own. Movement is constrained: wheels on a

More information

Lecture VI: Constraints and Controllers

Lecture VI: Constraints and Controllers Lecture VI: Constraints and Controllers Motion Constraints In practice, no rigid body is free to move around on its own. Movement is constrained: wheels on a chair human body parts trigger of a gun opening

More information

Particle Swarm Optimization applied to Pattern Recognition

Particle Swarm Optimization applied to Pattern Recognition Particle Swarm Optimization applied to Pattern Recognition by Abel Mengistu Advisor: Dr. Raheel Ahmad CS Senior Research 2011 Manchester College May, 2011-1 - Table of Contents Introduction... - 3 - Objectives...

More information

CSE452 Computer Graphics

CSE452 Computer Graphics CSE452 Computer Graphics Lecture 19: From Morphing To Animation Capturing and Animating Skin Deformation in Human Motion, Park and Hodgins, SIGGRAPH 2006 CSE452 Lecture 19: From Morphing to Animation 1

More information

Visual Recognition: Image Formation

Visual Recognition: Image Formation Visual Recognition: Image Formation Raquel Urtasun TTI Chicago Jan 5, 2012 Raquel Urtasun (TTI-C) Visual Recognition Jan 5, 2012 1 / 61 Today s lecture... Fundamentals of image formation You should know

More information

Principles of Computer Game Design and Implementation. Revision Lecture

Principles of Computer Game Design and Implementation. Revision Lecture Principles of Computer Game Design and Implementation Revision Lecture Introduction Brief history; game genres Game structure A series of interesting choices Series of convexities Variable difficulty increase

More information

Robots are built to accomplish complex and difficult tasks that require highly non-linear motions.

Robots are built to accomplish complex and difficult tasks that require highly non-linear motions. Path and Trajectory specification Robots are built to accomplish complex and difficult tasks that require highly non-linear motions. Specifying the desired motion to achieve a specified goal is often a

More information

TEAM 12: TERMANATOR PROJECT PROPOSAL. TEAM MEMBERS: Donald Eng Rodrigo Ipince Kevin Luu

TEAM 12: TERMANATOR PROJECT PROPOSAL. TEAM MEMBERS: Donald Eng Rodrigo Ipince Kevin Luu TEAM 12: TERMANATOR PROJECT PROPOSAL TEAM MEMBERS: Donald Eng Rodrigo Ipince Kevin Luu 1. INTRODUCTION: This project involves the design and implementation of a unique, first-person shooting game. The

More information

Using surface markings to enhance accuracy and stability of object perception in graphic displays

Using surface markings to enhance accuracy and stability of object perception in graphic displays Using surface markings to enhance accuracy and stability of object perception in graphic displays Roger A. Browse a,b, James C. Rodger a, and Robert A. Adderley a a Department of Computing and Information

More information

CSC 2504F Computer Graphics Graduate Project -Motion From Primitives. Alex & Philipp Hertel

CSC 2504F Computer Graphics Graduate Project -Motion From Primitives. Alex & Philipp Hertel CSC 2504F Computer Graphics Graduate Project -Motion From Primitives Alex & Philipp Hertel December 2, 2002 Introduction Most partner dances such as Salsa, Swing, Cha Cha, Merengue, and Lindy Hop are danced

More information

Tutorial 1: Welded Frame - Problem Description

Tutorial 1: Welded Frame - Problem Description Tutorial 1: Welded Frame - Problem Description Introduction In this first tutorial, we will analyse a simple frame: firstly as a welded frame, and secondly as a pin jointed truss. In each case, we will

More information

3D Programming. 3D Programming Concepts. Outline. 3D Concepts. 3D Concepts -- Coordinate Systems. 3D Concepts Displaying 3D Models

3D Programming. 3D Programming Concepts. Outline. 3D Concepts. 3D Concepts -- Coordinate Systems. 3D Concepts Displaying 3D Models 3D Programming Concepts Outline 3D Concepts Displaying 3D Models 3D Programming CS 4390 3D Computer 1 2 3D Concepts 3D Model is a 3D simulation of an object. Coordinate Systems 3D Models 3D Shapes 3D Concepts

More information

coding of various parts showing different features, the possibility of rotation or of hiding covering parts of the object's surface to gain an insight

coding of various parts showing different features, the possibility of rotation or of hiding covering parts of the object's surface to gain an insight Three-Dimensional Object Reconstruction from Layered Spatial Data Michael Dangl and Robert Sablatnig Vienna University of Technology, Institute of Computer Aided Automation, Pattern Recognition and Image

More information

Animation Lecture 10 Slide Fall 2003

Animation Lecture 10 Slide Fall 2003 Animation Lecture 10 Slide 1 6.837 Fall 2003 Conventional Animation Draw each frame of the animation great control tedious Reduce burden with cel animation layer keyframe inbetween cel panoramas (Disney

More information

RINGS : A Technique for Visualizing Large Hierarchies

RINGS : A Technique for Visualizing Large Hierarchies RINGS : A Technique for Visualizing Large Hierarchies Soon Tee Teoh and Kwan-Liu Ma Computer Science Department, University of California, Davis {teoh, ma}@cs.ucdavis.edu Abstract. We present RINGS, a

More information

Modeling Physically Simulated Characters with Motion Networks

Modeling Physically Simulated Characters with Motion Networks In Proceedings of Motion In Games (MIG), Rennes, France, 2012 Modeling Physically Simulated Characters with Motion Networks Robert Backman and Marcelo Kallmann University of California Merced Abstract.

More information

An object in 3D space

An object in 3D space An object in 3D space An object's viewpoint Every Alice object has a viewpoint. The viewpoint of an object is determined by: The position of the object in 3D space. The orientation of the object relative

More information

FOOTPRINT-DRIVEN LOCOMOTION COMPOSITION

FOOTPRINT-DRIVEN LOCOMOTION COMPOSITION FOOTPRINT-DRIVEN LOCOMOTION COMPOSITION Christos Mousas 1,Paul Newbury 1, Christos-Nikolaos Anagnostopoulos 2 1 Department of Informatics, University of Sussex, Brighton BN1 9QJ, UK 2 Department of Cultural

More information

MOTION capture is a technique and a process that

MOTION capture is a technique and a process that JOURNAL OF L A TEX CLASS FILES, VOL. 6, NO. 1, JANUARY 2008 1 Automatic estimation of skeletal motion from optical motion capture data xxx, Member, IEEE, Abstract Utilization of motion capture techniques

More information

TIE Graph algorithms

TIE Graph algorithms TIE-20106 1 1 Graph algorithms This chapter discusses the data structure that is a collection of points (called nodes or vertices) and connections between them (called edges or arcs) a graph. The common

More information

D-Optimal Designs. Chapter 888. Introduction. D-Optimal Design Overview

D-Optimal Designs. Chapter 888. Introduction. D-Optimal Design Overview Chapter 888 Introduction This procedure generates D-optimal designs for multi-factor experiments with both quantitative and qualitative factors. The factors can have a mixed number of levels. For example,

More information

7 Modelling and Animating Human Figures. Chapter 7. Modelling and Animating Human Figures. Department of Computer Science and Engineering 7-1

7 Modelling and Animating Human Figures. Chapter 7. Modelling and Animating Human Figures. Department of Computer Science and Engineering 7-1 Modelling and Animating Human Figures 7-1 Introduction Modeling and animating an articulated figure is one of the most formidable tasks that an animator can be faced with. It is especially challenging

More information