MKM: A global framework for animating humans in virtual reality applications


Franck Multon 1,2, Richard Kulpa 1, Benoit Bideau 1
1 M2S, University Rennes 2, av. Charles Tillon, Rennes, France
2 Bunraku project, IRISA, Campus de Beaulieu, Rennes, France
fmulton@irisa.fr

Paper accepted in Presence in November 2007

Keywords: computer animation, virtual human, motion blending, motion adaptation.

Abstract

Virtual humans are increasingly used in VR applications, but their animation remains a challenge, especially when complex tasks must be carried out in interaction with the user. In many applications with virtual humans, credible virtual characters play a major role in presence. Motion editing techniques assume that natural laws are intrinsically encoded in prerecorded trajectories and that modifications can preserve them, leading to credible autonomous actors. However, complete knowledge of all the constraints is required to ensure continuity or to synchronize and blend the several actions necessary to achieve a given task. We propose a framework capable of performing these tasks in an interactive environment that can change at each frame, depending on the user's orders. This framework can animate dozens of characters in real-time under complex constraints, or hundreds of characters if only ground adaptation is performed. It offers the following capabilities: motion synchronization, blending, retargeting and adaptation, thanks to an enhanced inverse kinematics and kinetics solver. To evaluate this framework, we compared the motor behavior of subjects in real and virtual environments.

Introduction

Virtual reality (VR) generally implies animating believable human-like figures in order to interact with real users. Some VR applications require numerous characters moving through a virtual environment, such as a team of soldiers or a city in which users can drive cars. The animation of such characters raises several technical problems, including automatic motion retargeting, adaptation to the environment, and control. To solve those problems in interactive environments, dynamic models, even if they look promising (Hodgins et al. 1995), cannot be used because they generally entail a high computation cost incompatible with interactive environments. By contrast, descriptive models are quite fast because they are based on providing the average shape of angular trajectories for well-known motions. Despite their controllability and low computation time, they are generally limited to locomotion (Boulic, Magnenat-Thalmann & Thalmann 1990, Bruderlin & Calvert 1996) and seem difficult to extend to a generic animation model. Motion capture data have also been used to generate new motions thanks to a database of recorded trajectories. Motion graphs (Kovar & Gleicher 2002) were introduced in order to calculate all the possible transitions between postures recorded in such a database. After a quite long precomputation, this method allows a character to be driven interactively while satisfying complex tasks, such as controlling a boxer to punch various targets (Lee & Lee 2004). However, to ensure realism, this method requires recording a very large set of motions, leading to an enormous database in order to achieve a large variety of motions. Moreover, motions stored in the graph generally correspond to a specific skeleton that is not compatible with any kind of virtual human.
The same problem arises for methods based on PCA (Principal Component Analysis) (Safonova, Hodgins & Pollard 2004, Glardon, Boulic & Thalmann 2004, Forbes & Fiume 2005), machine learning (Hsu, Pulli & Popovic 2005) and style-based inverse kinematics (Grochow et al. 2004).

An alternative consists in editing captured motions. To this end, displacement maps are now widely used to realistically animate human-like figures with various sizes (Gleicher 1998, Choi et al. 2000) and constraints (Gleicher & Litwinowicz 1998, Lee & Shin 1999). Those techniques generally use inverse kinematics to solve space-time constraints at imposed frames. An iterative process is then used to ensure continuity, requiring knowledge of all the constraints over the entire animation sequence, which is impossible in interactive environments. Prioritized inverse kinematics and kinetics (Baerlocher & Boulic 2004, Le Callennec & Boulic 2004) make it possible to solve contradictory constraints and to rapidly animate one complete character with many constraints. To reduce computation time, other techniques generally use efficient inverse kinematics solvers dedicated either to parts of the skeleton (Tolani, Goswami & Badler 2000) or to the entire body (Lee & Shin 1999, Shin et al. 2001). Those methods, though more efficient, are limited to the control of joint positions without taking care of balance, dynamics and naturalness. Except for simple tasks, it is generally necessary to combine several different elementary motions, such as moving through a room while grasping various objects. A common method to solve this problem is motion blending (Kovar & Gleicher 2003, Mukai & Kuriyama 2005). This method calculates a weighted sum of several elementary trajectories according to priorities and the activation/deactivation of constraints. This approach requires dynamic time warping (Witkin & Kass 1995) because the time-scales and events of the elementary motions may not correspond, leading to visual artifacts. However, in VR applications, it is quite difficult to predict how the actions will change far in advance. Indeed, the user can request to start or stop motions at any time.
As a consequence, a dynamic representation of all the running actions is required; it was first modeled as a stack of actions (Boulic et al. 1997). In this stack, the actions with higher priorities are placed on top. However, when the priority of an action placed at the bottom of the stack is increased, the whole stack must be restructured, leading to undesired computation time.

Commercial packages have been proposed in recent years to edit captured motions, dealing with motion blending and with adaptation to specific constraints and to various skeletons. In those packages, synchronization is ensured manually by the user. Hence, those packages are generally dedicated to off-line editing and cannot be used in interactive environments with complex constraints, such as those required in VR. We propose a framework that was designed to deal with all of those processes in interactive environments by offering an efficient motion representation and constraint solvers. It also provides intuitive and easy-to-use synchronization and blending algorithms. Not only must such a system be easy to use, but it should also make users react as in real situations, to improve presence. We thus present and recall some results obtained with our system for the duel between real handball goalkeepers and virtual opponents.

Overview

The entire framework is based on a representation of motion that is independent from morphology, leading to very efficient motion retargeting techniques. This representation, presented in (Kulpa, Multon & Arnaldi 2005), relies on both Cartesian and angular data and avoids costly inverse kinematics algorithms. The overall framework is organized as follows (see Figure 1):

Figure 1. Overview of the framework, allowing motion retargeting, synchronization, blending and adaptation to the environment.

- A synchronization module, whose task is to time-scale (dynamic time warping) each selected motion in order to make it compatible with the other ones for motion blending,
- A method to adapt postures to every kind of human-like figure, based on the representation of posture that is independent from morphology,
- An easy-to-use motion blending algorithm simply driven by priorities (ranging from 0 to infinity) and states (beginning, active, stopping, inactive) while automatically ensuring continuity,
- A novel inverse kinematics and kinetics solver that makes it possible to adapt the resulting posture to constraints (either associated with the motions or defined interactively by the user) that are activated and deactivated continuously to avoid discontinuities in the resulting motion.

As stated above, this whole system is based on a motion representation that allows many enhancements. In Figure 1, Param(t) is filled in with values coming from this morphology-independent representation. Param(t) is provided by each motion for each time step. The skeleton adaptation module only scales those values by the dimensions of the human-like figure to be animated (yielding L Param(t)). We recall that L Param(t) does not contain any direct information on intermediate joints (such as knees, elbows and vertebrae), which are only calculated in the posture conversion module just before rendering. Let us now recall this representation.

Motion representation

The motion is stored using a normalized representation of the skeleton and a set of associated constraints (see Kulpa, Multon & Arnaldi 2005 for details). Those constraints are intrinsically linked to the motion, such as foot-contacts or the distance between the two hands in bimanual manipulations. Although those constraints are defined off-line, they are adapted and solved

in real-time in the virtual environment. Instead of storing joint angles, which are not easy to adapt to new skeletons (requiring iterative processes with many inverse kinematics calls), we have proposed to store dimensionless data that are independent of the character's dimensions. Thus, retargeting a motion to a new character without any other constraints is simply achieved by scaling those data by the new character's dimensions. Hence, many characters can use the same motion file without complex processes to adapt it to their morphology. Let us consider some details of this description. First, the human body is subdivided into kinematic subchains that describe parts of the skeleton (see Figure 2). Those kinematic chains are divided into three main parts, as described in (Kulpa, Multon & Arnaldi 2005):

- the normalized segments, composed of only one body segment (such as the hands, the feet, the hips, the clavicle and the scapula); each is stored as the Cartesian position of its extremity in the origin reference frame, divided by its length in the initial skeleton. As each normalized segment is stored this way, we can deal with very different proportions between segments. Hence, this is more suitable than applying angular trajectories, which intrinsically assume equivalent proportions between the original and the target skeletons.
- the limbs with variable length, which encode the upper and lower limbs; in this representation, the intermediate joints (elbows and knees) are not encoded because their position is directly linked to the character's anthropometric properties. Thus, retrieving the position of those intermediate joints for different proportions between segments amounts to searching for the intersection of two circles in a plane, which can be expressed analytically, as proposed in (Tolani, Goswami & Badler 2000).
- the spine, represented with a spline that can be subdivided into as many segments as desired in the real-time animation module.
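For the variable-length limbs, retrieving the intermediate joint for new segment lengths is exactly this circle-intersection problem. A minimal 2D sketch (function and variable names are ours, not part of MKM):

```python
import math

def elbow_position_2d(shoulder, wrist, l1, l2):
    """Intermediate joint (elbow) of a two-segment limb, retrieved as
    the intersection of two circles in the limb plane.  shoulder and
    wrist are (x, y) points; l1 and l2 are the segment lengths of the
    *target* skeleton.  Returns one of the two symmetric solutions
    (the other is its mirror across the shoulder-wrist axis).
    Assumes the wrist is not exactly at the shoulder."""
    sx, sy = shoulder
    wx, wy = wrist
    dx, dy = wx - sx, wy - sy
    dist = math.hypot(dx, dy)
    ux, uy = dx / dist, dy / dist          # unit axis direction
    d = min(dist, l1 + l2)                 # out of reach: extend fully
    # Foot of the perpendicular from the elbow onto the shoulder-wrist
    # axis, at distance a from the shoulder, with elevation h.
    a = (l1 * l1 - l2 * l2 + d * d) / (2.0 * d)
    h = math.sqrt(max(0.0, l1 * l1 - a * a))
    return (sx + a * ux - h * uy, sy + a * uy + h * ux)
```

The two symmetric solutions correspond to the two possible elbow positions; in practice the ambiguity is resolved with an extra parameter such as a swivel angle.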

Figure 2. New representation that is independent from the character's anthropometric dimensions.

In this representation, each point of the skeleton can be retrieved relative to the root position. Contrary to classical representations, the instantaneous orientation of the root is divided into two main components. The global orientation deals with the global direction of the motion. The local orientation is the additional rotation applied to the global orientation in order to obtain the actual root orientation. During a walk, it represents the pelvis oscillations around the global direction. In order to encode movements without taking anthropometric properties into account, the position of the root is normalized by the leg length. A motion is not limited to a sequence of postures but is also linked to intrinsic constraints, such as ensuring foot-contact with the ground or reaching targets in the environment. All those intrinsic constraints are designed off-line by a user and are stored with the sequence of postures. To model a constraint C i, several parameters are necessary:

C i = { CP i, T i, KC i, P i, S i }

The first parameter CP i is the constrained point. It is linked to a body segment and its position is defined using a 3D local offset from the root of this segment. The next parameter T i is the type of the constraint, among the following: distance between two points (which can equal zero for contacts), orientation of a body segment, and allowed/forbidden area. Depending on the

constraint, specific parameters must obviously be added, such as the desired position of the constraint or the dimensions of the restricted area. The next parameter KC i defines the kinematic chain associated with the constraint. It allows the user to specify the set of usable body segments for solving the constraint. For example, a constraint C 1 could be applied on the right hand and act on all the segments ranging from the hand to the abdomen. Another constraint C 2 could only involve the arm and the clavicle. The priority of the constraint, called P i, intuitively indicates the importance of a constraint compared to others. Constraints with low priorities are only verified after those with higher priorities. Finally, the user can start and stop constraints, leading to the computation of the parameter S i, which is the state of the constraint (ranging continuously from 0 for deactivated to 1 for fully activated). With this method it is quite simple to make the virtual character react to unpredictable user actions by simply tuning priorities. This problem is addressed in the next section. The data linked to the posture (using the representation presented above) and the set of constraints are stored together in a common structure called Param i (t) for motion M i.

Motion synchronization and blending

The main principle of this method is to perform a weighted sum of postures and constraints at each time step (the data are first scaled to fit the new character's dimensions in order to obtain an L Param(t) structure in which the intermediate joints, such as the knees, elbows and vertebrae, are still not calculated).
The main problems thus consist in:
- ensuring compatibility between all the motions selected for motion blending, by analyzing stances for each of them,
- calculating weights that take priorities and continuity into account,
- designing an easy-to-use blending algorithm for which a user just has to start and stop motions and provide the system with only one priority for each of them.
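As an illustration of the last two points, a simplified stand-in for the automatic weight computation (the real scheme of Ménardais et al. also accounts for the kinematic chain each motion acts on; all names below are ours):

```python
def blend_weights(motions):
    """motions: list of (priority, state) pairs, with state in [0, 1]
    (0 = inactive, 1 = fully active).  Returns normalized weights.
    A simplified stand-in for MKM's automatic weight computation."""
    raw = [priority * state for priority, state in motions]
    total = sum(raw)
    if total == 0.0:
        return [0.0] * len(motions)    # nothing active
    return [w / total for w in raw]

def blend_postures(postures, weights):
    """Weighted sum of morphology-independent posture vectors
    (lists of floats of equal length)."""
    n = len(postures[0])
    return [sum(w * p[i] for w, p in zip(weights, postures))
            for i in range(n)]
```

A motion's influence thus fades in and out smoothly with its state, and higher-priority motions dominate the sum.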

To this end, we have first chosen to synchronize motions according to stances, by calculating and applying dynamic time warping on each of them if required. Each type of stance is encoded as follows: NS for no-support phases, LS (resp. RS) for unipodal left-support (resp. right-support) phases, and DS for double-support phases. Blending two motions m 1 and m 2 with their corresponding stances s 1 and s 2 may lead to several different cases:
- if s 1 = s 2, the motions are compatible at this time and their combination leads to a motion with the same stance s b = s 1 = s 2 ;
- if an LS is blended with an RS, the result is an incompatibility, encoded with an impossible stance Err. Indeed, if a figure has only its left foot in contact with the ground, it cannot also have only its right foot in contact with the ground;
- if the two feet are in contact with the ground for m 1, we assume that everything is possible for m 2, resulting in a stance equal to s 2. Indeed, when both feet are in contact with the ground, the character can naturally jump (resulting in an NS phase) or lift a foot (resulting in an LS or RS phase).

An operator ⊕ was then introduced to extend those statements to n motions m 1..n. The result of this operator belongs to the set {NS, LS, RS, DS, Err}. Hence, the result s r obtained for the blending of n motions is:

s r = s 1 ⊕ s 2 ⊕ … ⊕ s n

If Err is the result, the motions are not compatible and dynamic time warping is required for some phases of at least one motion, as described in (Ménardais, Kulpa & Multon 2004). To take interactivity into account, we cannot apply this algebraic relation to the whole sequence but only to the next nk stances (a window sliding over the sequences of stances), assuming the current stance is number k. The following algorithm is then applied iteratively during real-time animation. In this algorithm, we assume that the motion was synchronized until step

k+nk. The next stance to be synchronized is s i (k+nk+1) for all motions M i. This last stance must ensure that the above relation still holds for stances j ∈ [k..k+nk+1]. If it does not, the system has to modify the time scale of one or more already synchronized stances s i (j), with j ∈ [k..k+nk]. Such a problem occurs only if at least one LS is associated with one RS; this is the only case that leads to Err. Hence, to solve this problem, we have to modify the time scales of the motions that exhibit either LS or RS at stance k+nk+1. We assume that motions with high priorities should be less affected by this process than those with low priorities. As a consequence, we search for the motion M j that has either LS or RS at stance k+nk+1 (denoted s j (k+nk+1)) with the highest priority. Then we change the time scale of the stances of all the remaining motions with lower priorities that exhibit the opposite stance (RS if s j (k+nk+1) is LS, and vice versa). Synchronizing the next stance k+nk+1 leads to the following algorithm:

St_result = s 1 (k+nk+1) ⊕ s 2 (k+nk+1) ⊕ … ⊕ s n (k+nk+1)
If St_result == Err   // only if there are both LS and RS
    // search for the motion with highest priority that exhibits either LS or RS
    // and modify all motions with lower priority that exhibit the opposite stance
    stance = ∅
    // we assume that motions are stored in increasing order of their priority
    For i = n downto 1
        If (stance == ∅) & (s i (k+nk+1) ∈ {LS, RS})
            stance = opposite of s i (k+nk+1)   // M j found
        else If (s i (k+nk+1) == stance)
            TimeScale(s i (k+nk))

        end If
    end For
end If

Figure 3 illustrates this process for three motions, with nk=1.

Figure 3. Synchronization of three motions that are not compatible.

In the example of Figure 3, only the first column of stances can be modified because nk=1. An error occurs at stance k+2. s 3 has the highest priority and an LS stance. As a consequence, we have to modify the time scale of all the motions that have a lower priority and an RS stance. We consequently enlarge s 2 (k+1), and no Err remains at stance k+2. This method has a limitation for highly dynamic motions. If the character is in double stance in motion M 1, we assume that everything is possible for motion M 2. But if motion M 1 represents the character landing on the ground after a fall, no motion M 2 other than absorbing the forces of impact is feasible for some duration t, and only then do new motions become possible. For such dynamic cases, the operator ⊕ should be extended. In practice, however, we did not encounter many problematic cases, even if they exist. Moreover, coupling this

synchronization module with the motion adaptation technique is coherent, because the latter is also limited to kinematic and kinetic constraints. Once all the motions to be blended have compatible stances, the system has to compute a weight for each motion depending on its priority, its state and the corresponding kinematic chain. Obviously, asking the user to tune those parameters manually is hardly possible, especially in a real-time environment. As a consequence, an automatic calculation of those weights is necessary (see Ménardais et al. 2004 for details). Contrary to motion blending classically applied to angular trajectories, this method is applied to the data structure L Param(t) presented above. The resulting structure must then be adapted to the environment, leading to a constraint-solving problem.

Constraints solver

All the geometric constraints (such as ensuring foot-contact with the ground without sliding) are solved in a common iterative process extended from the one proposed in (Shin et al. 2001). This process consists in decomposing the skeleton into groups for which analytical solutions are available, enhancing performance (see Kulpa, Multon & Arnaldi 2005 for details on the inverse kinematics module). One of the main points is the order in which groups are used to solve the kinematic constraints: from the lightest groups (the limbs) to the heaviest ones (the trunk). This minimizes the kinetic energy required to solve the constraints and leads to more realistic behaviors. Indeed, it would be totally unrealistic to bend the whole torso instead of simply moving the arms to catch an object placed below the pelvis. We now focus on how this method is adapted to deal with both kinematic and kinetic constraints. The inverse kinetics module is based on the same philosophy as the kinematic one: it consists in an iterative process based on analytic solutions for each required group.
Hence, as for inverse kinematics, each group has an analytical solution to displace the center of mass (denoted COM) in the desired direction.
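For a two-segment limb, this analytic solution can be sketched as follows; the closed form comes from applying the law of cosines to both the COM target distance and the shoulder-wrist distance, and all names are ours (the segment ratios and masses used here are introduced with Figure 4 below):

```python
import math

def limb_length_for_com(d, l1, l2, r1, r2, m1, m2):
    """Shoulder-wrist distance l' that places the COM of a two-segment
    limb at distance d from the shoulder.  l1, l2: segment lengths;
    r1, r2: per-segment COM ratios (from anthropometric tables);
    m1, m2: segment masses."""
    r3 = m2 / (m1 + m2)
    # The limb COM is a*u1 + b*u2, with u1, u2 the unit directions of
    # the upper and lower segments:
    a = (r1 + r3 * (1.0 - r1)) * l1
    b = r3 * r2 * l2
    if d >= a + b:                   # COM target out of reach:
        return l1 + l2               # fully extended limb
    # The same inter-segment angle theta appears in the law of cosines
    # for both the COM distance and the shoulder-wrist distance.
    cos_theta = (d * d - a * a - b * b) / (2.0 * a * b)
    cos_theta = max(-1.0, min(1.0, cos_theta))
    return math.sqrt(l1 * l1 + l2 * l2 + 2.0 * l1 * l2 * cos_theta)
```

The out-of-reach test corresponds to the fully extended limb case described below.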

Figure 4. COM 1 (resp. COM 2 ) is the COM position of segment s 1 (resp. s 2 ). COM g is the COM position of the limb.

Figure 4 shows the example of an arm composed of two segments whose lengths are l 1 and l 2 respectively. The COM position can be retrieved by first calculating the COM positions of the two segments in the current posture (COM 1 and COM 2 respectively). These positions are calculated from the proximal articulations (the shoulder for the upper arm, the elbow for the forearm) using two percentages r 1 and r 2 of the segment lengths. These ratios are provided by anthropometric tables (Zatsiorsky, Seluyanov & Chugunova 1990). Then the COM position of the whole limb is calculated using another ratio r 3 that depends on the masses of the two segments:

r 3 = m 2 / (m 1 + m 2 )

To place the COM at position COM g, an analytical solution is calculated. Let l be the distance between the proximal point (the shoulder) and the distal point (the wrist) of the limb. Changing the arm configuration can be summed up as two independent operations: a change of its extension (l in Figure 4) and a rotation of the limb (supposed rigid). Hence, to modify the limb's configuration in order to place the COM at the desired COM g, the solution consists in first calculating a new length l' according to the distance d between COM g and the shoulder (see Figure 4). If d is greater than r 3 [(r 2 l 2 + l 1 ) − r 1 l 1 ] + r 1 l 1, then the limb should be placed in its extended position, setting l' to the limb's total length. Otherwise, l' is given by:

l'² = (d² − F) / G

where G = a b / (l 1 l 2 ) and F = a² + b² − G (l 1 ² + l 2 ²), with a = [r 1 + r 3 (1 − r 1 )] l 1 and b = r 3 r 2 l 2 the components of the limb COM along the upper-arm and forearm directions. Knowing l', it is then simple to calculate the corresponding elbow flexion angle. The limb (with its new configuration) is then simply rotated around the shoulder in order to place the COM at the most convenient position, as a classical CCD algorithm would do. In this iterative inverse kinetics algorithm, we first have to determine the groups that can be requested in order to satisfy the COM position. The groups that are free of kinematic constraints can be requested for the kinetic adaptation. The user can then select only a subset of these free groups for adaptation. The selected groups are then considered according to their level in the group hierarchy, from the heaviest groups (such as the trunk) to the lightest ones (such as the arms). This strategy can be observed in trained humans, given that only a small movement of the heaviest mass makes the COM move significantly. On the contrary, displacing light masses would require large gestures of numerous body segments to obtain the same effect. One of the main problems in solving both kinematic and kinetic constraints is that the two processes may lead to opposite solutions, preventing the system from converging. In order to minimize the number of required body segments, the groups with the minimum mass are used first for inverse kinematics, while it is the contrary for inverse kinetics. In order to solve both kinematic (Km) and kinetic (Kn) constraints, the two methods are called in a global loop aiming at minimizing the two errors at the same time. This loop (see the following pseudo-code) converges to a solution which minimizes a tunable compromise of Km and Kn (see Figure 5).

it = 0
completed = false
Do
    postureKm = kinematicAdaptation()
    postureKn = kineticAdaptation()
    If ((ΔkmError k < thrKm) & (ΔknError k < thrKn))
        finalPosture = postureKm
        completed = true
    End If
While ((it++ < maxIt) & (not completed))

where maxIt is the maximum number of iterations, and ΔkmError k and ΔknError k are the variations of the errors in the kinematic and kinetic constraints respectively, compared against arbitrarily selected thresholds thrKm and thrKn. As a consequence, this algorithm ends when neither adaptation modifies the resulting posture any more. It allows the system to ensure that the kinetic constraints are verified even if the kinematic constraints are not completely verified, as other priority-based inverse kinematics methods do (Le Callennec & Boulic 2004). We assumed that kinetic constraints are imposed mainly to maintain balance, and that violating those constraints would lead to unrealistic postures. If the kinematic constraints cannot be verified concurrently, it means that they are not reachable in a realistic (balanced) posture, as shown in Figure 5.
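A runnable sketch of this convergence loop, with simple step functions standing in for the group-by-group kinematic and kinetic solvers (all names are ours):

```python
def solve(posture, kinematic_step, kinetic_step,
          thr_km=1e-4, thr_kn=1e-4, max_it=20):
    """Alternate kinematic and kinetic adaptation until neither step
    significantly changes the posture, or max_it is reached.  Each
    step function returns (new_posture, error_variation)."""
    for _ in range(max_it):
        posture, d_km = kinematic_step(posture)
        posture, d_kn = kinetic_step(posture)
        if d_km < thr_km and d_kn < thr_kn:
            break                     # both adaptations have settled
    return posture
```

In MKM the two steps process the skeleton groups in opposite mass orders, as explained above; here they are abstracted as black boxes that report how much they changed the posture.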

Figure 5. Several different postures with two unreachable constraints. On each screenshot, the semi-transparent character also drives its COM to its rest position (high weight for Kn compared to Km). The other character is computed with a high value of Km compared to Kn, neglecting the control of the COM.

However, this method does not take dynamics into account and may produce artifacts for dynamic motions, but it offers convincing results in many cases. Let us consider a classical use case in VR where a human carries more or less heavy objects. Thanks to the control of the center of mass, the system adapts the posture to the addition of those weights. However, it would not be able to control realistically jumps or other motions involving complex external forces (such as wind and collisions).

Framework application and testing in VR

MKM was used in several applications including videogames, e-learning and virtual reality. In this section, we describe the results obtained in a VR experiment involving a real handball goalkeeper who had to stop virtual throws animated with MKM. First, a set of motions was

captured thanks to a Vicon-MX motion capture system (Oxford Metrics) composed of 12 cameras sampled at 160 Hz. Reflective markers were placed over standardized anatomical landmarks in order to retrieve joint centers and the motion of all the body segments. The captured motions were: running with a ball, throwing at various places in a goal with two methods (with and without jumping), and rest motions. All those motions were encoded with the motion representation described in this paper, so as to animate easily and rapidly virtual skeletons with various dimensions (see Figure 6).

Figure 6. Motion capture of handball throws and animation of virtual players with different dimensions.

For all the motions, we specified the corresponding constraints: contact of the feet with the ground when necessary, and position of the wrist to which the ball was attached. After a complete biomechanical analysis of all the collected trajectories, a model of a handball thrower was designed, providing us with a motion for each kind of throw. This model is based on MKM and is controlled through additional constraints. Hence, several operators were proposed thanks to those constraints, such as changing the position of the wrist at ball release, changing the orientation of the trunk, and delaying the ball-release event. A first study demonstrated that the goalkeepers' gestures were similar when stopping virtual thrown balls animated either with raw motion capture data or with this model (Bideau et al. 2003). To

evaluate this similarity, we calculated the correlation between the arm's gestures (the trajectory of the arm's center of mass) of the goalkeeper in the two situations (real and virtual). We found correlations between 0.96 and 0.98 for the 8 subjects (professional male players), with a very small standard deviation for each subject. Another study showed that the goalkeeper's gestures were affected when at least one of the three above modifications was used (Bideau et al. 2004): the correlation decreased to about 0.80 (from 0.76 to 0.82 for the same 8 subjects). This result encourages the use of VR to study interactions between humans through interactions with virtual humans animated with MKM. Moreover, as the goalkeeper's gestures are captured, it is also possible to animate concurrently the goalkeeper and the thrower in a common virtual environment, leading to a very interesting investigation tool for trainers (see Figure 7).

Figure 7. The left view depicts a 3D visualization of the complete scene including thrower and goalkeeper (the gestures of the virtual goalkeeper were obtained through motion capture while the actual goalkeeper tried to stop the virtual throws, as in the right view).

Additional tests were carried out to evaluate the performance of MKM in animating several characters in real-time while verifying many constraints. Obviously, computation time depends on the number and the kind of constraints applied to the character. Hence, with only one reachable constraint, without dealing with the COM position, our system

requires 125 µs per character at each time step on a P-IV 2.8 GHz laptop computer. Conversely, for two unreachable constraints (which require more iterations) while controlling the center-of-mass position, it requires 1470 µs. As a consequence, on this computer up to 177 characters can be animated at 30 Hz. For all those examples, motion retargeting and ground adaptation were also performed.

Discussion

In this paper we have presented a framework that embeds original algorithms to control human-like figures in interactive environments. The first main contribution of this work is to provide a set of methods based on a representation of motion that is independent from morphology. Thanks to this representation, motion retargeting does not require classical inverse kinematics. Hence, only a minimal database of motions recorded with various actors is required to animate many different characters, without preprocessing such as motion retargeting. Moreover, in VR applications, autonomous characters need to modify their gestures according to the user's orders. Hence, instead of recording numerous motions for various situations, our framework enables using only one motion per type of interaction (such as grasping, displacing, punching, etc.). An efficient inverse kinematics and kinetics solver allows modifying the gestures in order to verify constraints that can change continuously in the virtual world. This solver enabled us to animate numerous different characters at 30 Hz with complex interactive constraints. As a consequence, this framework can be used in crowd simulation or to make several autonomous characters live in a virtual city. In this framework, constraints are very intuitive to design and control. In order to grasp objects that move in the virtual environment, the Cartesian constraint associated with the grasping motion is set to the target's position.
The priority of this constraint is increased continuously up to a maximum value when the object must be reached. The autonomous character can thus grasp objects in many positions and blend in other complex motions while

preserving balance thanks to the inverse kinetics solver. Moreover, whether the objects are heavy or light, the system can automatically adapt the posture in order to preserve balance by assuming that the mass of the object is added to that of the hand.

However, on the one hand, dynamics is not taken into account in this framework. As a consequence, nothing ensures that the resulting motion corresponds to realistic forces and torques for humans. On the other hand, the experiments carried out with sports experts demonstrate that the system is able to make them react realistically to virtual opponents animated with our framework. Nevertheless, it could be interesting to take dynamics into account in order to make the virtual characters react to external forces in a more convincing way. The main problem consists in finding a method whose computation time is low enough to fit the constraints of many VR applications.

To our knowledge, no other framework provides such functionalities (kinematic and kinetic constraint solving, motion retargeting, synchronization and blending) in an interactive environment. Procedural animations are limited to a small set of motions. Motion graphs require a huge database of motions and many manual and automatic preprocessing steps; they are also limited to reusing motions on characters that have the same anthropometric properties as the original actor. Spacetime constraints are not dedicated to interactive applications because they require knowing all the constraints in advance. Some commercial packages offer part of these functionalities, but they generally animate a single type of character or cannot deal with complex kinematic and kinetic constraints.

We have carried out preliminary experiments to verify whether our framework engenders plausible motor behaviors in real users. This work has to be extended to more complex situations because it is a fundamental problem in VR.
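This kind of evaluation, comparing a subject's gestures in the real and virtual conditions, can be sketched with a plain Pearson correlation between the two recorded trajectories (an assumed reconstruction of the metric described earlier; the authors' exact computation, e.g. per axis or on the 3D center-of-mass trajectory, may differ):

```python
# Illustrative sketch of the evaluation metric: Pearson correlation
# between two equal-length motion trajectories, e.g. one coordinate of
# the arm's center of mass sampled in the real and virtual conditions.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5
```

A coefficient near 1 (such as the 0.96 to 0.98 reported above) indicates that the subject moves almost identically in both conditions.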
If people do not react as in the real world, it may be a serious problem for many applications, such as education, training, or phobia treatment. Testing the motor behaviors of subjects in VR seems to us a very promising direction, and the methods described in this paper are promising for evaluating animation quality. However,

this method should be improved to take other parameters into account, not only motor behaviors.

Acknowledgements

The authors wish to thank the reviewers for their constructive comments and suggestions.

References

Baerlocher, P., & Boulic, R. (2004). An inverse kinematics architecture enforcing an arbitrary number of strict priority levels. Visual Computer, 20(6).
Bideau, B., Kulpa, R., Ménardais, S., Multon, F., Delamarche, P., & Arnaldi, B. (2003). Real handball keeper vs. virtual handball player: a case study. Presence, 12(4).
Bideau, B., Multon, F., Kulpa, R., Fradet, L., Arnaldi, B., & Delamarche, P. (2004). Virtual reality, a new tool to investigate anticipation skills: application to the goalkeeper and handball thrower duel. Neuroscience Letters, 372(1-2).
Boulic, R., Magnenat-Thalmann, N., & Thalmann, D. (1990). A global human walking model with real-time kinematic personification. Visual Computer, 6(6).
Boulic, R., Becheiraz, P., Emering, L., & Thalmann, D. (1997). Integration of motion control techniques for virtual humans and avatars real-time animation. Proceedings of ACM International Symposium VRST.
Bruderlin, A., & Calvert, T. (1996). Knowledge-driven, interactive animation of human running. Proceedings of Graphics Interface.
Choi, K.J., & Ko, H.S. (2000). Online motion retargeting. The Journal of Visualization and Computer Animation, 11(5).
Forbes, K., & Fiume, E. (2005). An efficient search algorithm for motion data using weighted PCA. Proceedings of ACM SIGGRAPH/Eurographics Symposium on Computer Animation.

Glardon, P., Boulic, R., & Thalmann, D. (2004). PCA-based walking engine using motion capture data. Proceedings of IEEE Computer Graphics International.
Gleicher, M. (1998). Retargetting motion to new characters. Proceedings of ACM SIGGRAPH.
Gleicher, M., & Litwinowicz, P. (1998). Constraint-based motion adaptation. Journal of Visualization and Computer Animation, 9(2).
Grochow, K., Martin, S.L., Hertzmann, A., & Popovic, Z. (2004). Style-based inverse kinematics. ACM Transactions on Graphics, 23(3).
Guo, S., & Roberge, J. (1996). A high-level control mechanism for human locomotion based on parametric frame space interpolation. Proceedings of the Eurographics Workshop on Computer Animation and Simulation.
Hodgins, J., Wooten, W., Brogan, D., & O'Brien, J. (1995). Animating human athletics. Proceedings of ACM SIGGRAPH.
Hsu, E., Pulli, K., & Popovic, J. (2005). Style translation for human motion. ACM Transactions on Graphics, 24(3).
Kovar, L., & Gleicher, M. (2002). Motion graphs. ACM Transactions on Graphics, 21(3).
Kovar, L., & Gleicher, M. (2003). Flexible automatic motion blending with registration curves. Proceedings of ACM SIGGRAPH/Eurographics Symposium on Computer Animation.
Kulpa, R., Multon, F., & Arnaldi, B. (2005). Morphology-independent representation of motions for interactive human-like animation. Computer Graphics Forum, 24(3).
Le Callennec, B., & Boulic, R. (2004). Interactive motion deformation with prioritized constraints. Proceedings of ACM SIGGRAPH/Eurographics Symposium on Computer Animation.

Lee, J., & Lee, K.H. (2004). Precomputing avatar behavior from human motion data. Proceedings of Eurographics/ACM SIGGRAPH Symposium on Computer Animation.
Lee, J., & Shin, S.Y. (1999). A hierarchical approach to interactive motion editing for human-like figures. Proceedings of ACM SIGGRAPH.
Ménardais, S., Multon, F., Kulpa, R., & Arnaldi, B. (2004). Motion blending for real-time animation while accounting for the environment. Proceedings of IEEE Computer Graphics International.
Ménardais, S., Kulpa, R., & Multon, F. (2004). Synchronization of interactively adapted motions. Proceedings of ACM SIGGRAPH/Eurographics Symposium on Computer Animation.
Mukai, T., & Kuriyama, S. (2005). Geostatistical motion interpolation. ACM Transactions on Graphics, 24(3).
Safonova, A., Hodgins, J., & Pollard, N. (2004). Synthesizing physically realistic human motion in low-dimensional, behavior-specific spaces. ACM Transactions on Graphics, 23(3).
Shin, H.J., Lee, J., Shin, S.Y., & Gleicher, M. (2001). Computer puppetry: an importance-based approach. ACM Transactions on Graphics, 20(2).
Tolani, D., Goswami, A., & Badler, N. (2000). Real-time inverse kinematics techniques for anthropomorphic limbs. Graphical Models, 62.
Wang, L.-C.T., & Chen, C.C. (1991). A combined optimization method for solving the inverse kinematics problem of mechanical manipulators. IEEE Transactions on Robotics and Automation, 7(4).
Witkin, A., & Popovic, Z. (1995). Motion warping. Proceedings of ACM SIGGRAPH.

Zatsiorsky, V., Seluyanov, V., & Chugunova, L.G. (1990). Methods of determining mass-inertial characteristics of human body segments. In: Contemporary Problems of Biomechanics. Moscow: Mir Publishers.

Figure captions

Figure 1. Overview of the framework, allowing motion retargeting, synchronization, blending and adaptation to the environment.
Figure 2. New representation that is independent of the character's anthropometric dimensions.
Figure 3. Synchronization of three motions that are not compatible.
Figure 4. COM1 (resp. COM2) is the COM position of segment s1 (resp. s2). COMg is the COM position of the limb.
Figure 5. Several different postures with two unreachable constraints. On each screenshot, the semi-transparent character also drives its COM to its rest position (high weight for Kn compared to Km). The other character was calculated with a high value of Km compared to Kn, neglecting the control of the COM.
Figure 6. Motion capture of handball throws and animation of virtual players with different dimensions.
Figure 7. The left view depicts a 3D visualization of the complete scene including thrower and goalkeeper (the gestures of the virtual goalkeeper were obtained thanks to motion capture when the actual goalkeeper tried to stop the virtual throws, such as in the right view).


More information

Simulation. x i. x i+1. degrees of freedom equations of motion. Newtonian laws gravity. ground contact forces

Simulation. x i. x i+1. degrees of freedom equations of motion. Newtonian laws gravity. ground contact forces Dynamic Controllers Simulation x i Newtonian laws gravity ground contact forces x i+1. x degrees of freedom equations of motion Simulation + Control x i Newtonian laws gravity ground contact forces internal

More information

Moving Beyond Ragdolls:

Moving Beyond Ragdolls: Moving Beyond Ragdolls: Generating Versatile Human Behaviors by Combining Motion Capture and Controlled Physical Simulation by Michael Mandel Carnegie Mellon University / Apple Computer mmandel@gmail.com

More information

8.7 Interactive Motion Correction and Object Manipulation

8.7 Interactive Motion Correction and Object Manipulation 8.7 Interactive Motion Correction and Object Manipulation 335 Interactive Motion Correction and Object Manipulation Ari Shapiro University of California, Los Angeles Marcelo Kallmann Computer Graphics

More information

Real-time Physical Modelling of Character Movements with Microsoft Kinect

Real-time Physical Modelling of Character Movements with Microsoft Kinect Real-time Physical Modelling of Character Movements with Microsoft Kinect ABSTRACT Hubert P. H. Shum School of Computing, Engineering and Information Sciences Northumbria University Newcastle, United Kingdom

More information

Character Animation 1

Character Animation 1 Character Animation 1 Overview Animation is a big topic We will concentrate on character animation as is used in many games today humans, animals, monsters, robots, etc. Character Representation A character

More information

Evaluation of motion retargeting using spacetime constraints. Master s Thesis of Bas Lommers. December 6, Supervisor: Dr. J.

Evaluation of motion retargeting using spacetime constraints. Master s Thesis of Bas Lommers. December 6, Supervisor: Dr. J. Evaluation of motion retargeting using spacetime constraints Master s Thesis of Bas Lommers Student Number: 3441431 December 6, 2013 Supervisor: Dr. J. Egges Thesis number: ICA-3441431 Utrecht University

More information

Walk This Way: A Lightweight, Data-driven Walking Synthesis Algorithm

Walk This Way: A Lightweight, Data-driven Walking Synthesis Algorithm Walk This Way: A Lightweight, Data-driven Walking Synthesis Algorithm Sean Curtis, Ming Lin, and Dinesh Manocha University of North Carolina at Chapel Hill, Chapel Hill, NC, USA {seanc,lin,dm}@cs.unc.edu

More information

Realistic Rendering and Animation of a Multi-Layered Human Body Model

Realistic Rendering and Animation of a Multi-Layered Human Body Model Realistic Rendering and Animation of a Multi-Layered Human Body Model Mehmet Şahin Yeşil and Uǧur Güdükbay Dept. of Computer Engineering, Bilkent University, Bilkent 06800 Ankara, Turkey email: syesil@alumni.bilkent.edu.tr,

More information

To Do. History of Computer Animation. These Lectures. 2D and 3D Animation. Computer Animation. Foundations of Computer Graphics (Spring 2010)

To Do. History of Computer Animation. These Lectures. 2D and 3D Animation. Computer Animation. Foundations of Computer Graphics (Spring 2010) Foundations of Computer Graphics (Spring 2010) CS 184, Lecture 24: Animation http://inst.eecs.berkeley.edu/~cs184 To Do Submit HW 4 (today) Start working on HW 5 (can be simple add-on) Many slides courtesy

More information

This week. CENG 732 Computer Animation. Warping an Object. Warping an Object. 2D Grid Deformation. Warping an Object.

This week. CENG 732 Computer Animation. Warping an Object. Warping an Object. 2D Grid Deformation. Warping an Object. CENG 732 Computer Animation Spring 2006-2007 Week 4 Shape Deformation Animating Articulated Structures: Forward Kinematics/Inverse Kinematics This week Shape Deformation FFD: Free Form Deformation Hierarchical

More information

Interactive low-dimensional human motion synthesis by combining motion models and PIK ...

Interactive low-dimensional human motion synthesis by combining motion models and PIK ... COMPUTER ANIMATION AND VIRTUAL WORLDS Published online in Wiley InterScience (www.interscience.wiley.com).210 Interactive low-dimensional human motion synthesis by combining motion models and PIK By Schubert

More information

Animation. CS 4620 Lecture 33. Cornell CS4620 Fall Kavita Bala

Animation. CS 4620 Lecture 33. Cornell CS4620 Fall Kavita Bala Animation CS 4620 Lecture 33 Cornell CS4620 Fall 2015 1 Announcements Grading A5 (and A6) on Monday after TG 4621: one-on-one sessions with TA this Friday w/ prior instructor Steve Marschner 2 Quaternions

More information

Character Animation. Presented by: Pam Chow

Character Animation. Presented by: Pam Chow Character Animation Presented by: Pam Chow Overview Animation is a big topic We will concentrate on character animation as is used in many games today humans, animals, monsters, robots, etc. PLAZMO AND

More information

Triangulation: A new algorithm for Inverse Kinematics

Triangulation: A new algorithm for Inverse Kinematics Triangulation: A new algorithm for Inverse Kinematics R. Müller-Cajar 1, R. Mukundan 1, 1 University of Canterbury, Dept. Computer Science & Software Engineering. Email: rdc32@student.canterbury.ac.nz

More information

CS770/870 Spring 2017 Animation Basics

CS770/870 Spring 2017 Animation Basics Preview CS770/870 Spring 2017 Animation Basics Related material Angel 6e: 1.1.3, 8.6 Thalman, N and D. Thalman, Computer Animation, Encyclopedia of Computer Science, CRC Press. Lasseter, J. Principles

More information

CS770/870 Spring 2017 Animation Basics

CS770/870 Spring 2017 Animation Basics CS770/870 Spring 2017 Animation Basics Related material Angel 6e: 1.1.3, 8.6 Thalman, N and D. Thalman, Computer Animation, Encyclopedia of Computer Science, CRC Press. Lasseter, J. Principles of traditional

More information

A Model-based Approach to Rapid Estimation of Body Shape and Postures Using Low-Cost Depth Cameras

A Model-based Approach to Rapid Estimation of Body Shape and Postures Using Low-Cost Depth Cameras A Model-based Approach to Rapid Estimation of Body Shape and Postures Using Low-Cost Depth Cameras Abstract Byoung-Keon D. PARK*, Matthew P. REED University of Michigan, Transportation Research Institute,

More information

CS 231. Inverse Kinematics Intro to Motion Capture. 3D characters. Representation. 1) Skeleton Origin (root) Joint centers/ bones lengths

CS 231. Inverse Kinematics Intro to Motion Capture. 3D characters. Representation. 1) Skeleton Origin (root) Joint centers/ bones lengths CS Inverse Kinematics Intro to Motion Capture Representation D characters ) Skeleton Origin (root) Joint centers/ bones lengths ) Keyframes Pos/Rot Root (x) Joint Angles (q) Kinematics study of static

More information

Term Project Final Report for CPSC526 Statistical Models of Poses Using Inverse Kinematics

Term Project Final Report for CPSC526 Statistical Models of Poses Using Inverse Kinematics Term Project Final Report for CPSC526 Statistical Models of Poses Using Inverse Kinematics Department of Computer Science The University of British Columbia duanx@cs.ubc.ca, lili1987@cs.ubc.ca Abstract

More information

Graphical Models 73 (2011) Contents lists available at ScienceDirect. Graphical Models. journal homepage:

Graphical Models 73 (2011) Contents lists available at ScienceDirect. Graphical Models. journal homepage: Graphical Models 73 (2011) 243 260 Contents lists available at ScienceDirect Graphical Models journal homepage: www.elsevier.com/locate/gmod FABRIK: A fast, iterative solver for the Inverse Kinematics

More information

Motion Parameterization and Adaptation Strategies for Virtual Therapists

Motion Parameterization and Adaptation Strategies for Virtual Therapists In Proceedings of Intelligent Virtual Agents (IVA), Boston, 2014 (This version is the authors manuscript, the final publication is available at link.springer.com) Motion Parameterization and Adaptation

More information

Scalable Solutions for Interactive Virtual Humans that can Manipulate Objects

Scalable Solutions for Interactive Virtual Humans that can Manipulate Objects In Proceedings of the Artificial Intelligence and Interactive Digital Entertainment (AIIDE), Marina del Rey, CA, June 1-3, 2005, 69-74 Scalable Solutions for Interactive Virtual Humans that can Manipulate

More information

Computer Kit for Development, Modeling, Simulation and Animation of Mechatronic Systems

Computer Kit for Development, Modeling, Simulation and Animation of Mechatronic Systems Computer Kit for Development, Modeling, Simulation and Animation of Mechatronic Systems Karol Dobrovodský, Pavel Andris, Peter Kurdel Institute of Informatics, Slovak Academy of Sciences Dúbravská cesta

More information

animation computer graphics animation 2009 fabio pellacini 1 animation shape specification as a function of time

animation computer graphics animation 2009 fabio pellacini 1 animation shape specification as a function of time animation computer graphics animation 2009 fabio pellacini 1 animation shape specification as a function of time computer graphics animation 2009 fabio pellacini 2 animation representation many ways to

More information

Learnt Inverse Kinematics for Animation Synthesis

Learnt Inverse Kinematics for Animation Synthesis VVG (5) (Editors) Inverse Kinematics for Animation Synthesis Anonymous Abstract Existing work on animation synthesis can be roughly split into two approaches, those that combine segments of motion capture

More information

Animation Lecture 10 Slide Fall 2003

Animation Lecture 10 Slide Fall 2003 Animation Lecture 10 Slide 1 6.837 Fall 2003 Conventional Animation Draw each frame of the animation great control tedious Reduce burden with cel animation layer keyframe inbetween cel panoramas (Disney

More information

MOTION CAPTURE BASED MOTION ANALYSIS AND MOTION SYNTHESIS FOR HUMAN-LIKE CHARACTER ANIMATION

MOTION CAPTURE BASED MOTION ANALYSIS AND MOTION SYNTHESIS FOR HUMAN-LIKE CHARACTER ANIMATION MOTION CAPTURE BASED MOTION ANALYSIS AND MOTION SYNTHESIS FOR HUMAN-LIKE CHARACTER ANIMATION ZHIDONG XIAO July 2009 National Centre for Computer Animation Bournemouth University This copy of the thesis

More information

animation computer graphics animation 2009 fabio pellacini 1

animation computer graphics animation 2009 fabio pellacini 1 animation computer graphics animation 2009 fabio pellacini 1 animation shape specification as a function of time computer graphics animation 2009 fabio pellacini 2 animation representation many ways to

More information

Automating Expressive Locomotion Generation

Automating Expressive Locomotion Generation Automating Expressive ocomotion Generation Yejin Kim and Michael Neff University of California, Davis, Department of Computer Science and Program for Technocultural Studies, 1 Shields Avenue, Davis, CA

More information