Planning Cooperative Motions for Animated Characters

Claudia Esteves, Gustavo Arechavaleta, Jean-Paul Laumond
7, avenue du Colonel Roche, 31077 Toulouse
{cesteves,garechav,jpl}@laas.fr

Abstract

This paper presents a motion planning scheme to plan, generate and edit motions for two or more virtual characters in cluttered environments. The main challenge is to deal with 3D collision avoidance while preserving the believability of the agents' behaviors. To accomplish a coordinated task, a geometric decoupling of the system is proposed. Several techniques, such as probabilistic path planning for open and closed kinematic chains, motion controllers and motion editing, are integrated within a single algorithmic framework.

1 Introduction

The interaction between the fields of robotics and computer animation is not new. The human figure has frequently been represented in computer animation with the articulated mechanisms used in robotics to control manipulators [1]. With this common representation, many techniques have been developed within both fields, with two different goals in mind. On one hand, applications of computer animation, mainly in the entertainment industry, have made it necessary to develop techniques that generate realistic-looking motions. On the other hand, the interest in motion from a robotics point of view is to generate it automatically, without concern for realism. In recent years, emerging applications in both areas (e.g. ergonomics, interactive video games) have led researchers to generate human-like plausible motions automatically. With this motivation, several approaches to automatically plan motions for a human mannequin have been developed. These works have focused on animating one given behavior at a time. For instance, in [6] the authors make a virtual mannequin perform manipulation planning. Another approach, using an automated footprint planner that deals with locomotion on rough terrain, is described in [2].
A two-step path planner for walking virtual characters was proposed in [8]. This approach consists of planning a collision-free path for a cylinder in a 2D world and then animating the mannequin along that path. This planner was extended in [12] to deal with 3D obstacle avoidance and produce eye-believable motions. In the same spirit, this work combines techniques from robotics and computer animation into a single motion planning scheme accounting for two behaviors: locomotion and manipulation. Here, the main challenge is to deal with 3D collision avoidance while preserving the believability of the motions and allowing cooperation between two or more virtual mannequins. To achieve this, we define a simplified model of the task (walking while carrying a bulky object) by performing a geometric and kinematic decomposition of the system's degrees of freedom (DOF). This task model is extended to allow cooperation between characters by automatically computing an approximation of the so-called reachable cooperative space.

The remainder of this paper is structured as follows. Section 2 gives a brief overview of the techniques used in this work. Section 3 describes the system and task models. Section 4 describes how all the techniques are combined in a 3-step algorithm to obtain an animation. Section 5 shows and discusses examples of individual and cooperative tasks. Finally, conclusions and future work are presented in Section 6.

2 Techniques Overview

To generate complete motion sequences of one or more virtual mannequins transporting a bulky object in a cluttered environment, we use three main components: a motion planner that handles open and closed kinematic chains, motion controllers adapted to virtual human mannequins, and a 3D collision-avoidance editing strategy. Many techniques could be used to cover these requirements.
In the paragraphs below, we describe the techniques that we think are best adapted to our problem.

2.1 Probabilistic Motion Planning Techniques

2.1.1 Probabilistic Roadmaps

The interest of this method is that it captures the topology of the collision-free space in a compact data structure called a roadmap. Such a structure is computed without requiring an explicit representation of the obstacles. A roadmap can be obtained with two types of algorithm: sampling or diffusion. The main idea of the sampling techniques (e.g. PRM [5]) is to draw random configurations lying in the free space and to trace edges connecting them to neighboring samples. Edges, or local paths, must also be collision-free, and their shape depends on the kinematic constraints (steering method) of the moving device. The diffusion techniques [9] sample the collision-free space with only a few configurations, called roots, and diffuse the exploration into their neighborhoods along randomly chosen directions. In this work we use a variant of the first approach: the Visibility PRM [14]. In this method there are two types of nodes: guards and connection nodes. A node is added only if it is not visible from previously sampled nodes or if it links two or more connected components of the roadmap. The roadmap generated with this approach is more compact than the one obtained with the basic PRM.

2.1.2 Planning for open kinematic chains

A motion path can be found in a roadmap with a two-step algorithm consisting of a learning phase and a query phase. For an articulated mechanism, the roadmap is computed in the learning phase by generating random configurations within the allowed range of each DOF. In the query phase, the initial and final configurations are added as new nodes to the roadmap and connected with an adapted steering method. Then a graph search is performed to find a collision-free path between the start and goal configurations. If a path is found, it is converted into a trajectory (a time-parameterized path).
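As a concrete illustration of the roadmap construction in Section 2.1.1, the guard/connection-node logic of a Visibility PRM can be sketched as follows. This is a minimal sketch under assumed interfaces: `sample_free` and `visible` are hypothetical callbacks standing in for the collision checker and the steering method, not the planner used in the paper.

```python
import random

def visibility_prm(sample_free, visible, n_fail_max=100):
    """Minimal Visibility-PRM sketch.

    sample_free() -> a random collision-free configuration
    visible(a, b) -> True if the local path between a and b is collision-free
    Returns (components, edges): components is a list of node lists, edges
    are the collision-free links created by connection nodes.
    """
    components = []   # each entry: the nodes of one connected roadmap part
    edges = []
    n_fail = 0
    while n_fail < n_fail_max:
        q = sample_free()
        # components that can "see" the new sample
        seen = [c for c in components if any(visible(q, g) for g in c)]
        if not seen:
            components.append([q])          # q becomes a new guard
            n_fail = 0
        elif len(seen) > 1:
            # q is a connection node: it merges several components
            for c in seen:
                edges.append((q, next(g for g in c if visible(q, g))))
            merged = [q] + [g for c in seen for g in c]
            components = [c for c in components if c not in seen] + [merged]
            n_fail = 0
        else:
            n_fail += 1    # no new coverage or connectivity: count a failure
    return components, edges
```

Rejecting samples that are visible from exactly one component is what keeps this roadmap more compact than a basic PRM.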
2.1.3 Planning for closed kinematic chains

To handle the motions of closed kinematic mechanisms, several path planning methods have been proposed in the literature [10, 4, 3]. In our work we have chosen the Random Loop Generator (RLG) algorithm proposed in [3]. To apply this method, a closed kinematic chain is divided into active and passive parts. The main idea of the algorithm is to decrease the complexity of the closed kinematic chain at each iteration until the active part becomes reachable by all passive chain segments simultaneously. The reachable workspace of a kinematic chain is defined as the volume that its end-effector can reach. An approximation of this volume is computed automatically by the RLG using a simple bounding volume (a spherical shell) formed by the intersection of concentric spheres and cones. A guided random sampling of the configurations of the active part is done inside the computed shell and within the current joint limits. When several loops are present in the mechanism, they are treated as separate closed chains. Once the roadmap is constructed, a path is found in the same way as for open kinematic chains.

2.2 Motion Generation Techniques

2.2.1 Kinematics-based methods

Kinematics-based techniques specify motion independently of the underlying forces that produce it. Motion can either be defined by specifying the value of each joint (forward kinematics) or derived from a given end-effector configuration (inverse kinematics). In this work we are especially interested in generating the motions of virtual human characters. In computer animation this approach has frequently been used to animate articulated human characters, as in [18]. Several inverse kinematics (IK) algorithms for 7-DOF anthropomorphic limbs have been developed based on biomechanical data, in order to best reproduce human-arm motions (e.g. [7, 15]).
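The 7-DOF solvers cited above are beyond the scope of a short example, but the analytic step they build on can be illustrated on the classical planar 2-link arm. This is a generic textbook closed-form solution, not the solver of [15]; the names `two_link_ik`, `l1`, `l2` are illustrative.

```python
import math

def two_link_ik(x, y, l1, l2, elbow_up=True):
    """Closed-form IK for a planar 2-link arm with link lengths l1, l2.

    Returns joint angles (t1, t2) reaching end-effector position (x, y),
    or None when the target is out of reach.
    """
    d2 = x * x + y * y
    c2 = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)   # law of cosines
    if not -1.0 <= c2 <= 1.0:
        return None                                   # target unreachable
    s2 = math.sqrt(1.0 - c2 * c2)
    if not elbow_up:
        s2 = -s2                                      # the other IK branch
    t2 = math.atan2(s2, c2)
    t1 = math.atan2(y, x) - math.atan2(l2 * s2, l1 + l2 * c2)
    return t1, t2
```

The `elbow_up` flag selects between the two solution branches; biomechanically based solvers such as [15] use extra criteria (and the extra DOF of a human arm) to pick natural-looking postures instead.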
In our work we have chosen the combined analytic and numerical IK method presented in [15]. Kinematics-based methods are well adapted when a specific target is given (as in reaching motions), but generating believable motions with them is often problematic.

2.2.2 Motion capture based methods

Motion capture allows rapid generation of character motions and has been widely used in interactive and real-time applications. The idea of this kind of technique is to record human motions by placing sensors on the subject's body and later apply the data to a synthetic model [11]. Motion libraries are built by filtering and characterizing the data obtained from the recorded motions; one particular motion at a time is later chosen from the library. The disadvantage of these techniques is that, used alone, the generated motions cannot be adapted or reused.

2.2.3 Motion Editing Techniques

When only a limited set of data is available, as in a motion capture library, the set of possible motions must be expanded to produce an animation. Interpolation methods to modify, combine and adapt these motions have been developed to meet this need. For example, in [17] motion parameters are represented as curves that are blended to combine motions. In [16] the authors extract motion characteristics from original data and define a functional model based on Fourier series expansions, in order to interpolate or extrapolate human locomotion and generate emotion-based animations. These methods generate new motions while preserving qualities of the original ones such as realism and expressiveness.

3 System and Task Modeling

Our system can be composed of one or several human or robot mannequins. All characters are modeled as a hierarchy of rigid bodies connected by constrained links that reproduce (in the case of human mannequins) the human joint limits. Human mannequins are modeled with 53 DOF in 18 spherical and rotational joints, arranged in a tree structure of 5 kinematic chains that converge at the character's pelvis (Figure 1). We consider locomotion and manipulation as the mannequin's basic complementary behaviors. To combine them, we perform a geometric decoupling of the system's DOF according to their main function. The mannequins are thus divided into three DOF groups: Locomotion, Grasp and Mobility.

Figure 1: DOF are decomposed into three groups according to their function: locomotion, mobility and grasp.

The Locomotion (resp. Grasp) DOF are those involved in producing the walking (resp. manipulating) behavior of the virtual mannequin. The Mobility group contains the DOF that allow posture control complementary to the specified behaviors; in our virtual mannequin these are the DOF located in the head and spine.
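A minimal sketch of the curve-blending idea behind such interpolation methods, assuming joint-angle trajectories that are already time-aligned (an assumption of this sketch: real systems time-warp the cycles first and blend joint orientations more carefully than plain linear interpolation):

```python
def blend_motions(m_a, m_b, w):
    """Frame-wise linear blend of two time-aligned joint trajectories.

    m_a, m_b: lists of frames, each frame a list of joint values.
    w: blend weight in [0, 1]; 0 reproduces m_a, 1 reproduces m_b.
    """
    assert len(m_a) == len(m_b), "trajectories must be time-aligned"
    return [[(1.0 - w) * a + w * b for a, b in zip(fa, fb)]
            for fa, fb in zip(m_a, m_b)]
```

Varying `w` over time is what lets a controller transition continuously between, for instance, two walking cycles recorded at different speeds.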
This geometric decoupling strategy is fundamental in our approach because it lets a reduced system model be employed to simplify the control and the description of the current task. As we aim for two mannequins to interact with each other in the manipulation behavior, a geometric decoupling of the system is not enough to describe the cooperative task. The paragraphs below describe the strategy for computing a reachable cooperative space that completes the task description. As can be seen in Figure 2, when two virtual mannequins hold the same object, several closed kinematic loops are formed. In this case, the system is considered a multiple-loop articulated mechanism and treated with the RLG technique overviewed in Section 2.1.3.

Figure 2: Closed kinematic chains are formed in cooperative manipulation.

For a single human mannequin, the arm's inverse kinematics algorithm defines the range of its reachable space. This space is automatically approximated using the spherical-shell volume. For a cooperative task, the reachable cooperative space is considered to be the intersection of all the individual spaces (Figure 3a). Figure 3b shows that even when the individual reachable spaces do not intersect, a cooperative reachable space approximation can still be computed by using the large object as the end-effector of each kinematic chain.
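The intersection behind the cooperative reachable space can be sketched as a simple membership test, assuming each arm's reachable space is approximated by a spherical shell given as (center, inner radius, outer radius). The cone terms of the RLG bounding volume are omitted in this sketch for brevity.

```python
import math

def in_shell(p, center, r_min, r_max):
    """True if point p lies in one agent's spherical-shell approximation."""
    d = math.dist(p, center)
    return r_min <= d <= r_max

def in_cooperative_space(p, shells):
    """A placement p is cooperatively reachable if it lies in every shell."""
    return all(in_shell(p, c, r0, r1) for c, r0, r1 in shells)
```

Sampling candidate object placements and keeping only those that pass `in_cooperative_space` is one way to realize the guided sampling described in Section 2.1.3.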

Figure 3: Individual reachable spaces are intersected to obtain a cooperative space for manipulation.

4 Algorithm

Our algorithm consists of three stages:

1. Plan a collision-free trajectory for a reduced model of the system.
2. Animate the locomotion and manipulation behaviors in parallel.
3. Edit the generated motions to avoid residual collisions.

The user-specified inputs to the algorithm are: a geometric and kinematic description of the system, maximal linear velocity and acceleration, the number of desired frames in the animation, a motion library containing captured data from different walking sequences, and an initial and a final configuration. The output is an animated sequence of the combined behaviors. Each step of the algorithm is described in the next paragraphs.

4.1 Motion planning

In this step a simplified model of the system is employed to reduce the complexity of the planning problem. For this, each mannequin's locomotion DOF are covered with a bounding box, as seen in Figure 4. In this example a 12-DOF reduced system is obtained, with 6 parameters specifying the mannequin's position and orientation and the other 6 specifying the object's motions.

Figure 4: Simplified model used for planning a trajectory.

With this model a collision-free path is found using the probabilistic roadmap method described in Section 2.1.1. We consider that smooth human-like paths can be approximated by Bezier curves of third degree, but other local paths could be used instead. Once a collision-free path is found, it is transformed into a set of discrete time-stamped positions (animation frames), which is the input to the next step of the algorithm.

4.2 Motion generation

In this step, the locomotion and manipulation behaviors are synthesized in parallel using the complete model of the system.
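The conversion of a third-degree Bezier path into discrete time-stamped frames (Section 4.1) can be sketched as follows. This naive version steps the curve parameter uniformly, which is not constant speed; the actual planner additionally enforces the velocity and acceleration constraints.

```python
def bezier3(p0, p1, p2, p3, t):
    """Point on a third-degree Bezier curve at parameter t in [0, 1]."""
    s = 1.0 - t
    return tuple(s**3 * a + 3 * s * s * t * b + 3 * s * t * t * c + t**3 * d
                 for a, b, c, d in zip(p0, p1, p2, p3))

def sample_frames(ctrl, n_frames, dt):
    """Sample the path into time-stamped positions (t_i, p_i), t_i = i * dt."""
    p0, p1, p2, p3 = ctrl
    return [(i * dt, bezier3(p0, p1, p2, p3, i / (n_frames - 1)))
            for i in range(n_frames)]
```

Each (time, position) pair produced here corresponds to one animation frame fed to the motion generation step.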
The locomotion controller described in [12] uses motion-capture editing techniques to combine different walking cycles from the library and to adapt them to the velocity- and acceleration-constrained trajectory. Here, the walking behavior sets the joint values for the locomotion and mobility DOF, leaving the grasp DOF unset. The grasping behavior is animated separately by applying 7-DOF inverse kinematics to reach the values imposed by the object configurations along the trajectory defined in the planning step. In this process only the grasp DOF values are specified.

4.3 Motion editing

In the first step of the algorithm, only a system involving the lower part of the mannequins was considered. The generated trajectory therefore allows residual collisions with the remaining DOF (mobility and grasp). The purpose of this last step is to solve these possible remaining collisions while preserving the believability of the generated motions. Collisions involving the mobility and grasp DOF are treated differently depending on the nature of the kinematic chains that contain them (open or closed).
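A minimal sketch of the motion-warping idea [17] used to keep such corrections local: a displacement that fixes one colliding frame is faded smoothly to zero over a window of neighboring frames. The smoothstep falloff and the names `warp`, `i_fix`, `window` are assumptions of this sketch, not the paper's implementation.

```python
def warp(frames, i_fix, delta, window):
    """Add displacement `delta` at frame i_fix, fading it to zero over
    `window` frames on each side so the edit stays local and smooth.

    frames: list of frames, each a list of joint values.
    """
    out = [list(f) for f in frames]
    for i in range(max(0, i_fix - window),
                   min(len(frames), i_fix + window + 1)):
        u = 1.0 - abs(i - i_fix) / window
        w = 3 * u * u - 2 * u * u * u      # smoothstep falloff weight
        for j, d in enumerate(delta):
            out[i][j] += w * d
    return out
```

Frames outside the window are untouched, so a collision fix near, say, a doorway does not disturb the rest of the walking cycle.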

International Symposium on Robotics and Automation 2004, Querétaro, México

In the case of an open kinematic chain (head-spine), a local deformation of the chain is performed until a valid (collision-free) configuration is reached. Thereafter, a warping method is applied to obtain the minimal deformation and preserve the smoothness of the motions. When collisions are found in a closed chain (arms-object), a new local collision-free path is found for the chain and the same warping method is applied. If no collision-free configuration can be found at this stage, the computed trajectory is invalidated and a new one is sought.

5 Experimental Results

We have implemented our algorithm within the software platform Move3D [13], developed at LAAS. We have generated animations in several environments of varying size and complexity, using two types of virtual mannequins. In the paragraphs below, two examples are presented and discussed.

5.1 Pizza Delivery Service

In this example the virtual pizza delivery boy has to take a pizza from one office to another (Figure 5). Here, we would like the mannequin to keep the boxes horizontal along the trajectory, to prevent the pizzas from losing their ingredients. For this, we impose kinematic constraints by restricting two of the six DOF of the free-flying object: in a roll-pitch-yaw angle representation, we remove the DOF allowing the object to pitch and roll.

Figure 5: Initial and final configurations.

Once the initial and final configurations, the velocity and acceleration constraints and the number of frames are specified by the user, the algorithm is applied and the animation generated. Figure 6 shows the resulting trajectory as a sequence of configurations, selected for image clarity.

Figure 6: Selected configurations of the computed trajectory.

After the locomotion and grasping behaviors were generated, a residual collision was found between the mannequin's arm and the bookshelf at the second office entrance. In this case a solution was quickly found by modifying the elbow's configuration, as shown in Figure 7. This collision was not likely to be avoided in the original path because the doorways are narrow and the bookshelf is very near the final configuration.

Figure 7: The mannequin moves his elbow in order to avoid collision with the bookshelf.

5.2 In the factory

The second example takes place in a typical industrial environment. Here a heavy plate has to be transported across the room by a human mannequin together with a virtual robot manipulator (Figure 8). As can be seen, the environment is complex: it contains plenty of obstacles, and collisions are likely to occur. The drums lying around the room leave only one collision-free pathway for the system to traverse.

Table 1: Computational time in seconds.

  Stage               Office   Factory   Factory
  No. frames          308      268       530
  I. Planning
    - Path            0.5      6.5       6.5
    - Trajectory      2.0      2.1       4.5
  II. Animating
    - Locomotion      0.8      0.8       1.6
    - Manipulation    0.3      0.4       0.8
  III. Editing        0.2      5.7       11.4
  Total time          3.8      15.5      24.8

Figure 8: Virtual mannequins cooperating in the industrial facilities.

In the first step of our algorithm this collision-free path is found and sampled. Then the locomotion and cooperative grasping behaviors are synthesized. Here, the virtual robot is considered holonomic, so a straight line was used as the steering method. In the editing step, several residual collisions were found and avoided. Some frames of the final animation are shown in Figure 9.

Figure 9: The agents deal with several obstacles while transporting the plate.

5.3 Computational time

Both examples were computed on a Sun-Blade-100 workstation with a 500 MHz UltraSparc-IIe processor and 512 MB of RAM. For the first example, the office environment is composed of 148,977 polygons, all of which take part in collision testing. The industrial environment is composed of 159,698 polygons, but only the lower (mannequin-height) 92,787 take part in collision testing. Table 1 gives the time needed to compute each example assuming a pre-computed roadmap (i.e. only the query phase). Building this roadmap took 1.69 s for the office and 31.4 s for the factory; this time varies with the complexity of the environment and with the probabilistic nature of the algorithm. Two different animations were generated for the factory trajectory and one for the office. The factory trajectory takes longer to compute than the office one because there are two agents in the scene, more obstacles to avoid, and a greater distance between the initial and final configurations.
The time to synthesize the locomotion and manipulation behaviors is clearly proportional to the number of frames in the animation. The editing-step computation time depends on the number of residual collisions found along the trajectory.

6 Conclusions

We have presented an approach to plan, generate and edit collision-free motions for cooperating virtual characters handling bulky objects. To accomplish this task, a geometric decoupling of the system is proposed. A three-step algorithm integrating several techniques, such as motion planning algorithms, motion controllers and motion editing techniques, is described. This approach works well for complementary behaviors such as locomotion and manipulation, where different DOF are used for each of them. It should be extended to generate a larger set of behaviors. A major component of realistic motion that is not considered in this work is physically based reaction to commonly encountered forces; integrating this into our current motion planning scheme is one of our near-future goals. At this stage, manipulation planning is not considered. This problem has been previously tackled in [6]; however, more complicated instances of it should be considered in order to plan the motions of several interacting arms.

Animations related to this work can be found at http://www.laas.fr/ria/RIA-research-motion-character.html

Acknowledgments

C. Esteves and G. Arechavaleta benefit from SFERE-CONACyT grants. This work is partially funded by the European Community Projects FP5 IST 2001-39250 Movie and FP6 IST 002020 Cogniron.

References

[1] N. Badler, C. Phillips, and B. Webber, Simulating Humans: Computer Graphics, Animation, and Control. Oxford University Press, 1992.
[2] M. Choi, J. Lee, and S. Shin, "Planning biped locomotion using motion capture data and probabilistic roadmaps," ACM Transactions on Graphics, vol. 22, no. 2, 2003.
[3] J. Cortés and T. Siméon, "Probabilistic motion planning for parallel mechanisms," in Proc. IEEE International Conference on Robotics and Automation (ICRA 2003), 2003.
[4] L. Han and N. Amato, "A kinematics-based probabilistic roadmap method for closed chain systems," in Proc. International Workshop on Algorithmic Foundations of Robotics (WAFR 2000), 2000.
[5] L. Kavraki, P. Svestka, J.-C. Latombe, and M. Overmars, "Probabilistic roadmaps for path planning in high-dimensional configuration spaces," IEEE Transactions on Robotics and Automation, 1996.
[6] Y. Koga, K. Kondo, J. Kuffner, and J.-C. Latombe, "Planning motions with intentions," ACM SIGGRAPH Computer Graphics, vol. 28, pp. 395-408, 1994.
[7] K. Kondo, "Inverse kinematics of a human arm," Journal of Robotics and Systems, vol. 8, no. 2, pp. 115-175, 1991.
[8] J. Kuffner, "Autonomous agents for real-time animation," Ph.D. dissertation, Stanford University, Stanford, CA, December 1999.
[9] S. LaValle, "Rapidly-exploring random trees: A new tool for path planning," Computer Science Department, Iowa State University, Tech. Rep., October 1998.
[10] S. LaValle, J. H. Yakey, and L. E. Kavraki, "A probabilistic roadmap approach for systems with closed kinematic chains," in Proc. IEEE International Conference on Robotics and Automation, 1999.
[11] R. Parent, Computer Animation: Algorithms and Techniques. Morgan Kaufmann Publishers, 2001.
[12] J. Pettré, J.-P. Laumond, and T. Siméon, "A 2-stages locomotion planner for digital actors," in Proc. ACM SIGGRAPH/Eurographics Symposium on Computer Animation, San Diego, California, 2003, pp. 258-264.
[13] T. Siméon, J.-P. Laumond, and F. Lamiraux, "Move3D: A generic platform for motion planning," in Proc. 4th International Symposium on Assembly and Task Planning (ISATP 2001), 2001.
[14] T. Siméon, J.-P. Laumond, and C. Nissoux, "Visibility-based probabilistic roadmaps for motion planning," Advanced Robotics, vol. 14, no. 6, 2000.
[15] D. Tolani, A. Goswami, and N. Badler, "Real-time inverse kinematics techniques for anthropomorphic limbs," Graphical Models, no. 5, pp. 353-388, 2000.
[16] M. Unuma, K. Anjyo, and R. Takeuchi, "Fourier principles for emotion-based human figure animation," in Proc. SIGGRAPH 95, 1995.
[17] A. Witkin and Z. Popović, "Motion warping," in Proc. SIGGRAPH 95, 1995.
[18] J. Zhao and N. Badler, "Inverse kinematics positioning using nonlinear programming for highly articulated figures," ACM Transactions on Graphics, vol. 14, no. 4, 1994.