Generating Different Realistic Humanoid Motion


Generating Different Realistic Humanoid Motion

Zhenbo Li 1,2,3, Yu Deng 1,2,3, and Hua Li 1,2,3

1 Key Lab. of Computer System and Architecture, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100080, P.R. China
2 National Research Center for Intelligent Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100080, P.R. China
3 Graduate University of Chinese Academy of Sciences, Beijing 100039, P.R. China
{zbli, dengyu, lihua}@ict.ac.cn

Abstract. Humanoid motions of different realism levels are useful in a variety of animation scenarios, and they also play an important role in virtual reality. In this paper, we propose a novel method to generate humanoid motions of different realism levels automatically. First, the eigenvectors of a motion sequence are computed using principal component analysis. The principal components serve as virtual joints in our system, and their number can be used to control the realism level of the motion. Given a number of virtual joints, the actual joint parameters of a new motion are computed from the selected virtual joints. Experiments demonstrate that this method generates motions of different realism levels well.

Keywords: PCA, DOF, Motion Capture.

1 Introduction

Humanoid motion plays an important role in games, movies, and related fields. With its widespread use, motions of different realism levels are needed in many cases. Traditionally, in order to obtain a motion sequence, animators need to specify many key frames; however, creating these key frames is extremely labor intensive. Motion capture has recently been used with great success to produce motion scripts, since it can acquire motion data from living humans directly and conveniently. Though motion capture lightens the burden of animators, it has two limitations: 1. people can only use the motion scripts recorded beforehand; 2. when the body proportions in the animation do not match the recorded body proportions, the data must be retargeted.
These two limitations restrict its use. Humanoid motion is defined as a combination of a set of postures $f_i$ ($i = 1, 2, \ldots, n$), where each posture is represented by a set of degrees of freedom (DOF). The dimension of the motion space is often high, generally fifty to sixty. For many behaviors, the movements of the joints are highly correlated; this makes it possible to change the realism of a movement by controlling the energy of the motion. The goal of motion animation is usually realism, but in many situations, such as in a cartoon system, people need nonrealistic motions. In this paper, we present a

method to generate motions of different realism levels by selecting the dimensions of the motion space. The method can help people generate new nonrealistic motions from a given motion sequence. The rest of this paper is organized as follows: Section 2 reviews related work on reducing the dimensionality of human motion, which is the basis of our work. Section 3 describes our method of generating humanoid motions of different realism levels. Section 4 gives the experimental results. Finally, Section 5 concludes with a brief summary and a discussion of future work.

2 Related Works

Automatically generating animations is an important problem, and many good solutions have been proposed [1, 2, 3]. In the process of making animation, animators are often hindered by the high dimensionality of motion data, because it is not easy to control the data directly in a high-dimensional space. It has recently been observed that the movements of the joints are highly correlated for many behaviors [4, 5]. These correlations are especially clear for a repetitive motion like walking. For example, during a walk cycle, the arms, legs, and torso tend to move in a similar oscillatory pattern: when the right foot steps forward, the left arm swings forward, and when the hip angle has a certain value, the knee angle is most likely to fall within a certain range. These relationships hold true for more complex motions as well [4]. Using this correlation information, we can reduce the dimensionality of the working space, which also means reducing the energy of the motion space. This motion-space reduction is the basis of our work. Because degrees of freedom are correlated with each other, many research works have also benefited from this observation. Alla Safonova et al. [5] showed that many dynamic human motions can be adequately represented with only five to ten degrees of freedom; they used many motions with similar behavior to construct a low-dimensional space that represents other examples of the same behavior well.
Arikan [6] used this observation to compress motion capture databases, with good compression results. Popović and Witkin [7] showed that significant changes to motion capture data can be made by manually reducing the character to the degrees of freedom most important for the task. Howe and colleagues [8] published one of the earliest papers on using global PCA to reduce the dimensionality of human motion; they incorporated the reduced model into a probabilistic Bayesian framework to constrain the search for human motion. Sidenbladh and colleagues [9] reduced the dimensionality of the database using global PCA and then constrained the set of allowable trajectories within a high-dimensional state space. Pullen and Bregler [4] used this observation for motion synthesis/texturing, and Jenkins and Matarić [10] used it for identifying behavior primitives. There are many dimension-reduction methods, such as PCA, Kernel PCA [11], Isomap [12], and Locally Linear Embedding [13], which implement linear or nonlinear dimension reduction. We apply such dimension-reduction approaches to generate motions of different realism levels: by adding or removing dimensions, we obtain motions of different realism levels that still contain the basic intrinsic information of the original motion.

3 Proposed Method

For a motion animation sequence, we compute eigenvectors $V = (v_1, v_2, \ldots, v_n)$ from the sequence. The principal components can be represented by the eigenvectors and their coefficients, which we name virtual joints. By controlling the number of virtual joints, we obtain humanoid motion sequences of different realism levels.

3.1 Motion Definition

Motion M is defined as a combination of a set of postures $f_i$ ($i = 1, 2, \ldots, n$) organized along the time axis (Fig. 1). It can be a simple motion (such as walking or running) or a complex motion (such as playing basketball).

Fig. 1. Sketch map of the motion definition.

3.2 Generating Motions of Different Realism Levels

Suppose we have a motion sequence M represented by fifty or sixty DOF. It is hard to generate similar motions in such a high-dimensional space, so PCA is used to reduce the motion space.

3.2.1 Principal Component Analysis

Each frame $f_i$ ($i = 1, 2, \ldots, n$) saved in the captured file is a point in a fifty- or sixty-dimensional space. Because the joint movements of the human body are highly correlated, we can synthesize a few main measures from the DOF of the human model. These measures contain the primary information of the motion, so we can use them to describe the captured motion data. PCA is a useful way to obtain such measures. Given a motion capture file, assume the number of frames is n and the DOF of the human model is p. The data matrix can be represented as:
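As a concrete illustration of this setup, the sketch below stacks n captured frames, each a p-dimensional vector of joint DOF values, into the n × p data matrix the section describes. The random frames are only stand-ins for real mocap data; the counts n = 300 and p = 53 match the jump sequence used later in Section 4.

```python
import numpy as np

# Hypothetical sketch: build the n x p motion data matrix for PCA.
# Each row is one posture f_i; each column is one DOF of the model.
rng = np.random.default_rng(0)
n_frames, n_dofs = 300, 53  # values from the jump example in Section 4

# Stand-in for frames read from a motion capture file.
frames = [rng.standard_normal(n_dofs) for _ in range(n_frames)]

X = np.vstack(frames)  # n x p data matrix: rows = frames, cols = DOFs
assert X.shape == (n_frames, n_dofs)
```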

$$X = (x_1, \ldots, x_n)' = \begin{pmatrix} x_{11} & \cdots & x_{1p} \\ \vdots & \ddots & \vdots \\ x_{n1} & \cdots & x_{np} \end{pmatrix} \qquad (1)$$

For PCA to work properly, we have to subtract the mean from each of the data dimensions. The mean subtracted is the average across each dimension:

$$\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i = \left(\frac{1}{n}\sum_{i=1}^{n} x_{i1}, \ldots, \frac{1}{n}\sum_{i=1}^{n} x_{ip}\right) = (\bar{x}_1, \ldots, \bar{x}_p) \qquad (2)$$

So $\bar{x}_j$ is subtracted from all the $x_{ij}$ values, producing a captured data set whose mean is zero. We use formula (3) to calculate the covariance matrix after this standardization. The covariance matrix measures how much the dimensions vary from the mean with respect to each other:

$$\Sigma = \frac{1}{n}\sum_{i=1}^{n} (x_i - \bar{x})(x_i - \bar{x})' \qquad (3)$$

We can then calculate the eigenvectors and eigenvalues of the covariance matrix; these are rather important, as they tell us useful information about our data. Suppose the eigenvalues of $\Sigma$ are ordered as:

$$\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_p > 0 \qquad (4)$$

and the corresponding standard eigenvectors are $u_1, u_2, \ldots, u_p$. The eigenvector with the highest eigenvalue is the first principal component of the data set. So we give the principal component formula as:

$$F_i = u_i' x \qquad (i = 1, \ldots, n) \qquad (5)$$

Each $F_i$ and $F_j$ are orthogonal for $i, j = 1, \ldots, n$ and $i \ne j$. So the principal component formulas are:
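The steps above (mean subtraction, covariance, eigendecomposition) can be sketched in a few NumPy lines. The matrix X here is again a random stand-in for real mocap data, not the paper's actual sequence:

```python
import numpy as np

# Minimal sketch of the PCA steps in formulas (2)-(5), on stand-in data.
rng = np.random.default_rng(1)
X = rng.standard_normal((300, 53))  # n x p stand-in motion matrix

X_centered = X - X.mean(axis=0)             # formula (2): zero-mean data
cov = (X_centered.T @ X_centered) / len(X)  # formula (3): covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)      # cov is symmetric, so eigh applies

# Sort eigenpairs so that lambda_1 >= lambda_2 >= ... as in formula (4).
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Projecting each centered frame onto eigenvector u_i gives the
# principal component values F_i of formula (5).
F = X_centered @ eigvecs
```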

$$\begin{aligned} F_1 &= u_{11}x_1 + u_{21}x_2 + \cdots + u_{p1}x_p \\ &\;\;\vdots \\ F_m &= u_{1m}x_1 + u_{2m}x_2 + \cdots + u_{pm}x_p \end{aligned} \qquad (6)$$

Here $F_1$ has the maximal variance and is called the first principal component, $F_2$ is the second principal component, and $F_m$ is the $m$-th principal component. After obtaining the principal components, we can calculate each principal component's contribution factor. The accumulated contribution factor from the first to the $m$-th principal component is:

$$E_m = \frac{\sum_{i=1}^{m} \lambda_i}{\sum_{i=1}^{p} \lambda_i} \qquad (7)$$

$E_m$ indicates how much information is retained by projecting the frames onto the $m$-dimensional space. If $E_m \ge 85\%$, we can regard the $m$-dimensional space as containing essentially the information of the original space.

3.2.2 Motion Representation

Using PCA, we can obtain the principal components of a motion sequence. A motion can then be represented as:

$$\begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix} = \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{n1} & \cdots & a_{nn} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} \qquad (8)$$

Here $(a_{ij})$ is the matrix of eigenvectors, $(x_1, x_2, \ldots, x_n)'$ is the vector of coefficients (the joint rotation values at each frame), and $(y_1, y_2, \ldots, y_n)'$ is the vector of principal components. We name the elements of the
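The accumulated contribution factor of formula (7) and the "$E_m \ge 85\%$" rule can be computed directly from the sorted eigenvalues. The eigenvalues below are illustrative, not from the paper's data:

```python
import numpy as np

# Sketch of formula (7): cumulative fraction of variance ("energy")
# retained by the first m principal components, using made-up eigenvalues.
eigvals = np.array([5.0, 2.0, 1.5, 0.9, 0.4, 0.2])  # assumed sorted descending

E = np.cumsum(eigvals) / eigvals.sum()  # E[m-1] is E_m from formula (7)

# Smallest m with E_m >= 85%, the threshold the text suggests.
m = int(np.searchsorted(E, 0.85) + 1)
```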

vector $(y_1, y_2, \ldots, y_n)'$ virtual joints; they can be used to control the realism level of the motion. The more virtual joints we select, the higher the realism level we obtain. The virtual joints also represent the energy distribution of the motion in the orthogonal space. After selecting the number of virtual joints m (m ≤ n), we compute the actual joint parameters of every frame as follows:

$$\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} = A \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_m \end{pmatrix} \qquad (9)$$

Here A is a generalized inverse of the eigenvector matrix in (8); it is an $n \times m$ matrix. From formula (9), we obtain the x values of every frame in the motion sequence. Thus we can generate motion sequences of different realism levels by controlling the number of virtual joints m. When m = n, we recover the original motion.

4 Experiments

We used the CMU Mocap database in our experiments. The full human movement has 62 dimensions. In our experiments we ignored the three position dimensions, the four dimensions of the left and right clavicles, and the two dimensions of the left and right fingers; thus the actual space we used has 53 dimensions. We selected a jump motion sequence containing 300 frames. First, we computed the eigenvectors of the motion using the method introduced in Section 3. The numbers of virtual joints m we selected were 1, 3, 6, 13, 23, 33, 43, and 53. The relationship between the number of virtual joints and the percentage of energy it contains is shown in Table 1.

Table 1. The relationship between the number of virtual joints and the ratio of energy it contains

Number:   1      3      6      13     23     33      43    53
Percent:  41.9%  76.8%  91.7%  98.4%  99.8%  99.99%  100%  100%
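The truncated reconstruction of formula (9) can be sketched as a projection onto the first m eigenvectors followed by a mapping back to joint space; with orthonormal eigenvectors, the transpose of the truncated basis plays the role of the generalized inverse A. The data here is again a random stand-in for a real mocap sequence:

```python
import numpy as np

# Sketch of formula (9): keep only the first m virtual joints, then map
# back to actual joint values. With m = n the original motion returns.
rng = np.random.default_rng(2)
X = rng.standard_normal((300, 53))  # stand-in n x p motion matrix
Xc = X - X.mean(axis=0)

cov = Xc.T @ Xc / len(Xc)
eigvals, U = np.linalg.eigh(cov)
U = U[:, np.argsort(eigvals)[::-1]]  # columns ordered by decreasing eigenvalue

def reconstruct(Xc, U, m):
    """Project onto the first m eigenvectors (virtual joints) and back."""
    Y = Xc @ U[:, :m]        # virtual-joint values for every frame
    return Y @ U[:, :m].T    # back to actual joint space

X_low = reconstruct(Xc, U, 6)    # a lower-realism, 6-virtual-joint version
X_full = reconstruct(Xc, U, 53)  # m = n recovers the original motion
```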

Fig. 2. Energy ratio chart

Fig. 3. The values of the first virtual joint

The relationship between the number of virtual joints and the energy contained is also shown in Figure 2. When the number of virtual joints changes from 6 to 53, the energy does not change greatly. Figure 3 gives the value of the first virtual joint as a function of frame number. From the experimental results, we can see that all of the motions contain the basic motion configuration, but their apparent realism differs. The motions generated using our method are shown in Figure 4, which also reflects the relationship between the number of virtual joints and the realism level. When we selected 1 virtual joint, the generated motion can be recognized as a jump only hazily; when the number of virtual joints rises to 3, the motion looks more like a mummy's jump; when the number reaches 53, we recover the original motion sequence. This shows that ours is a holistic method for generating humanoid motions. From Table 1 and Figure 4, we can also see that the details determine the realism of motions; throwing away some details can yield motions of different realism levels, such as cartoon-like motion.

5 Discussion and Future Works

Though generating realistic humanoid motion is important in animation and virtual reality, humanoid motion of different realism levels is often required in many applications; for example, cartoon motion is used in many systems. We proposed a method to generate humanoid motion of different realism levels automatically. This can help lighten the burden of animators and enable reuse of existing animation sequences. In future work, we will try other dimension-reduction methods to compute the virtual joints. The relationship of virtual joints to different motion types, and the influence of motion details on motion realism, are other aspects to be studied further.

Fig. 4. The results of motion generation using different numbers of virtual joints (selected frames of the jump sequence, one row per virtual-joint count)

Acknowledgements. This work was supported in part by the National Natural Science Foundation of China (grant No. 60533090). The data used in this project was obtained

from mocap.cs.cmu.edu. The database was created with funding from NSF EIA-0196217.

References

1. Witkin, A. and Kass, M.: Spacetime Constraints. In Proceedings of SIGGRAPH 88, 1988: 159-168
2. Shin, H. J., Lee, J., Gleicher, M. and Shin, S. Y.: Computer puppetry: An importance-based approach. ACM Trans. on Graphics, 20(2), 2001: 67-94
3. Gleicher, M.: Retargetting motion to new characters. In Proceedings of SIGGRAPH 98, 1998: 33-42
4. Pullen, K. and Bregler, C.: Motion capture assisted animation: Texturing and synthesis. In Proceedings of SIGGRAPH 02, 2002: 501-508
5. Safonova, A., Hodgins, J. K. and Pollard, N. S.: Synthesizing physically realistic human motion in low-dimensional, behavior-specific spaces. ACM Trans. on Graphics, 23(3), 2004: 514-521
6. Arikan, O.: Compression of Motion Capture Databases. In Proceedings of SIGGRAPH 2006
7. Popović, Z. and Witkin, A. P.: Physically based motion transformation. In Proceedings of SIGGRAPH 99, 1999: 11-20
8. Howe, N., Leventon, M. and Freeman, W.: Bayesian reconstruction of 3D human motion from single-camera video. In Advances in Neural Information Processing Systems 12, 1999: 820-826
9. Sidenbladh, H., Black, M. J. and Sigal, L.: Implicit probabilistic models of human motion for synthesis and tracking. In European Conference on Computer Vision, 2002: 784-800
10. Jenkins, O. C. and Matarić, M. J.: Automated derivation of behavior vocabularies for autonomous humanoid motion. In AAMAS '03: Proceedings of the Second International Joint Conference on Autonomous Agents and Multiagent Systems, 2003: 225-232
11. Schölkopf, B., Smola, A. J. and Müller, K.-R.: Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10(5), 1998: 1299-1319
12. Tenenbaum, J. B., de Silva, V. and Langford, J. C.: A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500), 2000: 2319-2323
13. Roweis, S. T. and Saul, L. K.: Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500), 2000: 2323-2326