Motion Style Transfer in Correlated Motion Spaces


Alex Kilias 1 and Christos Mousas 2(B)

1 School of Engineering and Digital Arts, University of Kent, Canterbury CT2 7NT, UK alexk@kent.ac.uk
2 Department of Computer Science, Southern Illinois University, Carbondale, IL 62901, USA christos@cs.siu.edu

Abstract. This paper presents a methodology for transferring different motion style behaviors to virtual characters. Instead of learning the differences between two motion styles and then synthesizing the new motion, the presented methodology treats style transfer as a transformation of the motion's distribution. Specifically, the joint angle values of a motion are considered as a three-dimensional stochastic variable and the motion data as a set of samples, so the correlation between the three components can be computed by the covariance. The presented method computes the covariance between the three components of the joint angle values, along with the mean along each of the three axes. Then, by decomposing the covariance matrix using the singular value decomposition (SVD) algorithm, a rotation matrix is retrieved. To fit the motion style of an input motion to a reference motion style, the joint angle orientations of the input motion are scaled, rotated and translated to the reference style motion, thereby enabling the motion transfer process. The results obtained from this methodology indicate that quite reasonable motion sequences can be synthesized while keeping the required style content.

Keywords: Character animation · Motion style · Style transfer

1 Introduction

Animated virtual characters appear in various applications of virtual reality, such as video games and films. Those characters should be able to perform each motion realistically, representing each action in a human-like way.
Hence, even though the well-known keyframing techniques provide an intuitive way to animate a virtual character, they always depend on the developer's technical and artistic skills, and so may not generate highly realistic character motion. Due to these shortcomings, motion capture technologies were developed to enhance the naturalness of the virtual character's motion. With

© Springer International Publishing AG 2017. L.T. De Paolis et al. (Eds.): AVR 2017, Part I, LNCS 10324.

these technologies, it is possible to capture the required motion sequences by simply recording real humans performing the required motions. Then, by using motion retargeting techniques [1], those motion sequences can be transferred to any character. Moreover, these motion sequences can be interpolated [2], blended [3] and so on, therefore providing the developer the ability to reuse the motion data, as well as to give the motion data the new spatial and temporal characteristics of a new animated sequence. Among others, the requirement of transferring the style behavior of one motion sequence to another has attracted the attention of the research community. Generally, the techniques proposed during the past years are based on the ability to learn the style content of a motion using various methodologies related to the statistical analysis and synthesis of human motion data. However, less attention has been given to methodologies that transfer the required style by simply transferring the distribution of the character's joint angles to a reference distribution that represents the motion style. The main advantage of such a methodology is its ability to treat each of the character's joints separately, allowing a partial motion style to be mapped to the original motion. Yet, it is required that the reference style motion be similar in content to the input motion; for example, it is not possible to transfer a motion style from a locomotion to a non-locomotion sequence, and vice versa. Hence, in conjunction with the presented statistics-based motion style transfer methodology, a simple extension that provides partial motion style transfer is introduced. Based on the aforementioned explanation, this paper presents a novel methodology for transferring the motion style of a motion sequence to any other motion sequence.
The presented methodology assigns the motion distribution transfer process to a linear transformation. Based on this methodology, different examples are implemented and presented in this paper where either the whole body or a partial body motion of a character is enhanced with style content. The remainder of the paper is organized as follows. In Sect. 2, related work on motion synthesis techniques for transferring motion styles is presented. The problem statement and an overview of the proposed methodology are presented in Sect. 3. The methodology used for transferring the motion style of an input motion to a reference motion is presented in Sect. 4. The results obtained from the implementation of the proposed methodology are presented in Sect. 5. Finally, conclusions are drawn and potential future work is discussed in Sect. 6.

2 Related Work

Among others, data-driven techniques for animating virtual characters are the most popular. Those techniques synthesize new motion sequences using existing motion data. The most popular approaches are motion graphs [4–7], which allow transitions between poses, and footprint-based methods [8–10], where the locomotion of a character is synthesized by following footprints placed in the 3D environment. However, the main drawback of those techniques

is that they do not allow generalization of the motion data, such as allowing new styles to be synthesized. Since the ability to edit or synthesize new motion sequences while keeping the style variations of existing sequences is required, methodologies for transferring the motion style of one motion to another have been previously proposed. The parameterization of motion data can be quite powerful in cases that require the prediction of a new motion style from existing motion data. In general, dimensionality reduction techniques, such as principal component analysis (PCA) [11,12] and Gaussian process models (GPMs) [13], as well as probabilistic models, such as the hidden Markov model [14,15], or other machine learning methods, such as the well-known radial basis function (RBF) [16,17], can be quite beneficial in cases that require a learning process to distinguish between different motion styles. In the following paragraphs, methodologies for synthesizing style variations of human motion are presented.

Urtasun et al. [12] used PCA to train on a large motion capture dataset with variations in locomotion style. Using PCA coefficients, they synthesized new motions with different heights and speeds. Cao et al. [18] used independent component analysis (ICA) to automatically determine the emotional aspects of facial motions and edit the styles of motion data. In addition, Shapiro et al. [19] used ICA to decompose each single motion into components, providing the ability to select the style component manually. Torresani et al. [20] proposed a motion blending-based controller, where the blending weights were acquired from a large dataset of motion sequences whose motion styles had been labelled by specialists. Liu et al. [21] constructed a physical model using an optimization approach to generate motions with learned physical parameters that contained the style aspects.
Another solution, proposed by Elgammal and Lee [22], assigned style properties to time-invariant parameters and used a decomposable generative model that explicitly decomposed the style in a walking motion video. On the other hand, statistics have been used extensively in character animation [23–25], and statistical models are also able to provide quite reasonable results. Specifically, Hsu et al. [26] proposed a style transfer methodology that learns linear time-invariant models by comparing the input and output motions in order to perform style translation. Brand and Hertzmann [15] used Markov models to capture the style of training motions, which were then used to synthesize new motion sequences while transferring the style variations of the motion. Moreover, Gaussian process latent variable models (GPLVM) have been used to synthesize stylistic variations of human motion data. Generally, GPLVM can provide a probabilistic mapping of the non-linear structure in human motion data from the embedded space to the data space. Methodologies such as the one proposed by Grochow et al. [27], which adapts a scaled Gaussian process latent variable model (SGPLVM), can be used for motion editing while maintaining the original style, and methodologies such as the one proposed by Wang et al. [28] can be used to separate different style variations.

Interpolation and motion blending methodologies can also provide desirable results. For example, Kovar and Gleicher [29] proposed a denser sampling of the parameter space and applied blending techniques to generate new motion sequences. Rose et al. [16] used an RBF-based interpolation method to generate motions based on verb and adverb parameters, constructing a verb graph to create smooth transitions between different actions.

By using signal processing techniques it is possible to transfer a motion style to another motion sequence. Specifically, Unuma et al. [30] proposed a method that uses Fourier techniques to change the style of human gaits in the Fourier domain; with this method, motion characteristics could be extracted from the Fourier data. Bruderlin and Williams [31] edited stylistic motions by varying the frequency bands of the signal. Perlin [32] added rhythmic and stochastic noise functions to a character's skeletal joints in order to synthesize motion sequences with personality variations.

In the proposed methodology we transfer a motion style by aligning the distributions of two corresponding motion styles for each of the character's joint angle orientations. Based on this linear transformation process, the presented methodology succeeds in synthesizing stylized motion well.

3 Overview

In the following two subsections we present the problem statement of motion style transfer and the methodology used in this paper to approach it.

3.1 Problem Statement

The motion style transfer problem is the requirement of finding a continuous mapping t such that the input motion m_in can be represented as t(m_in), fulfilling the form m_in → t(m_in), where t(m_in) follows the target distribution of the reference style motion m_ref.
Finally, having aligned the corresponding motion sequences, a decomposition process is required that synthesizes the input motion while fulfilling the target style content. In the mathematical literature, this transformation process is known as the mass-preserving transport problem [33–35]. Figure 1 shows a simple example of this problem.

3.2 Motion Representation

In the presented methodology, each joint angle value of the character's motion is considered as a three-dimensional stochastic variable, and the motion data as a set of samples (postures); therefore, the correlation between the three components can be measured by the covariance. The presented approach changes the motion style through a series of transformations of the mean and covariance related to

Fig. 1. The distribution transfer problem requires finding a mapping m_in → t(m_in) between the input motion m_in and a reference motion style m_ref.

Fig. 2. The pipeline of the presented methodology.

translation, scaling, and rotation. Hence, the resulting motion keeps the required components of the reference style motion. A simple graphical explanation of the presented methodology is shown in Fig. 2. It should be noted that in the presented methodology the motion sequences are represented as M = {P(t), Q(t), r_1(t), ..., r_n(t)}, where P(t) and Q(t) represent the position and the orientation of the character's root in the t-th frame, and r_i(t), for i = 1, ..., n, is the orientation of the i-th joint of the character in the t-th frame. It should also be noted that the character's root position is excluded from this process. Based on this representation, the necessary components of the character's joint angles are computed. Then, using the aforementioned methodology, the required style content is transferred to the input motion.
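Since the paper gives no implementation, this representation can be sketched minimally in NumPy; the clip array and its values below are invented for illustration:

```python
import numpy as np

# Hypothetical motion clip: `angles` holds the Euler-angle orientation of a
# single joint over 200 frames, one (X, Y, Z) sample per frame.
rng = np.random.default_rng(0)
angles = rng.normal(loc=[10.0, -5.0, 30.0], scale=[4.0, 2.0, 8.0], size=(200, 3))

# Mean orientation along each of the three axes: (X̄, Ȳ, Z̄).
mean = angles.mean(axis=0)

# 3x3 covariance between the X, Y and Z components; np.cov expects
# variables in rows, so the sample matrix is transposed.
cov = np.cov(angles.T)
```

Repeating this for every joint, for both the input and the reference clip, yields the statistics that the transfer in Sect. 4 operates on.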

4 Methodology

In this section we present the methodology used for transferring a reference motion style to an input motion. This is achieved by developing a statistics-based method that performs the transformation in the joint angle orientation space using only the mean and the covariance matrix of each motion. In the remainder of this section we introduce the methodology used for achieving this transformation.

4.1 Motion Style Transfer

Firstly, for both the input motion and the reference motion style, the mean joint angle orientation along the three axes, as well as the covariance matrix between the three components in the Euler space, are computed. Thus, for the joint angle values of the input motion M_in we have M̄_in = (X̄_in, Ȳ_in, Z̄_in), and for the reference motion style M_ref we have M̄_ref = (X̄_ref, Ȳ_ref, Z̄_ref), while the covariance matrices are represented as C_in and C_ref respectively. It is now possible to decompose the covariance matrices using the singular value decomposition (SVD) methodology, as presented in Konstantinides and Yao [36]:

C = U Λ V^T    (1)

where U and V are orthogonal matrices composed of the eigenvectors of C, and Λ = diag(λ_X, λ_Y, λ_Z), where λ_X, λ_Y and λ_Z are the eigenvalues of C. U is employed in the next step as a rotation matrix to manipulate the joint angles of the style motion. Finally, the following transformation is used:

M = T_ref R_ref S_ref S_in R_in T_in M_in    (2)

where M = (X, Y, Z, 1)^T and M_in = (X_in, Y_in, Z_in, 1)^T denote the homogeneous coordinates of a joint angle orientation in Euler space for the output and the input motion respectively. Moreover, T_ref, T_in, R_ref, R_in, S_ref and S_in denote the matrices of translation, rotation and scaling derived from the reference style and the input motion respectively.
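As a small numerical check of Eq. (1), NumPy's SVD recovers an orthogonal U and the eigenvalues on the diagonal of Λ; the covariance values below are invented for illustration:

```python
import numpy as np

# A symmetric, positive-definite covariance matrix (illustrative values).
C = np.array([[16.0, 2.0, 0.5],
              [ 2.0, 4.0, 1.0],
              [ 0.5, 1.0, 9.0]])

# C = U Λ Vᵀ, as in Eq. (1); `lam` holds the diagonal of Λ.
U, lam, Vt = np.linalg.svd(C)

# U is orthogonal, so it can serve as a rotation matrix (up to a possible
# reflection; in practice a column sign may need flipping so det(U) = +1).
assert np.allclose(U @ U.T, np.eye(3))
assert np.allclose(U @ np.diag(lam) @ Vt, C)
```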
The definition of each aforementioned component is given in the Appendix. The key property of this transformation is its ability to transform one ellipsoid to fit another: the two ellipsoids separately fit the joint angle orientations of the reference style and the input motion in the Euler angle space. Fitting the ellipsoid extends the method of fitting an ellipse in two-dimensional space proposed by Lee [37] and involves computing the mean and the covariance matrix. While the mean denotes the center coordinates of an ellipsoid, the eigenvalues and eigenvectors of the covariance matrix indicate the lengths and orientations of the three axes of the ellipsoid. The transformations act on all of the character's joint angles in the input motion and move them to the appropriate position in the Euler space. Results of the presented motion style transfer process are shown in Fig. 3 as well as in the accompanying video.
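Putting Eq. (2) and the Appendix matrices together, the per-joint transfer can be sketched as below. This is an assumed NumPy implementation, not the authors' code; the function name `style_transfer` and the array shapes are hypothetical:

```python
import numpy as np

def style_transfer(joint_in, joint_ref):
    """Map one joint's Euler-angle samples (frames x 3) onto the mean and
    covariance of a reference style: M = T_ref R_ref S_ref S_in R_in T_in M_in."""
    mu_in, mu_ref = joint_in.mean(axis=0), joint_ref.mean(axis=0)
    U_in, lam_in, _ = np.linalg.svd(np.cov(joint_in.T))
    U_ref, lam_ref, _ = np.linalg.svd(np.cov(joint_ref.T))

    def translation(t):            # 4x4 homogeneous translation
        T = np.eye(4)
        T[:3, 3] = t
        return T

    def rotation(R):               # embed a 3x3 rotation in 4x4
        M = np.eye(4)
        M[:3, :3] = R
        return M

    def scaling(s):                # 4x4 diagonal scaling
        return np.diag(np.append(s, 1.0))

    # T_in centers the input, R_in (= U_in⁻¹ = U_inᵀ) and S_in whiten it,
    # then S_ref, R_ref and T_ref impose the reference distribution.
    M = (translation(mu_ref) @ rotation(U_ref) @ scaling(np.sqrt(lam_ref))
         @ scaling(1.0 / np.sqrt(lam_in)) @ rotation(U_in.T)
         @ translation(-mu_in))

    homog = np.column_stack([joint_in, np.ones(len(joint_in))])
    return (homog @ M.T)[:, :3]
```

Applied per joint, the output samples reproduce the reference joint's mean and covariance up to floating-point error, while the frame-to-frame trajectory of the input motion is preserved.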

Fig. 3. Examples of motions synthesized with the proposed methodology. The input motion (a), and the synthesized styles (b)–(e).

5 Implementation and Results

For the implementation of the proposed solution, we asked an experienced designer to provide motion sequences with style content related to happy, angry, sad, tired, proud, sneaky, cat walking, crab walking, lame walking and more (see Fig. 4). On average, those motion sequences are no longer than 100 frames.

Fig. 4. Example postures of the different reference motion styles used in the presented methodology.

Hence, based on the aforementioned methodology we

use various input motion sequences provided by the CMU motion capture database [38], along with various synthesized example motions. Additional examples are presented in the accompanying video. It should be noted that the mapping of the input motion to the reference motion style is computed off-line; the motion synthesis process, however, runs in real time. On an Intel i7 at 2.2 GHz with 8 GB of memory, the presented methodology provides the new motion at an average of 45 frames per second.

6 Conclusions and Future Work

In this paper, a novel methodology for transferring the motion style of a reference motion to an input one was presented. The proposed methodology transfers a motion style by aligning the corresponding motion spaces, i.e., the distribution and the centre of each joint's angle orientation values. However, the presented method can provide quite reasonable results only when the motion sequences used correspond to the same content (e.g., transferring a locomotion style to another locomotion sequence). Hence, to enhance the motion style transfer process we assumed that a partial motion style transfer could be quite beneficial, and we implemented a simple methodology that allows partial motion transfer. On the other hand, motion style transfer is quite a complex process. We believe that methodologies that synthesize motion sequences with specific stylistic content by aligning distributions are a promising research area. Therefore, in our future work we would like to extend the presented methodology towards a generalized motion style transfer process.

Appendix

Here the definitions of the components used in Eq. (2) are presented.
Specifically, the matrices T_ref, T_in, R_ref, R_in, S_ref and S_in denote the translation, rotation and scaling derived from the reference style and the input motion respectively. They are defined as:

T_{ref} = \begin{pmatrix} 1 & 0 & 0 & \bar{X}_{ref} \\ 0 & 1 & 0 & \bar{Y}_{ref} \\ 0 & 0 & 1 & \bar{Z}_{ref} \\ 0 & 0 & 0 & 1 \end{pmatrix}    (3)

T_{in} = \begin{pmatrix} 1 & 0 & 0 & -\bar{X}_{in} \\ 0 & 1 & 0 & -\bar{Y}_{in} \\ 0 & 0 & 1 & -\bar{Z}_{in} \\ 0 & 0 & 0 & 1 \end{pmatrix}    (4)

S_{ref} = \begin{pmatrix} \sqrt{\lambda_X^{ref}} & 0 & 0 & 0 \\ 0 & \sqrt{\lambda_Y^{ref}} & 0 & 0 \\ 0 & 0 & \sqrt{\lambda_Z^{ref}} & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}    (5)

S_{in} = \begin{pmatrix} 1/\sqrt{\lambda_X^{in}} & 0 & 0 & 0 \\ 0 & 1/\sqrt{\lambda_Y^{in}} & 0 & 0 \\ 0 & 0 & 1/\sqrt{\lambda_Z^{in}} & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}    (6)

R_{ref} = U_{ref}    (7)

R_{in} = U_{in}^{-1}    (8)

References

1. Gleicher, M.: Retargetting motion to new characters. In: Annual Conference on Computer Graphics and Interactive Techniques (1998)
2. Mukai, T., Kuriyama, S.: Geostatistical motion interpolation. ACM Trans. Graph. 24(3) (2005)
3. Kovar, L., Gleicher, M.: Flexible automatic motion blending with registration curves. In: ACM SIGGRAPH/Eurographics Symposium on Computer Animation (2003)
4. Arikan, O., Forsyth, D.A.: Interactive motion generation from examples. ACM Trans. Graph. 21(3) (2002)
5. Kovar, L., Gleicher, M., Pighin, F.: Motion graphs. ACM Trans. Graph. 21(3) (2002)
6. Lee, J., Chai, J., Reitsma, P.S., Hodgins, J.K., Pollard, N.S.: Interactive control of avatars animated with human motion data. ACM Trans. Graph. 21(3) (2002)
7. Safonova, A., Hodgins, J.K.: Construction and optimal search of interpolated motion graphs. ACM Trans. Graph. 26(3), 106 (2007)
8. van Basten, B.J., Peeters, P.W.A.M., Egges, A.: The step space: example-based footprint-driven motion synthesis. Comput. Animat. Virtual Worlds 21(3–4) (2010)
9. Mousas, C., Newbury, P., Anagnostopoulos, C.: Footprint-driven locomotion composition. Int. J. Comput. Graph. Animat. 4(4) (2014)
10. Mousas, C., Newbury, P., Anagnostopoulos, C.: Measuring the steps: generating action transitions between locomotion behaviours. In: International Conference on Computer Games: AI, Animation, Mobile, Interactive Multimedia, Educational & Serious Games (2013)
11. Chien, Y.-R., Liu, J.-S.: Learning the stylistic similarity between human motions. In: Bebis, G., et al. (eds.) ISVC 2006. LNCS, vol. 4291. Springer, Heidelberg (2006)
12. Urtasun, R., Glardon, P., Boulic, R., Thalmann, D., Fua, P.: Style-based motion synthesis. Comput. Graph. Forum 23(4) (2004)

13. Ma, X., Le, B.H., Deng, Z.: Style learning and transferring for facial animation editing. In: ACM SIGGRAPH/Eurographics Symposium on Computer Animation (2009)
14. Tilmanne, J., Moinet, A., Dutoit, T.: Stylistic gait synthesis based on hidden Markov models. EURASIP J. Adv. Sig. Process. 1, 1–14 (2012)
15. Brand, M., Hertzmann, A.: Style machines. In: 27th Annual Conference on Computer Graphics and Interactive Techniques (2000)
16. Rose, C., Bodenheimer, B., Cohen, M.F.: Verbs and adverbs: multidimensional motion interpolation. IEEE Comput. Graph. Appl. 18(5) (1998)
17. Song, J., Choi, B., Seol, Y., Noh, J.: Characteristic facial retargeting. Comput. Animat. Virtual Worlds 22(2–3) (2011)
18. Cao, Y., Faloutsos, P., Pighin, F.: Unsupervised learning for speech motion editing. In: ACM SIGGRAPH/Eurographics Symposium on Computer Animation (2003)
19. Shapiro, A., Cao, Y., Faloutsos, P.: Style components. In: Graphics Interface (2006)
20. Torresani, L., Hackney, R., Bregler, C.: Learning motion style synthesis from perceptual observations. In: Advances in Neural Information Processing Systems (2007)
21. Liu, C., Hertzmann, A., Popović, Z.: Learning physics-based motion style with nonlinear inverse optimization. ACM Trans. Graph. 24(3) (2005)
22. Elgammal, A., Lee, C.: Separating style and content on a nonlinear manifold. In: IEEE Conference on Computer Vision and Pattern Recognition (2004)
23. Mousas, C., Newbury, P., Anagnostopoulos, C.-N.: Evaluating the covariance matrix constraints for data-driven statistical human motion reconstruction. In: Spring Conference on Computer Graphics (2014)
24. Mousas, C., Newbury, P., Anagnostopoulos, C.-N.: Data-driven motion reconstruction using local regression models. In: Iliadis, L., Maglogiannis, I., Papadopoulos, H. (eds.) AIAI 2014. IAICT, vol. 436. Springer, Heidelberg (2014)
25. Mousas, C., Newbury, P., Anagnostopoulos, C.-N.: Efficient hand-over motion reconstruction. In: International Conference on Computer Graphics, Visualization and Computer Vision (2014)
26. Hsu, E., Pulli, K., Popović, J.: Style translation for human motion. ACM Trans. Graph. 24(3) (2005)
27. Grochow, K., Martin, S.L., Hertzmann, A., Popović, Z.: Style-based inverse kinematics. ACM Trans. Graph. 23(3) (2004)
28. Wang, J.M., Fleet, D.J., Hertzmann, A.: Multifactor Gaussian process models for style-content separation. In: International Conference on Machine Learning (2007)
29. Kovar, L., Gleicher, M.: Automated extraction and parameterization of motions in large data sets. ACM Trans. Graph. 23(3) (2004)
30. Unuma, M., Anjyo, K., Takeuchi, R.: Fourier principles for emotion-based human figure animation. In: Annual Conference on Computer Graphics and Interactive Techniques (1995)
31. Bruderlin, A., Williams, L.: Motion signal processing. In: Annual Conference on Computer Graphics and Interactive Techniques (1995)
32. Perlin, K.: Real time responsive animation with personality. IEEE Trans. Vis. Comput. Graph. 1(1), 5–15 (1995)

33. Evans, L.C.: Partial differential equations and Monge-Kantorovich mass transfer. In: Current Developments in Mathematics (1999)
34. Gangbo, W., McCann, R.J.: The geometry of optimal transportation. Acta Math. 177(2) (1996)
35. Villani, C.: Topics in Optimal Transportation, vol. 58. American Mathematical Society (2003)
36. Konstantinides, K., Yao, K.: Statistical analysis of effective singular values in matrix rank determination. IEEE Trans. Acoust. Speech Sig. Process. 36(5) (1988)
37. Lee, L.: Gait analysis for classification. Ph.D. thesis, Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology (2002)
38. Carnegie Mellon University: Motion capture database.


More information

Graph-based High Level Motion Segmentation using Normalized Cuts

Graph-based High Level Motion Segmentation using Normalized Cuts Graph-based High Level Motion Segmentation using Normalized Cuts Sungju Yun, Anjin Park and Keechul Jung Abstract Motion capture devices have been utilized in producing several contents, such as movies

More information

Phase-Functioned Neural Networks for Motion Learning

Phase-Functioned Neural Networks for Motion Learning Phase-Functioned Neural Networks for Motion Learning TAMS University of Hamburg 03.01.2018 Sebastian Starke University of Edinburgh School of Informatics Institue of Perception, Action and Behaviour Sebastian.Starke@ed.ac.uk

More information

Thiruvarangan Ramaraj CS525 Graphics & Scientific Visualization Spring 2007, Presentation I, February 28 th 2007, 14:10 15:00. Topic (Research Paper):

Thiruvarangan Ramaraj CS525 Graphics & Scientific Visualization Spring 2007, Presentation I, February 28 th 2007, 14:10 15:00. Topic (Research Paper): Thiruvarangan Ramaraj CS525 Graphics & Scientific Visualization Spring 2007, Presentation I, February 28 th 2007, 14:10 15:00 Topic (Research Paper): Jinxian Chai and Jessica K. Hodgins, Performance Animation

More information

Modeling Style and Variation in Human Motion

Modeling Style and Variation in Human Motion Eurographics/ ACM SIGGRAPH Symposium on Computer Animation (2010) M. Otaduy and Z. Popovic (Editors) Modeling Style and Variation in Human Motion Wanli Ma 1,2 Shihong Xia 1 Jessica K. Hodgins 3 Xiao Yang

More information

THE capability to precisely synthesize online fullbody

THE capability to precisely synthesize online fullbody 1180 JOURNAL OF MULTIMEDIA, VOL. 9, NO. 10, OCTOBER 2014 Sparse Constrained Motion Synthesis Using Local Regression Models Huajun Liu a, Fuxi Zhu a a School of Computer, Wuhan University, Wuhan 430072,

More information

Performance-Driven Hybrid Full-Body Character Control for Navigation and Interaction in Virtual Environments

Performance-Driven Hybrid Full-Body Character Control for Navigation and Interaction in Virtual Environments 3D Res (2017) 8:18 DOI 10.1007/s13319-017-0124-0 3DR EXPRESS Performance-Driven Hybrid Full-Body Character Control for Navigation and Interaction in Virtual Environments Christos Mousas. Christos-Nikolaos

More information

Motion Track: Visualizing Variations of Human Motion Data

Motion Track: Visualizing Variations of Human Motion Data Motion Track: Visualizing Variations of Human Motion Data Yueqi Hu Shuangyuan Wu Shihong Xia Jinghua Fu Wei Chen ABSTRACT This paper proposes a novel visualization approach, which can depict the variations

More information

Automating Expressive Locomotion Generation

Automating Expressive Locomotion Generation Automating Expressive ocomotion Generation Yejin Kim and Michael Neff University of California, Davis, Department of Computer Science and Program for Technocultural Studies, 1 Shields Avenue, Davis, CA

More information

Physically Based Character Animation

Physically Based Character Animation 15-464/15-664 Technical Animation April 2, 2013 Physically Based Character Animation Katsu Yamane Disney Research, Pittsburgh kyamane@disneyresearch.com Physically Based Character Animation Use physics

More information

M I RA Lab. Speech Animation. Where do we stand today? Speech Animation : Hierarchy. What are the technologies?

M I RA Lab. Speech Animation. Where do we stand today? Speech Animation : Hierarchy. What are the technologies? MIRALab Where Research means Creativity Where do we stand today? M I RA Lab Nadia Magnenat-Thalmann MIRALab, University of Geneva thalmann@miralab.unige.ch Video Input (face) Audio Input (speech) FAP Extraction

More information

Gaussian Process Dynamical Models

Gaussian Process Dynamical Models DRAFT Final version to appear in NIPS 18. Gaussian Process Dynamical Models Jack M. Wang, David J. Fleet, Aaron Hertzmann Department of Computer Science University of Toronto, Toronto, ON M5S 3G4 jmwang,hertzman

More information

Active Learning for Real-Time Motion Controllers

Active Learning for Real-Time Motion Controllers Active Learning for Real-Time Motion Controllers Seth Cooper 1 Aaron Hertzmann 2 Zoran Popović 1 1 University of Washington 2 University of Toronto Figure 1: Catching controller: the character moves through

More information

The Role of Manifold Learning in Human Motion Analysis

The Role of Manifold Learning in Human Motion Analysis The Role of Manifold Learning in Human Motion Analysis Ahmed Elgammal and Chan Su Lee Department of Computer Science, Rutgers University, Piscataway, NJ, USA {elgammal,chansu}@cs.rutgers.edu Abstract.

More information

MOTION capture is a technique and a process that

MOTION capture is a technique and a process that JOURNAL OF L A TEX CLASS FILES, VOL. 6, NO. 1, JANUARY 2008 1 Automatic estimation of skeletal motion from optical motion capture data xxx, Member, IEEE, Abstract Utilization of motion capture techniques

More information

Epitomic Analysis of Human Motion

Epitomic Analysis of Human Motion Epitomic Analysis of Human Motion Wooyoung Kim James M. Rehg Department of Computer Science Georgia Institute of Technology Atlanta, GA 30332 {wooyoung, rehg}@cc.gatech.edu Abstract Epitomic analysis is

More information

GRAPH-BASED APPROACH FOR MOTION CAPTURE DATA REPRESENTATION AND ANALYSIS. Jiun-Yu Kao, Antonio Ortega, Shrikanth S. Narayanan

GRAPH-BASED APPROACH FOR MOTION CAPTURE DATA REPRESENTATION AND ANALYSIS. Jiun-Yu Kao, Antonio Ortega, Shrikanth S. Narayanan GRAPH-BASED APPROACH FOR MOTION CAPTURE DATA REPRESENTATION AND ANALYSIS Jiun-Yu Kao, Antonio Ortega, Shrikanth S. Narayanan University of Southern California Department of Electrical Engineering ABSTRACT

More information

Stylistic Reuse of View-Dependent Animations

Stylistic Reuse of View-Dependent Animations Stylistic Reuse of View-Dependent Animations Parag Chaudhuri Ashwani Jindal Prem Kalra Subhashis Banerjee Department of Computer Science and Engineering, Indian Institute of Technology Delhi, Hauz Khas,

More information

Learning Deformations of Human Arm Movement to Adapt to Environmental Constraints

Learning Deformations of Human Arm Movement to Adapt to Environmental Constraints Learning Deformations of Human Arm Movement to Adapt to Environmental Constraints Stephan Al-Zubi and Gerald Sommer Cognitive Systems, Christian Albrechts University, Kiel, Germany Abstract. We propose

More information

Chapter 7. Conclusions and Future Work

Chapter 7. Conclusions and Future Work Chapter 7 Conclusions and Future Work In this dissertation, we have presented a new way of analyzing a basic building block in computer graphics rendering algorithms the computational interaction between

More information

Real-Time Motion Transition by Example

Real-Time Motion Transition by Example Brigham Young University BYU ScholarsArchive All Theses and Dissertations 2005-11-10 Real-Time Motion Transition by Example Cameron Quinn Egbert Brigham Young University - Provo Follow this and additional

More information

Using GPLVM for Inverse Kinematics on Non-cyclic Data

Using GPLVM for Inverse Kinematics on Non-cyclic Data 000 00 002 003 004 005 006 007 008 009 00 0 02 03 04 05 06 07 08 09 020 02 022 023 024 025 026 027 028 029 030 03 032 033 034 035 036 037 038 039 040 04 042 043 044 045 046 047 048 049 050 05 052 053 Using

More information

3D Human Motion Analysis and Manifolds

3D Human Motion Analysis and Manifolds D E P A R T M E N T O F C O M P U T E R S C I E N C E U N I V E R S I T Y O F C O P E N H A G E N 3D Human Motion Analysis and Manifolds Kim Steenstrup Pedersen DIKU Image group and E-Science center Motivation

More information

CS-184: Computer Graphics. Today

CS-184: Computer Graphics. Today CS-184: Computer Graphics Lecture #20: Motion Capture Prof. James O Brien University of California, Berkeley V2005-F20-1.0 Today Motion Capture 2 Motion Capture Record motion from physical objects Use

More information

Graph-Based Action Models for Human Motion Classification

Graph-Based Action Models for Human Motion Classification Graph-Based Action Models for Human Motion Classification Felix Endres Jürgen Hess Wolfram Burgard University of Freiburg, Dept. of Computer Science, Freiburg, Germany Abstract Recognizing human actions

More information

Realtime Style Transfer for Unlabeled Heterogeneous Human Motion

Realtime Style Transfer for Unlabeled Heterogeneous Human Motion Realtime Style Transfer for Unlabeled Heterogeneous Human Motion 1 Institute Shihong Xia1 Congyi Wang1 Jinxiang Chai2 2 of Computing Technology, CAS Texas A&M University Jessica Hodgins3 3 Carnegie Mellon

More information

Style-based Inverse Kinematics

Style-based Inverse Kinematics Style-based Inverse Kinematics Keith Grochow, Steven L. Martin, Aaron Hertzmann, Zoran Popovic SIGGRAPH 04 Presentation by Peter Hess 1 Inverse Kinematics (1) Goal: Compute a human body pose from a set

More information

Gaussian Process Dynamical Models

Gaussian Process Dynamical Models Gaussian Process Dynamical Models Jack M. Wang, David J. Fleet, Aaron Hertzmann Department of Computer Science University of Toronto, Toronto, ON M5S 3G4 jmwang,hertzman@dgp.toronto.edu, fleet@cs.toronto.edu

More information

MOTION CAPTURE DATA PROCESSING - MOTION EDITING / RETARGETING - MOTION CONTROL / GRAPH - INVERSE KINEMATIC. Alexandre Meyer Master Informatique

MOTION CAPTURE DATA PROCESSING - MOTION EDITING / RETARGETING - MOTION CONTROL / GRAPH - INVERSE KINEMATIC. Alexandre Meyer Master Informatique 1 MOTION CAPTURE DATA PROCESSING - MOTION EDITING / RETARGETING - MOTION CONTROL / GRAPH - INVERSE KINEMATIC Alexandre Meyer Master Informatique Overview: Motion data processing In this course Motion editing

More information

Scaled Functional Principal Component Analysis for Human Motion Synthesis

Scaled Functional Principal Component Analysis for Human Motion Synthesis Scaled Functional Principal Component Analysis for Human Motion Synthesis Han Du 1 Somayeh Hosseini 1 Martin Manns 2 Erik Herrmann 1 Klaus Fischer 1 1 German Research Center for Artificial Intelligence,

More information

Human Motion Database with a Binary Tree and Node Transition Graphs

Human Motion Database with a Binary Tree and Node Transition Graphs Human Motion Database with a Binary Tree and Node Transition Graphs Katsu Yamane Disney Research, Pittsburgh kyamane@disneyresearch.com Yoshifumi Yamaguchi Dept. of Mechano-Informatics University of Tokyo

More information

Facial Motion Capture Editing by Automated Orthogonal Blendshape Construction and Weight Propagation

Facial Motion Capture Editing by Automated Orthogonal Blendshape Construction and Weight Propagation Facial Motion Capture Editing by Automated Orthogonal Blendshape Construction and Weight Propagation Qing Li and Zhigang Deng Department of Computer Science University of Houston Houston, TX, 77204, USA

More information

Motion Capture Assisted Animation: Texturing and Synthesis

Motion Capture Assisted Animation: Texturing and Synthesis Motion Capture Assisted Animation: Texturing and Synthesis Katherine Pullen Stanford University Christoph Bregler Stanford University Abstract We discuss a method for creating animations that allows the

More information

Modeling Variation in Motion Data

Modeling Variation in Motion Data Modeling Variation in Motion Data Manfred Lau Ziv Bar-Joseph James Kuffner April 2008 CMU-CS-08-118 School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213 Abstract We present a new

More information

MOTION CAPTURE BASED MOTION ANALYSIS AND MOTION SYNTHESIS FOR HUMAN-LIKE CHARACTER ANIMATION

MOTION CAPTURE BASED MOTION ANALYSIS AND MOTION SYNTHESIS FOR HUMAN-LIKE CHARACTER ANIMATION MOTION CAPTURE BASED MOTION ANALYSIS AND MOTION SYNTHESIS FOR HUMAN-LIKE CHARACTER ANIMATION ZHIDONG XIAO July 2009 National Centre for Computer Animation Bournemouth University This copy of the thesis

More information

A Method of Hyper-sphere Cover in Multidimensional Space for Human Mocap Data Retrieval

A Method of Hyper-sphere Cover in Multidimensional Space for Human Mocap Data Retrieval Journal of Human Kinetics volume 28/2011, 133-139 DOI: 10.2478/v10078-011-0030-0 133 Section III Sport, Physical Education & Recreation A Method of Hyper-sphere Cover in Multidimensional Space for Human

More information

Homeomorphic Manifold Analysis (HMA): Generalized Separation of Style and Content on Manifolds

Homeomorphic Manifold Analysis (HMA): Generalized Separation of Style and Content on Manifolds Homeomorphic Manifold Analysis (HMA): Generalized Separation of Style and Content on Manifolds Ahmed Elgammal a,,, Chan-Su Lee b a Department of Computer Science, Rutgers University, Frelinghuysen Rd,

More information

Angular momentum guided motion concatenation. Introduction. Related Work. By Hubert P. H. Shum *, Taku Komura and Pranjul Yadav

Angular momentum guided motion concatenation. Introduction. Related Work. By Hubert P. H. Shum *, Taku Komura and Pranjul Yadav COMPUTER ANIMATION AND VIRTUAL WORLDS Comp. Anim. Virtual Worlds (2009) Published online in Wiley InterScience (www.interscience.wiley.com).315 Angular momentum guided motion concatenation By Hubert P.

More information

Identifying Humans by Their Walk and Generating New Motions Using Hidden Markov Models

Identifying Humans by Their Walk and Generating New Motions Using Hidden Markov Models Identifying Humans by Their Walk and Generating New Motions Using Hidden Markov Models CPSC 532A Topics in AI: Graphical Models and CPSC 526 Computer Animation December 15, 2004 Andrew Adam andyadam@cs.ubc.ca

More information

Path Planning Directed Motion Control of Virtual Humans in Complex Environments

Path Planning Directed Motion Control of Virtual Humans in Complex Environments 1 Path Planning Directed Motion Control of Virtual Humans in Complex Environments Song Song, Weibin Liu, Ruxiang Wei Institute of Information Science Beijing Jiaotong University Beijing Key Laboratory

More information

Hierarchical Gaussian Process Latent Variable Models

Hierarchical Gaussian Process Latent Variable Models Neil D. Lawrence neill@cs.man.ac.uk School of Computer Science, University of Manchester, Kilburn Building, Oxford Road, Manchester, M13 9PL, U.K. Andrew J. Moore A.Moore@dcs.shef.ac.uk Dept of Computer

More information

Motion Capture, Motion Edition

Motion Capture, Motion Edition Motion Capture, Motion Edition 2013-14 Overview Historical background Motion Capture, Motion Edition Motion capture systems Motion capture workflow Re-use of motion data Combining motion data and physical

More information

Artificial Neural Network-Based Prediction of Human Posture

Artificial Neural Network-Based Prediction of Human Posture Artificial Neural Network-Based Prediction of Human Posture Abstract The use of an artificial neural network (ANN) in many practical complicated problems encourages its implementation in the digital human

More information

Splicing Upper-Body Actions with Locomotion

Splicing Upper-Body Actions with Locomotion EUROGRAPHICS 2006 / E. Gröller and L. Szirmay-Kalos (Guest Editors) Volume 25 (2006), Number 3 Splicing Upper-Body Actions with Locomotion Rachel Heck Lucas Kovar Michael Gleicher University of Wisconsin-Madison

More information

Gaussian Process Motion Graph Models for Smooth Transitions among Multiple Actions

Gaussian Process Motion Graph Models for Smooth Transitions among Multiple Actions Gaussian Process Motion Graph Models for Smooth Transitions among Multiple Actions Norimichi Ukita 1 and Takeo Kanade Graduate School of Information Science, Nara Institute of Science and Technology The

More information

CS 231. Motion Capture Data I. The Pipeline. Bodenheimer et al

CS 231. Motion Capture Data I. The Pipeline. Bodenheimer et al CS 231 Motion Capture Data I The Pipeline Bodenheimer et al 1 Marker Magnetic Optical Marker placement On limbs vs joints neither is ideal Over tight clothing or thin skin In repeatable 'landmarks' Using

More information

Component-based Locomotion Composition

Component-based Locomotion Composition Eurographics/ ACM SIGGRAPH Symposium on Computer Animation (2012) P. Kry and J. Lee (Editors) Component-based Locomotion Composition Yejin Kim and Michael Neff Department of Computer Science and Program

More information

3D Mesh Sequence Compression Using Thin-plate Spline based Prediction

3D Mesh Sequence Compression Using Thin-plate Spline based Prediction Appl. Math. Inf. Sci. 10, No. 4, 1603-1608 (2016) 1603 Applied Mathematics & Information Sciences An International Journal http://dx.doi.org/10.18576/amis/100440 3D Mesh Sequence Compression Using Thin-plate

More information

Motion Rings for Interactive Gait Synthesis

Motion Rings for Interactive Gait Synthesis Motion Rings for Interactive Gait Synthesis Tomohiko Mukai Square Enix Motion sample Pose space Time axis (a) Motion ring (b) Precomputed sphere (c) Bumpy terrain (d) Stairs (e) Stepping stones Figure

More information

A New Algorithm for Measuring and Optimizing the Manipulability Index

A New Algorithm for Measuring and Optimizing the Manipulability Index DOI 10.1007/s10846-009-9388-9 A New Algorithm for Measuring and Optimizing the Manipulability Index Ayssam Yehia Elkady Mohammed Mohammed Tarek Sobh Received: 16 September 2009 / Accepted: 27 October 2009

More information

Synthesizing Realistic Human Motions Using Motion Graphs

Synthesizing Realistic Human Motions Using Motion Graphs Synthesizing Realistic Human Motions Using Motion Graphs by Ling Mao, B.Sc. Dissertation Presented to the University of Dublin, Trinity College in fulfillment of the requirements for the Degree of Master

More information

Character Motion Control by Hands and Principal Component Analysis

Character Motion Control by Hands and Principal Component Analysis Character Motion Control by Hands and Principal Component Analysis Masaki Oshita Hayato Oshima Yuta Senju Syun Morishige Kyushu Institute of Technology (a) Picture of our interface (b) Interface design

More information

Gait analysis for person recognition using principal component analysis and support vector machines

Gait analysis for person recognition using principal component analysis and support vector machines Gait analysis for person recognition using principal component analysis and support vector machines O V Strukova 1, LV Shiripova 1 and E V Myasnikov 1 1 Samara National Research University, Moskovskoe

More information

Does Dimensionality Reduction Improve the Quality of Motion Interpolation?

Does Dimensionality Reduction Improve the Quality of Motion Interpolation? Does Dimensionality Reduction Improve the Quality of Motion Interpolation? Sebastian Bitzer, Stefan Klanke and Sethu Vijayakumar School of Informatics - University of Edinburgh Informatics Forum, 10 Crichton

More information

Abstract We present a system which automatically generates a 3D face model from a single frontal image of a face. Our system consists of two component

Abstract We present a system which automatically generates a 3D face model from a single frontal image of a face. Our system consists of two component A Fully Automatic System To Model Faces From a Single Image Zicheng Liu Microsoft Research August 2003 Technical Report MSR-TR-2003-55 Microsoft Research Microsoft Corporation One Microsoft Way Redmond,

More information

Multidirectional 2DPCA Based Face Recognition System

Multidirectional 2DPCA Based Face Recognition System Multidirectional 2DPCA Based Face Recognition System Shilpi Soni 1, Raj Kumar Sahu 2 1 M.E. Scholar, Department of E&Tc Engg, CSIT, Durg 2 Associate Professor, Department of E&Tc Engg, CSIT, Durg Email:

More information

Dynamic Human Shape Description and Characterization

Dynamic Human Shape Description and Characterization Dynamic Human Shape Description and Characterization Z. Cheng*, S. Mosher, Jeanne Smith H. Cheng, and K. Robinette Infoscitex Corporation, Dayton, Ohio, USA 711 th Human Performance Wing, Air Force Research

More information

Motion Editing with Data Glove

Motion Editing with Data Glove Motion Editing with Data Glove Wai-Chun Lam City University of Hong Kong 83 Tat Chee Ave Kowloon, Hong Kong email:jerrylam@cityu.edu.hk Feng Zou City University of Hong Kong 83 Tat Chee Ave Kowloon, Hong

More information

The accuracy and robustness of motion

The accuracy and robustness of motion Orthogonal-Blendshape-Based Editing System for Facial Motion Capture Data Qing Li and Zhigang Deng University of Houston The accuracy and robustness of motion capture has made it a popular technique for

More information

Segment-Based Human Motion Compression

Segment-Based Human Motion Compression Eurographics/ ACM SIGGRAPH Symposium on Computer Animation (2006) M.-P. Cani, J. O Brien (Editors) Segment-Based Human Motion Compression Guodong Liu and Leonard McMillan Department of Computer Science,

More information

Face Hallucination Based on Eigentransformation Learning

Face Hallucination Based on Eigentransformation Learning Advanced Science and Technology etters, pp.32-37 http://dx.doi.org/10.14257/astl.2016. Face allucination Based on Eigentransformation earning Guohua Zou School of software, East China University of Technology,

More information

Statistical Learning of Human Body through Feature Wireframe

Statistical Learning of Human Body through Feature Wireframe Statistical Learning of Human Body through Feature Wireframe Jida HUANG 1, Tsz-Ho KWOK 2*, Chi ZHOU 1 1 Industrial and Systems Engineering, University at Buffalo, SUNY, Buffalo NY, USA; 2 Mechanical, Industrial

More information

Unsupervised Learning

Unsupervised Learning Unsupervised Learning Learning without Class Labels (or correct outputs) Density Estimation Learn P(X) given training data for X Clustering Partition data into clusters Dimensionality Reduction Discover

More information

Unsupervised Learning for Speech Motion Editing

Unsupervised Learning for Speech Motion Editing Eurographics/SIGGRAPH Symposium on Computer Animation (2003) D. Breen, M. Lin (Editors) Unsupervised Learning for Speech Motion Editing Yong Cao 1,2 Petros Faloutsos 1 Frédéric Pighin 2 1 University of

More information

Master s Thesis. Cloning Facial Expressions with User-defined Example Models

Master s Thesis. Cloning Facial Expressions with User-defined Example Models Master s Thesis Cloning Facial Expressions with User-defined Example Models ( Kim, Yejin) Department of Electrical Engineering and Computer Science Division of Computer Science Korea Advanced Institute

More information

Real Time Motion Authoring of a 3D Avatar

Real Time Motion Authoring of a 3D Avatar Vol.46 (Games and Graphics and 2014), pp.170-174 http://dx.doi.org/10.14257/astl.2014.46.38 Real Time Motion Authoring of a 3D Avatar Harinadha Reddy Chintalapalli and Young-Ho Chai Graduate School of

More information

Announcements. Midterms back at end of class ½ lecture and ½ demo in mocap lab. Have you started on the ray tracer? If not, please do due April 10th

Announcements. Midterms back at end of class ½ lecture and ½ demo in mocap lab. Have you started on the ray tracer? If not, please do due April 10th Announcements Midterms back at end of class ½ lecture and ½ demo in mocap lab Have you started on the ray tracer? If not, please do due April 10th 1 Overview of Animation Section Techniques Traditional

More information

Time Series Prediction as a Problem of Missing Values: Application to ESTSP2007 and NN3 Competition Benchmarks

Time Series Prediction as a Problem of Missing Values: Application to ESTSP2007 and NN3 Competition Benchmarks Series Prediction as a Problem of Missing Values: Application to ESTSP7 and NN3 Competition Benchmarks Antti Sorjamaa and Amaury Lendasse Abstract In this paper, time series prediction is considered as

More information

Master of Puppets: An Animation-by-Demonstration Computer Puppetry Authoring Framework

Master of Puppets: An Animation-by-Demonstration Computer Puppetry Authoring Framework 3D Res (2018)9:5 https://doi.org/10.1007/s13319-018-0158-y 3DR EXPRESS Master of Puppets: An Animation-by-Demonstration Computer Puppetry Authoring Framework Yaoyuan Cui. Christos Mousas Received: 15 December

More information

Manifold Learning for Video-to-Video Face Recognition

Manifold Learning for Video-to-Video Face Recognition Manifold Learning for Video-to-Video Face Recognition Abstract. We look in this work at the problem of video-based face recognition in which both training and test sets are video sequences, and propose

More information

Texture Image Segmentation using FCM

Texture Image Segmentation using FCM Proceedings of 2012 4th International Conference on Machine Learning and Computing IPCSIT vol. 25 (2012) (2012) IACSIT Press, Singapore Texture Image Segmentation using FCM Kanchan S. Deshmukh + M.G.M

More information

A Dynamics-based Comparison Metric for Motion Graphs

A Dynamics-based Comparison Metric for Motion Graphs The Visual Computer manuscript No. (will be inserted by the editor) Mikiko Matsunaga, Victor B. Zordan University of California, Riverside A Dynamics-based Comparison Metric for Motion Graphs the date

More information

Skin Infection Recognition using Curvelet

Skin Infection Recognition using Curvelet IOSR Journal of Electronics and Communication Engineering (IOSR-JECE) e-issn: 2278-2834, p- ISSN: 2278-8735. Volume 4, Issue 6 (Jan. - Feb. 2013), PP 37-41 Skin Infection Recognition using Curvelet Manisha

More information

The Novel Approach for 3D Face Recognition Using Simple Preprocessing Method

The Novel Approach for 3D Face Recognition Using Simple Preprocessing Method The Novel Approach for 3D Face Recognition Using Simple Preprocessing Method Parvin Aminnejad 1, Ahmad Ayatollahi 2, Siamak Aminnejad 3, Reihaneh Asghari Abstract In this work, we presented a novel approach

More information