Sparse Constrained Motion Synthesis Using Local Regression Models

Huajun Liu, Fuxi Zhu
School of Computer, Wuhan University, Wuhan, China
{huajunliu, fxzhu}@whu.edu.cn

This work is supported by the National Basic Research Program of China (973 Program) (Grant No. 2011CB302306), the National Science and Technology Support Program (Grant No. 2012BAH35B02), the National Natural Science Foundation of China, the Natural Science Foundation of Hubei Province of China (Grant No. 2013CFB300), and the Research Fund for the Doctoral Program of Higher Education of China.

Abstract—When synthesizing motion from sparse control constraints, one difficulty in producing natural human motion is that low-dimensional control information cannot directly determine high-dimensional human poses. This paper introduces a novel local dimensionality reduction approach for synthesizing accurate and natural full-body human motions. The approach constructs a group of online local dynamic regression models from a pre-captured motion database and uses them as a prior for full-body motion synthesis. By synthesizing a variety of human motions from as few sparse constraints as possible, we verify the effectiveness of the proposed approach. Compared with previous statistical models, our model synthesizes more accurate results.

Index Terms—Human motion synthesis, sparse constraints, data-driven animation

I. INTRODUCTION

The capability to precisely synthesize full-body human motion online and in real time could be applied in many areas, such as sports training, rehabilitation, and real-time control of virtual game characters or robotic systems. Such systems have been partially realized by commercial mocap equipment (e.g., Vicon, Xsens), but these solutions are too expensive for general home use. They also require complex setups: the user must wear skintight clothing with about 50 carefully positioned retro-reflective markers, 18 inertial or magnetic sensors, or a full-body exoskeleton to support motion capture. Recently, major game console companies, including Microsoft, Sony, and Nintendo, have developed next-generation hardware devices that capture the online performance of individual players. These devices are cheap and suitable for common use. However, the low-dimensional signals they deliver are the main challenge for accurate full-body motion control, since a typical human body model has more than 50 degrees of freedom (DOF). Building such an animation synthesis system is inherently an ill-posed problem, because the user's inputs are not sufficient to fully determine a high-dimensional human pose. One attractive way to eliminate reconstruction ambiguity is to learn a prior from pre-captured human poses. Previous work often uses principal component analysis (PCA) models [1] or principal component regression (PCR) models [2] to constrain the reconstruction space. These systems work well with a large amount of pre-captured motion data and can model a virtual character's actions even though its movement is highly nonlinear. Different from previous approaches, this paper proposes a new local dimensionality reduction method.
At run time, our system uses pre-captured mocap data to learn a group of local regression models that constrain the reconstruction space. We search the mocap database for the K motion examples most similar to the recently synthesized poses $q_{t-1}, \ldots, q_{t-m}$, and then use these motion segments $q_{t_k-1}, \ldots, q_{t_k-m}$ together with their subsequent poses $q_{t_k}$, $k = 1, \ldots, K$, as training data to learn a predictive model for the current pose $q_t$. For linear regression learning, the training data are divided into two parts: input data and output data. Our proposed model has the following features. First, like other local models, it is time-varying: at each time step a new local model is built for the next pose prediction, and it scales well with the size and heterogeneity of the motion database. Unlike previous local models, however, our model estimates a projection direction together with the output data as a linear combination of basis regression functions, so it incorporates more spatio-temporal information than previous models that only reduce dimensionality on the input data. Our testing results also confirm that the proposed model is more powerful than previous models for synthesizing accurate motions.

Estimating human poses in a constrained maximum a posteriori (MAP) framework produces a natural motion sequence that best matches the control inputs assigned by the user. We therefore formulate online motion reconstruction as a MAP problem that combines a prior from the online local regression models with a likelihood term from the user-specified sparse constraints. Our animation system can synthesize a variety of natural motions from as few sparse constraints as possible (e.g., several joint positional trajectories) for accurate online motion reconstruction of a full-body virtual character (see Figure 1).

Figure 1. Based on user-specified sparse constraints, our system automatically synthesizes realistic human motions. The blue points are sparse control points.

By synthesizing a variety of human actions online, such as walking, golf swinging, running, jumping, and boxing, we demonstrate the effectiveness of our model in our implementation. Based on the same motion capture database and sparse control constraints, the synthesized results are better than those created by previous models. When the database is appropriate, the results are even comparable in quality to ground-truth data captured with commercial equipment.

II. RELATED WORK

We discuss related work on utilizing sparse control constraints and on data-driven statistical motion models for full-body human motion reconstruction.

A. Sparse-constrained optimization

A number of researchers have developed approaches that use sparse constraints provided by sensors to control high-dimensional human motion. For example, Shotton et al. [3] and Wei et al. [4] used a single depth camera to track and reconstruct various human motions; no markers needed to be attached to the user's body, but at least 15 control points were required to segment the human body. Semwal et al. [5] combined an inverse kinematics algorithm with sparse constraints from eight magnetic sensors to provide an analytic solution for human motion control. Chai and Hodgins [1] used six to nine retro-reflective markers as control points for online human motion reconstruction. Slyper and Hodgins [6] used five inertial sensors for real-time upper-body control. Recently, Liu and colleagues [2] achieved full-body human motion control using the positional and orientational constraints from six inertial sensors. Compared with theirs, our approach needs only four positional control points for realistic full-body human motion synthesis. Note that Tautges and colleagues [7] utilized sparse constraints from four accelerometers to control a full-body human motion; however, their method can only approximate the performed motion because positional and orientational constraints are missing. Different from their control constraints, this paper applies positional constraints to motion synthesis to obtain better results. In general, all of these approaches use pre-recorded motion data to compensate for the missing constraints of sparse control, and our work adopts the same idea: sparse constraints and a statistical model computed from a pre-recorded motion capture database are combined for human motion synthesis.

B. Data-driven human motion reconstruction

Constructing statistical models of human motion is one of the popular ways to interactively control an avatar in real time. Statistical motion models are often described as mathematical functions that represent human motion by a set of parameters associated with probability distributions.
So far, pre-captured motion data have been used to learn statistical motion models for key-frame interpolation [8], motion style synthesis [9], speech-driven facial animation [12, 13], interactive creation of a character's pose with a mouse [10], control of human actions with vision-based tracking [1], real-time human motion control with inertial sensors [2] or accelerometers [7], human motion synthesis with multifactor models [16], building physically valid motion models for human motion synthesis [11], and so forth. Our approach also learns a statistical dynamic model from a human motion capture database; however, the dynamic behavior of our model is controlled by continuous sparse control constraints rather than by a discrete hidden state as in [9, 12, 13], which use Hidden Markov Models (HMMs), so the control is more flexible. Different from the Gaussian Process Latent Variable Model (GPLVM), a global nonlinear dimensionality reduction technique that works well for small homogeneous data sets and was applied in [10, 14], this paper proposes a local dynamic model that can be applied to a large and heterogeneous motion database.

Among the above-mentioned statistical models, our work is most closely related to the local models constructed in a subspace for online control of human motion [1, 2], because all of them are built at run time from training data that are close to the currently reconstructed motion. Nevertheless, there is a significant difference. For dimensionality reduction methods, the training data can be divided into two parts: input data and output data. If the modeling process can find the structure and parameters of a function that optimally relates a given projection direction to the output data, the prediction will be more effective. The methods proposed in [1, 2] reduce dimensionality only on the input data, whereas our method focuses on the relationship between projection directions and output data, so it gives a more accurate prediction.

The experiments in Section 5 show that the proposed method produces more accurate results than the previous methods. In addition, to achieve equivalent accuracy, our approach requires less control information than theirs because of the power of the proposed model.

III. OVERVIEW OF ONLINE SYNTHESIS SYSTEM

Our animation synthesis system automatically transforms control inputs from user-specified sparse constraints into realistic human motions by building a group of online local regression models at run time, and then uses these models together with the sparse constraints (e.g., user-specified joint positional trajectories) for human motion synthesis (see Figure 2).

Figure 2. System overview.

The proposed local regression modeling approach adds local spatio-temporal directions into the models to constrain the transformation of poses in the configuration space; it predicts how humans move in each region and constrains the reconstructed motion to stay in the space of natural-looking human motion. We use the online constructed model to generate the desired pose $q_t$ from various forms of kinematic constraints $c_t$ specified by the user.

We assume human motion can be represented as an m-order Markov chain, so the current pose $q_t$ depends only on the previous m synthesized poses $Q_{t,m} = [q_{t-1}, \ldots, q_{t-m}]$:

$$\Pr(q_t \mid q_{t-1}, \ldots, q_1) = \Pr(q_t \mid q_{t-1}, \ldots, q_{t-m}).$$

We optimize motion synthesis in a MAP framework by estimating the pose $q_t$ that best satisfies the user's input $c_t$ given the previously reconstructed pose sequence $Q_{t,m}$:

$$\arg\max_{q_t} \Pr(q_t \mid c_t, Q_{t,m}) = \arg\max_{q_t} \Pr(c_t \mid q_t)\,\Pr(q_t \mid Q_{t,m}) \qquad (1)$$

By taking the negative log of the posterior distribution $\Pr(q_t \mid c_t, Q_{t,m})$, we convert the constrained MAP problem into an energy minimization problem:

$$\arg\min_{q_t} \underbrace{-\ln \Pr(c_t \mid q_t)}_{E_{control}} \;+\; \underbrace{-\ln \Pr(q_t \mid Q_{t,m})}_{E_{prior}} \qquad (2)$$

where the likelihood term $E_{control}$ measures to what extent the generated pose $q_t$ matches the user-specified constraints $c_t$, and the prior term $E_{prior}$ measures the naturalness of the synthesized pose. An optimal estimate of the synthesized motion produces a natural human motion that achieves the goal specified by the user.

IV. ONLINE SYNTHESIS OF HUMAN BEHAVIORS

Synthesizing human motion from sparse constraints is difficult because the control constraints provided by the user cannot fully constrain the entire human model to stay in the natural-looking space. Our solution is to use a group of local regression models to resolve the reconstruction ambiguity. The results in Section 5 show that the proposed model achieves more accurate results than previous ones.

A. Control Consistency

$E_{control}$ measures how well the locations of the corresponding joints in the reconstructed human pose fit the control inputs obtained from the user-defined constraints:

$$E_{control} = -\ln \Pr(c_t \mid q_t) \propto \| f(q_t; s) - c_t \|^2 \qquad (3)$$

where $q_t$, $s$ and $c_t$ are vectors: $q_t$ contains the joint angles of the synthesized pose at frame t, $s$ is the character's skeletal size, and $c_t$ holds the user-specified constraints for the pose at frame t. The forward kinematics function $f$ computes the global coordinates of the current pose $q_t$.
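To make the control term concrete, here is a minimal Python/NumPy sketch (our own illustration, not code from the paper) that evaluates $E_{control}$ of Equation (3) for a candidate pose; `forward_kinematics` is a hypothetical placeholder for the character's FK routine, which the paper does not spell out.

```python
import numpy as np

def control_energy(q_t, c_t, skeleton, forward_kinematics):
    """E_control of Eq. (3): squared distance between the global positions
    of the constrained joints for pose q_t and the user constraints c_t.

    q_t      : (D,) joint-angle vector of the candidate pose
    c_t      : (3*J,) stacked 3D targets for the J constrained joints
    skeleton : the character's skeletal sizes `s` used by forward kinematics
    forward_kinematics : hypothetical callable mapping (q_t, skeleton) to the
        stacked global positions of the constrained joints, i.e. f(q_t; s)
    """
    residual = forward_kinematics(q_t, skeleton) - c_t
    return float(residual @ residual)   # ||f(q_t; s) - c_t||^2
```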
B. Online Local Regression Modeling

In this subsection, we automatically build sequential online local regression models to constrain the synthesized pose to stay in the natural-looking solution space. A novel local linear model is proposed to avoid the problem of finding an appropriate structure for a global dynamic model, which would necessarily be high-dimensional and nonlinear. We adopt a K-nearest-neighbor search to find the K motion examples in the database that are similar to the already synthesized poses, and use these motion examples together with their subsequent poses for online model learning.

To predict the current pose $q_t$ at frame t, the first step is to search the motion database for the motion segments that are most similar to the recently reconstructed segment $Q_{t,m} = [q_{t-1}, \ldots, q_{t-m}]$. We choose the K nearest motion segments $[q_{t_k-1}, \ldots, q_{t_k-m}]$ together with their subsequent poses $q_{t_k}$, $k = 1, \ldots, K$, as the training data and learn a predictive model for the current pose $q_t$ via statistical model learning.

Suppose a linear relationship exists between an input joint-angle vector $x = [q_{t-1}, \ldots, q_{t-m}]$ and an output joint-angle vector $y = q_t$. For simplicity, the prediction function for each DOF of the output $q_t$ is learned separately. By subtracting the means from the input and output training data, we may assume that the mean values of $x$ and $y$ are zero.
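The segment search and training-data assembly described above might look like the following sketch, assuming the database is stored as one array of consecutive poses; the brute-force distance computation is our own simplification (the paper accelerates this step with the neighbor graph of [1], see Section V).

```python
import numpy as np

def nearest_segments(database, Q_recent, K):
    """Return the K training pairs (X, Y) for the local regression model.

    database : (N, D) array of consecutive mocap poses (joint angles)
    Q_recent : (m, D) previously synthesized poses [q_{t-1}, ..., q_{t-m}],
               most recent first
    Each row of X is a candidate segment [q_{k-1}, ..., q_{k-m}] flattened to
    m*D values, and the matching row of Y is its subsequent pose q_k.
    """
    m = Q_recent.shape[0]
    query = Q_recent.reshape(-1)
    # Every frame k = m .. N-1 yields one (segment, subsequent pose) pair.
    X = np.stack([database[k - m:k][::-1].reshape(-1)
                  for k in range(m, database.shape[0])])
    Y = database[m:]
    # Brute-force K-nearest-neighbor search over segment distances.
    dists = np.linalg.norm(X - query, axis=1)
    idx = np.argsort(dists)[:K]
    return X[idx], Y[idx]
```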

The proposed model is then represented using linear regression as

$$y = \alpha^T x + \beta_y \qquad (4)$$

where the input joint-angle vector $x$ is $(m \cdot D)$-dimensional, D is the number of DOFs of the human character, and $y$ is the output joint-angle value. The regression coefficients form the vector $\alpha$, and $\beta_y$ is a homoscedastic noise variable independent of $x$. Given the K motion examples $\{(x_k, y_k)\}$, $k = 1, \ldots, K$, that are similar to the currently synthesized poses, minimizing the expected squared error $E = \sum_{k=1}^{K} \| y_k - \alpha^T x_k \|^2$ yields the least-squares solution for the coefficients:

$$\alpha = (X^T X)^{-1} X^T y \qquad (5)$$

where the rows of the matrix $X$ store the input joint-angle vectors $x_k$, $k = 1, \ldots, K$, and the K output joint-angle values are stacked in the vector $y$.

Our method computes the projections with the highest correlation between the input joint-angle matrix $X$ and the output vector $y$. These projections are obtained by maximizing the squared correlation

$$\mathrm{corr}^2(X u_j, y) = \frac{(u_j^T X^T y)^2}{u_j^T X^T X u_j} \qquad (6)$$

where $u_j$ is one of the projection directions. Since each projection $X u_j$ is orthogonal to the others and has unit length, $u_j^T X^T X u_j = 1$. Here $u_j$ is one column of the matrix $U$ that contains the eigenvectors of the covariance matrix $C = (X^T X)^{-1} X^T y y^T X$. In the proposed model, we project $X$ onto $U$ so that only these projections are considered. Minimizing $\| y - X U \gamma \|^2$ with respect to the reduced coefficients $\gamma$ gives

$$\alpha = U\gamma = U (U^T X^T X U)^{-1} U^T X^T y \qquad (7)$$

Because each DOF of the output $q_t$ is predicted separately, there is only one projection direction $u$ at a time. The weight for each data point $x_k$ is computed with a Gaussian function of its distance from the previously reconstructed segment $Q_{t,m} = [q_{t-1}, \ldots, q_{t-m}]$:

$$\omega_k = \exp\!\left(-\tfrac{1}{2} (x_k - Q_{t,m})^T W (x_k - Q_{t,m})\right) \qquad (8)$$

where $W$ is a diagonal matrix containing the weights for each DOF; in our implementation we use the identity matrix for $W$. The eigenvectors are then extracted from the matrix

$$C_{\omega} = (X^T D X)^{-1} X^T D y y^T D X \qquad (9)$$

where $D$ is a diagonal matrix with $\omega_k$ along its diagonal. The weighted regression coefficients are

$$\alpha_{\omega} = U (U^T X^T D X U)^{-1} U^T X^T D y \qquad (10)$$

Assuming a Gaussian-distributed noise variable $\beta_y$, its standard deviation $\sigma$ can be estimated from the residuals $y_k - \alpha^T x_k$, $k = 1, \ldots, K$. In our experiments, a prediction function is constructed for each DOF of the synthesized pose, so the local regression model for the d-th DOF is

$$q_{t,d} = \alpha_{d,\omega}^T Q_{t,m} + \mathcal{N}(0, \sigma_d) \qquad (11)$$

where $q_{t,d}$ and $\sigma_d$ are scalars: $q_{t,d}$ is the d-th DOF of the pose at frame t, and $\sigma_d$ is the standard deviation of the d-th prediction function. $\alpha_{d,\omega}$ and $Q_{t,m}$ are vectors: $\alpha_{d,\omega}$ contains the weighted regression coefficients for the d-th DOF, and $Q_{t,m}$ is the reconstructed motion segment preceding the current synthesized pose. The computational complexity of reconstructing one pose is $O(K m^2 D^2)$, where K, m and D are the number of training examples, the window size (number of previous poses) and the number of DOFs of the character, respectively.

In our implementation, we maximize the probability of the pose $q_t$ given the previously synthesized motion sequence $Q_{t,m}$:

$$\Pr(q_t \mid Q_{t,m}) \propto \prod_{d=1}^{D} \exp\!\left[-\frac{(q_{t,d} - \alpha_d^T Q_{t,m})^2}{\sigma_d^2}\right] \qquad (12)$$

where $q_{t,d}$, $d = 1, \ldots, D$, is the d-th degree of freedom of the current pose $q_t$, and the vector $\alpha_d$ and the scalar $\sigma_d$ are the regression coefficients and the standard deviation of the d-th prediction model.
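Putting Equations (8)-(11) together, a minimal NumPy sketch of the per-DOF weighted projection regression could look like the following; the small ridge term and the dense eigendecomposition are our own choices for numerical stability rather than details given in the paper.

```python
import numpy as np

def weighted_projection_regression(X, y, query, ridge=1e-8):
    """Local regression for one output DOF following Eqs. (8)-(11).

    X     : (K, mD) mean-centred input segments
    y     : (K,) mean-centred output joint-angle values for one DOF
    query : (mD,) the recently synthesized segment Q_{t,m}
    Returns the weighted coefficients alpha_w and the residual spread sigma.
    """
    # Eq. (8): Gaussian weights with W taken as the identity matrix.
    w = np.exp(-0.5 * np.sum((X - query) ** 2, axis=1))
    D = np.diag(w)

    XtDX = X.T @ D @ X + ridge * np.eye(X.shape[1])
    XtDy = X.T @ D @ y

    # Eq. (9): leading eigenvector of C_w is the single projection direction u.
    C_w = np.linalg.solve(XtDX, np.outer(XtDy, XtDy))
    eigvals, eigvecs = np.linalg.eig(C_w)
    u = np.real(eigvecs[:, np.argmax(np.real(eigvals))])[:, None]

    # Eq. (10): alpha_w = U (U^T X^T D X U)^{-1} U^T X^T D y, with U = [u].
    alpha = (u @ np.linalg.solve(u.T @ XtDX @ u, u.T @ XtDy[:, None])).ravel()

    # Residual spread, used as the noise term sigma_d in Eq. (11).
    sigma = float(np.std(y - X @ alpha))
    return alpha, sigma
```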
Minimizing the negative log of $\Pr(q_t \mid Q_{t,m})$ gives the energy formulation

$$E_{prior} = \sum_{d} \frac{(q_{t,d} - \alpha_d^T Q_{t,m})^2}{\sigma_d^2} \qquad (13)$$

The final cost function for pose synthesis consists of the aforementioned terms: the control term (Equation 3) and the prior term (Equation 13).

V. RESULTS AND COMPARISONS

Software implementation. We tested our algorithms with different motion sequences on a desktop PC with an Intel Core 2.8 GHz CPU and 4 GB of memory. Figure 3 shows several interfaces of our prototype software. We use gradient-based optimization with the Levenberg-Marquardt method [15] for the objective function defined in Equation (2), and initialize the optimization with the most similar motion example in the database; in practice the optimization converges quickly. The computational efficiency of our animation system depends mainly on the search scope in the motion database, so we accelerate the K-nearest-neighbor search with the neighbor graph approach of [1].

We use two databases for testing. One database includes five full-body behaviors: golf swinging (2935 frames), jumping (5082), boxing (32852), walking (22846) and running (6173). The other contains 1.2 M poses downloaded from the CMU database. All motions were recorded with a Vicon mocap system at 120 fps; we downsampled the data to 60 fps to obtain more natural-looking motions for visualization. To balance the synthesis frame rate against the quality of the reconstructed motion, we set the window size m to 2, with which our system runs at 57 fps on average. We verified the effectiveness of the proposed approach on various behaviors and evaluated the reconstructed results against ground-truth data.
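As an illustration of the per-frame optimization, the following sketch stacks the residuals of Equations (3) and (13) and hands them to SciPy's Levenberg-Marquardt solver; the paper itself uses the levmar C/C++ library [15], so this is only a stand-in, and `forward_kinematics` is the same hypothetical FK routine as in the earlier sketch.

```python
import numpy as np
from scipy.optimize import least_squares

def synthesize_pose(q_init, c_t, skeleton, forward_kinematics,
                    alphas, sigmas, Q_recent):
    """Minimize E_control + E_prior (Eqs. (3) and (13)) for one frame.

    q_init   : (D,) initial guess, e.g. the closest database pose
    alphas   : (D, m*D) weighted regression coefficients, one row per DOF
    sigmas   : (D,) residual standard deviations of the local models
    Q_recent : (m*D,) flattened previously synthesized poses Q_{t,m}
    """
    def residuals(q):
        r_control = forward_kinematics(q, skeleton) - c_t   # Eq. (3)
        r_prior = (q - alphas @ Q_recent) / sigmas           # Eq. (13)
        return np.concatenate([r_control, r_prior])

    # Levenberg-Marquardt, standing in for the levmar library used in the paper.
    result = least_squares(residuals, q_init, method="lm")
    return result.x
```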

Figure 4. Comparison with three popular algorithms (GPLVM, local PCA and local PCR): (upper) motion synthesis using the positions of six joints (head, center of torso, two wrists and two ankles); (lower) motion synthesis using the positions of four joints (two wrists and two ankles).

TABLE I. Reconstruction errors (degrees) and synthesis frame rates (fps) for window sizes m = 1 to 4.

Figure 3. The interfaces of our prototype software: (upper) walking motion; (middle) golf swinging motion; (lower) jumping motion. The red lines are user-specified trajectory constraints, and the blue points are the point constraints at one frame on the red trajectories.

Testing on user-specified data. We tested our animation system by synthesizing various human motions (walking, running, golf swinging, jumping and boxing) of an avatar online, based on four joint positional constraints specified by the user (left wrist, right wrist, left ankle and right ankle). Figure 7 shows some frames of the online synthesis results.

Error evaluation. We used leave-one-out evaluation to verify the quality of the motions synthesized by our approach. Each time, one motion capture sequence is chosen as the testing data and the remaining motions serve as the search data for online reconstruction. We synthesized different human motions and computed the average synthesis errors. Figure 4 shows the mean errors and standard deviations of the reconstruction errors for the various behaviors (golf swinging, jumping, boxing, walking and running).

Window size and synthesis frame rate. The computational complexity of the modeling process depends mainly on the dimensionality of the input data (m · D). Table I lists the reconstruction errors and frame rates for window sizes m from 1 to 4. In our implementation, the system runs in real time at an average of 35 fps when m is 3, compared with 84 fps when m is 1. The reconstruction errors usually increase as the window size is reduced: when more local spatial-temporal information is added to the prior learning, the prediction becomes more accurate.

Comparison with previous algorithms. Based on leave-one-out evaluation, we compared the proposed model with three popular approaches (the Gaussian Process Latent Variable Model [10], the local principal component analysis model [1] and the local principal component regression model [2]) by reconstructing various human motions. Figure 4 compares the mean errors and standard deviations on five motion behaviors for all four techniques.

Figure 6. Frame-by-frame comparison for one testing sequence: (upper) walking motion; (lower) boxing motion.

TABLE II. Reconstruction errors for different numbers of control points for the four algorithms (GPLVM, LPCA, LPCR and our method).

Figure 5. Comparison with two popular algorithms (local PCA and local PCR) and ground-truth data (green): (upper) walking motion; (middle) golf swinging motion; (lower) boxing motion.

In this evaluation we also used the six constraint points of [2]; the results show that, compared with the other three techniques, our method achieves both smaller mean errors and smaller standard deviations. Moreover, when using only four control points (two wrists and two ankles), the previous methods cannot synthesize natural-looking human motion (see Figure 5). Figure 6 shows the frame-by-frame comparison of reconstruction errors for one testing sequence; the results indicate that the motions synthesized by the proposed method are better than those created by the other two local methods.

Different numbers of control points. We tested different numbers of positional control points, from two to six, for the four methods: (1) left wrist and right ankle; (2) left wrist and two ankles; (3) two wrists and two ankles; (4) root, two wrists and two ankles; (5) head, root, two wrists and two ankles. Testing on different motions, we conclude that the reconstruction errors usually decrease as the number of constraints increases. In addition, compared with the six constraint points (head, center of torso, two wrists and two ankles) used in [2] to obtain a natural-looking reconstructed human motion, we can use as few constraint points as possible (four: two wrists and two ankles) and still achieve results comparable to the real mocap data. Table II lists the average reconstruction errors for different numbers of control points. Even when few constraint points are used, our model remains more powerful for accurate motion synthesis than the previous three methods.

Testing on different databases. Table III lists the average reconstruction errors of five different actions for the four methods on two different training databases. The reconstruction errors are computed using 3D positional constraints from six control points. For the GPLVM method, the reconstruction error is large when a large and heterogeneous database is used, and even with a small database it is larger than for the local modeling approaches. For the other local modeling methods, the reconstruction error decreases as the size of the training database increases, and our proposed model achieves the smallest reconstruction error among the four methods. Testing on different databases thus also verifies the power of the proposed model.

Figure 7. Key frames for online motion synthesis. From top to bottom: walking, golf swinging, running, jumping and boxing. The blue points are joint constraints.

TABLE III. Average reconstruction errors for the four methods (GPLVM, LPCA, LPCR and our method) on the two databases.

Limitations. The proposed method has three limitations. (1) Like other data-driven approaches, the database is crucial for the quality of the synthesized motion; the system cannot produce a desired motion if the training data do not contain the desired motion pattern. For example, if the walking motion pattern is not included in the database, our system cannot synthesize walking motion. (2) User-specified constraints are also crucial for the final results; if they are unnatural or self-conflicting, the reconstructed result will not be a realistic human motion that satisfies the user's constraints. (3) The motion data need to be arranged in advance for online search.

Like most local modeling approaches, a specific data arrangement structure is applied to the motion data to accelerate the search process.

VI. CONCLUSIONS

In this paper, a new local regression model is introduced for online reconstruction of natural full-body human motion from as few user-specified constraints as possible. The proposed data-driven method uses several nearest motion examples to construct a group of online local regression models for online motion synthesis. Based on the same constraints and motion database, the proposed method constrains the solution more strongly than previous local models and can therefore synthesize more realistic human motions. Our proposed model is thus well suited for next-generation hardware devices that aim to bring motion capture to common use.

REFERENCES

[1] J. Chai, J. Hodgins, "Performance animation from low-dimensional control signals," ACM Transactions on Graphics, 24(3), 2005.
[2] H. Liu, X. Wei, J. Chai, I. Ha, T. Rhee, "Realtime human motion control with a small number of inertial sensors," in Proceedings of the 2011 Symposium on Interactive 3D Graphics and Games, 2011.
[3] J. Shotton, A. Fitzgibbon, M. Cook, T. Sharp, M. Finocchio, R. Moore, A. Kipman, A. Blake, "Real-time human pose recognition in parts from a single depth image," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2011.
[4] X. Wei, P. Zhang, J. Chai, "Accurate realtime full-body motion capture using a single depth camera," ACM Transactions on Graphics, 31(6), Article 188, 2012.
[5] S. Semwal, R. Hightower, S. Stansfield, "Mapping algorithms for real-time control of an avatar using eight sensors," Presence, 7(1):1-21, 1998.
[6] R. Slyper, J. Hodgins, "Action capture with accelerometers," in 2008 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, 2008.
[7] J. Tautges, A. Zinke, B. Kruger, J. Baumann, A. Weber, T. Helten, M. Muller, H. Seidel, B. Eberhardt, "Motion reconstruction using sparse accelerometer data," ACM Transactions on Graphics, 30(3), Article 18, 2011.
[8] Y. Li, T. Wang, H.-Y. Shum, "Motion Texture: A two-level statistical model for character synthesis," ACM Transactions on Graphics, 21(3), 2002.
[9] M. Brand, A. Hertzmann, "Style machines," in Proceedings of ACM SIGGRAPH, 2000.
[10] K. Grochow, S. L. Martin, A. Hertzmann, Z. Popović, "Style-based inverse kinematics," ACM Transactions on Graphics, 23(3), 2004.
[11] X. Wei, J. Min, J. Chai, "Physically valid statistical models for human motion generation," ACM Transactions on Graphics, 30(3), Article 19, 2011.
[12] C. Bregler, M. Covell, M. Slaney, "Video rewrite: driving visual speech with audio," in Proceedings of ACM SIGGRAPH, 1997.
[13] M. E. Brand, "Voice puppetry," in Proceedings of ACM SIGGRAPH, 1999.
[14] S. Levine, J. Wang, A. Haraux, Z. Popović, V. Koltun, "Continuous character control with low-dimensional embeddings," ACM Transactions on Graphics, 31(4), Article 28, 2012.
[15] M. I. A. Lourakis, "levmar: Levenberg-Marquardt nonlinear least squares algorithms in C/C++," 2009.
[16] G. Liu, M. Xu, Z. Pan, A. E. Rhalibi, "Human motion generation with multifactor models," Journal of Visualization and Computer Animation, 22(4), 2011.

Huajun Liu received his B.S. (2005) and Ph.D. degrees from the School of Computer at Wuhan University. He was a postdoctoral researcher at Wuhan University, a visiting researcher at the Korea Institute of Science and Technology, and a visiting scholar at Texas A&M University. He now works in the School of Computer, Wuhan University. His research interests include computer graphics (character animation, data-driven approaches for graphics and vision, and interaction techniques for 3D graphics).

Fuxi Zhu received his Ph.D. degree from the School of Computer at Wuhan University. He is now a professor in the School of Computer, Wuhan University, China. His research interests include web knowledge mining, machine learning and parallel computing.


More information

Face Recognition Based on LDA and Improved Pairwise-Constrained Multiple Metric Learning Method

Face Recognition Based on LDA and Improved Pairwise-Constrained Multiple Metric Learning Method Journal of Information Hiding and Multimedia Signal Processing c 2016 ISSN 2073-4212 Ubiquitous International Volume 7, Number 5, September 2016 Face Recognition ased on LDA and Improved Pairwise-Constrained

More information

HUMAN COMPUTER INTERFACE BASED ON HAND TRACKING

HUMAN COMPUTER INTERFACE BASED ON HAND TRACKING Proceedings of MUSME 2011, the International Symposium on Multibody Systems and Mechatronics Valencia, Spain, 25-28 October 2011 HUMAN COMPUTER INTERFACE BASED ON HAND TRACKING Pedro Achanccaray, Cristian

More information

Random projection for non-gaussian mixture models

Random projection for non-gaussian mixture models Random projection for non-gaussian mixture models Győző Gidófalvi Department of Computer Science and Engineering University of California, San Diego La Jolla, CA 92037 gyozo@cs.ucsd.edu Abstract Recently,

More information

Interpolation and extrapolation of motion capture data

Interpolation and extrapolation of motion capture data Interpolation and extrapolation of motion capture data Kiyoshi Hoshino Biological Cybernetics Lab, University of the Ryukyus and PRESTO-SORST, Japan Science and Technology Corporation Nishihara, Okinawa

More information

Applications. Systems. Motion capture pipeline. Biomechanical analysis. Graphics research

Applications. Systems. Motion capture pipeline. Biomechanical analysis. Graphics research Motion capture Applications Systems Motion capture pipeline Biomechanical analysis Graphics research Applications Computer animation Biomechanics Robotics Cinema Video games Anthropology What is captured?

More information

Gaussian Process Dynamical Models

Gaussian Process Dynamical Models Gaussian Process Dynamical Models Jack M. Wang, David J. Fleet, Aaron Hertzmann Department of Computer Science University of Toronto, Toronto, ON M5S 3G4 jmwang,hertzman@dgp.toronto.edu, fleet@cs.toronto.edu

More information

TEXTURE OVERLAY ONTO NON-RIGID SURFACE USING COMMODITY DEPTH CAMERA

TEXTURE OVERLAY ONTO NON-RIGID SURFACE USING COMMODITY DEPTH CAMERA TEXTURE OVERLAY ONTO NON-RIGID SURFACE USING COMMODITY DEPTH CAMERA Tomoki Hayashi 1, Francois de Sorbier 1 and Hideo Saito 1 1 Graduate School of Science and Technology, Keio University, 3-14-1 Hiyoshi,

More information

Hartley - Zisserman reading club. Part I: Hartley and Zisserman Appendix 6: Part II: Zhengyou Zhang: Presented by Daniel Fontijne

Hartley - Zisserman reading club. Part I: Hartley and Zisserman Appendix 6: Part II: Zhengyou Zhang: Presented by Daniel Fontijne Hartley - Zisserman reading club Part I: Hartley and Zisserman Appendix 6: Iterative estimation methods Part II: Zhengyou Zhang: A Flexible New Technique for Camera Calibration Presented by Daniel Fontijne

More information

Human body animation. Computer Animation. Human Body Animation. Skeletal Animation

Human body animation. Computer Animation. Human Body Animation. Skeletal Animation Computer Animation Aitor Rovira March 2010 Human body animation Based on slides by Marco Gillies Human Body Animation Skeletal Animation Skeletal Animation (FK, IK) Motion Capture Motion Editing (retargeting,

More information

The flare Package for High Dimensional Linear Regression and Precision Matrix Estimation in R

The flare Package for High Dimensional Linear Regression and Precision Matrix Estimation in R Journal of Machine Learning Research 6 (205) 553-557 Submitted /2; Revised 3/4; Published 3/5 The flare Package for High Dimensional Linear Regression and Precision Matrix Estimation in R Xingguo Li Department

More information

The Application Research of 3D Simulation Modeling Technology in the Sports Teaching YANG Jun-wa 1, a

The Application Research of 3D Simulation Modeling Technology in the Sports Teaching YANG Jun-wa 1, a 4th National Conference on Electrical, Electronics and Computer Engineering (NCEECE 2015) The Application Research of 3D Simulation Modeling Technology in the Sports Teaching YANG Jun-wa 1, a 1 Zhengde

More information

Non-rigid body Object Tracking using Fuzzy Neural System based on Multiple ROIs and Adaptive Motion Frame Method

Non-rigid body Object Tracking using Fuzzy Neural System based on Multiple ROIs and Adaptive Motion Frame Method Proceedings of the 2009 IEEE International Conference on Systems, Man, and Cybernetics San Antonio, TX, USA - October 2009 Non-rigid body Object Tracking using Fuzzy Neural System based on Multiple ROIs

More information

Can we quantify the hardness of learning manipulation? Kris Hauser Department of Electrical and Computer Engineering Duke University

Can we quantify the hardness of learning manipulation? Kris Hauser Department of Electrical and Computer Engineering Duke University Can we quantify the hardness of learning manipulation? Kris Hauser Department of Electrical and Computer Engineering Duke University Robot Learning! Robot Learning! Google used 14 identical robots 800,000

More information

Modeling pigeon behaviour using a Conditional Restricted Boltzmann Machine

Modeling pigeon behaviour using a Conditional Restricted Boltzmann Machine Modeling pigeon behaviour using a Conditional Restricted Boltzmann Machine Matthew D. Zeiler 1,GrahamW.Taylor 1, Nikolaus F. Troje 2 and Geoffrey E. Hinton 1 1- University of Toronto - Dept. of Computer

More information

Motion Control with Strokes

Motion Control with Strokes Motion Control with Strokes Masaki Oshita Kyushu Institute of Technology oshita@ces.kyutech.ac.jp Figure 1: Examples of stroke-based motion control. Input strokes (above) and generated motions (below).

More information

Gesture Recognition using Neural Networks

Gesture Recognition using Neural Networks Gesture Recognition using Neural Networks Jeremy Smith Department of Computer Science George Mason University Fairfax, VA Email: jsmitq@masonlive.gmu.edu ABSTRACT A gesture recognition method for body

More information

Face Re-Lighting from a Single Image under Harsh Lighting Conditions

Face Re-Lighting from a Single Image under Harsh Lighting Conditions Face Re-Lighting from a Single Image under Harsh Lighting Conditions Yang Wang 1, Zicheng Liu 2, Gang Hua 3, Zhen Wen 4, Zhengyou Zhang 2, Dimitris Samaras 5 1 The Robotics Institute, Carnegie Mellon University,

More information

Motion Track: Visualizing Variations of Human Motion Data

Motion Track: Visualizing Variations of Human Motion Data Motion Track: Visualizing Variations of Human Motion Data Yueqi Hu Shuangyuan Wu Shihong Xia Jinghua Fu Wei Chen ABSTRACT This paper proposes a novel visualization approach, which can depict the variations

More information

Development of a Fall Detection System with Microsoft Kinect

Development of a Fall Detection System with Microsoft Kinect Development of a Fall Detection System with Microsoft Kinect Christopher Kawatsu, Jiaxing Li, and C.J. Chung Department of Mathematics and Computer Science, Lawrence Technological University, 21000 West

More information

Jane Li. Assistant Professor Mechanical Engineering Department, Robotic Engineering Program Worcester Polytechnic Institute

Jane Li. Assistant Professor Mechanical Engineering Department, Robotic Engineering Program Worcester Polytechnic Institute Jane Li Assistant Professor Mechanical Engineering Department, Robotic Engineering Program Worcester Polytechnic Institute (3 pts) Compare the testing methods for testing path segment and finding first

More information

CIS 520, Machine Learning, Fall 2015: Assignment 7 Due: Mon, Nov 16, :59pm, PDF to Canvas [100 points]

CIS 520, Machine Learning, Fall 2015: Assignment 7 Due: Mon, Nov 16, :59pm, PDF to Canvas [100 points] CIS 520, Machine Learning, Fall 2015: Assignment 7 Due: Mon, Nov 16, 2015. 11:59pm, PDF to Canvas [100 points] Instructions. Please write up your responses to the following problems clearly and concisely.

More information

Static Gesture Recognition with Restricted Boltzmann Machines

Static Gesture Recognition with Restricted Boltzmann Machines Static Gesture Recognition with Restricted Boltzmann Machines Peter O Donovan Department of Computer Science, University of Toronto 6 Kings College Rd, M5S 3G4, Canada odonovan@dgp.toronto.edu Abstract

More information