Flexible Registration of Human Motion Data with Parameterized Motion Models


Yen-Lin Chen, Texas A&M University (ylchen@cs.tamu.edu)
Jianyuan Min, Texas A&M University (jianyuan@cs.tamu.edu)
Jinxiang Chai, Texas A&M University (jchai@cs.tamu.edu)

I3D 2009, Boston, Massachusetts, February 27-March 1, 2009.

Figure 1: Our registration method automatically registers two motions by registering each motion with the parameterized walking model.

Abstract

This paper presents an efficient model-based approach for automatic human motion registration, which builds temporal correspondences between structurally similar but distinctive motion examples. The key idea of the model-based registration process is to construct a parameterized motion model from a set of pre-registered motion examples. With such a model, we can register an input motion with the parameterized motion model by continuously deforming the model to best match the input motion. We formulate the registration process in a gradient-based nonlinear optimization framework by minimizing an objective function that measures differences between the input motion and the deforming motion. We also develop a multi-resolution optimization process to efficiently estimate the model parameters as well as the temporal correspondences between the input motion and the deforming motion. We demonstrate the performance of our approach by testing the algorithm on difficult motion sequences and comparing it with alternative approaches.

1 Introduction

With the proliferation of motion capture data, analyzing and processing large sets of motion capture data becomes increasingly important. One important and challenging motion processing problem is motion registration, which finds temporal correspondences between structurally similar motion sequences. Motion registration has many important applications; for instance, registered motions have been used for motion interpolation [Bruderlin and Williams 1995; Guo and Roberge 1996; Wiley and Hahn 1997; Rose et al. 1998; Park et al. 2002; Kovar and Gleicher 2004; Mukai and Kuriyama 2005], motion transfer [Hsu et al. 2005; Heck et al. 2006], and realtime motion control [Cooper et al. 2007].

Figure 2: Motion registration with dynamic time warping: (left) walking with arm waving. (middle) sneaky walking. (right) the red and green curves show the result from dynamic time warping and the ground-truth result from manual registration, respectively. The image intensities visualize the frame-by-frame distances between the two testing motion sequences; the higher the intensity value, the larger the frame-by-frame difference.

One popular solution for automatic motion registration is dynamic time warping [Myers and Rabiner 1981].
The approach formulates the registration process as a discrete optimization problem and applies dynamic programming to minimize the motion differences between two motion sequences. The approach runs fast and has demonstrated good performance in many applications. However, when two motions contain very different style variations (e.g., walking with arm waving and sneaky walking), dynamic time warping often produces wrong results (see Figure 2). This paper presents a model-based registration technique for automatic and robust motion registration (see Figure 1). The key idea of the model-based registration process is to register an input motion with a parameterized motion model constructed from a large set of pre-registered motion examples.

The parameterized motion model is a multi-dimensional morphing function that efficiently models the motion variations embedded in the training examples. To register an input motion with the parameterized model, we continuously deform the parameterized model to best match the input motion. Mathematically, we formulate the registration process in a continuous optimization framework by minimizing an objective function that measures differences between the input motion and the deforming motion. We develop a multi-resolution optimization process to efficiently compute the model parameters as well as the temporal correspondences between the input motion and the deforming motion.

Figure 3: System overview.

We evaluate the performance of our approach by testing the algorithm on a variety of motion sequences including walking, running, and jumping. Our experiments show that the model-based registration process can produce much better results than dynamic time warping. We apply the parameterized model along with robust statistics to register spurious motions or motions corrupted with outliers. The registration framework is also very flexible and can be used for registering motions in a different format; for example, we demonstrate that we can extend the model-based registration process to register 2D video data.

2 Background

Robust registration of human motion data has been an important part of many animation applications [Bruderlin and Williams 1995; Guo and Roberge 1996; Wiley and Hahn 1997; Rose et al. 1998; Park et al. 2002; Kovar and Gleicher 2003; Kovar and Gleicher 2004; Mukai and Kuriyama 2005; Hsu et al. 2005; Cooper et al. 2007]. In this section, we briefly review research on human motion registration.

One approach requires the user to specify a set of key frames in the input motions and then uses piecewise linear interpolation to estimate an appropriate time warping function [Rose et al. 1998; Park et al. 2002; Mukai and Kuriyama 2005]. An accurate alignment of human motion data usually requires the specification of a large number of key frames. An alternative approach for motion registration is dynamic time warping [Bruderlin and Williams 1995; Kovar and Gleicher 2003; Kovar and Gleicher 2004; Hsu et al. 2005; Cooper et al. 2007]. Bruderlin and Williams [1995] introduced a basic dynamic time warping algorithm for motion registration and then used the registered motions for interpolation applications. Kovar and Gleicher [2003] improved dynamic time warping techniques by imposing monotonicity and slope constraints on time warping functions. Recently, Hsu et al. [2005] proposed an iterative motion registration algorithm that estimates time warping functions as well as per-frame scale factors between the input motions. Dynamic time warping can also be performed in a continuous optimization framework; for example, Ramsay and Li [2002] introduced a continuous optimization framework for registration of various forms of 1D time series.

Our work differs from previous approaches because our approach is data-driven. We register input motions by registering each motion with a parameterized motion model constructed from a large set of pre-registered motion examples. Our experiments show that the model-based registration significantly improves the accuracy and robustness of the motion registration process.
Another benefit of model-based registration is its flexibility to register spurious motions or motions corrupted with outliers. The parameterized model also allows us to register 2D video data, a capability that has not been demonstrated by previous registration methods.

3 Overview

The key idea of our approach is to construct a parameterized motion model from pre-registered motion examples and then deform the parameterized model to best match input motions. Figure 3 shows an overview of our system. The system contains three major components:

Motion preprocessing. We pre-register a set of structurally similar but distinctive motion examples with a semi-automatic method.

Motion parameterizations. We apply statistical analysis techniques to the registered motion data and construct a parameterized motion model to model the motion variations in the training data. We represent the model with a small number of parameters λ_1, ..., λ_M.

Motion registration. We deform the parameterized model to best register an input motion by minimizing the difference between the input motion and the deforming motion. We formulate this as an optimization problem and automatically compute the model parameters λ̂_1, ..., λ̂_M as well as the time warping function ŵ_1, ..., ŵ_T in a coarse-to-fine manner.

Figure 4: The top five eigen-modes for walking motion.

We describe each of these components in more detail in the next section.

4 Motion Data Preprocessing and Parameterizations

This section discusses how to preprocess a set of motion examples in the database (Section 4.1), how to use them to build a parameterized motion model (Section 4.2), and how to apply the parameterized model for automatic motion registration (Section 4.3). In the following, we focus our discussion on constructing the parameterized model for walking; the basic scheme proposed in this section can easily be extended to other actions such as running and jumping.

4.1 Motion Data Preprocessing

We construct the parameterized motion model from a set of prerecorded motion examples. We require that all examples be structurally similar: a set of walking examples, for instance, must all start out on the same foot, take the same number of steps, have the same arm swing phase, and have no spurious motions such as a head-scratch. To build the parameterized model for walking, we record a database from an actor performing walking with various styles (different speeds, step sizes, directions, and stylized walking). We assume the database motions are already segmented into walking cycles; if a database motion contains multiple walking cycles, we manually segment it into individual cycles.

We denote the set of motion examples in the database as {x_n(t) | n = 1, ..., N}, where x_n(t) is the joint angle measurement of the character pose at the t-th frame of the n-th motion example. To register the motion examples in the database, we pick one example motion x_0 as the reference motion and use it to register the rest of the database examples {x_n | n = 1, ..., N} with appropriate time warping functions. We register motion examples in a translation- and rotation-invariant way by decoupling each pose from its translation in the ground plane and the rotation of its hips about the up axis [Kovar and Gleicher 2003].

To ensure the quality of the training data, we use a semi-automatic process to align all motion examples in the database. To align each database example x_n(t), n = 1, ..., N with the reference motion x_0(t), we first manually select a small set of key frames, instants when important structural elements such as a foot-down occur. We then use the key frames to divide the example motion into multiple subsequences, whose starting and ending frames are specified at the key frames, and use dynamic time warping to automatically align each subsequence. Finally, we use the estimated time warping function to warp the motion examples x_n(t), n = 1, ..., N into a canonical timeline x̄_n(t), n = 1, ..., N specified by the reference motion.

4.2 Motion Parameterizations

One way to parameterize the registered motions is to use a weighted combination of the motion examples in the database. This model, however, does not offer a compact representation for human motion because the number of parameters depends linearly on the number of database examples. More importantly, the representation does not exploit the spatial-temporal correlation embedded in the motion examples. A better way is to apply statistical analysis to model variations in the registered motion examples.
We form a DT-dimensional vector X_n by sequentially stacking all poses of the pre-registered motion example x̄_n(t), t = 1, ..., T, where D is the dimensionality of the full-body configuration space and T is the number of frames in the reference motion. We apply principal component analysis (PCA) to all pre-registered motion examples X_n, n = 1, ..., N. As a result, we can construct a parameterized motion model P using the mean motion P_0 and a weighted combination of eigenvectors P_m, m = 1, ..., M:

P(\lambda_1, \ldots, \lambda_M) = P_0 + \sum_{m=1}^{M} \lambda_m P_m    (1)

where the weights λ_m are the control parameters of the motion model and the vectors P_m are a set of orthogonal modes that model the motion variations in the training examples. Therefore, we can use the parameterized model to generate a motion instance as follows:

p(t, \lambda_1, \ldots, \lambda_M) = p_0(t) + \sum_{m=1}^{M} \lambda_m p_m(t), \quad t = 1, \ldots, T    (2)

What remains is to determine how many modes (M) to retain. This leads to a trade-off between the accuracy and the compactness of the motion model; however, it is safe to consider small-scale variation as noise. We automatically determine the number of modes by keeping 99 percent of the original variation. Figure 4 shows the top five modes constructed from the pre-registered walking database.

4.3 Motion Registration

We now focus our discussion on how to register an input motion y(s), s = 1, ..., S with the parameterized motion model p(t, λ_1, ..., λ_M), t = 1, ..., T using an appropriate time warping function s = w(t). Note that modeling the time warping function w(t) only requires recovering the finite number of values that w(t) can take, since the domain t = 1, ..., T is finite. We therefore represent the time warping function with the T values w(1), ..., w(T).

The key idea of our model-based registration process is to continuously deform the parameterized motion model to produce a motion instance p(t, λ_1, ..., λ_M), t = 1, ..., T that best matches the input motion y(s), s = 1, ..., S. We expect an accurate registration result to be achieved when the deforming motion is close to the input motion.
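Before defining the registration objective, the model construction of Section 4.2 can be made concrete with a minimal sketch in Python with NumPy. This is not the authors' implementation; the function and variable names are our own, and the examples are assumed to be stacked into DT-dimensional row vectors as described above.

import numpy as np

def build_parameterized_model(X, variance_to_keep=0.99):
    """X: (N, D*T) array of pre-registered motion examples.
    Returns the mean motion P0, the modes P (M, D*T), and their eigenvalues."""
    P0 = X.mean(axis=0)
    # PCA via SVD of the centered data matrix.
    U, s, Vt = np.linalg.svd(X - P0, full_matrices=False)
    eigvals = s**2 / (X.shape[0] - 1)
    # Keep enough modes to retain 99 percent of the variance (Section 4.2).
    ratio = np.cumsum(eigvals) / eigvals.sum()
    M = int(np.searchsorted(ratio, variance_to_keep)) + 1
    return P0, Vt[:M], eigvals[:M]

def motion_instance(P0, P, lam):
    """Equation 1: P(lambda) = P0 + sum_m lambda_m * P_m."""
    return P0 + lam @ P

Reshaping the resulting DT-dimensional vectors back into per-frame poses gives the frame-wise form of Equation 2.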

Figure 5: The multi-resolution optimization procedure registers an input motion with the parameterized motion model in a coarse-to-fine manner. We start the optimization at level 1. After the optimization at level 1 converges, we initialize the time-warping curve at level 2 by upsampling the estimated time-warping curve from level 1, and we initialize the motion parameters with the optimized motion parameters from level 1. We repeat this process until the algorithm converges at the finest level. Note that the gray images visualize the frame-by-frame distances between the input motion and the deforming motion.

In order to define the motion registration process, we must formally define the criterion to be optimized. Naturally, we want to minimize the error between the input motion y(s) and its closest motion instance p(t, λ_1, ..., λ_M), t = 1, ..., T under an appropriate time warping function s = w(t), t = 1, ..., T. If t is a frame in the canonical timeline, then the corresponding frame in the input motion is w(t). At frame t, a motion instance generated by the parameterized motion model {λ_1, ..., λ_M} has the pose p(t) = p_0(t) + Σ_{m=1}^{M} λ_m p_m(t). At frame w(t), the input motion has the pose y(w(t)). We want to minimize the sum of squared differences between these two quantities:

\sum_{t=1}^{T} \left\| y(w(t)) - \left( p_0(t) + \sum_{m=1}^{M} \lambda_m p_m(t) \right) \right\|^2    (3)

where the sum is performed over all frames of the canonical timeline. The above error can be computed as follows: for each frame t in the reference motion, we have the corresponding frame w(t) in the input motion; the input motion is then sampled at frame w(t), typically by linear interpolation in the time coordinate of the input motion y. The goal of motion registration is thus to minimize the cost function defined in Equation 3 with respect to the motion parameters {λ_1, ..., λ_M} and the time warping function w(t), t = 1, ..., T.

Direct optimization of the above function might result in invalid time warping functions because time warping functions are constrained. In our experiments, we require that time warping functions satisfy the following conditions:

Positivity constraints: a time warping function should be positive: w(t) > 0.

Monotonicity constraints: a time warping function should be strictly increasing: w(t) > w(t-1). The monotonicity property ensures that the time warping function is invertible, so that for the same event the time points on two different time scales correspond to each other uniquely.

Slope constraints: a time warping function should not be too steep or too shallow: 1/L ≤ w(t) - w(t-1) ≤ L. This prevents very short sequences from matching very long ones. In our experiments, we set L to 3.

Rather than modeling a time warping function w(t) in the original time space, we transform w(t) into a new space z(t):

z(t) = \ln\big(w(t) - w(t-1)\big), \quad t = 1, \ldots, T    (4)

We choose w(0) to be zero and therefore have

w(t) = \sum_{i=1}^{t} \exp[z(i)], \quad t = 1, \ldots, T    (5)

Equation 5 ensures that the monotonicity constraint on w(t) is automatically satisfied if we conduct the optimization in the new space z(t). Thus, the cost function for the model-based motion registration process (Equation 3) can be rewritten as follows:

E_{error} = \sum_{t=1}^{T} \left\| y\!\left(\sum_{i=1}^{t} \exp[z(i)]\right) - p_0(t) - \sum_{m=1}^{M} \lambda_m p_m(t) \right\|^2    (6)

We also introduce a prior term to prevent the deforming motion from moving away from the registered motion examples in the database:

E_{prior} = \sum_{m=1}^{M} \frac{\lambda_m^2}{\sigma_m^2}    (7)

where σ_m^2 is the m-th eigenvalue of the registered motion examples x̄_n(t), t = 1, ..., T.
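To illustrate how Equations 5 through 7 fit together, the following sketch (illustrative Python with hypothetical names, not the authors' code) builds the warp from the z-variables, samples the input motion at the warped frame times by per-DOF linear interpolation, and evaluates the two terms. Here the mean motion and eigen-modes are assumed to be reshaped into per-frame arrays, as in Equation 2.

import numpy as np

def warp_from_z(z):
    """Equation 5: w(t) = sum_{i<=t} exp(z_i); monotonicity holds by construction."""
    return np.cumsum(np.exp(z))

def sample_motion(y, w):
    """Linearly interpolate each DOF of the input motion y (S, D) at warped frame times w."""
    frames = np.arange(1.0, y.shape[0] + 1.0)
    return np.stack([np.interp(w, frames, y[:, d]) for d in range(y.shape[1])], axis=1)

def error_and_prior(z, lam, y, p0, P, eigvals):
    """E_error of Equation 6 and E_prior of Equation 7.
    p0: (T, D) mean poses; P: (M, T, D) eigen-modes; eigvals: (M,) eigenvalues."""
    w = warp_from_z(z)                          # warped frame indices into y
    deformed = p0 + np.tensordot(lam, P, 1)     # p(t, lambda) = p0(t) + sum_m lambda_m p_m(t)
    residual = sample_motion(y, w) - deformed   # y(w(t)) - p(t, lambda)
    return np.sum(residual**2), np.sum(lam**2 / eigvals)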
The overall objective function is a combination of the error term (Equation 6) and the prior term (Equation 7):

\{\hat{\lambda}_m\}, \{\hat{z}_t\} = \arg\min_{\{\lambda_m\}, \{z_t\}} \; \alpha E_{error} + E_{prior} \quad \text{subject to} \quad -\ln L \le z_t \le \ln L, \; t = 1, \ldots, T    (8)

where the weight α controls the importance of the error term.
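For illustration only, the bound-constrained problem in Equation 8 could be prototyped as follows, using scipy.optimize.least_squares with box constraints as a stand-in for the bound-constrained Levenberg-Marquardt solver described below; the helpers warp_from_z and sample_motion are the hypothetical ones from the previous sketch, and the initialization is our own choice.

import numpy as np
from scipy.optimize import least_squares

def register(y, p0, P, eigvals, alpha, L=3.0):
    T, D = p0.shape
    M = P.shape[0]

    def residuals(params):
        z, lam = params[:T], params[T:]
        w = warp_from_z(z)
        motion_res = (sample_motion(y, w) - (p0 + np.tensordot(lam, P, 1))).ravel()
        prior_res = lam / np.sqrt(eigvals)        # sum of squares gives E_prior
        return np.concatenate([np.sqrt(alpha) * motion_res, prior_res])

    # Initialize with a roughly uniform warp (w(t) ~ t * S / T) and the mean motion.
    z0 = np.clip(np.full(T, np.log(y.shape[0] / T)), -np.log(L) + 1e-3, np.log(L) - 1e-3)
    x0 = np.concatenate([z0, np.zeros(M)])
    lb = np.concatenate([np.full(T, -np.log(L)), np.full(M, -np.inf)])
    ub = np.concatenate([np.full(T,  np.log(L)), np.full(M,  np.inf)])
    sol = least_squares(residuals, x0, bounds=(lb, ub))
    return sol.x[:T], sol.x[T:]                   # estimated z(t) and lambda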

We analytically evaluate the Jacobian terms of the objective function and then run the optimization with the Levenberg-Marquardt algorithm with boundary constraints in the Levmar library [Lourakis 2007].

To improve the speed and robustness of our optimization, we develop a multi-resolution optimization procedure that estimates the motion parameters and time warping function in a coarse-to-fine manner. Figure 5 visualizes the basic concept of our multi-resolution optimization process. We first form the input motion y(s) and the parameterized motion p(t, λ_1, ..., λ_M) at coarse levels by downsampling the input motion y(s), the mean motion p_0(t), and the base motions p_m(t), m = 1, ..., M. We start the registration process at the coarsest level (see Level 1 in Figure 5) and run the optimization to register the coarsest input motion with the coarsest parameterized motion. After the optimization at level 1 converges, we initialize the time-warping curve at level 2 by upsampling the estimated time-warping curve from level 1 and initialize the motion parameters at level 2 with the estimated parameters from level 1. We repeat this process until the algorithm converges at the finest level. In our experiments, we set the downsampling rate to 2.

5 Extensions

This section explores the power and flexibility of the model-based registration framework. We first extend the framework to register motions corrupted with outliers or spurious motions and then discuss how to extend the framework to register motions in a different form (video data).

5.1 Corrupted Motion Data

The model-based motion registration framework can be extended to register spurious motions or motions containing outliers. For example, we can extend the framework to register walking with arm waving to the parameterized motion model even though the pre-registered walking database does not contain any arm waving motions. To deal with spurious motion or motion corrupted with outliers, we apply robust statistics to measure the error term. Robust estimation [Hampel et al. 1986] addresses the problem of finding parameter values from measurement data containing outliers, which in our experiments correspond to spurious patterns or corrupted degrees of freedom. We define the error term as follows:

E_{error}^{outliers} = \sum_{t=1}^{T} \sum_{d=1}^{D} \rho\big( y_d(w(t)) - p_d(t, \lambda_1, \ldots, \lambda_M) \big)    (9)

where the function ρ is a robust function used to reduce the influence of outliers, and y_d(w(t)) and p_d(t, λ_1, ..., λ_M) represent the d-th degree of freedom (DOF) of the input motion and the deforming motion at frame t, respectively. To increase robustness, we consider estimators for which the influence of outliers tends to zero. We choose the Lorentzian estimator, but the treatment here applies equally to a wide variety of other estimators; a discussion of various estimators can be found in [Hampel et al. 1986]. More specifically, the Lorentzian function is defined as follows:

\rho(r) = \log\left( 1 + \tfrac{1}{2} (r/\sigma)^2 \right)    (10)

where the scalar σ is a parameter of the robust estimator and r is the residual error between the input motion y_d(w(t)) and the deforming motion p_d(t, λ_1, ..., λ_M). We experimentally set σ to 0.1.
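Replacing the squared residual with the Lorentzian penalty of Equations 9 and 10 is a small change; a minimal sketch in the same illustrative Python, reusing the hypothetical helpers from the earlier sketches, is:

import numpy as np

def lorentzian(r, sigma=0.1):
    """Equation 10: rho(r) = log(1 + 0.5 * (r / sigma)^2)."""
    return np.log1p(0.5 * (r / sigma) ** 2)

def robust_error(z, lam, y, p0, P, sigma=0.1):
    """Equation 9: sum the robust penalty over every frame t and every DOF d."""
    residual = sample_motion(y, warp_from_z(z)) - (p0 + np.tensordot(lam, P, 1))
    return np.sum(lorentzian(residual, sigma))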
Table 1: Details of the data used. N is the number of motion examples, M is the number of motion modes, T is the number of total frames in the reference motion, L is the number of levels in the multi-resolution representation, and F is the frame rate of the motion databases (frames per second).

5.2 Video Data

Another advantage of the model-based registration process is its flexibility to register various forms of motion data. We can register any 2D or 3D human motion data with the parameterized model as long as we can numerically measure the difference between the input motion and the parameterized motion model. Here, we focus our discussion on how to register a video sequence with the parameterized model, but the treatment applies equally to other motion formats.

We pick a few interesting feature points in the video and track their 2D positions y_2D(s) throughout the whole sequence [Wei and Chai 2008] (see Figure 10). We use the tracked 2D trajectories to register the input video data with the parameterized motion model. However, direct application of the registration framework to this problem might not work because the distance between the input motion (i.e., the 2D trajectories) and the parameterized motion model depends on an unknown projection matrix, which is a function of the camera parameters c. A good way to address this problem is to simultaneously estimate the model parameters {λ_1, ..., λ_M}, the time warping function w(t), and the camera parameters c using the 2D trajectories y_2D(s) and the parameterized model p(t, λ_1, ..., λ_M). We assume a weak perspective projection model, which is valid when the average variation of the depth of an articulated object along the line of sight is small compared to the distance between the camera and the object. As a result, the unknown camera parameters c include a scale parameter and the camera orientation and position. The new error term can be defined as follows:

E_{error}^{video} = \sum_{t=1}^{T} \left\| y_{2D}(w(t)) - f_{proj}\big( g(p(t, \lambda_1, \ldots, \lambda_M)), c \big) \right\|^2    (11)

where g is the forward kinematics function that maps the motion from the joint angle space to the 3D position space and f_proj is a projection function that projects the 3D points into the 2D image space with the appropriate camera parameters c.
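A sketch of Equation 11 under one possible weak perspective parameterization (a uniform scale, a rotation, and an in-plane translation) is given below, again in illustrative Python. The forward_kinematics routine is assumed to be supplied by the caller, and the helpers warp_from_z and sample_motion are the hypothetical ones from the earlier sketches; none of these names come from the paper.

import numpy as np

def weak_perspective(points3d, scale, R, t2d):
    """Project (J, 3) points with a weak perspective camera: apply the first two
    rows of the rotation, scale uniformly, and translate in the image plane."""
    return scale * (points3d @ R[:2].T) + t2d

def video_error(z, lam, y2d, p0, P, cam, forward_kinematics):
    """Equation 11. y2d: (S, J, 2) tracked 2D trajectories; cam = (scale, R, t2d);
    forward_kinematics(pose) -> (J, 3) joint positions."""
    scale, R, t2d = cam
    S, J, _ = y2d.shape
    # Sample the tracked trajectories at the warped frame times (per coordinate).
    y_w = sample_motion(y2d.reshape(S, J * 2), warp_from_z(z)).reshape(-1, J, 2)
    poses = p0 + np.tensordot(lam, P, 1)               # joint-angle poses p(t, lambda)
    proj = np.stack([weak_perspective(forward_kinematics(q), scale, R, t2d)
                     for q in poses])                  # (T, J, 2) projected joints
    return np.sum((y_w - proj) ** 2)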

6 Results

We show the performance of the model-based registration process in a variety of experiments. We compare with dynamic time warping and test the algorithm on corrupted motion data. We also show that our algorithm can be used for 2D video data registration, a capability that has not been demonstrated by previous motion registration approaches. Except for the video data, the computational time of our model-based registration method ranges between one and two seconds on an Intel Core 2 Duo CPU. Our results are best seen in the accompanying video, although we show sample frames of a few results here.

Data. Table 1 summarizes the details of the three motion databases. The motion was captured with a Vicon motion capture system of 12 MXF20 cameras with 41 markers for full-body movements at 120 Hz.

Figure 6: Motion registration between two walking motions with different styles: (top) normal walking. (middle) the registered stylized walking from our method. (bottom) the registration result from DTW. The red circles highlight the differences between the two results.

Figure 7: Motion registration between sneaky walking and walking with arm waving: (top) sneaky walking. (middle) the registered walking with arm waving from our method. (bottom) the registration result from DTW. The red circles highlight the differences between the two results. Note that pink and yellow highlight the left and right leg, respectively.

Figure 8: Automatic segmentation and registration of a long walking sequence: (left) a long walking sequence is segmented into three distinct walking cycles, where green, orange, and blue indicate the first, second, and third walking cycle, respectively. (right) the three registered walking cycles.

Figure 9: Robustness to outliers: (top) a long-distance jumping motion sequence corrupted with outliers throughout the motion. (bottom) the registered short-distance jumping sequence.

Figure 10: Video registration: (top) the input video and tracked image features. (middle) the reconstructed/registered 3D motion from the estimated viewpoint. (bottom) the reconstructed 3D motion from a new viewpoint.

The walking and running databases include 200 and 100 pre-registered motion examples, respectively, with variations in speed, step size, direction, and style. The jumping database includes 50 registered motion examples with different jumping heights, jumping distances, directions, and styles. The numbers of model parameters for walking, running, and jumping are 30, 20, and 18, respectively, obtained by keeping 99 percent of the motion variation.

Comparisons. We report the superior performance of our algorithm by comparing it with dynamic time warping. Figure 6 and Figure 7 show sample images of the side-by-side comparisons between our method and dynamic time warping. The dynamic time warping technique is based on minimizing the joint-angle motion difference across the entire sequence. The DTW implementation considers the continuity, causality, and slope-limit constraints described by Kovar and Gleicher [2003]. In the experiments, we set the slope limit to 3.

Motion segmentation and registration. Our algorithm can be used to sequentially segment a long input motion sequence into multiple subsequences and then align each subsequence by registering it to the parameterized motion model. Figure 8 shows the results of automatic segmentation and alignment of a long walking sequence. The input sequence contains three walking cycles with distinctive motion styles.

Corrupted motion data. The model-based registration process is robust to outliers and noise. Figure 9 shows sample frames of the registration results between a corrupted long-distance jump and a short-distance jump. In the accompanying video, we also show that our registration algorithm is robust to noisy motion data.

Video data. We can use the parameterized motion models to register a video sequence. We first use an interactive spacetime tracking process to track the 2D positions of a few interesting image features across the entire video sequence [Wei and Chai 2008] and then register the 2D trajectories with the parameterized model. Figure 10 shows sample images of the input video and the reconstructed 3D deforming motion seen from two different viewpoints (the estimated viewpoint and a new viewpoint). The testing video data contains two walking cycles and was downloaded from a public video database. It takes about 3.2 seconds to register the 2D video data with the parameterized model on an Intel Core 2 Duo CPU.

7 Discussion

We present an efficient model-based approach for automatic registration of human motion data. We construct a parameterized motion model from a set of pre-registered training examples and then deform the parameterized model to best register an input motion by maximizing the match between the deforming motion and the input motion. We also introduce an efficient multi-resolution optimization algorithm to simultaneously compute the model parameters and time warping curves.

One limitation of the model-based registration approach is that an appropriate set of training examples must be available and pre-registered in the preprocessing step. To obtain a high-quality motion model, we choose to pre-register database examples with a semi-automatic method. This might become time-consuming when the number of examples in a database is large. One possible solution is to pre-register a small set of examples in the database with the semi-automatic process and then use the model-based registration method to incrementally update the parameterized model.
In the future, we would like to include more subjects, from children to elderly people, and more motion variations in the training databases. We are also attempting to build parameterized motion models for other actions. We believe there are many exciting applications for the parameterized motion models. One immediate direction for future work is therefore to investigate their applications in motion synthesis, compression, coding, recognition, and filtering.

References

BRUDERLIN, A., AND WILLIAMS, L. 1995. Motion signal processing. In Proceedings of ACM SIGGRAPH 1995.

COOPER, S., HERTZMANN, A., AND POPOVIĆ, Z. 2007. Active learning for real-time motion controllers. ACM Transactions on Graphics 26(3), Article 5.

GUO, S., AND ROBERGE, J. 1996. A high level control mechanism for human locomotion based on parametric frame space interpolation. In Eurographics Workshop on Computer Animation and Simulation.

HAMPEL, F. R., RONCHETTI, E. M., ROUSSEEUW, P. J., AND STAHEL, W. A. 1986. Robust Statistics: The Approach Based on Influence Functions. Wiley.

HECK, R., KOVAR, L., AND GLEICHER, M. 2006. Splicing upper-body actions with locomotion. Computer Graphics Forum (Proceedings of Eurographics 2006) 25(3).

HSU, E., PULLI, K., AND POPOVIĆ, J. 2005. Style translation for human motion. ACM Transactions on Graphics 24(3).

KOVAR, L., AND GLEICHER, M. 2003. Flexible automatic motion blending with registration curves. In ACM SIGGRAPH/Eurographics Symposium on Computer Animation.

KOVAR, L., AND GLEICHER, M. 2004. Automated extraction and parameterization of motions in large data sets. ACM Transactions on Graphics 23(3).

LOURAKIS, M. 2007. Levmar: Levenberg-Marquardt nonlinear least squares algorithms. lourakis/levmar/.

MUKAI, T., AND KURIYAMA, S. 2005. Geostatistical motion interpolation. ACM Transactions on Graphics 24(3).

MYERS, C. S., AND RABINER, L. R. 1981. A comparative study of several dynamic time-warping algorithms for connected word recognition. The Bell System Technical Journal 60(7).

PARK, S., SHIN, H. J., AND SHIN, S. Y. 2002. On-line locomotion generation based on motion blending. In ACM SIGGRAPH/Eurographics Symposium on Computer Animation.

RAMSAY, J. O., AND LI, X. 2002. Curve registration. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 60(2).

ROSE, C., COHEN, M. F., AND BODENHEIMER, B. 1998. Verbs and adverbs: Multidimensional motion interpolation. IEEE Computer Graphics and Applications 18(5).

WEI, X., AND CHAI, J. 2008. Interactive tracking of 2D generic objects with spacetime optimization. In Proceedings of the European Conference on Computer Vision.

WILEY, D. J., AND HAHN, J. K. 1997. Interpolation synthesis of articulated figure motion. IEEE Computer Graphics and Applications 17(6).


More information

Video based Animation Synthesis with the Essential Graph. Adnane Boukhayma, Edmond Boyer MORPHEO INRIA Grenoble Rhône-Alpes

Video based Animation Synthesis with the Essential Graph. Adnane Boukhayma, Edmond Boyer MORPHEO INRIA Grenoble Rhône-Alpes Video based Animation Synthesis with the Essential Graph Adnane Boukhayma, Edmond Boyer MORPHEO INRIA Grenoble Rhône-Alpes Goal Given a set of 4D models, how to generate realistic motion from user specified

More information

Learnt Inverse Kinematics for Animation Synthesis

Learnt Inverse Kinematics for Animation Synthesis VVG (5) (Editors) Inverse Kinematics for Animation Synthesis Anonymous Abstract Existing work on animation synthesis can be roughly split into two approaches, those that combine segments of motion capture

More information

Subpixel Corner Detection Using Spatial Moment 1)

Subpixel Corner Detection Using Spatial Moment 1) Vol.31, No.5 ACTA AUTOMATICA SINICA September, 25 Subpixel Corner Detection Using Spatial Moment 1) WANG She-Yang SONG Shen-Min QIANG Wen-Yi CHEN Xing-Lin (Department of Control Engineering, Harbin Institute

More information

Compositing a bird's eye view mosaic

Compositing a bird's eye view mosaic Compositing a bird's eye view mosaic Robert Laganiere School of Information Technology and Engineering University of Ottawa Ottawa, Ont KN 6N Abstract This paper describes a method that allows the composition

More information

Nonrigid Surface Modelling. and Fast Recovery. Department of Computer Science and Engineering. Committee: Prof. Leo J. Jia and Prof. K. H.

Nonrigid Surface Modelling. and Fast Recovery. Department of Computer Science and Engineering. Committee: Prof. Leo J. Jia and Prof. K. H. Nonrigid Surface Modelling and Fast Recovery Zhu Jianke Supervisor: Prof. Michael R. Lyu Committee: Prof. Leo J. Jia and Prof. K. H. Wong Department of Computer Science and Engineering May 11, 2007 1 2

More information

DETC APPROXIMATE MOTION SYNTHESIS OF SPHERICAL KINEMATIC CHAINS

DETC APPROXIMATE MOTION SYNTHESIS OF SPHERICAL KINEMATIC CHAINS Proceedings of the ASME 2007 International Design Engineering Technical Conferences & Computers and Information in Engineering Conference IDETC/CIE 2007 September 4-7, 2007, Las Vegas, Nevada, USA DETC2007-34372

More information

Lecture 16: Computer Vision

Lecture 16: Computer Vision CS4442/9542b: Artificial Intelligence II Prof. Olga Veksler Lecture 16: Computer Vision Motion Slides are from Steve Seitz (UW), David Jacobs (UMD) Outline Motion Estimation Motion Field Optical Flow Field

More information

Image Coding with Active Appearance Models

Image Coding with Active Appearance Models Image Coding with Active Appearance Models Simon Baker, Iain Matthews, and Jeff Schneider CMU-RI-TR-03-13 The Robotics Institute Carnegie Mellon University Abstract Image coding is the task of representing

More information

Lecture 16: Computer Vision

Lecture 16: Computer Vision CS442/542b: Artificial ntelligence Prof. Olga Veksler Lecture 16: Computer Vision Motion Slides are from Steve Seitz (UW), David Jacobs (UMD) Outline Motion Estimation Motion Field Optical Flow Field Methods

More information

calibrated coordinates Linear transformation pixel coordinates

calibrated coordinates Linear transformation pixel coordinates 1 calibrated coordinates Linear transformation pixel coordinates 2 Calibration with a rig Uncalibrated epipolar geometry Ambiguities in image formation Stratified reconstruction Autocalibration with partial

More information

Human pose estimation using Active Shape Models

Human pose estimation using Active Shape Models Human pose estimation using Active Shape Models Changhyuk Jang and Keechul Jung Abstract Human pose estimation can be executed using Active Shape Models. The existing techniques for applying to human-body

More information

Robust Kernel Methods in Clustering and Dimensionality Reduction Problems

Robust Kernel Methods in Clustering and Dimensionality Reduction Problems Robust Kernel Methods in Clustering and Dimensionality Reduction Problems Jian Guo, Debadyuti Roy, Jing Wang University of Michigan, Department of Statistics Introduction In this report we propose robust

More information

Animation Lecture 10 Slide Fall 2003

Animation Lecture 10 Slide Fall 2003 Animation Lecture 10 Slide 1 6.837 Fall 2003 Conventional Animation Draw each frame of the animation great control tedious Reduce burden with cel animation layer keyframe inbetween cel panoramas (Disney

More information

Video Alignment. Final Report. Spring 2005 Prof. Brian Evans Multidimensional Digital Signal Processing Project The University of Texas at Austin

Video Alignment. Final Report. Spring 2005 Prof. Brian Evans Multidimensional Digital Signal Processing Project The University of Texas at Austin Final Report Spring 2005 Prof. Brian Evans Multidimensional Digital Signal Processing Project The University of Texas at Austin Omer Shakil Abstract This report describes a method to align two videos.

More information

Synthesis by Example. Connecting Motion Planning and Example based Movement. Michael Gleicher

Synthesis by Example. Connecting Motion Planning and Example based Movement. Michael Gleicher Synthesis by Example Connecting Motion Planning and Example based Movement Michael Gleicher Dept of Computer Sciences University of Wisconsin Madison Case Study 1 Part I. Michael Gleicher 1 What is Motion

More information

Spectral Style Transfer for Human Motion between Independent Actions

Spectral Style Transfer for Human Motion between Independent Actions Spectral Style Transfer for Human Motion between Independent Actions M. Ersin Yumer Adobe Research Niloy J. Mitra University College London Figure 1: Spectral style transfer between independent actions.

More information

SIFT: SCALE INVARIANT FEATURE TRANSFORM SURF: SPEEDED UP ROBUST FEATURES BASHAR ALSADIK EOS DEPT. TOPMAP M13 3D GEOINFORMATION FROM IMAGES 2014

SIFT: SCALE INVARIANT FEATURE TRANSFORM SURF: SPEEDED UP ROBUST FEATURES BASHAR ALSADIK EOS DEPT. TOPMAP M13 3D GEOINFORMATION FROM IMAGES 2014 SIFT: SCALE INVARIANT FEATURE TRANSFORM SURF: SPEEDED UP ROBUST FEATURES BASHAR ALSADIK EOS DEPT. TOPMAP M13 3D GEOINFORMATION FROM IMAGES 2014 SIFT SIFT: Scale Invariant Feature Transform; transform image

More information

Motion. 1 Introduction. 2 Optical Flow. Sohaib A Khan. 2.1 Brightness Constancy Equation

Motion. 1 Introduction. 2 Optical Flow. Sohaib A Khan. 2.1 Brightness Constancy Equation Motion Sohaib A Khan 1 Introduction So far, we have dealing with single images of a static scene taken by a fixed camera. Here we will deal with sequence of images taken at different time intervals. Motion

More information

Segmentation and Tracking of Partial Planar Templates

Segmentation and Tracking of Partial Planar Templates Segmentation and Tracking of Partial Planar Templates Abdelsalam Masoud William Hoff Colorado School of Mines Colorado School of Mines Golden, CO 800 Golden, CO 800 amasoud@mines.edu whoff@mines.edu Abstract

More information