A statistical model for coupled human shape and motion synthesis

Alina Kuznetsova 1, Nikolaus F. Troje 2, Bodo Rosenhahn 1
1 Institute for Information Processing, Leibniz University Hanover, Germany
2 Bio Motion Lab, Queen's University, Kingston, Canada
{kuznetso, rosenhahn}@tnt.uni-hannover.de, troje@queensu.ca

Keywords: Animation, Shape and motion synthesis, Statistical modeling

Abstract: Due to the rapid development of the virtual reality industry, realistic modeling and animation are becoming more and more important. In this paper, we propose a method to synthesize both human appearance and motion given semantic parameters, as well as to create realistic animations of still meshes and to synthesize appearance based on a given motion. Our approach is data-driven and correlates two databases containing shape and motion data. The synthetic output of the model is evaluated quantitatively and in terms of visual plausibility.

1 INTRODUCTION

Emerging interest in 3D technologies and virtual reality has introduced a need for realistic computer modeling and animation. It is a rapidly developing area, and yet there are many unresolved problems, such as fast and realistic character creation. In this paper, we propose a statistical model to address the latter problem.

In general, character creation is a challenging task. The character generation problem is addressed either by manual editing and 3D modeling or by data-driven approaches; a well-known example of appearance creation is the SCAPE model (Anguelov et al., 2005). Motion simulation is usually considered as a separate topic, where three types of methods exist: manual motion editing, physics-based approaches and data-driven approaches. The oldest approach is manual motion editing, such as key frame animation. However, manual editing cannot provide a sufficient level of motion detail and is very time-consuming; therefore data-driven approaches emerged recently, as well as physics-based and control-based methods. Unfortunately, data-driven approaches usually produce physically incorrect results, and the problem of fitting a generated motion to a specific character (motion retargeting) is not solved completely; physics-based approaches are usually very computationally intensive and complex, do not provide enough variability, and, as experiments have shown, physical fidelity of motion does not imply visual plausibility.

Another way to animate a new character is to transfer motion from another character, but when the characters have different parameters (such as proportions, height, etc.), such a transfer can produce unrealistic results (the so-called retargeting problem), as shown in Fig. 1. There, the same shape was animated using two different motions, one of which comes from a person with similar biometric parameters, while the other is the motion of a person with completely different biometric parameters. Even from a sequence of images it is possible to see a mismatch between the shape and the motion. Another disadvantage of the approaches described above is that they are difficult to apply when the task is to animate many characters at once.

In this work, we propose a model for character appearance creation and animation that combines a statistical model of human motion with one of character appearance. In this way we address the problems described above.
We are able to generate both the character's appearance and motion simultaneously, thereby avoiding the retargeting problem and extensive computations, while still producing visually plausible animations. Semantic parameters, such as the weight, height and proportions of the character, serve as a link between shape and motion and are integrated into our model, allowing fine control over the generation process. Since our model is stochastic, it can also be used for random character generation.

Our paper is organized in the following way. In Section 2, we give a short overview of existing methods for character generation and animation. In Section 3, we give technical details about data collection and processing. In Section 4, we explain the model we use for generation. In Section 5, we provide an evaluation of our approach in terms of visual perception of the generated characters and their motions, as well as a quantitative assessment of the appearance-to-motion fit.

Figure 1: Example of the motion retargeting problem: (a) the red figure unrealistically bends backwards, while the green figure has correct balance; (b) the red figure has its shoulders set backwards, as is typical for a slimmer person.

2 RELATED WORKS

In this section, we review existing work on motion and shape modeling. Since standard approaches to this problem separate the creation of a character from the creation of its motion, we first provide a short review of methods proposed for each of the two problems.

Mesh simulation techniques The traditional way of creating character appearance is manual modeling. Recently, automated approaches were presented, such as the SCAPE model (Anguelov et al., 2005), which is now a state-of-the-art technique. A slightly different statistical model is presented in (Hasler et al., 2009). There, the use of semantic handles is also proposed to achieve variability of shapes.

Motion simulation techniques Many techniques address the motion simulation problem. A purely data-driven approach for motion simulation was introduced by (Wang et al., 2008) and is based on Gaussian process models for motion generation; (Li et al., 2002) proposed to use linear dynamic systems with a distribution over the dynamic system parameters to control the generation; (Brand and Hertzmann, 2000) used style machines to create variations of one type of motion. Finally, (Troje, 2002) used Principal Component Analysis (PCA) to build a manifold of cyclic motions. Another approach to motion synthesis is to combine already existing motions without the use of any statistical model; for example, in (Sidenbladh et al., 2002), no motion is synthesized, but the closest real motion in a database of motions is found based on predefined metrics. Physics-based models of human motion, such as the one proposed in (Liu et al., 2005), are used to make motion physically plausible, which is often required for the animation of interactions. More frequently, however, different types of controllers are used to refine already generated motion, as proposed, for example, in (da Silva et al., 2008; Lee et al., 2009; Popović and Witkin, 1999; Sok et al., 2007).

Our contribution In contrast to the above-mentioned works, where mesh and motion are clearly separated, we propose to combine appearance creation with motion simulation, thereby avoiding many problems of motion retargeting. To our knowledge, there are no previous works bringing motion simulation and shape creation together. We apply the model to jointly generate realistic human shape and motion. We also see our contribution in developing a method to find dependencies between two separate databases and building a model that unites the data from both databases using the dependencies found. In general, we can imagine many applications of the model, for example, crowd simulation, automatic animation of already existing characters, or appearance-from-motion reconstruction. In a mathematical sense, our model is inspired by the variational models proposed in (Cootes et al., 2001). However, we include a random component and therefore allow for a looser coupling of motion and shape, as well as for variability in the sampled characters.

Figure 2: Distribution of the semantic parameters (weight, height, age) of the females in the two databases. The upper row shows the parameter distributions for the shape dataset; the lower row shows the parameter distributions for the motion dataset.

3 DATA PREPARATION

3.1 Data preprocessing

Since our model is purely data-driven, we first describe the data preprocessing. To train the model we need two sources of data: body scans, to learn human appearance (shape), and motion data, collected during a recording session. The two databases contain shapes and motions recorded from different people, although the distributions of their semantic parameters are approximately the same.

The shape database (Hasler et al., 2009) contains body scans of 114 different subjects. The body scans were previously registered, so that aligned meshes containing N = 12 vertices were produced. Into each mesh we embed a skeleton (see Fig. 4) consisting of K = 15 joints, and morph the meshes in such a way that the skeletons of all meshes are in exactly the same pose. This is done to exclude variability in the shapes due to slight differences in pose. We compute the skeleton joint positions as a linear combination of nearby vertices:

    s_k = \sum_{i=1}^{N} \omega_{ik} v_i,    (1)

where the weights \omega_{ik} were derived manually, based on the procedure that was used to place markers on the body during the creation of the motion database.

Figure 4: Alignment of a skeleton and a mesh.

As a source of motion data, we used the motion database described in (Troje, 2002); the database contains recordings of the gait motion of 100 individuals. Motion was acquired using a MoCap system with 45 markers. From the 45 physical markers, K = 15 virtual markers were derived; these markers correspond to the placement of the skeleton joints inside the mesh. For each motion, 4 so-called eigenpostures are extracted. Each motion is then represented as a linear combination of eigenpostures, where the coefficients are periodic functions of time. The walk cycle parameter is individual for each person and is therefore stored separately. Altogether, each motion is thus represented with 3K(4 + 1) + 1 = 15K + 1 coordinates.

Moreover, for each person, semantic data such as gender, weight, height and age was stored. We denote the semantic attributes associated with a recorded shape s or motion m as v(s) or v(m), respectively. First, we separated male from female subjects in both databases, since the variation between male and female meshes is much greater than the variation within the female (or male) part of the database itself, and such a separation makes our model better able to capture the variation within both groups.
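The joint placement of Eq. (1) is simply a weighted average of mesh vertices for each joint. Below is a minimal numpy sketch of this computation; the array shapes, function name and the random placeholder data are assumptions made for illustration, whereas in our pipeline the weights \omega_{ik} are derived manually as described above.

    import numpy as np

    def joint_positions(vertices, weights):
        """Skeleton joint positions as linear combinations of mesh vertices,
        Eq. (1): s_k = sum_i omega_ik * v_i.

        vertices : (N, 3) array of mesh vertex coordinates.
        weights  : (N, K) array; column k holds the weights omega_ik of joint k
                   (assumed here to be non-negative and to sum to 1 per joint).
        """
        # (K, N) @ (N, 3) -> (K, 3): one 3D position per joint.
        return weights.T @ vertices

    # Hypothetical usage with random data standing in for a registered mesh.
    rng = np.random.default_rng(0)
    V = rng.normal(size=(6000, 3))          # placeholder vertex positions
    W = rng.random(size=(6000, 15))         # placeholder joint weights
    W /= W.sum(axis=0, keepdims=True)       # normalize per joint
    S = joint_positions(V, W)               # (15, 3) joint positions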

Figure 3: Block diagram of the algorithm: starting from the initial sets of motions (M) and shapes (S), the blocks matching, scale, PCA, regression and Gaussian model are applied to build the joint model.

Each group has roughly the same number of members, i.e. 55 females and 59 males in the mesh database and 50 females and 50 males in the motion database. The distributions of the semantic parameters of the females are presented in Fig. 2. For our experiments, we took weight and height as the most varying parameters across our datasets, but in general any set of parameters could have been taken, as long as it has a significant influence on shape appearance and on motion.

3.2 Binding a skeleton to a mesh

Since the motion data we obtained is represented by joint positions over time, we chose a skeleton-based approach to animate the model. Several methods have been proposed to solve the problem of skeleton-based character animation. As mentioned in the previous subsection, we embedded a skeleton into each mesh. To morph a mesh in accordance with a given skeleton pose, we use linear blend skinning (Jacka et al., 2007), since the technique is both fast and robust, while providing sufficiently good results. To summarize the implementation, we first assign weights w_{ij} to each vertex of the mesh that characterize how much bone b_j influences vertex v_i. Let a coordinate system be attached to each bone; then the transformation matrix T_j transforms vertex coordinates from the bone coordinate system to the world coordinate system. For the new position of the bone, \hat{T}_j denotes the new transformation matrix. The new position of a vertex is then given by

    \hat{v}_i = \sum_{j=1}^{K'} w_{ij} \hat{T}_j T_j^{-1} v_i,    \sum_{j=1}^{K'} w_{ij} = 1,    (2)

where K' = K - 1 is the number of bones (see Fig. 4). The weights are generated using a solver for the heat equation, where heat is propagated from each of the bones (Baran and Popović, 2007) until heat equilibrium is reached, and the weights are set equal to the normalized temperatures.

4 Appearance morphable model

In this section, we explain our shape-motion model and its constraints and limitations. The algorithm for model preparation and training is given in Fig. 3. The main idea of our model is to correlate shape and motion based on semantic parameters.

To achieve better alignment and to avoid the retargeting problem, we first match and align the meshes with the motions. A rough match is done based on semantic parameters, i.e. we create a set of shape-motion pairs (s_p, m_q) such that the values of the semantic parameters for each pair differ by less than a given threshold \varepsilon: P = {(s_p, m_q) : ||v(s_p) - v(m_q)|| < \varepsilon} (this procedure corresponds to the first block in Fig. 3). In the next step, we morph each shape s_p such that its skeleton corresponds exactly to the skeleton of the motion m_q. To implement scaling, we modify Eq. (2) by adding a transformation matrix that represents relative scaling:

    \hat{v}_i = \sum_{j=1}^{K'} w_{ij} \hat{T}_j S_j T_j^{-1} v_i,    \sum_{j=1}^{K'} w_{ij} = 1,    (3)

    S_j = diag(s_j, s_j, s_j),    (4)

where s_j is a scaling factor. After scaling, we fit the skeleton again using Eq. (1). We iterate between these two steps several times until the bone lengths of the newly fitted skeleton converge to the desired values. Since the scaling coefficients, i.e. the ratios between corresponding bone lengths, lie in the interval [0.85, 1.15], the scaling does not affect the realistic look of the shapes.
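As an illustration of the skinning step, here is a minimal numpy sketch of linear blend skinning with the optional per-bone scaling of Eqs. (2)-(4). The use of 4x4 homogeneous transforms and all variable names are assumptions made for the sketch, not a description of our implementation.

    import numpy as np

    def skin_vertices(vertices, weights, T_rest, T_new, scales=None):
        """Linear blend skinning, Eqs. (2)-(4):
        v_i' = sum_j w_ij * T_new_j * S_j * inv(T_rest_j) * v_i.

        vertices : (N, 3) rest-pose vertex positions.
        weights  : (N, B) skinning weights, each row summing to 1.
        T_rest   : (B, 4, 4) bone-to-world transforms in the rest pose (T_j).
        T_new    : (B, 4, 4) bone-to-world transforms in the new pose (T_j hat).
        scales   : optional (B,) per-bone scale factors s_j; S_j = diag(s_j, s_j, s_j, 1).
        """
        N, B = vertices.shape[0], T_rest.shape[0]
        v_h = np.concatenate([vertices, np.ones((N, 1))], axis=1)   # homogeneous coords
        out = np.zeros((N, 4))
        for j in range(B):
            S = np.eye(4) if scales is None else np.diag([scales[j]] * 3 + [1.0])
            M = T_new[j] @ S @ np.linalg.inv(T_rest[j])             # per-bone transform
            out += weights[:, j:j + 1] * (v_h @ M.T)                # weighted blend, Eq. (2)/(3)
        return out[:, :3]

The inverse rest-pose transforms could of course be precomputed once per mesh; they are recomputed inside the loop only to keep the sketch short.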
After scaling, the bone lengths of the mesh skeleton correspond exactly to those of the motion skeleton.

Since the dimensionality of the data is still high after this preprocessing step, we first reduce it by performing Principal Component Analysis (PCA) on the shape and motion coefficients separately and converting our data into a PCA space of smaller dimension. The PCA transformation consists of finding orthogonal directions in space (the basis) that correspond to the maximal variance of the sample. Given a set of vectors x_j \in X \subset R^l, with N_x the size of the set X, and the found basis U_x = [u_x^1, ..., u_x^k], the original vectors can be represented as

    x_j = U_x \hat{x}_j + \mu(x),    j = 1, ..., N_x,    (5)

where \mu(x) = (1 / N_x) \sum_{i=1}^{N_x} x_i is the mean of the set X and \hat{x}_j is the vector of coordinates of x_j in PCA space; the dimensionality of \hat{x}_j is smaller than the dimensionality of x_j. Conversely,

    \hat{x}_j = U_x^T (x_j - \mu(x))    (6)

is the transformation from x_j to its PCA coordinates \hat{x}_j.

To apply PCA to our data, we first stretch the matrices representing shapes and motions into vectors; this produces two sets: the set of vectors representing shapes, S \subset R^{3N}, and the set of vectors representing motions, M \subset R^{15K+1}. Then we perform PCA as described above (see the third block in Fig. 3). We denote the PCA coordinates of smaller dimensionality as \hat{m} \in R^{\hat{N}} and \hat{s} \in R^{\hat{K}}, respectively. The dimensionality of each PCA space is chosen such that 95% of the variance of the sample is retained in both cases.

Now we bind mesh and motion coordinates to learn a joint Gaussian distribution over the set [\hat{s}, \hat{m}], depending on the semantic parameters v. In this sense, our model is close to Active Appearance Models (AAM, (Cootes et al., 2001)), although in contrast to AAMs, the relation between the two sets of PCA coordinates in our model is probabilistic, i.e. they are not firmly coupled together. Since each pair has common semantic parameters, i.e. to each pair of PCA coordinates \hat{y} = (\hat{s}, \hat{m}) \in \hat{Y} \subset R^{\hat{N} + \hat{K}} a vector of semantic parameters v is assigned, we can now derive control handles over our model. For that, we use linear regression in the space of joint PCA coordinates. Since not all coordinates are correlated with the semantic parameters, we first perform a standard correlation significance analysis and find the significant coordinates:

    I(\hat{Y})_r = { i \in [1, ..., \hat{N} + \hat{K}] : P(\rho(\hat{y}_i, v) = 0) < \gamma },

where \rho(\hat{y}_i, v) is the correlation between the i-th coordinate of \hat{y} and the semantic values, and \gamma = 0.05 is the p-value threshold for testing the hypothesis that no correlation exists (Kendall and Stuart, 1973). We then use the coordinates from I(\hat{Y})_r to build the joint regression model

    \hat{y}_{I(\hat{Y})_r} = \Theta v + \varepsilon,    (7)

where \Theta is a matrix of regression coefficients and \varepsilon \sim N(0, \Sigma_I) is a normally distributed random variable with covariance matrix \Sigma_I. For the rest of the coordinates we assume a joint Gaussian distribution N(0, \Sigma_f), where \Sigma_f is their joint covariance, which can easily be estimated from the data. The full joint model (its construction corresponds to the last two blocks in Fig. 3) is described by the following equations:

    \hat{y} = \mu_{\hat{y}} + \Sigma \xi,    \xi \sim N(0, I),    (8)

    (\mu_{\hat{y}})_i = (\Theta v)_k  if i \in I(\hat{Y})_r,  and 0 otherwise.    (9)

Here (\Theta v)_k denotes the corresponding coordinate obtained from Eq. (7). Without loss of generality, we can reorder the indices in such a way that the indices in the set I(\hat{Y})_r are put at the beginning, followed by the indices of the elements that are independent of the semantic parameters. Then the full covariance matrix can be written as

    \Sigma = [ \Sigma_I  0 ; 0  \Sigma_f ].    (10)

Figure 5: Two examples of conditional sampling of motion: (a) motion sampled with correlation analysis; (b) motion sampled without correlation analysis.

Using the proposed model to generate a character based on semantic parameters is straightforward: first, the semantic parameters are chosen and determine the parameters of the Gaussian distribution via Eqs. (7), (8) and (10); then a random point in PCA space is sampled according to the normal distribution with these parameters. Finally, the coordinates of the point are converted into shape-motion space using Eq. (5).
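To make the generation procedure concrete, here is a minimal numpy sketch of sampling a character from semantic parameters following Eqs. (7)-(10) and mapping the result back through Eq. (5). The function and argument names are assumptions made for illustration, and \Sigma_I, \Sigma_f are treated directly as covariance matrices.

    import numpy as np

    def sample_character(v, Theta, idx_corr, Sigma_I, Sigma_f, U_y, mu_y):
        """Sample joint shape-motion PCA coordinates given semantic parameters v
        (Eqs. 7-10), then map them back to shape-motion space (Eq. 5).

        v        : (d,) semantic parameters, e.g. [weight, height].
        Theta    : (r, d) regression matrix for the correlated coordinates I(Y)_r.
        idx_corr : (r,) indices of the coordinates correlated with v.
        Sigma_I  : (r, r) residual covariance of the regression (Eq. 7).
        Sigma_f  : (f, f) covariance of the remaining, "free" coordinates.
        U_y, mu_y: PCA basis and mean of the stacked [shape, motion] vectors.
        """
        D = Sigma_I.shape[0] + Sigma_f.shape[0]
        idx_free = np.setdiff1d(np.arange(D), idx_corr)

        y_hat = np.zeros(D)
        # Eq. (9): regressed mean for the correlated coordinates, zero elsewhere.
        y_hat[idx_corr] = Theta @ v
        # Eq. (8) with the block-diagonal covariance of Eq. (10): add Gaussian noise.
        y_hat[idx_corr] += np.random.multivariate_normal(np.zeros(len(idx_corr)), Sigma_I)
        y_hat[idx_free] += np.random.multivariate_normal(np.zeros(len(idx_free)), Sigma_f)

        # Eq. (5): back to the concatenated shape-motion representation.
        return U_y @ y_hat + mu_y

Note that for simplicity this sketch assumes a single PCA basis over the stacked shape-motion vector, whereas in our model the shape and motion halves of \hat{y} are mapped back through their separate PCA bases.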
Another useful property of the model is the ability to animate a given mesh, i.e. to create a corresponding motion for it, as well as to create a well-fitting appearance (mesh) based on a given motion. Let an input shape s be given. First, the mesh is transformed into PCA space. Second, we derive a sampling distribution in the PCA space by applying Bayes' theorem. We assume normal distributions and are therefore able to sample from the conditional distribution to derive the motion coordinates corresponding to the mesh, and vice versa:

    P(\hat{m} | \hat{s}) = P(\hat{m}, \hat{s}) / P(\hat{s}).    (11)

Since we assume parametrized normal distributions, we can write:

    \mu_{\hat{m} | \hat{s}} = \mu_{\hat{m}} + \Sigma_{\hat{m}\hat{s}} \Sigma_{\hat{s}\hat{s}}^{-1} (\hat{s} - \mu_{\hat{s}}),    (12)

    \Sigma_{\hat{m} | \hat{s}} = \Sigma_{\hat{m}\hat{m}} - \Sigma_{\hat{m}\hat{s}} \Sigma_{\hat{s}\hat{s}}^{-1} \Sigma_{\hat{s}\hat{m}},    (13)

    \hat{m} = \mu_{\hat{m} | \hat{s}} + \Sigma_{\hat{m} | \hat{s}} \xi,    \xi \sim N(0, I).    (14)

Afterwards, we convert the PCA coordinates of the generated motion back to the original motion space. A similar procedure can be applied to derive the mesh coordinates from a given motion.

As mentioned above, we separate the coordinates that are correlated with the semantic parameters. We do this to avoid false dependencies between coordinates and to be able to use the free coordinates to introduce more variability into the generation process. To show the importance of separating the correlated coordinates, we perform the following experiment: we divide the databases into two disjoint parts - a test part and a training part - train our model using the training parts of both databases, and then perform motion sampling conditioned on a mesh from the test part of the mesh database, once using the model with correlation analysis and once without correlation analysis. As can be seen in Fig. 5, the model that uses all coordinates for the regression (i.e. the model trained without correlation analysis) fails to generate a suitable motion, while the model with correlation analysis produces a realistic motion. We explain this by the fact that the coordinates of a mesh from the test set may be far from the coordinates of the meshes in the training set, and therefore the false dependencies introduced by regressing on all coordinates have a strong influence and produce implausible results.
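A minimal numpy sketch of this conditional sampling (Eqs. 12-14) is given below, assuming the covariance blocks of the joint Gaussian over (\hat{s}, \hat{m}) are available; all names are illustrative, and \Sigma_{\hat{m}|\hat{s}} is used directly as the covariance of the conditional Gaussian.

    import numpy as np

    def sample_motion_given_shape(s_hat, mu_s, mu_m, Sigma_ss, Sigma_mm, Sigma_ms):
        """Sample motion PCA coordinates conditioned on shape PCA coordinates,
        using the conditional Gaussian formulas of Eqs. (12)-(14).

        s_hat      : (ks,) PCA coordinates of the given mesh.
        mu_s, mu_m : means of the shape and motion PCA coordinates.
        Sigma_ss   : (ks, ks) shape covariance block.
        Sigma_mm   : (km, km) motion covariance block.
        Sigma_ms   : (km, ks) cross-covariance between motion and shape.
        """
        solve = np.linalg.solve
        # Eq. (12): conditional mean mu_{m|s}.
        mu_cond = mu_m + Sigma_ms @ solve(Sigma_ss, s_hat - mu_s)
        # Eq. (13): conditional covariance Sigma_{m|s}.
        Sigma_cond = Sigma_mm - Sigma_ms @ solve(Sigma_ss, Sigma_ms.T)
        # Eq. (14): draw one sample; it is then mapped back to motion space via Eq. (5).
        return np.random.multivariate_normal(mu_cond, Sigma_cond)

Swapping the roles of \hat{s} and \hat{m} gives the appearance-from-motion case described above.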
5 EVALUATION

As stated above, our model allows us to generate highly realistic shape-motion pairs with relatively little effort. In this section, we evaluate the simulated sequences in terms of motion fidelity, and also provide a quantitative evaluation showing how our approach avoids the motion retargeting problem.

A number of methods for visual fidelity evaluation have been proposed in the literature. They can be divided into interrogation-based approaches and automatic approaches. Interrogation-based approaches can be applied to very different problems. In (Reitsma and Pollard, 2003), experiments with human observers were conducted to determine the sensitivity of human perception to physical errors in motions, and (Hodgins et al., 1998) investigated which types of anomalies added to a motion disturb human perception the most. In (Pražák and O'Sullivan, 2011) and (McDonnell et al., 2008), human crowd variety was studied and the influence of either motion or shape clones on the perception of the whole crowd was investigated. As examples of automated motion evaluation approaches, (Ren et al., 2005) proposed and evaluated an automated data-driven method for assessing the unnaturalness of human motion; in (Ahmed, 2004) motion is evaluated in terms of foot sliding and general smoothness, while in (Jang et al., 2008) the physical plausibility of motion was evaluated.

Unfortunately, as also mentioned by (Ren et al., 2005), these approaches depend a lot on the type of distortion applied to the motion, and are therefore directed at the evaluation of a specific type of motion generation. Interrogation-based approaches provide more general results, are more reliable and are more suitable for our goal. Therefore, we performed two experiments designed as questionnaires and one additional experiment for quantitative evaluation. 13 male and 10 female subjects took part in our experiments. The experimental setup is described below.

                  our model        selected mesh    selected motion
  Experiment 1
    pair 1        (40, 169)        (53, 173)        (86, 176)
    pair 2        (70, 155)        (44.7, 150)      (75.2, 186)
    pair 3        (110, 170)       (120, 176)       (49, 168)
  Experiment 2
    question 1    (90, 176)        (120, 176)       (86.3, 176)
    question 2    (45, 169)        (49, 169)        -
    question 3    (77.9, 179)      (77.9, 176)      (77.9, 181)
    question 4    (75, 178)        (75.6, 178)      (74.5, 179)

Table 1: Semantic parameters (weight in kg, height in cm) of the samples used in the experiments.

5.1 Interrogation-based evaluation

Experiment 1 In the first experiment, we asked the participants to compare two animations, one of which was generated using the model and the second of which was generated by choosing a motion and a mesh randomly from our databases. We prepared 3 pairs of videos of 10 s length. The parameters of the generated (or selected) characters are given in Table 1. To diminish bias due to the greater attractiveness of one figure in comparison to another, we chose appearances that have approximately the same semantic parameters, e.g. we compared two heavy humans, or two thin humans, etc.

The perspective and surroundings for the animated figures, as well as the textures of the figures, were the same. The participants were asked to indicate which animation of each pair looked more realistic; they also had the opportunity to state that both animations looked equally bad or equally good. The results of the experiment are summarized in Table 2.

Table 2: Percentage of the participants who accepted the sequence generated with our model as more realistic (positive), less realistic (negative), or had difficulty judging one of the motions as more natural (neutral), for each pair and on average.

As the results show, the mesh-to-motion fit delivers in general better results than just combining a mesh and a motion without taking the parameters of the combined mesh and motion into account. We explain the failure in the second pair of sequences by the generally lesser attractiveness of the appearance sampled from the model (see Fig. 6). In general, we observed that a mismatch between mesh and motion due to a difference in weight is a lot easier to detect than a mismatch due to a difference in height, because in the latter case the proportions of the person, unless some extreme cases are taken (e.g. a six-year-old child and a tall adult), are the same and therefore the motion can fit quite well. Some more examples of subjects sampled using our model are given in Fig. 7. Taking into account that during artificial character animation it is usually difficult to find real motion data that fits the animated shape exactly, our approach is beneficial in terms of the delivered result.

Figure 6: Shapes used in the second pair of sequences in Experiment 1 for the comparison of visual fidelity: (a) mesh sampled from the model; (b) mesh from the original database.

Figure 7: Examples of simulated characters: (a) weight: 40 kg, height: 169 cm; (b) weight: 61 kg, height: 171 cm; (c) weight: 77 kg, height: 178 cm; (d) weight: 110 kg, height: 170 cm.

Experiment 2 In the second experiment, we asked our participants to evaluate several sets of videos. In each set, four videos were presented. The videos in each set were produced as follows:

A1: an animation generated by matching a mesh and a motion based on semantic parameters;
A2: an animation generated by sampling a mesh and a motion from the model with the same semantic parameters;
A3: an animation where the mesh was taken from the mesh database and the motion was sampled from our model given the mesh;
A4: an animation where the motion was taken from the motion database and the mesh was sampled from our model given the motion.

The parameters of the samples are given in Table 1. The participants were asked to grade each video with marks from 1 to 4, where 1 means very visually plausible and 4 means completely unrealistic. In Table 3, the results are given as the percentage of participants that evaluated the sequences with 1, 2, 3 or 4, respectively.

Table 3: Percentage of participants giving marks from 1 to 4, and the mean mark, for each algorithm A1-A4. The marks vary in the interval [1,4], where 1 means very realistic and 4 means not visually plausible.

As the results show, our model produces slightly better results than matching based on semantic parameters, which is a good result given that the two databases were only matched via semantic parameters during training. The second most realistic animation is delivered by motion sampling conditioned on the mesh, while appearance sampling (i.e. mesh sampling) based on a motion receives the same or even slightly lower scores than semantic matching. We attribute this to the fact that a motion does not provide enough information to sample an appearance accurately. We explain the superiority of our sampling methods with several arguments: firstly, a mesh and a motion sampled with our model are better aligned to each other; and secondly, due to the smoothing at the PCA stage, small artifacts that occasionally appear in matched sequences are smoothed away, and therefore a visually plausible result can be produced.

5.2 Quantitative evaluation

There exist several approaches to solve the motion retargeting problem, and none of them is perfect. However, we can avoid this problem by generating mesh and motion simultaneously. We evaluate the mesh-motion fit by comparing the bone lengths of the mesh skeleton b_j^s and the corresponding bone lengths of the motion skeleton b_j^m in terms of the amount of scaling needed:

    s_j = 1 - b_j^s / b_j^m.    (15)

Here j corresponds to a body part and s_j denotes the amount of scaling required to fit the mesh to the motion. For a motion and a mesh from the same person, i.e. when no scaling is required, the amount of scaling equals zero. As shown in Fig. 8, the model captures the dependencies between the bone lengths of the mesh and the bone lengths of the motion and allows us to generate pairs of mesh and motion that are already scaled properly, so that no scaling is required afterwards. We also evaluate the mesh-motion fit for conditional mesh sampling, when the motion is given, and for conditional motion sampling, when the mesh is given. We create pairs from the real data using semantic matching, and for the same range of semantic parameters we sample pairs from the model. The results (Fig. 8) confirm that mesh and motion in the pairs sampled from the model are already properly aligned to each other.

Figure 8: The mean amount of scaling s_j required for each of the body parts (hand, forearm, shoulder, torso, crus, thigh, pelvis), for pairs generated by the model from semantic parameters, conditioned on the mesh, and conditioned on the motion, compared to semantically matched real data; green corresponds to the pairs sampled from the model, red to the matched pairs. When no scaling is required, the amount of scaling equals zero.
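For reference, the fit metric of Eq. (15) amounts to a one-line computation per bone; a minimal sketch follows, where the bone names and numeric values are made-up placeholders rather than measurements from our data.

    import numpy as np

    def scaling_amount(bones_mesh, bones_motion):
        """Per-bone mismatch between mesh and motion skeletons, Eq. (15):
        s_j = 1 - b_j^s / b_j^m. A value of zero means no scaling is needed."""
        return 1.0 - np.asarray(bones_mesh, dtype=float) / np.asarray(bones_motion, dtype=float)

    # Hypothetical bone lengths (cm) for hand, forearm, shoulder, torso, crus, thigh, pelvis.
    mesh_bones   = [18.0, 26.5, 15.0, 52.0, 40.0, 44.0, 24.0]
    motion_bones = [18.5, 26.0, 15.5, 53.0, 41.0, 43.5, 24.5]
    print(scaling_amount(mesh_bones, motion_bones))   # values near zero indicate a good fit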
5.3 Using the model in crowd simulation

As mentioned above, we also propose to use the model for the automatic simulation of character appearance and motion in crowds. We leave the question of crowd behavior simulation out of the scope of this paper, since it is an active area of research and numerous approaches exist (see, for example, (Guy et al., 2010), (Narain et al., 2009)). By controlling the distribution of the semantic parameters, it is possible to generate a set of character meshes that is specific to the scene. In our example, we generate the crowd by randomly placing sampled characters in space. For creating the crowds in Fig. 9 and Fig. 10, we used a uniform distribution of the semantic parameters weight and height. However, one can consider more advanced ways of setting the semantic parameters if some specific distribution of shapes and motions is required. More demonstrations are provided in the supplementary material.

Figure 9: Example of crowd generation with the semantic parameters weight and height uniformly distributed on [40,70] kg and [150,180] cm, respectively. The animation itself can be viewed in the additional materials.

Figure 10: Example of crowd generation with the semantic parameters weight and height uniformly distributed on [50,100] kg and [150,200] cm, respectively. The animation itself can be viewed in the additional materials.

6 Future work

As our experiments have shown, our model has several limitations. First of all, our model was applied to a cyclic motion, so its extension to arbitrary types of motion may present some difficulties. Secondly, the model can be used to generate samples based on semantic parameters when the semantic parameters are not too far from the range of the parameters used for training. When that is not the case, the produced results can look unrealistic. We expect that increasing the size and variability of the databases will help to generate characters with a wider range of semantic parameters. Therefore, in the future we are planning to extend our work to more complex motions, as well as to achieve a better mesh-motion fit in the model training phase. More complex motions should first be aligned using techniques such as time warping (Hsu et al., 2007), and then procedures similar to those in (Troje, 2002) should be applied to produce a vector-based representation of the motions suitable for analysis. Another interesting topic is the use of more advanced stochastic models for the coupling of mesh and motion coordinates, in order to capture non-linear dependencies.

7 CONCLUSION

In our work, we proposed a parametrized model combining shape and motion that can be applied to generate realistic characters together with appropriate motions. While providing enough variability, the model allows tight control over the generation process with the help of semantic parameters, and it has a probabilistic formulation. Although the usage of two independent databases can be seen as a weakness of our approach, we want to stress that the method was designed to find dependencies in initially unrelated databases and to use them to bridge these databases. Such an approach can be advantageous when it is not possible, or difficult, to collect motion and shape data together from the same subjects. Our model can be used to generate completely new meshes and motions, as well as to generate a specific motion for a given mesh or a specific shape for an existing motion. We also evaluated our model in three experiments, which showed the superiority of the proposed model in terms of the visual plausibility of the created samples in comparison to simple matching based on semantic parameters. We furthermore avoid the retargeting problem, since our model already contains the necessary dependencies. The model can easily be extended to bigger datasets, since all the algorithms used here are in general computationally inexpensive. Possible applications of our model include animating existing characters, creating an appearance based on a motion, as well as creating a visually plausible crowd.

REFERENCES

Ahmed, A. A. H. (2004). Parametric synthesis of human animation.

Anguelov, D., Srinivasan, P., Koller, D., Thrun, S., Rodgers, J., and Davis, J. (2005). SCAPE: shape completion and animation of people. ACM Trans. Graph., 24.

Baran, I. and Popović, J. (2007). Automatic rigging and animation of 3D characters. In ACM SIGGRAPH 2007 Papers, SIGGRAPH '07, New York, NY, USA. ACM.

Brand, M. and Hertzmann, A. (2000). Style machines. In Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '00, New York, NY, USA. ACM Press/Addison-Wesley Publishing Co.

Cootes, T. F., Edwards, G. J., and Taylor, C. J. (2001). Active appearance models. IEEE Trans. Pattern Anal. Mach. Intell., 23(6).

da Silva, M., Abe, Y., and Popović, J. (2008). Interactive simulation of stylized human locomotion. In ACM SIGGRAPH 2008 Papers, SIGGRAPH '08, New York, NY, USA. ACM.

Guy, S. J., Chhugani, J., Curtis, S., Dubey, P., Lin, M., and Manocha, D. (2010). PLEdestrians: a least-effort approach to crowd simulation. In Proceedings of the 2010 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, SCA '10, Aire-la-Ville, Switzerland. Eurographics Association.

Hasler, N., Stoll, C., Sunkel, M., Rosenhahn, B., and Seidel, H.-P. (2009). A statistical model of human pose and body shape. Computer Graphics Forum, 28.

Hodgins, J. K., O'Brien, J. F., and Tumblin, J. (1998). Perception of human motion with different geometric models. IEEE Transactions on Visualization and Computer Graphics, 4(4).

Hsu, E., da Silva, M., and Popović, J. (2007). Guided time warping for motion editing. In Proceedings of the 2007 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, SCA '07, pages 45-52, Aire-la-Ville, Switzerland. Eurographics Association.

Jacka, D., Merry, B., and Reid, A. (2007). A comparison of linear skinning techniques for character animation. In Afrigraph. ACM.

Jang, W.-S., Lee, W.-K., Lee, I.-K., and Lee, J. (2008). Enriching a motion database by analogous combination of partial human motions. Vis. Comput., 24(4).

Kendall, G. and Stuart, A. (1973). The Advanced Theory of Statistics. Vol. 2: Inference and Relationship. Griffin.

Lee, Y., Lee, S. J., and Popović, Z. (2009). Compact character controllers. In ACM SIGGRAPH Asia 2009 Papers, SIGGRAPH Asia '09, pages 169:1-169:8, New York, NY, USA. ACM.

Li, Y., Wang, T., and Shum, H.-Y. (2002). Motion texture: A two-level statistical model for character motion synthesis. In ACM Transactions on Graphics.

Liu, C. K., Hertzmann, A., and Popović, Z. (2005). Learning physics-based motion style with nonlinear inverse optimization. ACM Trans. Graph., 24.

McDonnell, R., Larkin, M., Dobbyn, S., Collins, S., and O'Sullivan, C. (2008). Clone attack! Perception of crowd variety. In ACM SIGGRAPH 2008 Papers, SIGGRAPH '08, pages 26:1-26:8, New York, NY, USA. ACM.

Narain, R., Golas, A., Curtis, S., and Lin, M. C. (2009). Aggregate dynamics for dense crowd simulation. In ACM SIGGRAPH Asia 2009 Papers, SIGGRAPH Asia '09, pages 122:1-122:8, New York, NY, USA. ACM.

Popović, Z. and Witkin, A. P. (1999). Physically based motion transformation. In SIGGRAPH.

Pražák, M. and O'Sullivan, C. (2011). Perceiving human motion variety. In Proceedings of the ACM SIGGRAPH Symposium on Applied Perception in Graphics and Visualization, APGV '11, pages 87-92, New York, NY, USA. ACM.

Reitsma, P. S. A. and Pollard, N. S. (2003). Perceptual metrics for character animation: sensitivity to errors in ballistic motion. In ACM SIGGRAPH 2003 Papers.

Ren, L., Patrick, A., Efros, A. A., Hodgins, J. K., and Rehg, J. M. (2005). A data-driven approach to quantifying natural human motion. ACM Trans. Graph., 24.

Sidenbladh, H., Black, M. J., and Sigal, L. (2002). Implicit probabilistic models of human motion for synthesis and tracking. In European Conference on Computer Vision.

Sok, K. W., Kim, M., and Lee, J. (2007). Simulating biped behaviors from human motion data. ACM Trans. Graph., 26(3).

Troje, N. F. (2002). Decomposing biological motion: A framework for analysis and synthesis of human gait patterns. Journal of Vision, 2.

Wang, J. M., Fleet, D. J., and Hertzmann, A. (2008). Gaussian process dynamical models for human motion. IEEE Trans. Pattern Anal. Mach. Intell., 30(2).

APPENDIX

We would like to thank Prof. D. Fleet for the very helpful discussions and good advice.


More information

3D Morphable Model Parameter Estimation

3D Morphable Model Parameter Estimation 3D Morphable Model Parameter Estimation Nathan Faggian 1, Andrew P. Paplinski 1, and Jamie Sherrah 2 1 Monash University, Australia, Faculty of Information Technology, Clayton 2 Clarity Visual Intelligence,

More information

Estimating Human Pose in Images. Navraj Singh December 11, 2009

Estimating Human Pose in Images. Navraj Singh December 11, 2009 Estimating Human Pose in Images Navraj Singh December 11, 2009 Introduction This project attempts to improve the performance of an existing method of estimating the pose of humans in still images. Tasks

More information

IMPROVED FACE RECOGNITION USING ICP TECHNIQUES INCAMERA SURVEILLANCE SYSTEMS. Kirthiga, M.E-Communication system, PREC, Thanjavur

IMPROVED FACE RECOGNITION USING ICP TECHNIQUES INCAMERA SURVEILLANCE SYSTEMS. Kirthiga, M.E-Communication system, PREC, Thanjavur IMPROVED FACE RECOGNITION USING ICP TECHNIQUES INCAMERA SURVEILLANCE SYSTEMS Kirthiga, M.E-Communication system, PREC, Thanjavur R.Kannan,Assistant professor,prec Abstract: Face Recognition is important

More information

Tracking of Human Body using Multiple Predictors

Tracking of Human Body using Multiple Predictors Tracking of Human Body using Multiple Predictors Rui M Jesus 1, Arnaldo J Abrantes 1, and Jorge S Marques 2 1 Instituto Superior de Engenharia de Lisboa, Postfach 351-218317001, Rua Conselheiro Emído Navarro,

More information

A Method of Hyper-sphere Cover in Multidimensional Space for Human Mocap Data Retrieval

A Method of Hyper-sphere Cover in Multidimensional Space for Human Mocap Data Retrieval Journal of Human Kinetics volume 28/2011, 133-139 DOI: 10.2478/v10078-011-0030-0 133 Section III Sport, Physical Education & Recreation A Method of Hyper-sphere Cover in Multidimensional Space for Human

More information

This chapter explains two techniques which are frequently used throughout

This chapter explains two techniques which are frequently used throughout Chapter 2 Basic Techniques This chapter explains two techniques which are frequently used throughout this thesis. First, we will introduce the concept of particle filters. A particle filter is a recursive

More information

Part-based and local feature models for generic object recognition

Part-based and local feature models for generic object recognition Part-based and local feature models for generic object recognition May 28 th, 2015 Yong Jae Lee UC Davis Announcements PS2 grades up on SmartSite PS2 stats: Mean: 80.15 Standard Dev: 22.77 Vote on piazza

More information

Epitomic Analysis of Human Motion

Epitomic Analysis of Human Motion Epitomic Analysis of Human Motion Wooyoung Kim James M. Rehg Department of Computer Science Georgia Institute of Technology Atlanta, GA 30332 {wooyoung, rehg}@cc.gatech.edu Abstract Epitomic analysis is

More information

An Efficient Single Chord-based Accumulation Technique (SCA) to Detect More Reliable Corners

An Efficient Single Chord-based Accumulation Technique (SCA) to Detect More Reliable Corners An Efficient Single Chord-based Accumulation Technique (SCA) to Detect More Reliable Corners Mohammad Asiful Hossain, Abdul Kawsar Tushar, and Shofiullah Babor Computer Science and Engineering Department,

More information

Photo-realistic Renderings for Machines Seong-heum Kim

Photo-realistic Renderings for Machines Seong-heum Kim Photo-realistic Renderings for Machines 20105034 Seong-heum Kim CS580 Student Presentations 2016.04.28 Photo-realistic Renderings for Machines Scene radiances Model descriptions (Light, Shape, Material,

More information

Texture Image Segmentation using FCM

Texture Image Segmentation using FCM Proceedings of 2012 4th International Conference on Machine Learning and Computing IPCSIT vol. 25 (2012) (2012) IACSIT Press, Singapore Texture Image Segmentation using FCM Kanchan S. Deshmukh + M.G.M

More information

To Do. Advanced Computer Graphics. The Story So Far. Course Outline. Rendering (Creating, shading images from geometry, lighting, materials)

To Do. Advanced Computer Graphics. The Story So Far. Course Outline. Rendering (Creating, shading images from geometry, lighting, materials) Advanced Computer Graphics CSE 190 [Spring 2015], Lecture 16 Ravi Ramamoorthi http://www.cs.ucsd.edu/~ravir To Do Assignment 3 milestone due May 29 Should already be well on way Contact us for difficulties

More information

Wavelet based Keyframe Extraction Method from Motion Capture Data

Wavelet based Keyframe Extraction Method from Motion Capture Data Wavelet based Keyframe Extraction Method from Motion Capture Data Xin Wei * Kunio Kondo ** Kei Tateno* Toshihiro Konma*** Tetsuya Shimamura * *Saitama University, Toyo University of Technology, ***Shobi

More information

CS233: The Shape of Data Handout # 3 Geometric and Topological Data Analysis Stanford University Wednesday, 9 May 2018

CS233: The Shape of Data Handout # 3 Geometric and Topological Data Analysis Stanford University Wednesday, 9 May 2018 CS233: The Shape of Data Handout # 3 Geometric and Topological Data Analysis Stanford University Wednesday, 9 May 2018 Homework #3 v4: Shape correspondences, shape matching, multi-way alignments. [100

More information

Recent advances in Metamodel of Optimal Prognosis. Lectures. Thomas Most & Johannes Will

Recent advances in Metamodel of Optimal Prognosis. Lectures. Thomas Most & Johannes Will Lectures Recent advances in Metamodel of Optimal Prognosis Thomas Most & Johannes Will presented at the Weimar Optimization and Stochastic Days 2010 Source: www.dynardo.de/en/library Recent advances in

More information

Estimating 3D Human Shapes from Measurements

Estimating 3D Human Shapes from Measurements Estimating 3D Human Shapes from Measurements Stefanie Wuhrer Chang Shu March 19, 2012 arxiv:1109.1175v2 [cs.cv] 16 Mar 2012 Abstract The recent advances in 3-D imaging technologies give rise to databases

More information

Uncertainties: Representation and Propagation & Line Extraction from Range data

Uncertainties: Representation and Propagation & Line Extraction from Range data 41 Uncertainties: Representation and Propagation & Line Extraction from Range data 42 Uncertainty Representation Section 4.1.3 of the book Sensing in the real world is always uncertain How can uncertainty

More information

A Novel Image Super-resolution Reconstruction Algorithm based on Modified Sparse Representation

A Novel Image Super-resolution Reconstruction Algorithm based on Modified Sparse Representation , pp.162-167 http://dx.doi.org/10.14257/astl.2016.138.33 A Novel Image Super-resolution Reconstruction Algorithm based on Modified Sparse Representation Liqiang Hu, Chaofeng He Shijiazhuang Tiedao University,

More information

22 October, 2012 MVA ENS Cachan. Lecture 5: Introduction to generative models Iasonas Kokkinos

22 October, 2012 MVA ENS Cachan. Lecture 5: Introduction to generative models Iasonas Kokkinos Machine Learning for Computer Vision 1 22 October, 2012 MVA ENS Cachan Lecture 5: Introduction to generative models Iasonas Kokkinos Iasonas.kokkinos@ecp.fr Center for Visual Computing Ecole Centrale Paris

More information

Adding Hand Motion to the Motion Capture Based Character Animation

Adding Hand Motion to the Motion Capture Based Character Animation Adding Hand Motion to the Motion Capture Based Character Animation Ge Jin and James Hahn Computer Science Department, George Washington University, Washington DC 20052 {jinge, hahn}@gwu.edu Abstract. Most

More information

The little difference: Fourier based synthesis of genderspecific

The little difference: Fourier based synthesis of genderspecific In: Rolf P. Würtz, Markus Lappe (Eds.) Dynamic Perception, Berlin: AKA Press, 2002, pp 115-120 The little difference: Fourier based synthesis of genderspecific biological motion Nikolaus Troje Fakultät

More information

Cross-pose Facial Expression Recognition

Cross-pose Facial Expression Recognition Cross-pose Facial Expression Recognition Abstract In real world facial expression recognition (FER) applications, it is not practical for a user to enroll his/her facial expressions under different pose

More information

THE development of stable, robust and fast methods that

THE development of stable, robust and fast methods that 44 SBC Journal on Interactive Systems, volume 5, number 1, 2014 Fast Simulation of Cloth Tearing Marco Santos Souza, Aldo von Wangenheim, Eros Comunello 4Vision Lab - Univali INCoD - Federal University

More information

On-line, Incremental Learning of a Robust Active Shape Model

On-line, Incremental Learning of a Robust Active Shape Model On-line, Incremental Learning of a Robust Active Shape Model Michael Fussenegger 1, Peter M. Roth 2, Horst Bischof 2, Axel Pinz 1 1 Institute of Electrical Measurement and Measurement Signal Processing

More information

The Anatomical Equivalence Class Formulation and its Application to Shape-based Computational Neuroanatomy

The Anatomical Equivalence Class Formulation and its Application to Shape-based Computational Neuroanatomy The Anatomical Equivalence Class Formulation and its Application to Shape-based Computational Neuroanatomy Sokratis K. Makrogiannis, PhD From post-doctoral research at SBIA lab, Department of Radiology,

More information

An Automatic 3D Face Model Segmentation for Acquiring Weight Motion Area

An Automatic 3D Face Model Segmentation for Acquiring Weight Motion Area An Automatic 3D Face Model Segmentation for Acquiring Weight Motion Area Rio Caesar Suyoto Samuel Gandang Gunanto Magister Informatics Engineering Atma Jaya Yogyakarta University Sleman, Indonesia Magister

More information

DEPT: Depth Estimation by Parameter Transfer for Single Still Images

DEPT: Depth Estimation by Parameter Transfer for Single Still Images DEPT: Depth Estimation by Parameter Transfer for Single Still Images Xiu Li 1,2, Hongwei Qin 1,2(B), Yangang Wang 3, Yongbing Zhang 1,2, and Qionghai Dai 1 1 Department of Automation, Tsinghua University,

More information

A Global Laplacian Smoothing Approach with Feature Preservation

A Global Laplacian Smoothing Approach with Feature Preservation A Global Laplacian Smoothing Approach with Feature Preservation hongping Ji Ligang Liu Guojin Wang Department of Mathematics State Key Lab of CAD&CG hejiang University Hangzhou, 310027 P.R. China jzpboy@yahoo.com.cn,

More information

Switching Hypothesized Measurements: A Dynamic Model with Applications to Occlusion Adaptive Joint Tracking

Switching Hypothesized Measurements: A Dynamic Model with Applications to Occlusion Adaptive Joint Tracking Switching Hypothesized Measurements: A Dynamic Model with Applications to Occlusion Adaptive Joint Tracking Yang Wang Tele Tan Institute for Infocomm Research, Singapore {ywang, telctan}@i2r.a-star.edu.sg

More information

Character Animation from Motion Capture Data

Character Animation from Motion Capture Data Character Animation from Motion Capture Data Technical Report Adriana Schulz Instructor: Luiz Velho Rio de Janeiro, March 4, 2010 Contents 1 Introduction 1 2 Motion Data Acquisition 3 3 Motion Editing

More information

THE HALF-EDGE DATA STRUCTURE MODELING AND ANIMATION

THE HALF-EDGE DATA STRUCTURE MODELING AND ANIMATION THE HALF-EDGE DATA STRUCTURE MODELING AND ANIMATION Dan Englesson danen344@student.liu.se Sunday 12th April, 2011 Abstract In this lab assignment which was done in the course TNM079, Modeling and animation,

More information

Multi-View Image Coding in 3-D Space Based on 3-D Reconstruction

Multi-View Image Coding in 3-D Space Based on 3-D Reconstruction Multi-View Image Coding in 3-D Space Based on 3-D Reconstruction Yongying Gao and Hayder Radha Department of Electrical and Computer Engineering, Michigan State University, East Lansing, MI 48823 email:

More information

Mobile Human Detection Systems based on Sliding Windows Approach-A Review

Mobile Human Detection Systems based on Sliding Windows Approach-A Review Mobile Human Detection Systems based on Sliding Windows Approach-A Review Seminar: Mobile Human detection systems Njieutcheu Tassi cedrique Rovile Department of Computer Engineering University of Heidelberg

More information

Motion Capture & Simulation

Motion Capture & Simulation Motion Capture & Simulation Motion Capture Character Reconstructions Joint Angles Need 3 points to compute a rigid body coordinate frame 1 st point gives 3D translation, 2 nd point gives 2 angles, 3 rd

More information