Multi-camera Tracking of Articulated Human Motion Using Motion and Shape Cues


Aravind Sundaresan and Rama Chellappa
Department of Electrical and Computer Engineering, University of Maryland, College Park, MD 20742, USA

Abstract. We present a framework and algorithm for tracking articulated human motion. We use multiple calibrated cameras and an articulated human shape model. Tracking is performed using motion cues as well as image-based cues (such as silhouettes and motion residues, hereafter referred to as spatial cues), as opposed to constructing a 3D volume image or visual hulls. Our algorithm consists of a predictor and a corrector: the predictor estimates the pose at time t + 1 using motion information between the images at times t and t + 1. The error in the estimated pose is then corrected using spatial cues from the images at t + 1. In the predictor, we use robust multi-scale parametric optimisation to estimate the pixel displacement for each body segment. We then use an iterative procedure to estimate the change in pose from the pixel displacement of points on the individual body segments. We present a method for fusing information from different spatial cues, such as silhouettes and motion residues, into a single energy function. We then express this energy function in terms of the pose parameters and find the optimum pose for which the energy is minimised.

1 Introduction

The complex articulated structure of human beings makes tracking articulated human motion a difficult task. It is necessary to use multiple cameras to deal with occlusion and kinematic singularities. We also need shape models to deal with the large number of body segments and to exploit their articulated structure. In our work, we use shape models, whose parameters are known, to build a system that can track articulated human body motion using multiple cameras in a robust and accurate manner.
A tracking system works better when more observations are available from which to estimate the pose, and to that end our system uses different kinds of cues that can be estimated from the images. We use both motion information (in the form of pixel displacements) and spatial information (such as silhouettes and motion residues, hereafter referred to as spatial cues). The motion and spatial cues are complementary in nature. We present a framework for unifying different spatial cues into a single energy image. The energy of a pose can be described in terms of this energy image. We can then obtain the pose that possesses the least energy using optimisation techniques.

P.J. Narayanan et al. (Eds.): ACCV 2006, LNCS 3852, © Springer-Verlag Berlin Heidelberg 2006

Much of the work in the past has focussed on using either motion or spatial parameters. In this paper we present an algorithm that fuses information from these two kinds of cues. Since we use both motion and spatial cues in our tracking algorithm, we are better able to deal with cases where the body segments are close to each other, such as when the arms are by the side of the body. Purely silhouette-based methods typically experience difficulties in such cases. Silhouette- or edge-based methods also have the weakness that they are unable to deal with rotation about the axis of a body segment. Estimating the initial pose is a problem distinct from tracking and is difficult due to the large number of unknown parameters (joint angles). It is computationally intensive and typically requires several additional algorithms, such as head or hand detectors, and stochastic algorithms such as particle filtering, or optimisation methods, for the sake of robustness. While the methods we present in this paper can be used for initialisation as well, we concentrate on the tracking aspect.

Fig. 1. Overview of the algorithm. Fig. 2. 3D model comparison: (a) 3D scan, (b) super-quadric model.

In our work, we use eight cameras placed around the subject. We use parametric shape models connected in an articulated tree to represent the human body, as described in Section 1.2. Our system, the block diagram of which is presented in Figure 1, consists of two parts: a predictor and a corrector. We assume that the initial pose is known. The tracking algorithm is as follows: compute the 2D pixel displacement between frames at times t and t + 1; predict the 3D pose at t + 1 based on 2D motion from multiple cameras; compute an energy function that fuses information from different spatial cues; and use the energy function to refine the estimate of the pose at t + 1.
We represent the pose ϕ_t in parametric form as a vector comprising the position and orientation of the base body (6 degrees of freedom) and the joint angles of the various articulated body segments (3 degrees of freedom per joint); δ denotes the incremental pose vector. We summarise prior work on articulated tracking in Section 1.1. We then describe the models in Section 1.2 and the details of our algorithm in Section 2. We validate our algorithm using real images captured from eight cameras, and the results are presented in Section 3.
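The pose parametrisation above can be sketched in code. This is a minimal illustration, not the paper's implementation: the segment count and the helper names are hypothetical, and the base-body orientation is kept as three raw angles for simplicity.

```python
import numpy as np

NUM_JOINTS = 9  # hypothetical number of articulated joints

def make_pose(base_pos, base_rot, joint_angles):
    """Stack the 6-DoF base-body pose and the 3-DoF joint angles of each
    articulated segment into a single pose vector phi."""
    phi = np.concatenate([np.asarray(base_pos, float),   # 3 translations
                          np.asarray(base_rot, float),   # 3 trunk rotations
                          np.asarray(joint_angles, float).ravel()])
    assert phi.shape == (6 + 3 * NUM_JOINTS,)
    return phi

def apply_increment(phi, delta):
    """State update phi_{t+1} = phi_t + delta_t (h is the identity here)."""
    return phi + delta
```

With nine joints the pose vector has 6 + 27 = 33 entries, and the incremental pose δ has the same dimension.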

1.1 Prior Work

We address the problem of tracking articulated human motion using multiple cameras. Gavrila and Davis [1], Aggarwal and Cai [2], and Moeslund and Granum [3] provide surveys of human motion tracking and analysis methods. We look at some existing methods that use either motion-based methods or silhouette- or edge-based methods to perform tracking. Yamamoto and Koshikawa [4] analyse human motion based on a robot model, and Yamamoto et al. [5] track human motion using multiple cameras. Gavrila and Davis [6] discuss a multi-view approach for 3D model-based tracking of humans in action. They use a generate-and-test algorithm in which they search for poses in a parameter space and match them using a variant of Chamfer matching. Bregler and Malik [7] use an orthographic camera model and optical flow. Rehg and Morris [8] and Rehg et al. [9] describe ambiguities and singularities in the tracking of articulated objects, and Cham and Rehg [10] propose a 2D scaled prismatic model. Sidenbladh et al. [11] provide a framework to track 3D human figures using 2D image motion and particle filters, with a constrained motion model that restricts the kinds of motions that can be tracked. Kakadiaris and Metaxas [12] use silhouettes from multiple cameras to estimate 3D motion. Plänkers and Fua [13] use articulated soft objects with an underlying articulated skeleton as a model and use stereo and silhouette data for shape and motion recovery. Theobalt et al. [14] project the texture of the model obtained from silhouette-based methods and refine the pose using the flow field. Delamarre and Faugeras [15] use 3D articulated models for tracking with silhouettes. They apply forces to the contours obtained from the projection of the 3D model so that they move towards the silhouette contours obtained from multiple images. Cheung et al. [16] use shapes from silhouettes to estimate human body kinematics.
Chu et al. [17] use volume data to acquire and track a human body model. Wachter and Nagel [18] track persons in monocular image sequences; they use an iterated extended Kalman filter (IEKF) with a constant-motion model and use edge and region information in the pose-update step. Moeslund and Granum [19] use multiple cues for model-based human motion capture and use kinematic constraints to estimate the pose of a human arm. The multiple cues are depth (obtained from a stereo rig) and the extracted silhouette, whereas the kinematic constraints are applied in order to restrict the parameter space by excluding impossible poses. Sigal et al. [20, 21] use non-parametric belief propagation to track in a multi-view set-up. Lan and Huttenlocher [22] use hidden Markov temporal models. Demirdjian et al. [23] constrain pose vectors based on kinematic models using SVMs. Rohr [24] performs automated initialisation of the pose for single-camera motion. Krahnstoever and Sharma [25] address the issue of model acquisition and initialisation. Mikic et al. [26] automatically extract the model and pose using voxel data. Ramanan and Forsyth [27] also suggest an algorithm that performs rough pose estimation and can be used in an initialisation step. Sminchisescu and Triggs present a method for monocular video sequences using robust image matching, joint limits and non-self-intersection constraints [28]; they also attempt to efficiently resolve kinematic ambiguities in monocular pose estimation [29].

Our method differs in that we use both motion and spatial cues to track the pose, as opposed to using volume- or visual-hull-based techniques or optical flow alone. We use spatial and motion cues obtained from multiple views in order to obtain robust results that overcome occlusions and kinematic singularities. We also present a novel method for using spatial cues such as silhouettes and motion residues; it is also possible to incorporate edges in our method. We do not constrain the motion or the pose parameters to specific types of motion (such as walking), and hence our method is general.

1.2 Models

A good human shape model should allow the system to represent the human body in all of its postures and yet be simple enough to minimise the number of parameters required to represent the body accurately. We use tapered super-quadrics to represent the different body segments. We could use more complex triangular mesh models if we could acquire the parameters of such models. We illustrate the 3D model used in our experiments in Figure 2. The dimensions of the super-quadrics are obtained manually with the help of the 3D scanned model in the figure. The motion of the different body segments is constrained by the articulated structure of the body. The base body (trunk) has 6 degree-of-freedom (DoF) motion. All other body segments are attached to the base body in a kinematic chain and have at most 3 DoF rotational motion with respect to the parent node. Besides the shape of each body segment, the body model also includes the locations of the joints between body segments.

2 Algorithm

We compute the pose at time t + 1 given the pose at time t, using the images at times t and t + 1. The pose at t + 1 is estimated in two steps, the prediction step and the correction step. The steps required to estimate the pose at time t + 1 are first listed and then described in detail in the sections that follow.

1. Pixel-body registration at time t using the known pose at t.
2. Estimate the pixel displacement between time t and time t + 1.
3. Predict the pose at time t + 1 using the pixel displacement.
4. Combine silhouettes and motion residues for each body segment into an energy image for each image.
5. Correct the predicted pose at time t + 1 using the energy image obtained in step 4.

2.1 Pixel-Body Registration

Pixel-body registration is the process of registering each pixel in each image to a body segment and obtaining approximate 3D coordinates of the corresponding point. We thus obtain a 2D mask for each body segment that we can use when estimating the pixel displacement. We convert each body segment into a triangular mesh

and project it onto each image, and compute the depth at each pixel by interpolating the depths of the triangle vertices. We could thus fairly easily extend our algorithm to use triangular mesh models instead of super-quadrics. Since the depths of all pixels are known, we can compute occlusions. Figure 3 illustrates the projection of the body onto images from two cameras; different colours indicate different body segments. We compute approximate 3D coordinates of the pixels in a similar fashion.

Fig. 3. Pixel registration: (a) View 1, (b) View 2. Fig. 4. Pixel displacement and motion residue: (a) mask, (b) image difference, (c) motion residue, (d) flow.

2.2 Estimating Pixel Displacement

As we use pixel displacement between frames to estimate the 3D pose change, we are not dependent on specific optical flow algorithms. Figure 4 illustrates how we obtain the pixel displacement of a single body segment, the example being the left forearm shown in Figure 3. We use a robust parametric model for the motion of the rigid objects, so that the displacement at pixel x_i is given by Δ(x_i, φ), where φ = [u, v, θ, s]. The elements of φ are the displacements along the x and y axes, the rotation, and the scale, respectively. We find this parametric representation more intuitive and more robust than an affine model. We obtain the value of φ ∈ [φ_0 − φ_B, φ_0 + φ_B] that minimises the residue e^T e, where [e]_j = I_t(x_ij) − I_{t+1}(x_ij + Δ(x_ij, φ)), and {x_ij : j = 1, 2, ...} is the set of all points in the mask obtained in Section 2.1 and illustrated in Figure 4 (a). Here φ_0 denotes zero motion and φ_B denotes the bounds that we impose on the motion. Figure 4 (a) shows the mask on the smoothed intensity image at time t. Figure 4 (b) is the difference between the images at times t and t + 1, i.e., with zero motion; it has large values in the mask region, signifying that there is some motion. Figure 4 (c) is the difference between the image at time t and the image at time t + 1 warped according to the estimated motion, and is called the motion residue for the optimal φ. The value of the pixels in the region of the mask is close to zero where the estimated pixel displacement agrees with the actual pixel displacement. The motion residue provides a rough delineation of the location of the body segment, even when the original mask does not exactly match the body segment.
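The parametric displacement model and its residue can be sketched as follows. This is an illustrative implementation under simplifying assumptions: the rotation/scale centre and the nearest-pixel image sampling are our choices, and the function names are hypothetical.

```python
import numpy as np

def displacement(points, phi, centre):
    """Displacement Delta(x, phi) under phi = [u, v, theta, s]: rotate by
    theta and scale by s about `centre`, then translate by (u, v).
    `points` is an (N, 2) array of (x, y) pixel coordinates."""
    u, v, theta, s = phi
    c, si = np.cos(theta), np.sin(theta)
    R = np.array([[c, -si], [si, c]])
    moved = centre + s * (points - centre) @ R.T + np.array([u, v])
    return moved - points

def ssd_residue(phi, points, I_t, I_t1, centre):
    """Sum of squared residues e^T e with
    [e]_j = I_t(x_j) - I_{t+1}(x_j + Delta(x_j, phi)),
    using nearest-pixel sampling for simplicity."""
    warped = points + displacement(points, phi, centre)
    h, w = I_t.shape
    a = np.clip(np.round(points).astype(int), 0, [w - 1, h - 1])
    b = np.clip(np.round(warped).astype(int), 0, [w - 1, h - 1])
    e = I_t[a[:, 1], a[:, 0]] - I_t1[b[:, 1], b[:, 0]]
    return float(e @ e)
```

Minimising `ssd_residue` over φ within the bounds [φ_0 − φ_B, φ_0 + φ_B] (e.g. by a coarse-to-fine search or gradient descent) yields the per-segment motion estimate; with φ = [0, 0, 0, 1] the displacement is zero and the residue reduces to the plain image difference.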

2.3 Pose Prediction

The pose parameter we need to estimate is the vector ϕ, which consists of the 6-DoF parameters of the base body and the 3-DoF joint angles of each of the remaining body segments. The state vector in our state-space formulation is ϕ_t (1-2).

State update: ϕ_{t+1} = h(ϕ_t) + δ_t   (1)
Observation: f(Δx_t, ϕ_t, ϕ_{t+1}) = 0   (2)

In our case the function h(·) is linear (3), and the pixel position x(·) in (4) is a non-linear function of the pose ϕ and the incremental pose δ; however, it is well approximated by a linear function locally.

ϕ_{t+1} = ϕ_t + δ_t   (3)
f(Δx_t, ϕ_t, δ_t) = Δx_t − (x(ϕ_t + δ_t) − x(ϕ_t))   (4)

Let us consider the observation, the measured (noisy) pixel displacement, Δx̃_t = Δx_t + η, where η is the measurement noise and Δx_t is the true pixel displacement. We expand f(Δx_t, ϕ_t, δ_t) in a Taylor series about (Δx̃_t, ϕ̂_t, δ̂_t) as

f(Δx̃_t, ϕ̂_t, δ̂_t) + (∂f/∂Δx_t)(Δx_t − Δx̃_t) + (∂f/∂ϕ_t)(ϕ_t − ϕ̂_t) + (∂f/∂δ_t)(δ_t − δ̂_t) + O(·).   (5)

The left-hand side, f(Δx_t, ϕ_t, δ_t), is 0. The first term, f(Δx̃_t, ϕ̂_t, δ̂_t), is given by Δx̃_t − (x(ϕ̂_t + δ̂_t) − x(ϕ̂_t)). The second term simplifies as (∂f/∂Δx_t)(Δx_t − Δx̃_t) = 1·(−η) = −η. The third term of (5), (∂f/∂ϕ_t)(ϕ_t − ϕ̂_t), is negligible because f(·) is not very sensitive to the current pose ϕ_t, and we expect the term ϕ_t − ϕ̂_t to be negligible as well. We assume, without loss of generality, that δ_t is a linear function of time t, so that δ_t = δ·t, where δ is constant. We note that (6) follows from the fact that the pixel velocity ∂x(ϕ_t)/∂t at a given point is a linear function of the rate of change of pose, δ [30]:

∂f(Δx, ϕ, δ_t)/∂δ_t = −∂x(ϕ + δ_t)/∂δ_t = −F(ϕ + δ_t) δ/δ = −F(ϕ_t + δ_t).   (6)

The fourth term is therefore (∂f/∂δ_t)(δ_t − δ̂_t) = −F(ϕ̂_t + δ̂_t)(δ_t − δ̂_t). We neglect the higher-order terms in (5) and obtain the linearised observation equation (7):

Δx̃_t − (x(ϕ̂_t + δ̂_t) − x(ϕ̂_t)) = F(ϕ̂_t + δ̂_t)(δ_t − δ̂_t) + η.   (7)

We solve (7) for δ_t iteratively.
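The least-squares iteration for equation (7) can be sketched as follows. The callables `x_of` (stacked pixel positions for a given pose) and `F_of` (the Jacobian F of pixel positions with respect to the pose increment) are hypothetical stand-ins for the projection model; this is a sketch, not the paper's implementation.

```python
import numpy as np

def predict_increment(x_of, F_of, dx_meas, phi_hat, n_iter=10):
    """Iteratively solve the linearised observation equation (7):
        delta^(i+1) = delta^(i) + (F^T F)^{-1} F^T dx^(i),
    where dx^(i) = dx_meas - (x(phi_hat + delta^(i)) - x(phi_hat))."""
    delta = np.zeros_like(phi_hat)
    x0 = x_of(phi_hat)
    for _ in range(n_iter):
        F = F_of(phi_hat + delta)                       # F^(i)
        dx_i = dx_meas - (x_of(phi_hat + delta) - x0)   # residual displacement
        # lstsq computes the pseudo-inverse solution (F^T F)^{-1} F^T dx_i
        delta = delta + np.linalg.lstsq(F, dx_i, rcond=None)[0]
    return delta
```

When x(·) is linear in the pose (so F is constant), the iteration recovers the true increment in a single step; for the non-linear projection it refines the estimate over a few iterations, matching the procedure described next.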
We set δ̂_t^(0) = 0 and repeat the following steps until numerical convergence, which occurs within a few iterations; we finally set ϕ̂_{t+1} = ϕ̂_t + δ̂_t^(N).

Set F^(i) = F(ϕ̂_t + δ̂_t^(i)).
Set Δx_t^(i) = Δx̃_t − (x(ϕ̂_t + δ̂_t^(i)) − x(ϕ̂_t)).
Update pose: δ̂_t^(i+1) = δ̂_t^(i) + (F^(i)T F^(i))^(−1) F^(i)T Δx_t^(i).

2.4 Computing the Spatial Energy Function

We combine different types of spatial cues into an energy image for each body segment. This allows us to use the framework irrespective of which spatial cues are available. In our work we use silhouette information as well as the motion residue obtained during motion estimation. Figure 4 (c) is the motion residue for the segment and provides us with the region that agrees with the motion of the mask. We combine the motion residue with the silhouette as shown in Figure 5. We can form energy images even if the quality of the silhouette is not very good: although the resulting outliers may affect other silhouette-based algorithms, they do not affect our algorithm much.

Fig. 5. Obtaining the unified energy image for the forearm: (a) silhouette, (b) silhouette, (c) motion residue, (d) energy, (e) object mask, (f) 2D pose (original position, displaced, and displaced and rotated).

Once we have the pixel-wise energy image for each camera and a given body segment, we compute the energy for different values of the 2D configuration parameters, namely displacement and rotation. We have a mask for the body segment in a given image, as illustrated in Figure 5 (e). We can move this mask by a translation (dx, dy) or a rotation θ, as illustrated in Figure 5 (f). We find the energy of the mask in each position by summing the energy of all the pixels that belong to the mask. Thus we can express the energy as a function of (dx, dy, θ) in the neighbourhood of (dx, dy, θ) = (0, 0, 0). When the body segment moves in 3D space by a translation and rotation, we can project the new axis onto each image and find the corresponding 2D configuration parameters in each of the images.
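The 2D mask-energy evaluation can be sketched as follows. The energy image is assumed to be a per-pixel array in which low values indicate good support for the segment; the function names, the nearest-pixel lookup, and the brute-force integer-shift search (a stand-in for the continuous optimisation used in the paper) are illustrative.

```python
import numpy as np

def mask_energy(energy_img, mask_pts, dx, dy, theta, centre):
    """Energy of a segment mask after an in-plane translation (dx, dy) and
    rotation theta about `centre`: the sum of the per-pixel energies under
    the moved mask (nearest-pixel lookup)."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    moved = centre + (mask_pts - centre) @ R.T + np.array([dx, dy])
    ij = np.round(moved).astype(int)
    h, w = energy_img.shape
    ij[:, 0] = np.clip(ij[:, 0], 0, w - 1)  # x -> column
    ij[:, 1] = np.clip(ij[:, 1], 0, h - 1)  # y -> row
    return float(energy_img[ij[:, 1], ij[:, 0]].sum())

def best_shift(energy_img, mask_pts, centre, radius=3):
    """Exhaustive search over small integer shifts; the paper instead
    refines the full (dx, dy, theta) configuration continuously."""
    scored = ((mask_energy(energy_img, mask_pts, dx, dy, 0.0, centre), dx, dy)
              for dx in range(-radius, radius + 1)
              for dy in range(-radius, radius + 1))
    e, dx, dy = min(scored)
    return (dx, dy), e
```

Summing `mask_energy` over all cameras, with the per-camera 2D parameters induced by a candidate 3D translation and rotation, gives the energy of that 3D pose.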
We can then find the energy of a 3D pose by summing the energies of the mask in the corresponding 2D configurations in each image. We minimise this energy function in the local neighbourhood using a Levenberg-Marquardt optimisation technique initialised at the current 3D position. We show the new position of the axis of the body segment after optimisation in Figure 6: the red line represents the initial position of the axis of the body segment and the cyan line represents the new position. We thus correct the pose using spatial cues.

Fig. 6. Minimum energy configuration (energies shown for images 1, 2, 3, 4, 6 and 8).

3 Experimental Results and Conclusions

In our experiments, we use grey-scale images from eight cameras. Calibration is performed using Tomas Svoboda's algorithm [31] and a simple calibration device to compute the scale. We use images that have been undistorted based on the radial calibration parameters of the cameras, and a perspective projection model for the cameras. Experiments were conducted on different kinds of sequences, and we present the results of two such experiments. The subject performs motions that exercise several joint angles of the body. Our results show that using only motion cues for tracking causes the pose estimator to lose track eventually: since we estimate only the difference in the pose, the error accumulates. This underlines the need for correcting the pose estimated using motion cues, and we show that the correction step of the algorithm prevents drift in the tracking. In Figure 7, we present results in which we have superimposed the model, in the estimated pose, over the images obtained from two cameras. The length of the first sequence is 10 seconds (300 frames), during which there is considerable movement and bending of the arms, and occlusions at various times in different cameras. The second sequence is that of the subject walking; the body parts are successfully tracked in both cases.

Fig. 7. Tracking results using both motion and spatial cues

We note that the method is fairly accurate and robust despite the fact that the human body model used is not very accurate, given that it was obtained manually using visual feedback. Specifically, the method is sensitive to joint location, and it is important to estimate the joint locations accurately during the model acquisition stage. We also note that the accuracy of the method scales with the accuracy of the human body model, and that while we use super-quadrics to represent body segments, we could easily use triangular meshes instead, provided they can be obtained. We need to consider more flexible models that allow the location of certain joints, such as the shoulder joints, to vary with respect to the trunk, to better model the human body.

References

1. Gavrila, D.M.: The visual analysis of human movement: A survey. Computer Vision and Image Understanding 73 (1999)
2. Aggarwal, J., Cai, Q.: Human motion analysis: A review. Computer Vision and Image Understanding 73 (1999)
3. Moeslund, T., Granum, E.: A survey of computer vision-based human motion capture. CVIU (2001)
4. Yamamoto, M., Koshikawa, K.: Human motion analysis based on a robot arm model. In: CVPR (1991)
5. Yamamoto, M., Sato, A., Kawada, S., Kondo, T., Osaki, Y.: Incremental tracking of human actions from multiple views. In: CVPR (1998)
6. Gavrila, D., Davis, L.: 3-D model-based tracking of humans in action: A multi-view approach. In: CVPR (1996)
7. Bregler, C., Malik, J.: Tracking people with twists and exponential maps. In: CVPR (1998)
8. Rehg, J.M., Morris, D.: Singularity analysis for articulated object tracking. In: CVPR (1998)
9. Rehg, J., Morris, D.D., Kanade, T.: Ambiguities in visual tracking of articulated objects using two- and three-dimensional models. International Journal of Robotics Research 22 (2003)
10. Cham, T.J., Rehg, J.M.: A multiple hypothesis approach to figure tracking. In: CVPR, Volume 2 (1999)
11. Sidenbladh, H., Black, M.J., Fleet, D.J.: Stochastic tracking of 3D human figures using 2D image motion. In: ECCV (2000)
12. Kakadiaris, I., Metaxas, D.: Model-based estimation of 3D human motion. IEEE PAMI 22 (2000)
13. Plänkers, R., Fua, P.: Articulated soft objects for video-based body modeling. In: ICCV (2001)
14. Theobalt, C., Carranza, J., Magnor, M.A., Seidel, H.P.: Combining 3D flow fields with silhouette-based human motion capture for immersive video. Graphical Models 66 (2004)
15. Delamarre, Q., Faugeras, O.: 3D articulated models and multi-view tracking with silhouettes. In: ICCV (1999)
16. Cheung, K.M., Baker, S., Kanade, T.: Shape-from-silhouette of articulated objects and its use for human body kinematics estimation and motion capture. In: CVPR (2003) 77-84
17. Chu, C.W., Jenkins, O.C., Mataric, M.J.: Markerless kinematic model and motion capture from volume sequences. In: CVPR (2) (2003)
18. Wachter, S., Nagel, H.H.: Tracking persons in monocular image sequences. Computer Vision and Image Understanding 74 (1999)
19. Moeslund, T., Granum, E.: Multiple cues used in model-based human motion capture. In: International Conference on Face and Gesture Recognition (2000)
20. Sigal, L., Isard, M., Sigelman, B.H., Black, M.J.: Attractive people: Assembling loose-limbed models using non-parametric belief propagation. In: NIPS (2003)
21. Sigal, L., Bhatia, S., Roth, S., Black, M.J., Isard, M.: Tracking loose-limbed people. In: CVPR (2004)
22. Lan, X., Huttenlocher, D.P.: A unified spatio-temporal articulated model for tracking. In: CVPR (1) (2004)
23. Demirdjian, D., Ko, T., Darrell, T.: Constraining human body tracking. In: ICCV (2003)
24. Rohr, K.: Human Movement Analysis Based on Explicit Motion Models. Kluwer Academic (1997)
25. Krahnstoever, N., Sharma, R.: Articulated models from video. In: CVPR (2004)
26. Mikic, I., Trivedi, M., Hunter, E., Cosman, P.: Human body model acquisition and tracking using voxel data. International Journal of Computer Vision 53 (2003)
27. Ramanan, D., Forsyth, D.A.: Finding and tracking people from the bottom up. In: CVPR (2) (2003)
28. Sminchisescu, C., Triggs, B.: Covariance scaled sampling for monocular 3D body tracking. In: CVPR, Kauai, Hawaii, USA, Volume 1 (2001)
29. Sminchisescu, C., Triggs, B.: Kinematic jump processes for monocular 3D human tracking. In: CVPR (2003)
30. Sundaresan, A., RoyChowdhury, A., Chellappa, R.: Multiple view tracking of human motion modelled by kinematic chains. In: International Conference on Image Processing, Singapore (2004)
31. Svoboda, T., Martinec, D., Pajdla, T.: A convenient multi-camera self-calibration for virtual environments. PRESENCE: Teleoperators and Virtual Environments 14 (2005)


More information

Human Body Pose Estimation Using Silhouette Shape Analysis

Human Body Pose Estimation Using Silhouette Shape Analysis Human Body Pose Estimation Using Silhouette Shape Analysis Anurag Mittal Siemens Corporate Research Inc. Princeton, NJ 08540 Liang Zhao, Larry S. Davis Department of Computer Science University of Maryland

More information

Human 3D Motion Computation from a Varying Number of Cameras

Human 3D Motion Computation from a Varying Number of Cameras Human 3D Motion Computation from a Varying Number of Cameras Magnus Burenius, Josephine Sullivan, Stefan Carlsson, and Kjartan Halvorsen KTH CSC/CVAP, S-100 44 Stockholm, Sweden http://www.csc.kth.se/cvap

More information

A New Hierarchical Particle Filtering for Markerless Human Motion Capture

A New Hierarchical Particle Filtering for Markerless Human Motion Capture A New Hierarchical Particle Filtering for Markerless Human Motion Capture Yuanqiang Dong, Guilherme N. DeSouza Electrical and Computer Engineering Department University of Missouri, Columbia, MO, USA Abstract

More information

Articulated Model Based People Tracking Using Motion Models

Articulated Model Based People Tracking Using Motion Models Articulated Model Based People Tracking Using Motion Models Huazhong Ning, Liang Wang, Weiming Hu and Tieniu Tan National Laboratory of Pattern Recognition Institute of Automation, Chinese Academy of Sciences,

More information

Accurate 3D Face and Body Modeling from a Single Fixed Kinect

Accurate 3D Face and Body Modeling from a Single Fixed Kinect Accurate 3D Face and Body Modeling from a Single Fixed Kinect Ruizhe Wang*, Matthias Hernandez*, Jongmoo Choi, Gérard Medioni Computer Vision Lab, IRIS University of Southern California Abstract In this

More information

Bayesian Reconstruction of 3D Human Motion from Single-Camera Video

Bayesian Reconstruction of 3D Human Motion from Single-Camera Video Bayesian Reconstruction of 3D Human Motion from Single-Camera Video Nicholas R. Howe Cornell University Michael E. Leventon MIT William T. Freeman Mitsubishi Electric Research Labs Problem Background 2D

More information

Nonrigid Surface Modelling. and Fast Recovery. Department of Computer Science and Engineering. Committee: Prof. Leo J. Jia and Prof. K. H.

Nonrigid Surface Modelling. and Fast Recovery. Department of Computer Science and Engineering. Committee: Prof. Leo J. Jia and Prof. K. H. Nonrigid Surface Modelling and Fast Recovery Zhu Jianke Supervisor: Prof. Michael R. Lyu Committee: Prof. Leo J. Jia and Prof. K. H. Wong Department of Computer Science and Engineering May 11, 2007 1 2

More information

Dynamic Time Warping for Binocular Hand Tracking and Reconstruction

Dynamic Time Warping for Binocular Hand Tracking and Reconstruction Dynamic Time Warping for Binocular Hand Tracking and Reconstruction Javier Romero, Danica Kragic Ville Kyrki Antonis Argyros CAS-CVAP-CSC Dept. of Information Technology Institute of Computer Science KTH,

More information

Human model and pose Reconstruction from Multi-views

Human model and pose Reconstruction from Multi-views International Conference on Machine Intelligence, Tozeur Tunisia, November -7, 200 788 Human model and pose Reconstruction from Multi-views Michoud Brice LIRIS-CNRS UMR 20 Université de Lyon Email: b-mich02@liris.cnrs.fr

More information

3D Human Body Tracking using Deterministic Temporal Motion Models

3D Human Body Tracking using Deterministic Temporal Motion Models 3D Human Body Tracking using Deterministic Temporal Motion Models Raquel Urtasun and Pascal Fua Computer Vision Laboratory EPFL CH-1015 Lausanne, Switzerland raquel.urtasun, pascal.fua@epfl.ch Abstract.

More information

CS448A: Experiments in Motion Capture. CS448A: Experiments in Motion Capture

CS448A: Experiments in Motion Capture. CS448A: Experiments in Motion Capture CS448A: Experiments in Motion Capture Christoph Bregler, bregler@stanford.edu Gene Alexander, gene.alexander@stanford.edu CS448A: Experiments in Motion Capture Project Course Lectures on all key-algorithms

More information

Hand Pose Estimation Using Expectation-Constrained-Maximization From Voxel Data

Hand Pose Estimation Using Expectation-Constrained-Maximization From Voxel Data Hand Pose Estimation Technical Report, CVRR Laboratory, November 2004. 1 Hand Pose Estimation Using Expectation-Constrained-Maximization From Voxel Data Shinko Y. Cheng and Mohan M. Trivedi {sycheng, mtrivedi}@ucsd.edu

More information

Multimodal Motion Capture Dataset TNT15

Multimodal Motion Capture Dataset TNT15 Multimodal Motion Capture Dataset TNT15 Timo v. Marcard, Gerard Pons-Moll, Bodo Rosenhahn January 2016 v1.2 1 Contents 1 Introduction 3 2 Technical Recording Setup 3 2.1 Video Data............................

More information

Embodied Proactive Human Interface PICO-2

Embodied Proactive Human Interface PICO-2 Embodied Proactive Human Interface PICO-2 Ryo Kurazume, Hiroaki Omasa, Seiichi Uchida, Rinichiro Taniguchi, and Tsutomu Hasegawa Kyushu University 6-10-1, Hakozaki, Higashi-ku, Fukuoka, Japan kurazume@is.kyushu-u.ac.jp

More information

Accurately measuring human movement using articulated ICP with soft-joint constraints and a repository of articulated models

Accurately measuring human movement using articulated ICP with soft-joint constraints and a repository of articulated models Accurately measuring human movement using articulated ICP with soft-joint constraints and a repository of articulated models Lars Mündermann Stefano Corazza Thomas P. Andriacchi Department of Mechanical

More information

Model-based Motion Capture for Crash Test Video Analysis

Model-based Motion Capture for Crash Test Video Analysis Model-based Motion Capture for Crash Test Video Analysis Juergen Gall 1, Bodo Rosenhahn 1, Stefan Gehrig 2, and Hans-Peter Seidel 1 1 Max-Planck-Institute for Computer Science, Campus E1 4, 66123 Saarbrücken,

More information

radius length global trans global rot

radius length global trans global rot Stochastic Tracking of 3D Human Figures Using 2D Image Motion Hedvig Sidenbladh 1 Michael J. Black 2 David J. Fleet 2 1 Royal Institute of Technology (KTH), CVAP/NADA, S 1 44 Stockholm, Sweden hedvig@nada.kth.se

More information

Gradient-Enhanced Particle Filter for Vision-Based Motion Capture

Gradient-Enhanced Particle Filter for Vision-Based Motion Capture Gradient-Enhanced Particle Filter for Vision-Based Motion Capture Daniel Grest and Volker Krüger Aalborg University Copenhagen, Denmark Computer Vision and Machine Intelligence Lab {dag,vok}@cvmi.aau.dk

More information

Accelerating Pattern Matching or HowMuchCanYouSlide?

Accelerating Pattern Matching or HowMuchCanYouSlide? Accelerating Pattern Matching or HowMuchCanYouSlide? Ofir Pele and Michael Werman School of Computer Science and Engineering The Hebrew University of Jerusalem {ofirpele,werman}@cs.huji.ac.il Abstract.

More information

Depth. Common Classification Tasks. Example: AlexNet. Another Example: Inception. Another Example: Inception. Depth

Depth. Common Classification Tasks. Example: AlexNet. Another Example: Inception. Another Example: Inception. Depth Common Classification Tasks Recognition of individual objects/faces Analyze object-specific features (e.g., key points) Train with images from different viewing angles Recognition of object classes Analyze

More information

Efficient Extraction of Human Motion Volumes by Tracking

Efficient Extraction of Human Motion Volumes by Tracking Efficient Extraction of Human Motion Volumes by Tracking Juan Carlos Niebles Princeton University, USA Universidad del Norte, Colombia jniebles@princeton.edu Bohyung Han Electrical and Computer Engineering

More information

Singularity Analysis for Articulated Object Tracking

Singularity Analysis for Articulated Object Tracking To appear in: CVPR 98, Santa Barbara, CA, June 1998 Singularity Analysis for Articulated Object Tracking Daniel D. Morris Robotics Institute Carnegie Mellon University Pittsburgh, PA 15213 James M. Rehg

More information

Multi-view Appearance-based 3D Hand Pose Estimation

Multi-view Appearance-based 3D Hand Pose Estimation Multi-view Appearance-based 3D Hand Pose Estimation Haiying Guan, Jae Sik Chang, Longbin Chen, Rogerio S. Feris, and Matthew Turk Computer Science Department University of California at Santa Barbara,

More information

Robust Real-Time Upper Body Limb Detection and Tracking

Robust Real-Time Upper Body Limb Detection and Tracking Robust Real-Time Upper Body Limb Detection and Tracking Matheen Siddiqui, and Gérard Medioni Univ. of Southern Calif 1010 Watt Way Los Angeles, CA {mmsiddiq,medioni}@usc.edu ABSTRACT We describe an efficient

More information

Gesture Recognition using Temporal Templates with disparity information

Gesture Recognition using Temporal Templates with disparity information 8- MVA7 IAPR Conference on Machine Vision Applications, May 6-8, 7, Tokyo, JAPAN Gesture Recognition using Temporal Templates with disparity information Kazunori Onoguchi and Masaaki Sato Hirosaki University

More information

A Factorization Method for Structure from Planar Motion

A Factorization Method for Structure from Planar Motion A Factorization Method for Structure from Planar Motion Jian Li and Rama Chellappa Center for Automation Research (CfAR) and Department of Electrical and Computer Engineering University of Maryland, College

More information

Using temporal seeding to constrain the disparity search range in stereo matching

Using temporal seeding to constrain the disparity search range in stereo matching Using temporal seeding to constrain the disparity search range in stereo matching Thulani Ndhlovu Mobile Intelligent Autonomous Systems CSIR South Africa Email: tndhlovu@csir.co.za Fred Nicolls Department

More information

Colour Segmentation-based Computation of Dense Optical Flow with Application to Video Object Segmentation

Colour Segmentation-based Computation of Dense Optical Flow with Application to Video Object Segmentation ÖGAI Journal 24/1 11 Colour Segmentation-based Computation of Dense Optical Flow with Application to Video Object Segmentation Michael Bleyer, Margrit Gelautz, Christoph Rhemann Vienna University of Technology

More information

Face Tracking. Synonyms. Definition. Main Body Text. Amit K. Roy-Chowdhury and Yilei Xu. Facial Motion Estimation

Face Tracking. Synonyms. Definition. Main Body Text. Amit K. Roy-Chowdhury and Yilei Xu. Facial Motion Estimation Face Tracking Amit K. Roy-Chowdhury and Yilei Xu Department of Electrical Engineering, University of California, Riverside, CA 92521, USA {amitrc,yxu}@ee.ucr.edu Synonyms Facial Motion Estimation Definition

More information

Inferring 3D from 2D

Inferring 3D from 2D Inferring 3D from 2D History Monocular vs. multi-view analysis Difficulties structure of the solution and ambiguities static and dynamic ambiguities Modeling frameworks for inference and learning top-down

More information

3D Skeleton-Based Body Pose Recovery

3D Skeleton-Based Body Pose Recovery 3D Skeleton-Based Body Pose Recovery Clément Ménier, Edmond Boyer, Bruno Raffin To cite this version: Clément Ménier, Edmond Boyer, Bruno Raffin. 3D Skeleton-Based Body Pose Recovery. Marc Pollefeys and

More information

Tracking Articulated Bodies using Generalized Expectation Maximization

Tracking Articulated Bodies using Generalized Expectation Maximization Tracking Articulated Bodies using Generalized Expectation Maximization A. Fossati CVLab EPFL, Switzerland andrea.fossati@epfl.ch E. Arnaud Université Joseph Fourier INRIA Rhone-Alpes, France elise.arnaud@inrialpes.fr

More information

Perception and Action using Multilinear Forms

Perception and Action using Multilinear Forms Perception and Action using Multilinear Forms Anders Heyden, Gunnar Sparr, Kalle Åström Dept of Mathematics, Lund University Box 118, S-221 00 Lund, Sweden email: {heyden,gunnar,kalle}@maths.lth.se Abstract

More information

3D Modeling of Humans with Skeletons from Uncalibrated Wide Baseline Views

3D Modeling of Humans with Skeletons from Uncalibrated Wide Baseline Views 3D Modeling of Humans with Skeletons from Uncalibrated Wide Baseline Views Chee Kwang Quah, Andre Gagalowicz 2, Richard Roussel 2, and Hock Soon Seah 3 Nanyang Technological University, School of Computer

More information

A Generalisation of the ICP Algorithm for Articulated Bodies

A Generalisation of the ICP Algorithm for Articulated Bodies A Generalisation of the ICP Algorithm for Articulated Bodies Stefano Pellegrini a,b, Konrad Schindler b, Daniele Nardi a a Dip. di Informatica e Sistemistica, Sapienza University Rome b Computer Vision

More information

Leow Wee Kheng CS4243 Computer Vision and Pattern Recognition. Motion Tracking. CS4243 Motion Tracking 1

Leow Wee Kheng CS4243 Computer Vision and Pattern Recognition. Motion Tracking. CS4243 Motion Tracking 1 Leow Wee Kheng CS4243 Computer Vision and Pattern Recognition Motion Tracking CS4243 Motion Tracking 1 Changes are everywhere! CS4243 Motion Tracking 2 Illumination change CS4243 Motion Tracking 3 Shape

More information

A Bottom Up Algebraic Approach to Motion Segmentation

A Bottom Up Algebraic Approach to Motion Segmentation A Bottom Up Algebraic Approach to Motion Segmentation Dheeraj Singaraju and RenéVidal Center for Imaging Science, Johns Hopkins University, 301 Clark Hall, 3400 N. Charles St., Baltimore, MD, 21218, USA

More information

Model-Based Silhouette Extraction for Accurate People Tracking

Model-Based Silhouette Extraction for Accurate People Tracking Model-Based Silhouette Extraction for Accurate People Tracking Ralf Plaenkers and Pascal Fua Computer Graphics Lab (LIG) Computer Graphics Lab, EPFL, CH-1015 Lausanne, Switzerland In European Conference

More information

Synchronized Ego-Motion Recovery of Two Face-to-Face Cameras

Synchronized Ego-Motion Recovery of Two Face-to-Face Cameras Synchronized Ego-Motion Recovery of Two Face-to-Face Cameras Jinshi Cui, Yasushi Yagi, Hongbin Zha, Yasuhiro Mukaigawa, and Kazuaki Kondo State Key Lab on Machine Perception, Peking University, China {cjs,zha}@cis.pku.edu.cn

More information

EigenTracking: Robust Matching and Tracking of Articulated Objects Using a View-Based Representation

EigenTracking: Robust Matching and Tracking of Articulated Objects Using a View-Based Representation EigenTracking: Robust Matching and Tracking of Articulated Objects Using a View-Based Representation Michael J. Black and Allan D. Jepson Xerox Palo Alto Research Center, 3333 Coyote Hill Road, Palo Alto,

More information

Single View Motion Tracking by Depth and Silhouette Information

Single View Motion Tracking by Depth and Silhouette Information Single View Motion Tracking by Depth and Silhouette Information Daniel Grest 1,VolkerKrüger 1, and Reinhard Koch 2 1 Aalborg University Copenhagen, Denmark Aalborg Media Lab 2 Christian-Albrechts-University

More information

ELEC Dr Reji Mathew Electrical Engineering UNSW

ELEC Dr Reji Mathew Electrical Engineering UNSW ELEC 4622 Dr Reji Mathew Electrical Engineering UNSW Review of Motion Modelling and Estimation Introduction to Motion Modelling & Estimation Forward Motion Backward Motion Block Motion Estimation Motion

More information

A novel approach to motion tracking with wearable sensors based on Probabilistic Graphical Models

A novel approach to motion tracking with wearable sensors based on Probabilistic Graphical Models A novel approach to motion tracking with wearable sensors based on Probabilistic Graphical Models Emanuele Ruffaldi Lorenzo Peppoloni Alessandro Filippeschi Carlo Alberto Avizzano 2014 IEEE International

More information

Using Optical Flow for Step Size Initialisation in Hand Tracking by Stochastic Optimisation

Using Optical Flow for Step Size Initialisation in Hand Tracking by Stochastic Optimisation Using Optical Flow for Step Size Initialisation in Hand Tracking by Stochastic Optimisation Desmond Chik 1,2 1 National ICT Australia, Canberra Research Laboratory 2 Research School of Information Sciences

More information

Full Body Pose Estimation During Occlusion using Multiple Cameras Fihl, Preben; Cosar, Serhan

Full Body Pose Estimation During Occlusion using Multiple Cameras Fihl, Preben; Cosar, Serhan Aalborg Universitet Full Body Pose Estimation During Occlusion using Multiple Cameras Fihl, Preben; Cosar, Serhan Publication date: 2010 Document Version Publisher's PDF, also known as Version of record

More information

CS664 Lecture #18: Motion

CS664 Lecture #18: Motion CS664 Lecture #18: Motion Announcements Most paper choices were fine Please be sure to email me for approval, if you haven t already This is intended to help you, especially with the final project Use

More information

Human Motion Tracking by Combining View-Based and Model-Based Methods for Monocular Video Sequences

Human Motion Tracking by Combining View-Based and Model-Based Methods for Monocular Video Sequences Human Motion Tracking by Combining View-Based and Model-Based Methods for Monocular Video Sequences Jihun Park, Sangho Park, and J.K. Aggarwal 1 Department of Computer Engineering Hongik University Seoul,

More information

Deformable Mesh Model for Complex Multi-Object 3D Motion Estimation from Multi-Viewpoint Video

Deformable Mesh Model for Complex Multi-Object 3D Motion Estimation from Multi-Viewpoint Video Deformable Mesh Model for Complex Multi-Object 3D Motion Estimation from Multi-Viewpoint Video Shohei NOBUHARA Takashi MATSUYAMA Graduate School of Informatics, Kyoto University Sakyo, Kyoto, 606-8501,

More information

A System for Marker-Less Human Motion Estimation

A System for Marker-Less Human Motion Estimation A System for Marker-Less Human Motion Estimation B. Rosenhahn 1,4,U.G.Kersting 2,A.W.Smith 2,J.K.Gurney 2,T.Brox 3, and R. Klette 1 1 Computer Science Department, 2 Department of Sport and Exercise Science,

More information

Cardboard People: A Parameterized Model of Articulated Image Motion

Cardboard People: A Parameterized Model of Articulated Image Motion Cardboard People: A Parameterized Model of Articulated Image Motion Shanon X. Ju Michael J. Black y Yaser Yacoob z Department of Computer Science, University of Toronto, Toronto, Ontario M5S A4 Canada

More information

This is a preprint of an article published in Computer Animation and Virtual Worlds, 15(3-4): , 2004.

This is a preprint of an article published in Computer Animation and Virtual Worlds, 15(3-4): , 2004. This is a preprint of an article published in Computer Animation and Virtual Worlds, 15(3-4):399-406, 2004. This journal may be found at: http://www.interscience.wiley.com Automated Markerless Extraction

More information

EE795: Computer Vision and Intelligent Systems

EE795: Computer Vision and Intelligent Systems EE795: Computer Vision and Intelligent Systems Spring 2012 TTh 17:30-18:45 FDH 204 Lecture 14 130307 http://www.ee.unlv.edu/~b1morris/ecg795/ 2 Outline Review Stereo Dense Motion Estimation Translational

More information

Occlusion Robust Multi-Camera Face Tracking

Occlusion Robust Multi-Camera Face Tracking Occlusion Robust Multi-Camera Face Tracking Josh Harguess, Changbo Hu, J. K. Aggarwal Computer & Vision Research Center / Department of ECE The University of Texas at Austin harguess@utexas.edu, changbo.hu@gmail.com,

More information

C18 Computer vision. C18 Computer Vision. This time... Introduction. Outline.

C18 Computer vision. C18 Computer Vision. This time... Introduction. Outline. C18 Computer Vision. This time... 1. Introduction; imaging geometry; camera calibration. 2. Salient feature detection edges, line and corners. 3. Recovering 3D from two images I: epipolar geometry. C18

More information

C280, Computer Vision

C280, Computer Vision C280, Computer Vision Prof. Trevor Darrell trevor@eecs.berkeley.edu Lecture 11: Structure from Motion Roadmap Previous: Image formation, filtering, local features, (Texture) Tues: Feature-based Alignment

More information

Notes 9: Optical Flow

Notes 9: Optical Flow Course 049064: Variational Methods in Image Processing Notes 9: Optical Flow Guy Gilboa 1 Basic Model 1.1 Background Optical flow is a fundamental problem in computer vision. The general goal is to find

More information

Animation. CS 465 Lecture 22

Animation. CS 465 Lecture 22 Animation CS 465 Lecture 22 Animation Industry production process leading up to animation What animation is How animation works (very generally) Artistic process of animation Further topics in how it works

More information

Project Updates Short lecture Volumetric Modeling +2 papers

Project Updates Short lecture Volumetric Modeling +2 papers Volumetric Modeling Schedule (tentative) Feb 20 Feb 27 Mar 5 Introduction Lecture: Geometry, Camera Model, Calibration Lecture: Features, Tracking/Matching Mar 12 Mar 19 Mar 26 Apr 2 Apr 9 Apr 16 Apr 23

More information

Particle Filtering. CS6240 Multimedia Analysis. Leow Wee Kheng. Department of Computer Science School of Computing National University of Singapore

Particle Filtering. CS6240 Multimedia Analysis. Leow Wee Kheng. Department of Computer Science School of Computing National University of Singapore Particle Filtering CS6240 Multimedia Analysis Leow Wee Kheng Department of Computer Science School of Computing National University of Singapore (CS6240) Particle Filtering 1 / 28 Introduction Introduction

More information

On the Sustained Tracking of Human Motion

On the Sustained Tracking of Human Motion On the Sustained Tracking of Human Motion Yaser Ajmal Sheikh Robotics Institute Carnegie Mellon University Pittsburgh, PA 15213 yaser@cs.cmu.edu Ankur Datta Robotics Institute Carnegie Mellon University

More information

Robust Real-time Stereo-based Markerless Human Motion Capture

Robust Real-time Stereo-based Markerless Human Motion Capture Robust Real-time Stereo-based Markerless Human Motion Capture Pedram Azad, Tamim Asfour, Rüdiger Dillmann University of Karlsruhe, Germany azad@ira.uka.de, asfour@ira.uka.de, dillmann@ira.uka.de Abstract

More information

Space-time Body Pose Estimation in Uncontrolled Environments

Space-time Body Pose Estimation in Uncontrolled Environments Space-time Body Pose Estimation in Uncontrolled Environments Marcel Germann ETH Zurich germann@inf.ethz.ch Tiberiu Popa ETH Zurich tpopa@inf.ethz.ch Remo Ziegler LiberoVision AG ziegler@liberovision.com

More information

Inferring 3D Body Pose from Silhouettes using Activity Manifold Learning

Inferring 3D Body Pose from Silhouettes using Activity Manifold Learning CVPR 4 Inferring 3D Body Pose from Silhouettes using Activity Manifold Learning Ahmed Elgammal and Chan-Su Lee Department of Computer Science, Rutgers University, New Brunswick, NJ, USA {elgammal,chansu}@cs.rutgers.edu

More information

Optical flow and tracking

Optical flow and tracking EECS 442 Computer vision Optical flow and tracking Intro Optical flow and feature tracking Lucas-Kanade algorithm Motion segmentation Segments of this lectures are courtesy of Profs S. Lazebnik S. Seitz,

More information

Factorization with Missing and Noisy Data

Factorization with Missing and Noisy Data Factorization with Missing and Noisy Data Carme Julià, Angel Sappa, Felipe Lumbreras, Joan Serrat, and Antonio López Computer Vision Center and Computer Science Department, Universitat Autònoma de Barcelona,

More information

Viewpoint invariant exemplar-based 3D human tracking

Viewpoint invariant exemplar-based 3D human tracking Computer Vision and Image Understanding 104 (2006) 178 189 www.elsevier.com/locate/cviu Viewpoint invariant exemplar-based 3D human tracking Eng-Jon Ong *, Antonio S. Micilotta, Richard Bowden, Adrian

More information

TEXTURE OVERLAY ONTO NON-RIGID SURFACE USING COMMODITY DEPTH CAMERA

TEXTURE OVERLAY ONTO NON-RIGID SURFACE USING COMMODITY DEPTH CAMERA TEXTURE OVERLAY ONTO NON-RIGID SURFACE USING COMMODITY DEPTH CAMERA Tomoki Hayashi 1, Francois de Sorbier 1 and Hideo Saito 1 1 Graduate School of Science and Technology, Keio University, 3-14-1 Hiyoshi,

More information

Learning Efficient Linear Predictors for Motion Estimation

Learning Efficient Linear Predictors for Motion Estimation Learning Efficient Linear Predictors for Motion Estimation Jiří Matas 1,2, Karel Zimmermann 1, Tomáš Svoboda 1, Adrian Hilton 2 1 : Center for Machine Perception 2 :Centre for Vision, Speech and Signal

More information

An Adaptive Eigenshape Model

An Adaptive Eigenshape Model An Adaptive Eigenshape Model Adam Baumberg and David Hogg School of Computer Studies University of Leeds, Leeds LS2 9JT, U.K. amb@scs.leeds.ac.uk Abstract There has been a great deal of recent interest

More information

Visuelle Perzeption für Mensch- Maschine Schnittstellen

Visuelle Perzeption für Mensch- Maschine Schnittstellen Visuelle Perzeption für Mensch- Maschine Schnittstellen Vorlesung, WS 2009 Prof. Dr. Rainer Stiefelhagen Dr. Edgar Seemann Institut für Anthropomatik Universität Karlsruhe (TH) http://cvhci.ira.uka.de

More information

Stereo and Epipolar geometry

Stereo and Epipolar geometry Previously Image Primitives (feature points, lines, contours) Today: Stereo and Epipolar geometry How to match primitives between two (multiple) views) Goals: 3D reconstruction, recognition Jana Kosecka

More information

Robust Human Body Shape and Pose Tracking

Robust Human Body Shape and Pose Tracking Robust Human Body Shape and Pose Tracking Chun-Hao Huang 1 Edmond Boyer 2 Slobodan Ilic 1 1 Technische Universität München 2 INRIA Grenoble Rhône-Alpes Marker-based motion capture (mocap.) Adventages:

More information

Comparison Between The Optical Flow Computational Techniques

Comparison Between The Optical Flow Computational Techniques Comparison Between The Optical Flow Computational Techniques Sri Devi Thota #1, Kanaka Sunanda Vemulapalli* 2, Kartheek Chintalapati* 3, Phanindra Sai Srinivas Gudipudi* 4 # Associate Professor, Dept.

More information

Prof. Fanny Ficuciello Robotics for Bioengineering Visual Servoing

Prof. Fanny Ficuciello Robotics for Bioengineering Visual Servoing Visual servoing vision allows a robotic system to obtain geometrical and qualitative information on the surrounding environment high level control motion planning (look-and-move visual grasping) low level

More information