Kernel Subspace Mapping: Robust Human Pose and Viewpoint Inference from High Dimensional Training Sets


Kernel Subspace Mapping: Robust Human Pose and Viewpoint Inference from High Dimensional Training Sets

Thesis submitted by Therdsak Tangkuampien, BscEng(Hons), for fulfillment of the requirements for the degree of Doctor of Philosophy

Supervisor: Professor David Suter
Associate Supervisor: Professor Ray Jarvis

Department of Electrical and Computer Systems Engineering
Monash University
January, 2007

© Copyright by Therdsak Tangkuampien 2007

Addendum

page 2 para 2: Comment: KSM allows generalization from a single generic model to a previously unseen person, and it is a matter for future work to determine how this approach can generalize to people of different somatotype and those wearing different clothing.

page 4 para 1: Comment: It is important to note that KPCA can perform both de-noising and data reduction. In the context of human motion capture via KSM, KPCA is used for de-noising because the processed data is not constrained to be a subset of the training set.

page 22 para 2: Comment: KSM can be applied to any number of cameras; the emphasis of the work is on the development of a new learning technique that generalizes from only a small training set. This is demonstrated in two-camera markerless tracking, which appears sufficient to disambiguate the coupled problem of pose and yaw.

page 30 para 3: Comment: The Gaussian white noise model is used for parameter tuning (in KSM) because KPCA de-noising has been shown to be effective and efficient in the minimization of Gaussian noise [91]. Provided that the noise level is not substantial, experiments (conducted by the author) show that the optimal parameter in equation 3.6 remains relatively stable under changes in noise level. In practice (during capture), the robustness of KSM is tested by analyzing its accuracy in pose inference from previously seen and unseen silhouettes corrupted with synthetic noise (section 4.4).

page 36 para 2: Comment: In addition to the Gaussian kernel (equation 3.3.2), other kernels can be used in KSM. It is a matter of future research to determine whether some kernels perform better and more efficiently than others. The important factors that will need to be considered are the accuracy and efficiency of the pre-image approximation of KPCA [87] based on the specific kernel.

page 37 para 2: Comment: The parameters K+ and λ are tuned over a predefined discretized range. These parameters are tuned so as to minimize the error function in equation

page 43 para 2: Comment: Specifically for the experiments in section 3.5, mean square error (Mse) is defined as the Euclidian distance between two RJC vectors as defined in section

page 58, add to the end of para 1: There are many other tunable parameters for KSM. For example, two orthogonal camera views are used (in figure 4.4) because the author believes that this provides the greatest information from two synchronized views. Another tunable parameter is the level of the pyramids used in figure 4.5.

In our analysis, we found that a pyramid level of 5 is the most robust for our experimental setup. There are many other factors (such as changes in illumination level and segmentation quality) which can affect the accuracy of KSM. It is a subject of further research to evaluate the performance of KSM under different parametric conditions.

page 61 para 2: Comment: In the selection of the optimal number of neighbors for LLE mapping, it is important to avoid over-fitting. If the mapping number is too small, KSM may end up locked onto the wrong poses. An interesting area for further investigation is how to dynamically identify the most robust number of neighbors to use for LLE mapping (irrespective of motion type).

page 86 para 2: Comment: It is important to note that for IMED embedded KSM, the use of LLE mapping will still be affected by the same ambiguity problem as in figure 4.8. However, the probability of this occurring is substantially reduced due to the use of photometric information in section

page 93, add to the end of para 1: Another interesting area of research for IMED embedded KSM is to investigate its robustness in an uncontrolled environment (e.g. varying lighting/illumination conditions). In this case, it may be possible to increase the technique's robustness by training KPCA using relative changes in intensity levels between neighboring pixels (as opposed to absolute pixel intensities).

page 96 para 1: Comment: The reader should note that the de-noising result for the non-cyclical dancing motion is not included in the analysis in section

page 99, add to the end of para 1: The optimization of Greedy Kernel PCA (GKPCA) is considered to be beyond the scope of this work. The emphasis of the work is on the performance improvement of KSM (in human motion capture) via the application of GKPCA in training set reduction. Readers interested in the topic of GKPCA optimization and upper bound minimization should refer to [41], section 5.4 (page 91).

page 104 figure 6.6: The results in figure 6.6 are surprising. For further research, it would be interesting to investigate whether a portion of the reduction in feature space noise can be attributed to data reduction (instead of fully to KPCA or GKPCA de-noising).

page 113, add to the end of para 1: It may also be possible to generalize KSM to colour images by transferring colour properties from the target to be tracked to a generic model and regenerating the database (re-training) for tracking.

page 133, additional references:
E.-J. Ong et al., Viewpoint invariant exemplar-based 3D human tracking, Computer Vision and Image Understanding, 104(2), 2006.
E.-J. Ong and S. Gong, The Dynamics of Linear Combinations: Tracking 3D Skeletons of Human Subjects, Image and Vision Computing, 20, 2002.
S. Romdhani et al., A Multi-View Nonlinear Active Shape Model using Kernel PCA, BMVC 1999.

Errata

section 1.1 (p1): "limps of the human body" for "limbs"
section 1.1 (p3): "KSM can refer" for "infer"
section 1.2 (p4): "motion capture techniques calls" for "called"
section 1.2 (p4): "in lower evaluation cost" for "in a lower"
section 2.2 (p13): "particles" for "particle"
section (p16): "so that Euclidian norm" for "that the Euclidian"
section (p16): "boosted cascade of classifier" for "classifiers"
section (p17): "automatically constraining search space" for "the search space"
section (p17): "to identify region of" for "regions of"
section 2.3 (p21): "3D points on as an array" for "on an"
section (p37): "two free parameters that requires" for "require"
section (p39): "if two different pose vector" for "vectors"
section (p41): "de-noising, noisy human" for "a noisy human"
section (p41): "performance improvement" for "improvements"
section (p41): "integrating KPCA motion de-noiser" for "the KPCA"
section 3.5 (p41): "play-backed" for "played-back"
section 3.5 (p45): "As expected, smaller noise" for "a smaller noise"
section 4.3 (p59): "different sets of motion" for "motions"
section (p68): "there are many possible set of" for "sets of"
section (p68): "false positives" for "false negatives"
section (p72): "was able to created" for "create"
section 4.5 (p73): "high level of noise" for "levels"
section 4.5 (p73): "very well at estimation 3D" for "estimating"
section 4.5 (p74): "The limits the" for "This limits"
section 4.5 (p74): "the capture other" for "of other"

section 5.6 (p90): "only shows minor percentage" for "a minor percentage"
section 6 (p95): "Human motion de-noising" for "A human motion de-noising"
section 6.2 (p97): "human de-noising" for "motion de-noising"
section 6.2 (p97): "whilst minimizing reduction" for "the reduction"
section 6.2 (p98): "experiment are conducted" for "experiments are conducted"
section 6.2 (p98): "as well as compare the" for "comparing the"
section 6.4 (p109): "removed form" for "removed from"
section 7.1 (p109): "is non-linear" for "are non-linear"
section 7.2 (p113): "such as way" for "such a way"
section 7.2 (p143): "techniques aim at" for "aimed at"
section 7.2 (p143): "calibrate cameras" for "calibrated cameras"

List of Tables

5.1 3D pose inference comparison using the mean angular error for unseen views of the object Tom
5.2 3D pose inference comparison using the mean angular error for randomly selected views of the object Tom
5.3 3D pose inference comparison using the mean angular error for unseen views of the object Dwarf
5.4 3D pose inference comparison using the mean angular error for randomly selected views of the object Dwarf
6.1 Comparison of capture rate for varying training sizes

List of Figures

1.1 Diagram to summarize the training and testing process of Kernel Subspace Mapping
2.1 2D Taxonomy plot to summarize the classification of the various motion capture literature reviewed in this thesis
3.1 Linear toy example to show the results of PCA de-noising
3.2 Toy example to show the projection of data via PCA
3.3 Toy example to illustrate the limitation of PCA
3.4 PCA de-noising of non-linear toy data
3.5 KPCA de-noising of the non-linear toy data
3.6 Diagram to illustrate the missing rotation in a ball & socket joint using the RJC encoding
3.7 Data flow diagram to summarize the relationship between human motion capture and human motion de-noising
3.8 Quantitative comparison between PCA and KPCA de-noising of a human motion sequence
3.9 Frame by Frame error comparison between PCA and KPCA de-noising of a walk sequence
3.10 Frame by Frame error comparison between PCA and KPCA de-noising of a run sequence
3.11 Diagram to show results of KPCA in the implicit de-noising of feature space noise
3.12 Feature and pose space mean square error relationship for KPCA
4.1 Overview of Kernel Subspace Mapping (KSM)

4.2 Scatter plot of the RJC projections onto the first 4 kernel principal components in M_p
4.3 Relationship between human motion de-noising, KPCA and KSM
4.4 Example of a training pose and its corresponding concatenated synthetic image
4.5 Diagram to summarize the silhouette encoding for 2 synchronized cameras
4.6 Markerless motion capture re-expressed as the problem of mapping from the silhouette subspace
4.7 Diagram to summarize the mapping from the silhouette subspace to the pose subspace
4.8 Diagram to show how two different poses may have similar concatenated silhouettes
4.9 Illustration to summarize markerless motion capture
4.10 Selected images of the different models used to test KSM
4.11 Comparison between the shape context descriptor and the pyramid match kernel
4.12 Intensity images of the RJC pose space kernel and the Pyramid Match kernel for a training walk sequence (fully rotated about the vertical axis)
4.13 Visual comparison of the captured pose with ground truth
4.14 KSM capture error (degrees per joint) for a synthetic walk motion sequence
4.15 KSM capture error (cm/joint) for a synthetic walk motion sequence
4.16 KSM capture error (degrees per joint) for a motion with different noise densities (Salt & Pepper Noise)
4.17 KSM motion capture on real data using 2 synchronized un-calibrated cameras
4.18 Selected motion capture results to illustrate the robustness of KSM
5.1 Diagram to summarize the 3D pose estimation problem of an object viewed from the upper viewing hemisphere
5.2 Images of the Standardizing Transform with different σ values
5.3 Images of a 3D object after applying the Standardizing Transform
5.4 Diagram to summarize Kernel Subspace Mapping for 3D object pose estimation

5.5 Selected images of the test object Tom and object Dwarf
6.1 Illustration to geometrically summarize GKPCA
6.2 De-noising comparison of a toy example between GKPCA and KPCA
6.3 Pose space mse comparison between PCA, KPCA and GKPCA de-noising
6.4 Frame by Frame error comparison between PCA, KPCA and GKPCA de-noising of a human walk sequence
6.5 Frame by Frame error comparison between PCA, KPCA and GKPCA de-noising of a human run sequence
6.6 Comparison of feature and pose space mse relationship for KPCA and GKPCA
6.7 Average errors per joint for the reduced training sets filtered via GKPCA
6.8 Frame by Frame error comparison (degrees per joint) for a clean walk sequence with different levels of GKPCA filtering in KSM
6.9 Frame by Frame error comparison (degrees per joint) for a noisy walk sequence with different levels of GKPCA filtering in KSM
6.10 Diagram to illustrate the most likely relationship between using the original training set without GKPCA filtering and using training sequences directly obtained from a motion capture database (to train KSM)
A.1 Diagram to summarize the hierarchical relationship of the bones of the inner biped structure
A.2 Diagram to summarize the proposed markerless motion capture system
A.3 Example of an Acclaim motion capture (AMC) format
A.4 Comparison between animation in AMC format and DirectX format (generic mesh)
A.5 Comparison between animation in AMC format and RJC format
A.6 Comparison between animation in AMC format and DirectX format (RBF mesh)
A.7 Images to illustrate Euler rotations
B.1 Selected textured mesh models of the author
B.2 Example images of the front and back scans of the author
B.3 Diagram to illustrate surface fitting for RBF mesh generation

B.4 Selected examples of the accurate mesh model of the author

Kernel Subspace Mapping: Robust Human Pose and Viewpoint Inference from High Dimensional Training Sets

Therdsak Tangkuampien, BscEng(Hons)
Monash University, 2007

Supervisor: Professor David Suter
Associate Supervisor: Professor Ray Jarvis

Abstract

A novel markerless motion capture technique called Kernel Subspace Mapping (KSM) is introduced in this thesis. The technique is based on the non-linear unsupervised learning algorithm, Kernel Principal Component Analysis (KPCA). Training sets of human motions captured from marker based systems are used to train de-noising subspaces for human pose estimation. KSM learns two feature subspace representations derived from the synthetic silhouette and pose pairs of a single generic human model, and views motion capture as the problem of mapping vectors between the learnt subspaces. After training, novel silhouettes, of previously unseen actors and of unseen poses, can be projected through the two subspaces via Locally Linear non-parametric mapping. The captured human pose is then determined by calculating the pre-image of the de-noised projections. The inference results show that KSM can estimate pose with accuracy similar to other recently proposed state of the art approaches, but requires a substantially smaller training set (which can potentially lead to lower processing costs). To allow automated training set reduction, the novel concept of applying Greedy KPCA as a preprocessing filter for KSM is proposed. The flexibility of KSM is further illustrated via the integration of the Image Euclidian Distance (IMED) and the application of the technique to the problem of 3D object viewpoint estimation.

Kernel Subspace Mapping: Robust Human Pose and Viewpoint Inference from High Dimensional Training Sets

Declaration

I declare that this thesis is my own work and has not been submitted in any form for another degree or diploma at any university or other institute of tertiary education. Information derived from the published and unpublished work of others has been acknowledged in the text and a list of references is given.

Therdsak Tangkuampien
January 23, 2007

Acknowledgments

I would like to thank my principal supervisor, Professor David Suter, for his advice and guidance during my candidature. It was he who initially motivated my research ideas in the area of human motion capture and machine learning. Many thanks to David for the constructive criticisms and the countless hours spent on the discussions and proofreading of my research. This thesis would not have been possible without his invaluable guidance. I would like to thank my colleagues from the Institute for Vision Systems Engineering: Tat-jun Chin, James U, James Cheong, Dr. Konrad Schindler, Dr. Hanzi Wang, Mohamed Gobara, Ee Hui Lim, Hang Zhou, Liang Wang and my associate supervisor, Prof. Raymond Jarvis, for the wonderful discussions and constructive criticisms during our weekly seminars. It has been a wonderful experience sharing research ideas with you all. I would especially like to thank Tat-jun Chin for initially generating my interest in the areas of manifold learning and high dimensional data analysis. I would also like to thank many of the reviewers of my conference and journal submissions. The constructive feedback has been extremely helpful in guiding the direction of my research. In particular, I would like to thank Kristen Grauman for providing the source code for the Pyramid Match kernel and Gabriele Peters for making available the 3D object viewpoint estimation data set of Tom and Dwarf. These components have helped me greatly during my research candidature. Most importantly, I would like to thank my mother and father for their constant encouragement and for always being there for me when I needed them.

Therdsak Tangkuampien

Commonly used Symbols

Some commonly used symbols in this thesis are defined here:

X_tr refers to the training set.
x_i refers to the i-th element (column vector) of the training set X_tr.
x refers to the novel input (column vector) for the model learnt from X_tr.
M_p refers to the pose feature subspace.
M_s refers to the silhouette feature subspace.
k_p(,) refers to the KPCA kernel function in the pose space.
k_s(,) refers to the KPCA kernel function in the silhouette space.
Ψ refers to the silhouette descriptor for the silhouette kernel k_s(,).
v_p refers to the coefficients of the KPCA projected vector x via k_p(,).
v_s refers to the coefficients of the KPCA projected vector Ψ via k_s(,).
P_tr refers to the KPCA projected set of training poses.
S_tr refers to the KPCA projected set of training silhouettes.
P_lle refers to the reduced training subset of P_tr used in LLE mapping.
S_lle refers to the reduced training subset of S_tr used in LLE mapping.
s_in refers to the projection of the input silhouette in M_s.
p_out refers to the pose subspace representation of s_in in M_p.
x_out refers to the output pose vector for KSM (i.e. the pre-image of p_out).
X_gk refers to the reduced training subset (of X_tr), filtered via Greedy KPCA.


Chapter 1

Introduction

1.1 Markerless Motion Capture and Machine Learning

Markerless human motion capture is the process of registering (capturing and encoding) human pose in mathematical forms without the need for intrusive markers. Moeslund [73] defined human motion capture as the process of capturing the large scale (human) body movements at some resolution. The term large scale body movement emphasizes that only the significant body parts, such as the arms, legs, torso and head, will be considered. The statement at some resolution implies that human motion capture can refer either to the tracking of a human as a single object (low resolution) or to the inference of the relative positions of the limbs of the human body (high resolution). This thesis views the human body as an articulated structure consisting of multiple bones connected in a hierarchical manner (appendix A.1). Markerless motion capture, as referenced in our work, refers to the inference of the relative joint orientations of this hierarchical skeletal model. Virtual animation can be achieved by encoding these relative joint orientations (at each time frame) as pose vectors, and mapping these vectors sequentially to a hierarchical (skeletal) structure. In order to avoid the use of markers, pose information can be inferred from images captured from multiple synchronized cameras. Estimating human pose from images can be classified as computer vision-based motion capture, and has received growing interest in the last decade. Faster processors, cheaper digital camera technology and larger memory storage have all contributed to the significant growth in the area.
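To make the skeletal encoding above concrete, here is a minimal forward-kinematics sketch (not the thesis implementation): relative joint rotations are accumulated down a hypothetical three-bone chain to recover world-space joint centers, which is the information a pose vector must carry for reanimation. The bone names, offsets and angles are illustrative assumptions.

```python
import numpy as np

def rot_z(theta):
    """3x3 rotation matrix about the z-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Hypothetical 3-bone chain: (name, parent index, offset from parent).
SKELETON = [
    ("hip",   None, np.array([0.0, 1.0, 0.0])),    # root, offset from origin
    ("knee",  0,    np.array([0.0, -0.45, 0.0])),  # offset from hip
    ("ankle", 1,    np.array([0.0, -0.45, 0.0])),  # offset from knee
]

def forward_kinematics(joint_angles):
    """Map per-joint relative rotations (here z-axis angles) to 3D joint centers."""
    positions, rotations = [], []
    for (name, parent, offset), theta in zip(SKELETON, joint_angles):
        local_rot = rot_z(theta)
        if parent is None:
            world_rot, world_pos = local_rot, offset
        else:
            world_rot = rotations[parent] @ local_rot            # accumulate rotation
            world_pos = positions[parent] + rotations[parent] @ offset
        rotations.append(world_rot)
        positions.append(world_pos)
    return np.stack(positions)

# One frame of a walk-like pose: hip flexed, knee bent, ankle neutral.
print(forward_kinematics(np.deg2rad([20.0, -40.0, 0.0])))
```

A motion sequence is then simply one such vector of relative rotations per frame; the normalized RJC format described later (section 3.4.1) instead stores the resulting normalized joint center directions rather than the angles themselves.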

Machine learning is another area that has received significant interest from both the research community and industry. Literally, machine learning research is concerned with the development of algorithms and techniques that allow computers (machines) to learn. In particular, due to the availability of large human motion capture databases, there have been many markerless motion capture techniques based on machine learning [37, 113, 45, 6, 83]. The principal concept is to generate synthetic training sets of images from available poses in the motion database and learn the (inverse) mapping from image to pose, such that it generalizes well to novel inputs. The term generalizes well to novel inputs emphasizes that this is not a traditional database search problem, but a more complex one, which requires the generation of unseen poses not in the training database. An attribute which makes machine learning suitable for human motion capture is that a high percentage of human motion is coordinated [84, 24]. There have been many experiments on the application of unsupervised learning algorithms (such as Principal Components Analysis (PCA) [51, 98] and Locally Linear Embedding (LLE) [85]) in learning low dimensional embeddings of human motion [37, 14, 84]. Effectively, inferring pose via a lower dimensional embedding avoids expensive computational processing and searching in the high dimensional human pose space (e.g. 57 degrees of freedom in the normalized Relative Joint Center (RJC) format [section 3.4.1]).

The main contribution of this thesis is a novel markerless motion capture technique called Kernel Subspace Mapping (KSM) (chapter 4). The technique is based on the non-linear unsupervised (machine) learning algorithm, Kernel Principal Components Analysis (KPCA) [90]. KSM requires, at initialization, labelled input and output training pairs, which may both be high dimensional. In particular, for computer vision-based motion capture, KSM can learn the (inverse) mapping from synthetic (training) images to the pose space encoded in the normalized RJC format (section 3.4.1). Instead of learning poses generated from a number of different mesh models (as in [45, 37]), KSM can estimate pose using training silhouettes generated from a single generic model (figure 1.1 [top left]).

Figure 1.1: Diagram to summarize the training and testing process of Kernel Subspace Mapping, which learns the mapping from image to the normalized Relative Joint Centers (RJC) space (section 3.4.1). Note that different mesh models are used in training and testing. The generic model [top left] is used to generate training images, whereas the accurate mesh model of the author (Appendix B) is used to generate synthetic test images.

To ensure the robustness of the technique and to test that it generalizes well to previously unseen poses from a different actor, a different model is used in testing (figure 1.1 [top right]). (The term previously unseen is used in this thesis to refer to vectors/silhouettes that are not in the training set.) Results are presented in section 4.4, which show that KSM can infer accurate human pose and direction, without the need for 3D processing (e.g. voxel carving, shape from silhouettes), and that KSM works robustly in poor segmentation environments as well.

1.2 Thesis Outline & Contributions

In chapter 2, a taxonomy and literature review of markerless motion capture techniques are presented. Relevant machine learning algorithms for markerless motion capture are discussed and classified into logical paradigms. This will serve as a context upon which the advantages and contributions of the proposed technique (KSM) can be identified. Thereafter, the remaining chapters and their contributions are outlined below:

Subspace Learning for Human Motion Capture [chapter 3]: introduces the novel concept of human motion de-noising via non-linear Kernel Principal Components Analysis (KPCA) and summarizes how de-noising can advantageously contribute to markerless motion capture (a toy sketch of this de-noising idea is given after this outline). Arguments are presented which advocate that the normalized Relative Joint Center (RJC) format for human motion encoding is not only intuitive, but also well suited for KPCA de-noising. De-noising results indicate that human motion is inherently non-linear, hence further supporting the integration of non-linear de-noising techniques (such as KPCA) in markerless motion capture techniques.

Kernel Subspace Mapping (KSM) [chapter 4]: integrates human motion de-noising via KPCA (chapter 3) and the Pyramid Match Kernel [44] into a novel (and efficient) markerless motion capture technique called Kernel Subspace Mapping. The technique learns two feature space representations derived from the synthetic silhouette and pose pairs, and alternatively views motion capture as the problem of mapping vectors between the two feature subspaces. Quantitative and qualitative motion capture results are presented and compared with other state of the art markerless motion capture algorithms. (This chapter is based on the conference paper [108] T. Tangkuampien and D. Suter: Real-Time Human Pose Inference using Kernel Principal Component Pre-image Approximations, British Machine Vision Conference (BMVC) 2006, Edinburgh, UK.)

Image Euclidian Distance (IMED) embedded KSM [chapter 5]: shows how the Image Euclidian Distance [116], which takes into account the spatial relationships of local pixels, can efficiently be embedded into KPCA via the Kronecker product and Eigenvector projections. Mathematical proofs are presented which show that the technique retains the desirable properties of Euclidian distance, such as kernel positive definiteness, and can hence be used in techniques based on convex optimization such as KPCA (and KSM). Results are presented which demonstrate, through a 3D object viewpoint estimation application, that IMED embedded KSM is a more intuitive and accurate technique than standard KSM. (This chapter is based on the conference paper [106] T. Tangkuampien and D. Suter: 3D Object Pose Inference via Kernel Principal Component Analysis with Image Euclidian Distance (IMED), British Machine Vision Conference (BMVC) 2006, Edinburgh, UK.)

Greedy KPCA for Human Motion Capture [chapter 6]: presents the novel concept of applying Greedy KPCA [41] as a preprocessing filter in training set reduction for KSM. A human motion de-noising comparison between linear PCA, standard KPCA (using all poses in the original sequence) and Greedy KPCA (using the reduced set) is presented at the end of the chapter. The results show that both KPCA and Greedy KPCA have superior de-noising qualities over PCA, whilst Greedy KPCA results in a lower evaluation cost (for both KPCA de-noising and KSM in motion capture) due to the reduced training set. (This chapter is based on the conference paper [107] T. Tangkuampien and D. Suter: Human Motion De-noising via Greedy Kernel Principal Component Analysis Filtering, International Conference on Pattern Recognition (ICPR) 2006, Hong Kong, China.)

Finally, in chapter 7, overall conclusions encapsulating the techniques presented in the previous chapters are drawn. Further improvements are highlighted, and most importantly, possible future directions of research on Kernel Subspace Mapping are summarized.
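As a minimal illustration of the KPCA de-noising idea referred to in the chapter 3 outline above, the sketch below uses scikit-learn's KernelPCA with its built-in ridge-regression pre-image approximation (standing in for the fixed-point pre-image estimators cited in the thesis), on a toy non-linear data set in place of RJC pose vectors; all parameter values are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)

# Clean training data lying on a 1D non-linear manifold in 2D
# (a stand-in for the much higher dimensional RJC pose manifold).
t = rng.uniform(0.0, 2.0 * np.pi, 300)
X_train = np.column_stack([np.cos(t), np.sin(2.0 * t)])

kpca = KernelPCA(
    n_components=4,
    kernel="rbf",               # Gaussian kernel, as in chapter 3
    gamma=2.0,                  # kernel width: a tunable parameter
    fit_inverse_transform=True, # learn an approximate pre-image map
    alpha=1e-3,
)
kpca.fit(X_train)

# Corrupt unseen samples with Gaussian white noise, then de-noise by
# projecting onto the leading kernel principal components and mapping
# the projections back to input space (the pre-image).
t_new = rng.uniform(0.0, 2.0 * np.pi, 50)
X_clean = np.column_stack([np.cos(t_new), np.sin(2.0 * t_new)])
X_noisy = X_clean + 0.1 * rng.standard_normal(X_clean.shape)
X_denoised = kpca.inverse_transform(kpca.transform(X_noisy))

print("mean abs error before de-noising:", np.abs(X_noisy - X_clean).mean())
print("mean abs error after de-noising: ", np.abs(X_denoised - X_clean).mean())
```

The same recipe, with pose vectors as training data, is what chapter 3 exploits: noisy or unseen inputs are pulled towards the manifold spanned by the training motion.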


Chapter 2

Literature Review

Recently, markerless human motion capture has become one of the research areas in computer vision that rely heavily on machine learning methods. Human motion capture can be classified into different paradigms, and this chapter serves to elucidate the differences between learning based approaches and the more conventional ones. As markerless motion capture is a popular field of research, this chapter does not aim to provide a complete literature review of all the algorithms. Instead, the principal goal is to motivate and differentiate the proposed method of Kernel Subspace Mapping (KSM) (chapter 4) from previously proposed approaches. To this end, in section 2.1, taxonomies of computer vision-based motion capture (similar to the ones introduced by Moeslund [72, 73] and Gavrila [43]) are summarized and combined. Based on the integrated taxonomy, in section 2.2, motion capture techniques are reviewed and classified. This will serve as a context upon which the advantages and contributions of the proposed technique (KSM) can be identified (section 2.3).

2.1 Motion Capture Taxonomy

There have been many attempts at developing a taxonomy for human motion capture [43, 8, 7, 22]. The most logical, as highlighted by [16], is the taxonomy suggested by Moeslund [72, 73], which categorizes the process of motion capture into four stages that should occur in order to solve the problem. These stages are: initialization, tracking, pose estimation, and finally, recognition.

Initialization includes any form of off-line processing, such as camera calibration and model acquisition. As for tracking and pose estimation, it is important to highlight the differences between them. In the classification (figure 2.1), we use tracking to refer to the run-time identification of a pre-defined structure with temporal constraints. Specifically for human tracking from images, the pre-defined structure may be the entire person (high-level) or multiple rigid body parts representing a person (e.g. arms, legs). We use the term pose estimation exclusively for processes which generate as output pose vectors which can be used to (fully or partially) animate a skeletal model in 3D. As highlighted by Agarwal and Triggs in [4], it is common for the two stages of tracking and pose estimation to interact in a closed-loop relationship in that: accurate tracking may take advantage of prior pose knowledge, and pose estimation may require some form of tracking to disambiguate inconclusive poses. Finally, recognition is the classification of motion sequences into discrete paradigms (e.g. running, jumping, etc.). We believe that most recognition algorithms can be augmented to pose estimation approaches if 3D joint information is available, and have therefore explicitly highlighted this extendability (with an arrow) in figure 2.1. The stages of the classification of Moeslund [72] do not have to be in any specific order, and for some techniques, some stages (i.e. tracking and recognition) may even be excluded completely. Each human motion capture approach (e.g. segmentation based, prediction via tracking filter) can be classified as partially or fully belonging to any of the stages. For example, a motion capture approach [23], which estimates pose via the use of Kalman filters [118] (to track body parts), would be categorized in both the tracking and pose estimation stages. Specifically for markerless motion capture techniques based on machine learning, such as [37, 113, 45, 6, 1, 75, 83], these can be classified into the initialization, tracking and pose estimation stages. The learning stage (from training data) and the inference stage (from test data) would be classified as initialization and pose estimation respectively. For a full categorization and survey of markerless motion capture techniques, the reader should refer to [72, 16].
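To make the tracking stage concrete, the sketch below runs a minimal constant-velocity Kalman filter over one noisily observed 2D body-part position; the state layout and noise levels are illustrative assumptions, not the settings used in [23] or [118].

```python
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],   # state transition: constant-velocity model
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],    # measurement model: only position observed
              [0, 1, 0, 0]], dtype=float)
Q = 0.01 * np.eye(4)           # process noise covariance (illustrative)
R = 0.50 * np.eye(2)           # measurement noise covariance (illustrative)

def kalman_step(x, P, z):
    """One predict/correct cycle for state x = [x, y, vx, vy] and covariance P."""
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

# Track a hand moving right at 1 unit/frame under measurement noise.
rng = np.random.default_rng(1)
x, P = np.zeros(4), np.eye(4)
for frame in range(20):
    z = np.array([frame * 1.0, 0.0]) + 0.3 * rng.standard_normal(2)
    x, P = kalman_step(x, P, z)
print("estimated [x, y, vx, vy]:", np.round(x, 2))
```

The temporal constraint is exactly the point: the filter's prediction carries pose knowledge from frame to frame, which is what lets tracking and pose estimation interact in the closed loop described above.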

Another popular taxonomy is the one suggested by Gavrila [43], which classifies motion capture techniques into either 2D or 3D approaches. As the goal of motion capture is to infer 3D pose information, most 2D and 3D techniques will eventually generate 3D human pose information. To distinguish between 2D and 3D techniques, in our classification, any approach which requires processing and volumetric reconstruction of the human body in 3D space (e.g. voxel carving, shape from silhouettes) will be classified as a 3D technique. The remaining image-based techniques will be classified as 2D approaches. (Note that in the survey by Gavrila [43], the taxonomy classifies techniques into 3 distinct classes: 2D approaches without explicit shape models, 2D approaches with explicit shape models, and 3D approaches. For simplicity, we have combined all 2D approaches into one category.) The two taxonomies can be combined together to illustrate an abstract view of the motion capture literature (figure 2.1). For example, a motion capture technique based on voxel reconstruction [70] can be classified as a 3D approach whilst also being classified under initialization, tracking and pose estimation. Since the core of Kernel Subspace Mapping (KSM) is based on an unsupervised learning algorithm (KPCA), it would also be useful (for comparison) to classify all the reviewed motion capture literature into another two distinct classes: learning based techniques and non-learning based techniques. In figure 2.1, the blue labels highlight techniques that are exemplar/learning based (i.e. require a training database) and the yellow labels indicate techniques that are not. An interesting observation is that most of the recent work on 2D markerless motion capture is based on learning algorithms, which indicates an increase in the popularity of machine learning in computer vision applications.

Figure 2.1: 2D Taxonomy plot to summarize the classification of the various motion capture literature reviewed in this thesis. The color coding emphasizes whether a motion capture technique is exemplar/learning based (i.e. requires a training database) [blue] or not based on any learning algorithm [yellow]. Techniques that do not estimate full pose (e.g. upper body only or 2D pose in images) are highlighted by an incomplete bar in the pose estimation column. Similarly, tracking in a reduced dimensional (embedded) space is signified by a reduced bar in the tracking column. We believe that most recognition algorithms can be augmented to pose estimation approaches, once 3D joint information is available. This is indicated by the arrow at the end of each row pointing towards the recognition column.

2.2 Markerless Motion Capture

A review of human motion capture techniques, as summarized in the combined taxonomy of Moeslund [72] and Gavrila [43] (figure 2.1), is now presented. The techniques are discussed in the following categorizations: Section 2.2.1 reviews pose estimation techniques which require processing and volumetric reconstruction in 3D (e.g. voxel carving, shape from silhouettes). For quick technical comparison with KSM, section 2.2.2 summarizes the relevant 2D learning/exemplar based motion capture techniques (i.e. those requiring a training set). Note that not all 2D markerless motion capture approaches require a training set for pose inference. For example, Moeslund and Granum [74] use silhouettes in conjunction with typical human kinematic constraints (instead of using the subspaces defined by the training data) to infer pose. Other popular approaches that do not rely on training data include the motion tracking algorithm of the Sony Playstation's EyeToy (which only uses simple image differencing [36, 120]), the silhouette contour technique of Takahashi et al [104] (which uses a Kalman filter [118] to track feature points), the particle filter based algorithm by Shen et al [93], and the product of exponential maps and twist motion framework of Bregler and Malik [18].

2.2.1 Motion Capture via 3D Volumetric Reconstruction

Human pose estimation from 3D volumetric reconstruction is a natural approach to markerless motion capture. These techniques usually require as input synchronized images from multiple calibrated cameras, as well as the cameras' intrinsic and extrinsic parameters. The principal idea is to derive a 3D volume (enclosing the actor) which satisfies the constraints imposed by the multi-view images, and use the volume to aid in 3D pose estimation. Popular algorithms for volumetric reconstruction include the Shape-from-Silhouette (SFS) algorithm [27] and voxel-carving techniques [70, 111]. These 3D approaches can be further categorized into two distinct classes: model-free and model-based techniques. In a model-free setup, the surface/volume reconstruction at each time instance (sometimes in combination with its textures) can be captured (and used directly in animation) without prior knowledge of the human body [71, 100].

In a model-based setup [70, 111, 99, 48, 13], prior knowledge of the human structure is used to aid in motion capture, and therefore usually allows pose estimation (i.e. the capture of joint orientations/positions for reanimation). The advantage of pose estimation is that it is efficient to encode (only the structural joint orientations need to be encoded for each frame), and this allows its practical use in applications such as motion editing and motion synthesis [57, 62, 10, 84], as well as human computer interaction (HCI). As this thesis specifically proposes a motion capture technique (KSM) for pose estimation, only 3D model-based approaches that can capture pose for biped animation will be reviewed. A state of the art model-based approach to realistic surface reconstruction and pose estimation is the technique proposed by Starck et al [99, 48]. The technique has been shown successful when using 9 calibrated and synchronized cameras (with 8 cameras forming 4 stereo pairs, and the remaining camera positioned overhead). A prior humanoid model is used in conjunction with manually labelled feature points (e.g. skeletal joints and mesh vertices) to match the images at each time frame. A combination of silhouette, feature cues and stereo is then used in a constrained optimization framework to update the mesh model, whilst preserving the parametrization of the surface mesh. Tracking with a prior model allows the integration of temporal information to disambiguate under-constrained scenarios, which may occur as a result of self occlusion. The technique has been shown to successfully reconstruct complex motion sequences (e.g. jumping and dancing), and in some cases, it is possible to visualize realistic creases of the actor's clothing during reanimation. As the humanoid model is updated at every time frame, we believe that pose (joint orientations) can easily be estimated from the model. Two disadvantages are highlighted by the authors [99], these being the requirement for manual labelling of feature points and the constraints imposed by the shape of the prior model. Another state of the art 3D pose estimation algorithm is the technique proposed by Cheung et al [28]. The approach [28] takes into account temporal constraints in the form of a novel Shape-from-Silhouette across time algorithm [27]. Motion capture is partitioned into two distinct stages: human model acquisition and pose estimation. In model acquisition, the joint positions are registered in a sequential approach, where the actor is asked to rotate only one joint at a time.

The initialization step, which consists of joint skeleton and body shape acquisition, was reported to take a total of 7 hours [joint acquisition (approximately 5 hours), shape acquisition (approximately 2 hours)]. During pose tracking and estimation, the Visual Hull alignment uses the Shape-from-Silhouette reconstruction and photometric information in conjunction with the prior model. The authors reported an average tracking time of 1.5 minutes per frame. The technique has been shown to successfully track complicated non-cyclical motion such as aerobics, dancing and Kung Fu sequences. Pose estimation approaches are sometimes designed specifically for the inference of only the upper part of the human body (from torso upwards) (e.g. Bernier et al [13] and Fua et al [42]). These techniques may potentially be useful for human computer interaction in the near future. In [13], Bernier et al used particle filters and proposal maps to track fast moving human motion using depth images from a stereo camera. The technique can successfully estimate pose at up to 10Hz without prior knowledge of either the actor or the background. In the work of Fua et al [42], 3 synchronized pre-calibrated cameras are used to generate a 3D surface point cloud to constrain pose estimation of the upper body. The approach uses optimization to deform an articulated model (with metaballs to simulate muscles and joints) to align with the synchronized imagery. Other notable (but similar to [99, 48, 28]) pose estimation techniques based on 3D volumetric reconstruction include [29, 79, 19, 117, 54, 70, 69]. In [29], Chu et al integrated the manifold learning algorithm Isomap [110] to allow the extraction of skeleton curve features from volumetric reconstructions of the human body. In [79], Niskanen et al used a 4-6 camera setup to estimate the 3D points and 3D normals of a surface mesh enclosing a model at a time instance. Caillette et al [19] presented a volumetric reconstruction and fitting scheme based on 4 calibrated cameras, but used Variable Length Markov Models for prediction. Wang and Leow [117] derived a self-calibrated approach based on Nonparametric Belief Propagation [101], which allows pose inference via synchronized un-calibrated cameras (the self-calibration is highlighted in figure 2.1). Kerl et al [54] improved on volumetric capture techniques by introducing a fitting algorithm which uses stochastic meta descent optimization (instead of a deterministic one).

In addition to the space constraints (imposed by the volumetric reconstruction), Mikić et al [70, 69] and Sundaresan et al [102] imposed temporal constraints via the use of extended Kalman filters [53]. In general, 3D motion capture based on volumetric reconstruction requires an expensive controlled environment of multiple calibrated cameras and clean silhouette segmentation. Voxel reconstruction, Shape-from-Silhouette or other space carving algorithms usually contribute to the higher processing cost and lower capture rate of 3D based techniques (when compared to learning based techniques [section 2.2.2]). On the other hand, volumetric reconstruction techniques with a tracking framework can capture more complicated and non-cyclical motion, and in some cases [99, 28], employ realistic surface models as well.

2.2.2 Learning or Exemplar Based Motion Capture Techniques

Markerless motion capture techniques based on learning algorithms have grown significantly in the last decade due to the availability of online motion capture data. Generally, learning/exemplar techniques adopt a supervised learning approach and require labelled input and output pairs as a training set. The principal concept is to learn the mapping (from input to output) from the training set, and generalize it to unseen inputs (sampled from a similar distribution as the training data) during online capture. The labelling (of the training sets) can be generated manually as in [75, 15]. However, as human motion is complicated, its encoding usually requires a large training set of high dimensional data, which makes manual labelling impractical. To allow scalability, most recent approaches, such as Agarwal & Triggs in [6] and Grauman et al in [45], used synthetic models to automatically generate synthetic training pairs for learning. To avoid over-fitting of the learnt prior to a specific actor/model, multiple synthetic (mesh) models are used in training (e.g. to capture walk sequences irrespective of yaw angle, a training set of 20,000 synthetic silhouettes is generated from a number of different models in [45]).

Interestingly, not every member of the training set needs to be labelled, as shown by Navaratnam et al [76], who extended the learning-based concept to allow a semi-supervised learning approach (i.e. the training set consists of a combination of labelled data [input & output pairs] and unlabelled data [input without output, or vice versa]). There are many different types of inputs that have been successfully used in learning/exemplar based motion capture. In the technique proposed by Bowden et al [15, 14], a skin color classifier based on the Hue-Saturation space [67] is used to detect the (image) locations of the hands and head (as input). Using a similar concept, Micilotta et al [68] trained the AdaBoost classifier to detect the locations of the head and hands in cluttered images. In [5], Agarwal and Triggs used the scale invariant feature transform (SIFT) descriptor [64] in conjunction with non-negative factorization [49] to encode edges in human images for training. In [65], Loy et al used manually labelled points in key frames together with a prior model (for image space fitting) to interpolate pose from cluttered images. These techniques [5, 68, 15, 14], although robust to pose estimation in cluttered environments, involve a complex scheme of mostly inefficient shape descriptors (when compared to using binary human silhouettes). A more efficient approach (also by Agarwal & Triggs [6]) infers pose from silhouettes in monocular images by direct regression, via the Shape Context descriptor [11] computed from points sampled from the silhouette's edge, to the Euler pose space. Inferring high dimensional pose from monocular silhouettes is complicated because a high level of ambiguity arises due to self occlusion and the lack of depth and foreground (photometric) information. In that case [6], a complex temporal tracking algorithm needs to be employed to disambiguate inconclusive poses. To overcome ambiguities, 3D pose can be inferred from concatenated silhouette images of the actor captured from multiple synchronized cameras (as proposed by Ren et al in [83] with 3 cameras and Grauman et al in [45] with 4 cameras). Even though concatenating silhouettes has the advantage of reducing the level of ambiguity (in input images), there is also a drawback, in that the dimension of the input (i.e. the number of pixels) will increase, hence affecting the technique's performance. In particular, for learning based approaches (where the main processing relies on comparing inputs to training data), the increase in input dimension becomes a principal factor to consider when determining the practicality of the techniques.
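A back-of-envelope sketch of this dimensionality trade-off follows; the image resolution and camera counts are illustrative assumptions, not the setups of [83] or [45].

```python
import numpy as np

def concatenated_input(silhouettes):
    """Stack per-camera binary silhouettes into a single input vector."""
    return np.concatenate([s.astype(np.uint8).ravel() for s in silhouettes])

# Input dimension grows linearly with the number of synchronized views,
# and with it the cost of comparing an input against every exemplar.
height, width = 128, 128
for n_cameras in (1, 2, 3, 4):
    views = [np.zeros((height, width), dtype=bool) for _ in range(n_cameras)]
    x = concatenated_input(views)
    print(f"{n_cameras} camera(s): raw input dimension = {x.size}")
```

This is one reason compact silhouette descriptors (discussed next) matter: they decouple the comparison cost from the raw pixel count.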

Specifically for silhouette-based approaches, the efficiency of shape comparison algorithms is a crucial attribute to consider for potential real-time systems. There have been many approaches aimed at optimizing matching/comparison costs between input and training data. In particular, Agarwal and Triggs [6, 1] improved on the shape context approach to motion capture (originally proposed by Mori and Malik [75]) by using vector quantization to compress the Shape Context histogram relationships between silhouettes, so that the Euclidian norm is applicable (instead of the more common approach of using the shortest augmenting path algorithm [52] to solve for the best match). In [83], Ren et al used a boosted cascade of feature classifiers (introduced by Viola and Jones [115]) for efficient silhouette comparisons. Other efficient shape descriptors, which may be useful in silhouette-based motion capture, include the pyramid match kernel of Grauman et al [44] and the Active Shape Model of Cootes et al [30]. (In chapter 4, we show how to efficiently integrate the Pyramid Match kernel [44] into KSM.) In addition to using an efficient comparison algorithm, learning/exemplar based techniques should also consider strategies to minimize the search time (of the training set) during pose inference. For exemplar based approaches that rely solely on a matching framework (i.e. given an input, find the closest training data) [35, 83], it is obvious why faster search algorithms will lead to faster pose inference rates. For learning based approaches that rely on a reconstruction framework (i.e. generate the output from a combination of training data), a complex search strategy (which can efficiently locate arrays of similar training poses) will be required. A problem usually encountered in optimizing search strategies for learning based (human) motion capture techniques is the high dimensionality and size of the training set. As highlighted by Shakhnarovich et al [94]: For complex and high-dimensional problems such as pose estimation, the number of required examples and the computational complexity rapidly become prohibitively high. To avoid a large training set, Agarwal & Triggs [6] use Bayesian non-linear regression to filter out a reduced training set that generalizes well to novel data. Shakhnarovich et al [94] learn a set of hash functions which index only the relevant exemplars required for a specific motion.

The main difference between the two approaches is that in the former [6], a smaller training set is selected from the original set to model the motion sequence, whereas in the latter [94], the larger set still remains, but a more efficient hashing algorithm is learnt to search the full training set. An alternative to using a hashing algorithm to reduce search time is to perform stochastic sampling [95] or Covariance Scaled Sampling [97] to locate the optimal pose. In Covariance Scaled Sampling, a hypothesis distribution is calculated by predicting the dynamics of the current time prior, and inflating the prior covariance at the predicted center for broader sampling. Bray et al [17] use the dynamic graph cut algorithm [55] to locate the global minimum (optimal pose) when inferring pose from images without silhouette segmentation. Other human tracking approaches based on constrained sampling algorithms include [33, 34, 82]. Instead of exploring ways to optimize the sampling algorithm for tracking, an alternative solution to avoiding the curse of dimensionality is to integrate manifold or subspace learning algorithms into motion capture. This is possible because a majority of human motion is coordinated [84]. The lower dimensional embedded space can be used to reduce comparison time by automatically constraining the search space to within the learnt model. Grauman et al [45] used a mixture of linear probabilistic principal components analyzer (PPCA) models [112] to locally and linearly model clusters of similar training data. Li et al [61] used the Locally Linear Coordination (LLC) algorithm [109] to enforce smoothness locally within the clusters of the pose (joint) space. Agarwal and Triggs [3] used kernel PCA [90] (with a polynomial kernel based on the Bhattacharya histogram similarity measure [56]) to learn the manifold of training silhouettes from the (vector quantized) shape context histogram descriptor [6]. In that case [3], the manifold is then combined with the pose vectors (in Euler angles) to identify regions of multi-valueness when mapping from silhouette to pose space. Elgammal and Lee [37] integrated the unsupervised learning algorithm Locally Linear Embedding (LLE) [85] to learn lower dimensional manifolds of viewpoint specific silhouettes. Urtasun et al [113] use Scaled Gaussian Process Latent Variable Models (SGPLVM) [46] to learn prior low dimensional embeddings for specific human motions (e.g. walking, golf swing). Some of these approaches [37, 113] have only been shown to successfully track human joints using the same camera yaw angle (rotation about the vertical axis) as the one used to capture the training data.

It is not clear if subspace models learnt from different yaw orientations can be integrated together to form a single general model that can track joints irrespective of the camera's yaw angle. From our experiments [108], one of the most difficult aspects of motion capture is not how to track the joint positions of the human body (when the viewpoint is known), but how to automatically infer the correct yaw orientation of the model, as well as to correctly track the joints. To do so, the prior learnt model must generalize well to unseen poses, as well as to poses from unseen camera angles. (For this thesis, we have considered viewpoint change to be a result of rotation about the vertical axis [yaw rotation]; the camera/sensor is assumed to remain at a fixed vertical height during the rotation.) To capture human pose irrespective of yaw orientation, Ren et al [83] separated yaw and pose inference into two independent stages. Yaw inference is viewed as a multi-class classification problem, which is always performed before pose inference. There are 36 classes in total, with each class encapsulating a 10 degree sector of the vertical axis rotation. The input is first classified into the correct class, before a viewpoint specific model (with a 10 degree range) is used to infer pose. Other silhouette-based approaches that can accurately infer pose irrespective of yaw orientation include the shape+structure model of Grauman et al [45] and the regressive model of Agarwal and Triggs [6]. In general, the learning/exemplar based techniques summarized in this section [45, 5, 75, 15, 37, 83, 6, 95, 97, 113, 68] adopt a supervised learning approach and require labelled training data at initialization. In some approaches, unsupervised learning algorithms, such as PCA [15, 50], LLE [37] and SGPLVM [46], have been used initially to learn lower dimensional embeddings to constrain search and tracking. Even so, a supervised learning algorithm is still used to determine the mapping (between embedding and pose space) during pose inference. In comparison with 3D based markerless motion capture algorithms, learning based techniques are still limited to capturing less complicated (and mostly cyclical) motions, but 2D learning-based approaches generally do not require intrinsic camera calibration, and are therefore cheaper and easier to initialize. The state of the art techniques [6, 96, 45, 37] mostly concentrate on robust tracking and pose inference of spiral walk and turn sequences captured via un-calibrated cameras.

Complex paradigms of pose estimation in cluttered background or poor silhouette segmentation environments are usually used to test and compare the robustness of proposed techniques. From our review of the current literature, we believe the approach adopted by Agarwal and Triggs in [6] to be the current state of the art due to its ability to accurately and robustly infer pose and yaw orientation from monocular silhouettes.

2.3 Motivation for Kernel Subspace Mapping (KSM)

Kernel Subspace Mapping (KSM) (chapter 4) falls in the paradigm of 2D learning-based approaches. With regard to the taxonomy classification of Moeslund [72] (figure 2.1), at initialization, KSM uses motion capture data from a marker-based system to learn a de-noising subspace of human motion (section 3.4). Efficient tracking is performed in the KPCA subspace (instead of the full pose space), hence its partial classification in the tracking column (figure 2.1). KSM uses concatenated silhouettes from synchronized cameras in conjunction with subspace temporal constraints to infer full pose vectors for full body biped animation. In general, most of the 2D learning-based approaches [6, 45, 37] rely on learning from a training set consisting of multiple people to allow good generalization to unseen input. KSM adopts a different approach by learning the de-noising subspace in the normalized relative joint orientation space (which is consistent for all actors), and de-noising (previously) unseen silhouettes with the learnt prior before pose inference. The advantage of this approach is a substantial reduction in training size, as only the motion sequence from a single generic model is used in learning. For comparison, in the technique proposed by Grauman et al [45], a large training set of 20,000 silhouettes (from multiple actors) is used to train a motion capture system consisting of 4 synchronized cameras. To capture similar walk sequences irrespective of (yaw) viewing angle, KSM shows similar results (to [45]) using a training set of only 343 exemplars and 2 synchronized cameras. The ability to accurately and efficiently estimate pose irrespective of yaw orientation is one of the many advantages of KSM.

For learning-based approaches, inferring the yaw orientation (rotation about the vertical axis) is significantly harder than inferring, say, an extra rotation of the arm. This is because yaw rotation is not coordinated with any joint rotation (as any motion sequence can be captured from an infinite number of angles). Techniques which learn a view specific prior model for tracking (or a discrete combination of multiple view specific models) [37, 83, 113, 35] are limited to inferring pose from the pre-defined viewpoints. KSM is not constrained to view-specific estimation because view-point inference and pose estimation are not performed in two independent steps as in [83, 37]. As a result, view-point inference is not only possible, but continuous, in the sense that pose estimation is possible from unseen (camera) yaw angles. As is common with most supervised learning approaches to markerless motion capture [45, 113, 6, 83], KSM requires (for training) a labelled set of silhouette and pose pairs. KSM is similar to the mixture of regressors of Agarwal and Triggs [3], in the sense that both techniques rely on the unsupervised learning algorithm KPCA [90]. However, there are substantial differences between the two approaches. In [3], KPCA with a complicated kernel (a polynomial kernel with the Bhattacharya histogram similarity measure [56]) is used to project (vector quantized) shape context descriptors from the silhouette space. The shape context calculation (which occurs before the vector quantization step) has quadratic complexity in the number of features (points sampled from the silhouette's edge). KSM uses a shape descriptor based on the efficient pyramid match kernel [44], which has only linear complexity in the cardinality of the set. Instead of learning subspaces from the histograms of a silhouette descriptor (as in [3]), which are ambiguous and noisier (as the descriptor is based on discrete histogram bins), KSM initially learns a de-noising subspace from the continuous relative joint center space (section 3.4.1), which is more efficient to analyze and unambiguous (except for when a limb is fully extended [figure 3.6]). Furthermore, KSM is based on a KPCA de-noising framework (i.e. it requires pre-image approximations [91]) of the Gaussian kernel, which already has well established and efficient pre-image estimators [87].

Finally, instead of regression from silhouette to Euler pose space as in [6, 3], we use the normalized relative joint center encoding, RJC (section 3.4.1). The problems with mapping to Euler space (appendix A.1.1) are that Euler angles suffer from singularities and gimbal lock, are non-commutative, and have a complicated group structure rather than that of a simple vector space. On the other hand, the normalized RJC format (section 3.4.1) encodes pose using 3D points on an array of unit spheres, which is simple to parameterize (from vector space) and does not suffer from gimbal lock. An advantage of normalized RJC is that the mapping from Euclidean to geodesic distance (between two points) on a sphere can be closely approximated using Gaussians, and is therefore well-suited for use with non-linear Gaussian kernels, as required in KPCA de-noising (section 3.3.1).

Kernel Subspace Mapping (KSM) is also similar to the activity manifold learning method of [37], which uses Locally Linear Embedding (LLE) [85] to learn manifolds of human silhouettes for each viewpoint. In that system, during capture, the preprocessed silhouettes must be projected onto the manifolds of every static viewpoint before performing a one-dimensional search on each manifold to determine the optimal pose and viewing angle. Furthermore, because the manifolds are learnt from discrete viewing angles, it is not possible to accurately infer pose if the input silhouettes are captured from a previously unseen viewpoint, for which a corresponding manifold has not been learnt. Kernel Subspace Mapping (KSM), which does not suffer from these inefficiencies and problems, has the following advantages:

- Instead of learning separate subspaces for each viewpoint, a single combined subspace from multiple viewpoints (rotated about the vertical axis) is learnt, hence giving the technique the ability to accurately infer heading angle and human pose information in a single step (and avoid explicitly searching multiple subspaces).

- Kernel Subspace Mapping, being based on an already well established de-noising algorithm (KPCA) [90], generalizes well to silhouettes of unseen models as well as silhouettes from unseen viewing angles.

- KSM is able to implicitly project noisy input silhouettes (which are perturbed by noise and lie outside the subspace defined by the training silhouettes) into the subspace, hence allowing the use of a single generic model for training, which leads to a reduction in inference time.

- KSM uses images of silhouettes from synchronized cameras without the need for their intrinsic parameters (and therefore does not require any camera calibration). Extrinsic parameters are also not required, provided that the relative camera positions are consistent between training and online pose inference.

It is important to note that KSM is a multiple camera approach to pose estimation, rather than the monocular approach proposed by Agarwal and Triggs [6, 3]. Pose inference from monocular silhouettes as in [6] is substantially harder, as the level of ambiguity increases significantly in monocular sequences. In those cases, a more complicated tracking algorithm needs to be employed. KSM only uses a simple tracking algorithm embedded in the de-noising subspace, hence the requirement for (at least) two cameras to generate a learnt subspace that does not significantly overlap upon itself (e.g. the side view of a monocular silhouette walk sequence in [37]). From our experiments, we concluded that two cameras are sufficient in this regard, as opposed to the three-camera framework in [83] or the four-camera setup in [45]. The important point to note is that KSM does not aim to improve on monocular approaches to pose estimation, and their efficacy should not be directly compared as such. Instead, KSM aims to improve the ability of learning-based approaches to generalize to unseen actors via the incorporation of a de-noising framework learnt from a single model, rather than the more common approach of using a large set from different models. For the capture of a walk sequence irrespective of angle, the reduction in training size allows KSM to efficiently infer full pose and yaw orientation at a rate of up to 10Hz. For comparison, Caillette et al. [19] can capture at a speed of 10Hz, but require volumetric reconstruction from four calibrated cameras. Bray et al. presented an approach [17] which can infer pose without segmentation at an average of approximately 50 seconds per frame. Other markerless techniques [26] reported pose inference times of between 3 and 5 seconds per frame.

It is also important to highlight the differences between 3D markerless pose estimation techniques, which are based on volumetric reconstruction (section 2.2.1), and 2D markerless learning-based techniques (section 2.2.2). In 3D pose estimation techniques [111, 70, 102, 54, 99, 48], an expensive setup of multiple calibrated cameras (usually more than four) is required to derive the enclosing volume. The capture rate is usually lower than that of 2D approaches due to volumetric/surface reconstruction. On the other hand, 3D based techniques can capture more complicated non-cyclical human motion and, in some cases, have even been shown to capture complex surfaces (e.g. wrinkles in clothing) for reanimation [99, 48]. KSM, being a 2D learning-based approach, should not be directly compared with 3D approaches in relation to its ability to estimate pose, as the constraints vary substantially between the two paradigms. Nevertheless, the reader should bear in mind that the ultimate goal of full body pose estimation remains similar between the two paradigms.

Finally, there is a large community of researchers in the field of machine learning who concentrate specifically on the optimization and improvement of kernel techniques, such as Kernel PCA [91, 87] and Support Vector Machines (SVMs) [40]. KSM, being a technique based on the well-established KPCA algorithm [90], could most likely take advantage of any improved algorithms that may be proposed (by the machine learning community) in the future. To support this argument, two recently proposed algorithms, the Image Euclidean Distance [116] and the Greedy Kernel PCA algorithm [41], are integrated into the problems of 3D object viewpoint estimation (chapter 5) and human motion capture via KSM (chapter 6) respectively. There has not been, to the author's knowledge, any application of the Greedy Kernel PCA algorithm in markerless motion capture, nor any use of the Image Euclidean Distance (IMED) in 3D object pose estimation problems. The results of the improved approaches indicate that theoretical improvements to KPCA can be transferred to practical improvements of KSM with relatively minor modifications.


Chapter 3

Subspace Learning for Human Motion Capture

This chapter introduces the novel concept of human motion de-noising via non-linear Kernel Principal Components Analysis (KPCA) [90] and summarizes how de-noising can contribute to markerless motion capture. Arguments are presented which advocate that the normalized Relative Joint Center (RJC) format (section 3.4.1) for human motion encoding is not only intuitive, but also well suited for KPCA de-noising. Motion de-noising comparisons between linear and non-linear approaches are presented to further demonstrate the advantages of using non-linear de-noising techniques (such as KPCA) in markerless motion capture.

3.1 Introduction

The inference of full human pose and orientation (57 degrees of freedom) from silhouettes, without the need for real-world 3D processing, is a complex and ill-conditioned problem. When inferring from silhouettes, ambiguities arise due to the loss of depth cues (when projecting from 3D onto the 2D image plane) and the loss of foreground photometric information. Another complication to overcome is the high dimensionality of both the input and output spaces, which also leads to poor scalability of learning-based algorithms. To avoid confusion, it is important to note that, visually, an image as a whole is considered to exist in 2D. However, low-level analysis of the pixels of an image (as applied in

KSM) is considered high dimensional (more than 2D), as each pixel intensity represents a possible dimension for processing. Specifically in our work, the input is the vectorized pixel intensities of concatenated images from synchronized cameras. The output is the high dimensional human pose vector encoded in the normalized Relative Joint Center (RJC) format (section 3.4.1).

For exemplar-based motion capture techniques, such as [113, 94, 45, 75, 6, 37], searching the full dimension of the output pose vector space (e.g. 57 dimensions for the RJC format) is usually avoided. A possible solution (to avoid searching the complete pose space) is to exploit the correlation in human movement [84, 14] and learn a reduced subspace in which searching and processing can be performed more efficiently. The dimensionality reduction technique Principal Components Analysis (PCA) [51] has been shown to be successful in learning human subspaces for the synthesis of novel human motion [84, 14]. However, PCA, being a linear technique, is limited by its inability to effectively de-noise non-linear data. By de-noising, we mean the minimization of unwanted noise by projecting data onto a learnt subspace (determined from either PCA, Kernel PCA [90] or other subspace learning algorithms) and ignoring the remainder (section 3.2). To overcome the non-linearity in encoding, the Locally Linear Embedding algorithm [85] (a non-linear dimensionality reduction/manifold learning technique) has become popular in human motion capture [37]. The problem with LLE is that it does not yet have well established algorithms for the projection of out-of-sample points [12]. To project unseen points to the manifold embedding, the input is usually appended to the training set and the entire manifold relearned.

This chapter aims to find a subspace learning algorithm (for human motion capture) which can effectively de-noise non-linear data, and which is computationally efficient in the projection (de-noising) of novel inputs. To advocate the use of non-linear learning techniques in human motion capture, experiments should show that the simplest forms of human motion (e.g. walking, running) are non-linear and, therefore, would not be effectively de-noised by linear techniques such as PCA (when compared to KPCA).

An investigation into the practical application of LLE for silhouette-based motion capture was conducted by the author in [105]. Specifically for silhouette-based approaches to motion capture, the major problem encountered was the (computationally) expensive projection cost of novel silhouette feature vectors via LLE. Recently, the originators of LLE, Saul and Roweis, have investigated this problem and have presented two possible solutions in the update of their original Locally Linear Embedding paper [86].

The remainder of this chapter is organized as follows: Firstly, section 3.2 reviews the principal components analysis (PCA) algorithm [51, 98], as well as highlighting its linear limitation. The non-linear kernel PCA (KPCA) algorithm [90] is reviewed in section 3.3. In section 3.4, the novel application of KPCA to human motion de-noising is introduced, and the advantageous relationship between human subspace learning via KPCA [90] and markerless motion capture is summarized. Results are presented in section 3.5 to allow quantitative and visual comparison between PCA de-noising and KPCA de-noising of noisy motion sequences.

3.2 Review: Principal Components Analysis (PCA)

Principal components analysis (PCA) [51] is a powerful statistical technique with useful applications in areas of computer vision, data compression, pattern/face recognition and human motion analysis [84, 24]. PCA effectively creates an alternative set of orthogonal principal axes (basis vectors) with which to describe the data (figure 3.1). Geometrically speaking, expressing a data vector in terms of its principal components is a simple case of projecting the vector onto these independent axes (provided that the vector has already been centered around the training mean). Once the principal components are created, each projection is effectively a dot product, the calculation of which is only linear in complexity (in terms of the vector's dimension). Furthermore, multiple data vectors can be concatenated into a matrix and the projection performed in batches via matrix multiplications.

Figure 3.1: Linear toy example to show the results of Principal Components Analysis (PCA) de-noising. The principal axes are highlighted in red.

Consider, for example, the 2D noisy (linear) toy data set in figure 3.1, where PCA will calculate two principal axes.

Note that the principal axes must be orthogonal to each other; therefore, there can only be as many principal axes as the dimension of the original data. In this case, the main principal axis (the 1st principal component) lies along the direction of greatest variance, whereas the less significant axis (the 2nd component) lies orthogonal to it. By ignoring data projection onto the 2nd component (and considering it as noise), the dimensionality of the problem is reduced whilst minimizing loss of information.

Figure 3.2: Toy example to show the projection of data onto the 1st principal axis from PCA. The projections along the 2nd component are considered to be noisy data and ignored.

Due to its simplicity, there are widespread applications of PCA in computer vision, more specifically in areas of high dimensionality analysis where subspace learning or dimensionality reduction are advantageous. There is a vast array of different applications of PCA, ranging from data compression [14] and optimization [84] to de-noising [91]. However, in most applications, the underlying concept remains relatively the same, in that, given training data in R^D, PCA initially learns a set of K principal axes (where K ≤ D) to efficiently represent the data. In data compression via PCA, a lossy form of compression is achieved by storing only the projection coefficients onto the first K principal axes. In human motion optimization [84], only the projection coefficients onto the K principal axes are processed and analyzed. In de-noising, the projections onto the K principal components are considered as the clean data (or that which is closest to the clean data). In most cases, vector projections onto the less significant components are ignored. To this end, two crucial questions should be asked: how to automatically determine the optimal number K of principal components for a specific data set, and how effective these K principal components are in representing the clean data.

For notational purposes, a mathematical formulation of PCA [51, 90] is now presented to aid in the investigation of these problems. A training set of N centered vectors in R^D is denoted as X_tr, and the i-th training (column) vector is represented as x_i. A novel (column) vector, which is not in the training set, is represented as x. Note that the data set must be preprocessed and centered around its mean beforehand (i.e. \sum_{i=1}^{N} x_i = 0). Thereafter, to determine the principal components, PCA diagonalizes the covariance matrix

C = \frac{1}{N} \sum_{i=1}^{N} x_i x_i^T \qquad (3.1)

and solves for the Eigenvectors v and the diagonal Eigenvalue matrix λ as follows:

\lambda v = C v, \quad \text{for } \lambda_k \geq 0 \text{ and } v \in R^D. \qquad (3.2)

Each (column) Eigenvector v_k (in the matrix v) corresponds to a principal axis for data projection, while each Eigenvalue λ_k (the k-th diagonal entry of the matrix λ) encodes the variance of the training data along the corresponding Eigenvector. Hence, by ordering the Eigenvectors in decreasing order of their corresponding Eigenvalues, a set of ordered principal components is attained. Assuming that the optimal number of principal components is known (the selection of this will be explained in the next paragraph), a matrix of feature vectors F is constructed [98], where

F = [\, v_1 \; v_2 \; v_3 \; \ldots \; v_K \,]. \qquad (3.3)

Projecting the novel point x (which has already been centered around the mean of the training set) onto the first K principal components is achieved via a single matrix multiplication

\beta = F^T x, \qquad (3.4)

where the k-th entry β_k (of the coefficient vector β) denotes the coefficient of the projection of x on the principal axis v_k. Conversely, synthesizing a data vector from its projected coefficients is simply a weighted sum of the principal components:

\hat{x} = F \beta. \qquad (3.5)
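To make equations 3.1-3.5 concrete, the following minimal sketch implements PCA fitting and de-noising in Python with NumPy (an illustration added for this discussion; the function and variable names are not from the original implementation):

```python
import numpy as np

def fit_pca(X_tr):
    """X_tr: (N, D) matrix of training vectors. Returns the training mean
    and the Eigenvectors ordered by decreasing Eigenvalue (eqs 3.1-3.2)."""
    mean = X_tr.mean(axis=0)
    Xc = X_tr - mean                       # center around the training mean
    C = (Xc.T @ Xc) / len(Xc)              # covariance matrix (3.1)
    eigvals, eigvecs = np.linalg.eigh(C)   # eigh: C is symmetric
    order = np.argsort(eigvals)[::-1]      # decreasing Eigenvalues
    return mean, eigvecs[:, order]

def denoise_pca(x, mean, V, K):
    """De-noise x by projection onto the first K principal axes and
    reconstruction (eqs 3.3-3.5): D_K(x) = F beta + mean."""
    F = V[:, :K]                           # feature vector matrix F (3.3)
    beta = F.T @ (x - mean)                # projection coefficients (3.4)
    return F @ beta + mean                 # weighted sum of the axes (3.5)
```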

To map the centered reconstruction back to its original space, the reconstructed vector \hat{x} can be added to the training mean.

The remaining factor to consider is how to determine the optimal number of principal components. In order to do so, the meaning of an optimal value, from a de-noising perspective, must be defined. The goal of de-noising is to learn a subspace which best represents the true training subspace, given the constraints of the (de-noising) algorithm. The point in the learnt subspace closest to a noisy data point is considered the cleanest representative of this entity. At this point, it is important to highlight the main difference between clean data and de-noised data (which lies in the learnt subspace). As the de-noiser may not be fully representative of the training data, the de-noised vector may still retain unwanted noise. Therefore, de-noised data is not necessarily (and most likely not) clean data. The goal is to eliminate as much noise as possible by learning a subspace which is as close as possible to the true subspace, without over-fitting. To this end, de-noising is the elimination of the components of any point which cause it to deviate from the learnt subspace. For example, in figure 3.2, projecting a noisy point onto the subspace (defined by the 1st principal axis) via equation 3.4 is considered a case of linear de-noising. In our work, for PCA de-noising [39], the optimal number of principal components K^+ is defined as

K^+ = \arg\min_{K \in [1,D]} \sum_{i=1}^{N} \| x_i - D_K(x_i + n_i^\sigma) \|^2, \qquad (3.6)

where n_i^\sigma is a random sample from a Gaussian white noise signal with variance σ^2. The de-noising function is signified by D_K(\cdot) and is a combination of the linear projection (3.4) and re-synthesis (3.5) of the noisy data from the first K principal components (ordered by decreasing Eigenvalues).

For interested readers, an analysis of noise in linear subspace approaches is conducted by Chen and Suter in [25]. The article aims to investigate the de-noising capacity, which indicates, in matrix terms, how close a low rank matrix (the learnt subspace) is to the noise-free matrix (the true training data).

For the sake of simplicity, in equation (3.6), there is only a single iteration over the entire training set for each value of K during tuning (the process of selecting the optimal value K^+). To ensure that the de-noising parameter generalizes well to novel test data, the optimal number of principal components is tuned using κ-fold cross-validation [39]. In this case, the training set is partitioned into κ testing subsets. For each subset, the remaining κ−1 subsets are combined to create the training set. This effectively ensures that the training and testing subsets are mutually exclusive during the tuning process, hence representing a scenario which is more likely to occur in practice. To this end, equation 3.6 becomes:

K^+ = \arg\min_{K \in [1,D]} \sum_{k=1}^{\kappa} \sum_{i=1}^{card(X_{tr}^k)} \| x_i^k - D_K^k(x_i^k + n_i^\sigma) \|^2, \qquad (3.7)

where X_{tr}^k denotes the k-th subset and D_K^k(\cdot) represents the de-noising function using the principal components learnt from the remaining κ−1 subsets (i.e. not inclusive of data in subset k). As the size of each subset may vary, the cardinality of the k-th subset is denoted by card(X_{tr}^k).

Many other techniques exist for the selection of the optimal parameter for PCA. For example, in human motion synthesis via PCA, Safonova et al. [84] choose K^+ such that:

E_r = \frac{\sum_{i=1}^{K^+} \lambda_i}{\sum_{i=1}^{D} \lambda_i} \geq 0.9 \qquad (3.8)

Geometrically speaking, because the i-th Eigenvalue encodes the variance along the i-th principal component (Eigenvector), equation 3.8 effectively selects K^+ such that the PCA projections represent more than 90% of the variance of the training data. In that case [84], PCA is used for data compression to allow optimization in a lower dimensional space, whereas in our work, the parameter is tuned specifically to optimize PCA's ability in the de-noising of novel data (equation 3.7).
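A sketch of the κ-fold tuning loop of equation 3.7, reusing the fit_pca and denoise_pca helpers from the previous sketch (the noise level σ and the number of folds κ are illustrative choices, not values used in the thesis):

```python
def tune_K(X_tr, sigma=0.05, kappa=5, seed=0):
    """Select K+ by kappa-fold cross-validation (equation 3.7)."""
    rng = np.random.default_rng(seed)
    N, D = X_tr.shape
    folds = np.array_split(rng.permutation(N), kappa)
    errors = np.zeros(D)
    for test_idx in folds:
        train_idx = np.setdiff1d(np.arange(N), test_idx)
        mean, V = fit_pca(X_tr[train_idx])      # axes learnt without fold k
        for x in X_tr[test_idx]:
            noisy = x + rng.normal(0.0, sigma, size=D)
            for K in range(1, D + 1):           # evaluate every candidate K
                err = x - denoise_pca(noisy, mean, V, K)
                errors[K - 1] += err @ err
    return int(np.argmin(errors)) + 1           # K+ = argmin over K in [1, D]
```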

Limitations of PCA

Even though PCA is an efficient and simple statistical learning technique, there are two crucial requirements for PCA de-noising to work effectively: the de-noised data must remain within the original space, and the clean data must lie on or near a linear subspace. The first constraint is obvious: since the orthogonal principal components lie within the span of R^D (the original space), data reconstructed from these components must remain in R^D. The second limitation depends on the specific data set (i.e. is the data linear or non-linear?). This can best be illustrated with a simple non-linear toy example. Consider, for example, the noisy non-linear toy data set in figure 3.3.

Figure 3.3: Toy example to illustrate the limitation of PCA on the de-noising of non-linear data. The clean subspace is represented by the blue curve and the noisy test data represented by the red crosses.

In this case, the goal of PCA de-noising is to project the noisy (red) points onto the blue curve defining the clean subspace. Visually, for non-linear de-noising, a good projection for a noisy point would be one which is orthogonal to the curve's tangent (at the point of intersection). From figure 3.4, it is clear that PCA de-noising is ineffective in the de-noising of non-linear data. By projecting the test data (red points) onto the 1st principal component, the mean error actually increases by more than four times the original error.

The ability to map data to an alternative space (from the original space R^D) is advantageous because it introduces a new degree of flexibility for data fitting and processing (this concept is investigated later for KPCA in section 3.3).
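This failure mode is easy to reproduce. The sketch below generates noisy samples around a parabola (an assumed stand-in for the toy data in figure 3.3, not the thesis's actual data set; it reuses the PCA helpers from the earlier sketch) and shows that one-component PCA de-noising increases the error:

```python
rng = np.random.default_rng(1)
t = rng.uniform(-1.0, 1.0, size=200)
clean = np.stack([t, t ** 2], axis=1)            # clean non-linear subspace
noisy = clean + rng.normal(0.0, 0.05, size=clean.shape)

mean, V = fit_pca(noisy)
denoised = np.array([denoise_pca(x, mean, V, K=1) for x in noisy])

mse = lambda A, B: np.mean(np.sum((A - B) ** 2, axis=1))
# Projecting onto a single linear axis flattens the curve, so the
# "de-noised" error is larger than the input noise:
print(mse(clean, noisy), mse(clean, denoised))
```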

Figure 3.4: PCA de-noising of non-linear toy data via projection onto the 1st principal component.

Obviously, two principal axes can be used in de-noising, but as this is already the dimension of the original data, there is no de-noising effect and the error remains at its original value (up to some small rounding differences from reconstruction). For this specific example, a possible solution may be to view this as a linear fit in the polynomial space (x^n, x^{n-1}, ..., x^2, xy, y^2, x, y, 1). This is the basic concept of Kernel PCA (section 3.3), where data is mapped to an alternative feature space in which linear fitting/de-noising is possible. The problem that needs to be solved is how to automatically determine the optimal mapping for each specific data set.

3.3 Kernel Principal Components Analysis (KPCA)

Kernel Principal Components Analysis (KPCA) [90, 91, 87] is a non-linear extension of PCA. Geometrically speaking, KPCA implicitly maps non-linear data to a (potentially infinite dimensional) feature space, where the data may lie on or near a linear subspace. In this feature space, PCA might work more effectively. The basic principle of implicitly mapping data to a high dimensional feature space has found many other areas of application, such as Support Vector Machines (SVMs) [31, 89] for optical character recognition of handwritten characters [39], and image de-noising [91]. To the author's knowledge, there is, however, no previous work on the de-noising of human motion and silhouettes for human motion capture using KPCA projections and pre-image approximations. To show that KPCA is useful in motion de-noising, an analysis of the motion sequences is presented in section 3.5, which aims to show that the (motion) sequences are inherently non-linear, and therefore should be de-noised by non-linear techniques such as KPCA.

3.3.1 KPCA Feature Extraction & De-noising

For completeness, a formulation of the KPCA algorithm as introduced by Schölkopf et al. [90] will now be presented. Using the same notation as PCA, the training set of N data vectors is denoted by X_tr in R^D. Each element of the training set is indexed by x_i. Each novel input point will, again, be represented by x. The non-linear mapping from the input space to a higher dimensional feature space F will be denoted by Φ, where

\Phi: R^D \to F, \quad x \mapsto \Phi(x). \qquad (3.9)

Provided that a suitable mapping Φ can be found, the covariance matrix (similar to equation 3.1, but in feature space F) can be defined as:

\bar{C} = \frac{1}{N} \sum_{i=1}^{N} \Phi(x_i) \Phi(x_i)^T. \qquad (3.10)

Note that the symbol \bar{C} is used instead of C to highlight the fact that the data is centered in feature space (i.e. \sum_{i=1}^{N} \Phi(x_i) = 0). For more details on how to center data in feature space, the reader should refer to [90, 91]. Following the same setting as PCA (section 3.2), the Eigenvectors V (in feature space) and the Eigenvalue matrix λ of the covariance matrix are calculated:

\lambda V = \bar{C} V. \qquad (3.11)

All Eigenvectors in V with Eigenvalues greater than zero must lie within the span of the mapped training vectors Φ(x_1), Φ(x_2), ..., Φ(x_N), leading to the possible reconstruction of V from these basis vectors:

V = \sum_{i=1}^{N} \alpha_i \Phi(x_i), \qquad (3.12)

where α_i denotes the respective coefficient of each basis vector. The concatenated matrix α, which consists of these coefficient vectors, is the unknown that needs to be determined before any feature space projection via KPCA can be performed. By introducing an N × N kernel matrix K, where

K_{ij} := \Phi(x_i) \cdot \Phi(x_j), \qquad (3.13)

Schölkopf et al. [90] showed that

N \lambda K \alpha = K^2 \alpha, \qquad (3.14)

which further leads to the simplified formulation:

N \lambda \alpha = K \alpha. \qquad (3.15)

From equation 3.15, it is clear that the unknown coefficient matrix can be determined by solving for the Eigenvectors and Eigenvalues of K. Specifically for the purpose of de-noising via KPCA (where a major concern is the ability to project novel points), the projection onto the k-th principal component in feature space is then

V_k \cdot \Phi(x) = \sum_{i=1}^{N} \alpha_i^k \, \Phi(x) \cdot \Phi(x_i). \qquad (3.16)

Using similar notation for projected coefficients as equation 3.4, the coefficients of the KPCA projection (onto the first K^+ principal axes) of x can be denoted as

\beta = [\, V_1 \cdot \Phi(x), \, \ldots, \, V_{K^+} \cdot \Phi(x) \,]^T, \quad \beta \in R^{K^+}, \qquad (3.17)

where β_k denotes the coefficient of the projection of x on the principal axis V_k (in feature space).

The remaining factor to consider is how to define the feature space mapping Φ(\cdot) such that KPCA de-noising is of any practical use. Referring back to the formulation of KPCA, there are, in fact, only two instances (when projecting points via KPCA) where an explicit mapping from the input space R^D to the feature space F is required. These are when calculating the kernel matrix K in equation 3.13 and during the projection of a novel point in equation 3.16.
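The derivation above translates directly into code. A minimal sketch (illustrative Python/NumPy; the feature-space centering of K and of the test kernel vector follows [90], and the coefficients are rescaled so that each axis V_k has unit norm):

```python
import numpy as np

def fit_kpca(X_tr, kernel):
    """Solve N*lambda*alpha = K*alpha (equation 3.15) on the centered
    kernel matrix and return K plus the normalized coefficients alpha."""
    N = len(X_tr)
    K = np.array([[kernel(a, b) for b in X_tr] for a in X_tr])   # (3.13)
    J = np.ones((N, N)) / N
    Kc = K - J @ K - K @ J + J @ K @ J          # center in feature space [90]
    mu, alpha = np.linalg.eigh(Kc)              # mu = N * lambda
    order = np.argsort(mu)[::-1]
    mu, alpha = mu[order], alpha[:, order]
    keep = mu > 1e-12                           # discard null directions
    alpha = alpha[:, keep] / np.sqrt(mu[keep])  # unit-norm axes V_k
    return K, alpha

def project_kpca(x, X_tr, K, alpha, kernel, K_plus):
    """Equations 3.16-3.17: beta_k = V_k . Phi(x), via the kernel trick."""
    kx = np.array([kernel(xi, x) for xi in X_tr])
    kc = kx - kx.mean() - K.mean(axis=1) + K.mean()  # center the test vector
    return alpha[:, :K_plus].T @ kc
```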

In both cases, a dot product (projection) with another mapped vector is performed immediately, once in feature space. To this end, it is possible to avoid defining an explicit map for Φ(\cdot), and instead define a positive definite kernel function k(x_i, x_j) [90], such that

k(x_i, x_j) = \Phi(x_i) \cdot \Phi(x_j), \quad \Phi: R^D \to F \qquad (3.18)

is a dot product in the feature space of the mapped vectors. Furthermore, due to the high (possibly infinite) dimension of the feature space, it would be computationally expensive to explicitly perform the mapping. By implicitly mapping via a kernel function, which is a dot product in feature space [87], the system also avoids having to explicitly store high dimensional feature space vectors in memory.
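Equation 3.18 can be verified numerically for a kernel whose feature map is known in closed form. For the homogeneous polynomial kernel k(x, y) = (x · y)^2 on R^2, one valid map is Φ(x) = (x_1^2, √2 x_1 x_2, x_2^2); this is a standard textbook example used only to illustrate the kernel trick, not a kernel used by KSM:

```python
import numpy as np

phi = lambda x: np.array([x[0] ** 2, np.sqrt(2) * x[0] * x[1], x[1] ** 2])

x, y = np.array([0.3, -1.2]), np.array([2.0, 0.7])
lhs = (x @ y) ** 2           # kernel evaluated entirely in the input space
rhs = phi(x) @ phi(y)        # dot product after explicit mapping into F
assert np.isclose(lhs, rhs)  # identical: the mapping never has to be formed
```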

The following section summarizes the selection and tuning process for the relevant kernel.

3.3.2 Kernel Selection for Pre-Image Approximation

There are many possible choices of valid kernels for KPCA, such as Gaussian or polynomial kernels. The choice of the optimal kernel will be dependent on the data and the application of KPCA. From a KPCA de-noising perspective, the radial basis Gaussian kernel

k(x_i, x) = \exp\left( -\gamma \{ (x_i - x)^T (x_i - x) \} \right) \qquad (3.19)

is selected due to the availability of well established and tested pre-image approximation algorithms [91, 58]. The pre-image (which exists in the original input space) is an approximation of the KPCA projected vector (which exists in the de-noised feature space). Approximating the pre-image of a projected vector (in feature space) is an extremely complex problem, and for some kernels this still remains an open question. Since the projected (de-noised) vector exists in a higher (possibly infinite dimensional) feature space, not all vectors in this space may have pre-images in the original (lower dimensional) input space. Specifically for the case of Gaussian kernels, however, the fixed-point algorithm [88], gradient optimization [39] and the Kwok-Tsang algorithm [58] have been shown to be successful in approximating pre-images. For more information regarding the approximation of pre-images, the reader should refer to [89].

Note that there are two free parameters that require tuning for the Gaussian kernel (equation 3.19), these being γ, the Euclidean distance scale factor, and K^+, the optimal number of principal axis projections to retain in the feature space. To tune them, the same concept of tuning for de-noising (as with PCA [section 3.2]) can be implemented. By letting P^\gamma_{K^+} define the KPCA de-noising function (projection and pre-image approximation with Euclidean scale factor γ and an optimal number of principal axes K^+), the optimal parameters are chosen so as to minimize the error function

\varepsilon(\gamma, K^+) = \sum_{i=1}^{N} \| x_i - P^\gamma_{K^+}(x_i + n_i^\sigma) \|^2. \qquad (3.20)

Note that this is similar to PCA tuning in equation 3.6, and can be improved to generalize to unseen inputs by using cross-validation in tuning, resulting in the following form:

\varepsilon(\gamma, K^+) = \sum_{k=1}^{\kappa} \sum_{i=1}^{card(X_{tr}^k)} \| x_i^k - P^{\gamma}_{K^+, k}(x_i^k + n_i^\sigma) \|^2, \qquad (3.21)

where X_{tr}^k denotes the k-th subset for cross-validation and P^{\gamma}_{K^+, k}(\cdot) represents the KPCA de-noising function using the principal components learnt from the remaining κ−1 subsets (i.e. not inclusive of data in subset k).

Figure 3.5: KPCA de-noising of the non-linear toy data from figure 3.3.

Using the tuned Gaussian kernel, figure 3.5 shows the results of KPCA de-noising of the non-linear example [39] from figure 3.3. In this case, KPCA de-noising eliminated more than 40% of the original error. When compared with linear PCA, where no reduction in error was possible, KPCA's ability to de-noise data provides a significant improvement.
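For reference, the fixed-point iteration of [88] mentioned above takes a particularly compact form for the Gaussian kernel: writing the projected feature-space point as PΦ(x) = Σ_i w_i Φ(x_i) (the weights w_i are assembled from the projection coefficients β and the Eigenvector coefficients α; the centering terms are omitted here for brevity), the pre-image estimate z is repeatedly replaced by a kernel-weighted mean of the training points. A sketch under those assumptions, reusing NumPy as above:

```python
def preimage_fixed_point(X_tr, w, gamma, z0, n_iter=100, tol=1e-10):
    """Fixed-point pre-image iteration for the Gaussian kernel [88]:
    z <- sum_i w_i k(x_i, z) x_i / sum_i w_i k(x_i, z)."""
    z = z0.copy()
    for _ in range(n_iter):
        k = w * np.exp(-gamma * np.sum((X_tr - z) ** 2, axis=1))
        z_new = (k @ X_tr) / k.sum()    # kernel-weighted mean of exemplars
        if np.sum((z_new - z) ** 2) < tol:
            break
        z = z_new                       # iteration is sensitive to z0 [87]
    return z
```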

3.4 KPCA Subspace Learning for Motion Capture

The concept of motion subspace learning via KPCA is to use (Gaussian) kernels to model the non-linearity of joint rotation, such that computationally efficient pose distances can be calculated, hence allowing practical human motion de-noising. Clean human motion captured via a marker-based system [32] is converted to the normalized Relative Joint Center (RJC) encoding (section 3.4.1) and used as training data. The optimal de-noising subspace is learnt by mapping the concatenated joint rotation (pose) vectors via a Gaussian kernel and tuning its parameters to minimize the error in pre-image approximations (section 3.3.2). In section 3.4.2, the relationship between human motion de-noising and markerless motion capture is summarized.

3.4.1 Normalized Relative Joint Center (RJC) Encoding

This section aims to show that encoding human pose using normalized RJC vectors is not only logical, but intuitive, in the sense that it allows efficient non-linear comparison of pose vectors, and has a structure which is well suited for KPCA. Generally, human pose can be encoded using a variety of mathematical forms, ranging from Euler angles (appendix A.1.1) to homogeneous matrices (appendix A.1.2). In terms of pose reconstruction and interpolation [9, 57], the problems usually encountered (when using Euler angles and matrices) include how to overcome the non-linearity and non-commutative structure of rotation encoding. Specifically for markerless motion capture using Euler angles, we do not try to map from the input to Euler space as in [6] because Euler angles also suffer from singularities and gimbal lock. Instead we model (3D) joint rotation as a point on a unit sphere and use a Gaussian kernel to approximate its non-linearity. The advantage of using normalized spherical surface points to model rotations is that the points lie on a non-linear manifold which can be well approximated using a combination of exponential maps and linear algebra. A similar concept of applying non-linear exponential maps to model human joint rotations has been proposed by Bregler and Malik in [18].

An interesting area to consider for each kernel (of KPCA) is whether any heteroscedastic noise [38, 63] is induced in feature space, and if so, how this relates to KPCA's ability to de-noise and approximate pre-images.

From equation 3.16, the requirement for enabling KPCA projection for human motion de-noising is the definition of the distance between two pose descriptors, which should be a dot product in feature space (section 3.3). To begin, let us consider the simple case of defining the distance between two different orientations of the same joint (e.g. the shoulder joint). In this case, the two rotations can be modelled as two points on a unit sphere, with a logical distance being the geodesic (surface) distance between the two surface points. If we denote the two normalized points on the unit sphere as p and q respectively, the geodesic distance is the inverse cosine of the dot product: cos^{-1}(p · q). The difficulty of using the inverse cosine arises when trying to efficiently extend the distance definition to an array of spheres (encoding the multiple joints of the human body), and ensuring that this distance is positive definite to allow embedding into KPCA [87]. Instead of adopting this approach, we use a Gaussian kernel, which has already been proven to be positive definite for KPCA de-noising [91]. In this case, the distance between two orientations of the i-th joint will be defined as k(p_i, q_i), with

k(p_i, q_i) = \exp\left( -\gamma \{ (p_i - q_i)^T (p_i - q_i) \} \right), \quad k(\cdot,\cdot) \in [0, 1]. \qquad (3.22)

In equation 3.22, the Gaussian kernel in conjunction with the Euclidean distance (between p and q) has been used to approximate the non-linearity between the two spherical surface points. The reader should note that the Gaussian kernel is an inverse encoding (i.e. a complete alignment of the two orientations is indicated by the maximum value of k(p_i, q_i) = 1). Any misalignment of the specific joint will tend to reduce the kernel function's output towards zero. The advantage of adopting an exponential map approach to joint distance definition becomes apparent when trying to calculate the distance between arrays of multiple spheres, which encode the full pose (stance) of a person. Since k(\cdot,\cdot) \in [0, 1], the individual joint distance can be extended to multiple joints by taking a direct multiplication of the kernel outputs.

For example, if two different pose vectors x and y for a human model of M joints are encoded as

x = [\, p_1, p_2, \, \ldots \, p_M \,]^T, \quad x \in R^{3M}, \quad \text{and} \quad y = [\, q_1, q_2, \, \ldots \, q_M \,]^T, \quad y \in R^{3M}, \qquad (3.23)

then the distance between these pose vectors, which are encoded as arrays of normalized spherical surface points, is

k(x, y) = k(p_1, q_1) \cdot k(p_2, q_2) \cdot \ldots \cdot k(p_M, q_M)
        = \exp(-\gamma\{(p_1 - q_1)^T (p_1 - q_1)\}) \cdot \exp(-\gamma\{(p_2 - q_2)^T (p_2 - q_2)\}) \cdot \ldots \cdot \exp(-\gamma\{(p_M - q_M)^T (p_M - q_M)\})
        = \exp(-\gamma\{(p_1 - q_1)^T (p_1 - q_1) + (p_2 - q_2)^T (p_2 - q_2) + \ldots + (p_M - q_M)^T (p_M - q_M)\})
        = \exp(-\gamma\{(x - y)^T (x - y)\}). \qquad (3.24)

This results in the Gaussian kernel which was used in the KPCA de-noising of the non-linear toy example in figure 3.5. Intuitively, this encoding makes sense because two full body poses which are aligned will generate a distance of 1, while any joint with a relative misalignment (from the same joint in a different pose) will tend to reduce this distance towards zero. Equation 3.24 shows that the distance between pose vectors of M joints can be efficiently calculated by simply taking the Euclidean distance between the concatenated spherical surface points and mapping via a Gaussian kernel. The Euclidean scale parameter γ allows the tuning of the kernel to fit different motion sequences and training sets (section 4.2.2). Furthermore, the kernel is guaranteed to be positive definite, and therefore applicable to human motion de-noising via KPCA.
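The factorization in equation 3.24 is easy to confirm numerically: the product of the per-joint Gaussian kernels equals a single Gaussian kernel on the concatenated pose vector. A small sketch (random unit vectors stand in for the M normalized joint encodings; M and γ are arbitrary illustrative values):

```python
import numpy as np

rng = np.random.default_rng(2)
M, gamma = 18, 0.5
# Two poses: M joints, each encoded as a normalized point on a unit sphere.
P = rng.normal(size=(M, 3)); P /= np.linalg.norm(P, axis=1, keepdims=True)
Q = rng.normal(size=(M, 3)); Q /= np.linalg.norm(Q, axis=1, keepdims=True)

per_joint = np.exp(-gamma * np.sum((P - Q) ** 2, axis=1))  # each k(p_i, q_i)
lhs = per_joint.prod()                                     # product over joints
rhs = np.exp(-gamma * np.sum((P.ravel() - Q.ravel()) ** 2))
assert np.isclose(lhs, rhs)   # equation 3.24: one Gaussian kernel on R^{3M}
```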

A disadvantage of encoding a rotation using a normalized spherical surface point is that it loses a degree of rotation for any ball & socket joint (figure 3.6). This is because a surface point on a sphere cannot encode the rotation about the axis defined by the sphere's center and the surface point itself. However, as highlighted in the phase space representation of Moeslund [74], for the main limbs of the human body (e.g. arms and legs), the ball & socket joints (e.g. shoulder or pelvis) are directly linked to hinge joints (e.g. elbow or knee), which have only one degree of freedom. Provided that the limb is not fully extended, it is possible to rediscover the missing degree of freedom (for a ball & socket joint) by analyzing the orientation of the corresponding hinge joint. If the limb is fully extended, then temporal constraints can be used to select the most probable missing rotation of the ball & socket joint.

Figure 3.6: Diagram to illustrate the missing rotation in a ball & socket joint using the RJC encoding. The missing rotation of the ball & socket joint (shoulder) can be inferred from the hinge joint (elbow) below it in the skeletal hierarchy (provided that the limb is not fully extended).

3.4.2 Relationship between KPCA De-noising and Motion Capture

It is important to highlight the possible relationship between KPCA de-noising of human motion and exemplar-based motion capture techniques. In human motion de-noising, a noisy human pose vector can be de-noised by projecting it onto the feature subspace via the kernel trick, and mapping back via pre-image approximation [87]. In exemplar-based motion capture, performance improvement can be achieved by constraining the search to within a more efficient, lower dimensional human motion subspace (rather than the original pose space). Both techniques require the use of a human motion subspace for processing. To some readers, an obvious way to integrate KPCA de-noising into markerless motion capture would be to add a KPCA de-noiser as a post-processor to motion capture and have some kind of feedback loop to constrain the search space (figure 3.7 [top]). This thesis explores a more efficient concept, that of integrating the KPCA human motion de-noiser into the core of a novel markerless motion capture technique called Kernel Subspace Mapping (KSM) (figure 3.7 [bottom]).

Figure 3.7: [Top] Data flow diagram to summarize a likely relationship between human motion capture and human motion de-noising. [Bottom] Data flow diagram summary of Kernel Subspace Mapping (KSM) [chapter 4] (based on KPCA [section 3.3]), which integrates motion capture and de-noising into a single efficient processing step.

Instead of capturing a pose and then de-noising in the normalized RJC space (figure 3.7 [top]), KSM maps silhouette descriptors immediately to the projected feature subspace learnt from KPCA, hence avoiding the KPCA projection step (equation 3.16) during pose inference. In order to test the effectiveness of KPCA for human motion capture, an engineering approach is adopted and experiments are performed on the KPCA de-noiser (for human motion) independently, before motion capture integration. To advocate the use of the more complex non-linear KPCA instead of linear PCA, experiments (section 3.5) should show that the RJC encoding (section 3.4.1) of human motion is non-linear and, more importantly, that more noise can be eliminated via KPCA (when compared to PCA). De-noising of synthetic Gaussian white noise in the feature space of various motion sequences is also analyzed. This is investigated because in KSM, noise may also be induced in the feature space (crucial details regarding this will be discussed in chapter 4).

3.5 Experiments & Results

To compare PCA and KPCA for the de-noising of human motion, various motion sequences were downloaded from the CMU motion capture database. All motions were captured via the VICON marker-based motion capture system [103], and were originally downloaded in the AMC format (section A.1.1). To avoid the singularities and non-linearity of Euler angles (AMC format), the data is converted to its RJC equivalent (section 3.4.1) for training. All experiments were performed on a Pentium 4 with a 2.8 GHz processor. Synthetic Gaussian white noise is added to the motion sequence X_tr in normalized RJC format, and the de-noising qualities compared quantitatively (figure 3.8) and qualitatively in 3D animation playback.

Figure 3.8: Quantitative comparison between PCA and KPCA de-noising of human motion sequences. The level of input synthetic noise (mean square error [mse]) is depicted on the horizontal axes. The corresponding output noise level in the de-noised motion is depicted on the vertical axes.

Figure 3.8 highlights the superiority of KPCA over linear PCA in human motion de-noising. Specifically for the walking and running motion sequences (figure 3.9 and figure 3.10 respectively), the frame by frame comparison further emphasizes the superiority of KPCA (blue line) over PCA (black line). Furthermore, the KPCA algorithm was able to generate realistic and smooth animation when the de-noised motion was mapped to a skeleton model and played back in real time.

For motion de-noising results via PCA and KPCA, please refer to the attached files: videos/motiondenoisingrun.mp4 & videos/motiondenoisingwalk.mp4.
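The quantitative comparison follows a simple protocol, sketched below (illustrative code assuming per-frame pca_denoise and kpca_denoise functions such as those sketched earlier; the noise levels are placeholders, not the values used in the experiments):

```python
def compare_denoisers(X_tr, sigma_levels, pca_denoise, kpca_denoise, seed=3):
    """Add Gaussian white noise to each RJC frame and report the output
    mse of both de-noisers at each input noise level (cf. figure 3.8)."""
    rng = np.random.default_rng(seed)
    mse = lambda Y: np.mean(np.sum((Y - X_tr) ** 2, axis=1))
    results = []
    for sigma in sigma_levels:
        noisy = X_tr + rng.normal(0.0, sigma, size=X_tr.shape)
        results.append((mse(noisy),                                  # input
                        mse(np.array([pca_denoise(f) for f in noisy])),
                        mse(np.array([kpca_denoise(f) for f in noisy]))))
    return results
```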

Figure 3.9: Frame by frame error comparison between PCA and KPCA de-noising of a walk sequence.

Figure 3.10: Frame by frame error comparison between PCA and KPCA de-noising of a run sequence.

PCA de-noising, on the other hand, was unsuccessful due to its linear limitation and resulted in jittery, unrealistic motions similar to the original noisy sequences. A repeat of this experiment (motivated by our work) has also been conducted by Schraudolph et al. (in [92]) with the fast iterative Kernel PCA algorithm, and similar de-noising results are confirmed.

For feature space de-noising of another non-linear toy example (figure 3.11 [center]), the result shows that KPCA can implicitly de-noise noisy feature space data by directly calculating its pre-image.

We are interested in the de-noising of feature space noise because in KSM (chapter 4), input data is initially mapped to the feature space before pose estimation.

Figure 3.11: Diagram to show the results of KPCA in the implicit de-noising of feature space noise of a toy example via the fixed-point algorithm [88]. [Left] Projections of the clean (blue) training data and the noisy (red) test data onto the first 3 principal axes in feature space. [Center] The direct pre-images of the noisy data in the original input space. [Right] The direct pre-images re-projected back to the feature space. Note the reduction in mean squared error between the noisy data [left] and the de-noised data [right].

This result is further supported when the pre-images of the noisy feature space vectors are re-projected back to feature space without any other form of processing via KPCA. Figure 3.11 [right] highlights geometrically and quantitatively the reduction in error of the re-projected points onto the first 3 principal axes in feature space.

Figure 3.12 [top] shows the correlation (though erratic) between feature space noise and pose (RJC) space noise (in mean square error [mse]) for KPCA de-noising of a walk sequence. As expected, a smaller noise level in feature space is an indication of a smaller noise level in the original space, and vice versa. As the processing cost of KPCA is dependent on the training size (equation 3.16), an analysis of this cost with respect to training size was also conducted. Figure 3.12 [bottom] confirms the linear relationship between the de-noising cost of KPCA and its training size.

3.6 Conclusions & Future Directions

Human motion can be considered as a high dimensional set of concatenated vectors (57 dimensions in the RJC format). As human motion is coordinated, it is possible to use fewer dimensions to represent the original data.

Figure 3.12: Feature and pose space mean square error [mse] relationship for KPCA [top]. Average computational de-noising cost for KPCA [bottom].

PCA can encode more than 98% of a simple walking motion in fewer than 10 principal components [14], and it has even been shown successful in learning principal axes for the synthesis of new human motion via optimization [84]. However, just because a set of axes represents a high percentage of the original data does not necessarily mean that it is a good de-noiser of the data. The main point to take into account is that neither PCA nor KPCA is being investigated (in this chapter) for its ability to compress (reduce the dimension of) human motion data. The main issue of concern is to compare the de-noising qualities (on human motion) of PCA and KPCA (and use the results to motivate the use of KPCA de-noising in Kernel Subspace Mapping [chapter 4]).

From the results in figure 3.8, it is clear that KPCA performs significantly better than PCA in human motion de-noising, hence verifying that the simplest forms of human motion (e.g. walking, running) are inherently non-linear. Note that PCA is, in fact, a special case of KPCA with a linear kernel, eliminating the possibility that PCA de-noising can perform better than properly tuned KPCA de-noising. KPCA simply allows the selection of different kernels (which may also have more parameters than PCA), more flexibility in parameter tuning to avoid over/under-fitting, and hence enables custom improvements over linear PCA.

Specifically for KPCA de-noising via the RBF kernel (equation 3.19), it was discovered from experiments that the optimal value for γ (the Gaussian scale factor) plays a substantial role in improving the de-noising quality. This parameter is not available for tuning via the traditional linear PCA de-noising algorithm.

The ability of KPCA to implicitly de-noise data in feature space is also interesting. Consider a situation where noise is induced in feature space, rather than in input space. By directly approximating the pre-images from the noisy data in feature space, it is possible to implicitly de-noise the data in the original space. To reiterate, because the feature space usually exists in a much higher dimension than the input space, most points in the feature space do not have corresponding pre-images. This is even more likely if the (feature space) point is located away from the subspace of the training data, since the feature space is, in fact, defined by the training data itself. In the case where the pre-image does not exist, gradient descent techniques [88] can be used to approximate the most likely pre-image. From the results in figure 3.11, this approximation has the desirable de-noising effect of implicitly projecting the noisy feature space data onto the non-linear subspace (i.e. the toy data set in figure 3.11 [center]) in the input space.

From a markerless motion capture perspective, human motion de-noising is advantageous as it allows the automated learning of human subspaces for specific applications. In chapter 4, the KPCA human motion de-noiser will be integrated into the core of the markerless motion capture technique (called Kernel Subspace Mapping), where noise may be induced in both the input and feature spaces. For real-time motion capture, a factor to take into account for future consideration is the complexity of KPCA projection (figure 3.12 [bottom]), which is O(N), where N is the cardinality of the training set. As motion capture data is usually stored as a set of concatenated vectors captured (sampled) at high frame rates (from 60Hz to 120Hz), training the de-noiser with all the frames from each motion file will be extremely expensive when it comes to motion de-noising (and motion capture).

To this end, the Greedy KPCA algorithm [41], which selects a reduced training set that retains a similar span in feature space, will be reviewed for motion de-noising (chapter 6). In particular, the capture rate (which is dependent on the de-noising rate) can be controlled by modifying the size of the reduced training set via this greedy algorithm.

Chapter 4

Kernel Subspace Mapping (KSM)

This chapter integrates human motion de-noising via KPCA (chapter 3) and the Pyramid Match Kernel [44] into a novel markerless motion capture technique called Kernel Subspace Mapping. The technique learns two feature space representations derived from synthetic silhouette and pose pairs, and views motion capture as the problem of mapping vectors between the two feature subspaces. Quantitative and qualitative motion capture results are presented and compared with other state-of-the-art markerless motion capture algorithms.

4.1 Introduction

Kernel Subspace Mapping (KSM) concentrates on the problem of inferring articulated human pose from concatenated and synchronized human silhouettes. Due to ambiguities when mapping from 2D silhouette to 3D pose space, previous silhouette-based techniques, such as [6, 19, 26, 37, 45, 75, 77], involve complex and expensive schemes to disambiguate between the multi-valued silhouette and pose pairs. A major contributor to the expensive cost of human pose inference is the high dimensionality of the output pose space. To mitigate exhaustive searching in the high dimensional pose space, pose inference can be constrained to a lower dimensional subspace [37, 113]. This is possible due to the high degree of correlation in human motion.

This chapter is based on the conference paper [108] T. Tangkuampien and D. Suter: Real-Time Human Pose Inference using Kernel Principal Component Pre-image Approximations, British Machine Vision Conference (BMVC) 2006, Edinburgh, UK.

In particular, a novel markerless motion capture technique called Kernel Subspace Mapping (KSM), which takes advantage of the correlation in human motion, is introduced. The algorithm, which is based on Kernel Principal Components Analysis (KPCA) [90] (section 3.3), learns two feature subspace representations derived from the synthetic silhouettes and normalized relative joint centers (section 3.4.1) of a single generic human mesh model (figure 4.1). After training, novel silhouettes of previously unseen actors and of unseen poses are projected through the two feature subspaces (learnt from KPCA) via Locally Linear non-parametric mapping [85]. The captured pose is then determined by calculating the pre-image [87] of the projected silhouettes. As highlighted in chapter 3, an advantage of KPCA is its ability to de-noise non-linear data before processing, as shown in [91] with images of handwritten characters. This chapter further explores this concept, and shows how this novel technique can be applied in the area of markerless motion capture. Results in section 4.4 show that KSM can infer relatively accurate poses (compared to [45, 6, 2]) from noisy unseen silhouettes by using only one synthetic human training model. A limitation of the technique is that silhouette data will be projected onto the subspace spanned by the training poses, hence restricting the output to within this pose subspace. This restriction, however, is not crucial, since the system can be initialized with the correct pre-trained data set if prior knowledge of the expected type of motion is available.

The main contributions of this chapter include the introduction of a novel technique for markerless motion capture called Kernel Subspace Mapping (KSM), which is based on mapping between two feature subspaces learnt via KPCA. A novel concept of silhouette de-noising is presented, which allows previously unseen (test) silhouettes to be projected onto the subspace learnt from the generic (training) silhouettes (figure 4.1), hence allowing pose inference using only a single training model (which also leads to a significant decrease in training size and inference time). For mapping from silhouettes to the pose subspace, instead of using standard or robust regression (which was found to be both slower and less accurate in our experiments), non-parametric LLE mapping [85] is applied.

Figure 4.1: Overview of Kernel Subspace Mapping (KSM), a markerless motion capture technique based on Kernel Principal Components Analysis (KPCA) [90].

The silhouette kernel parameters are tuned to optimize the silhouette-to-pose mapping by minimizing the LLE mapping error (section 4.3.3). Finally, by mapping silhouettes to the pose feature subspace, the search space can be implicitly constrained to a set of valid poses, whilst taking advantage of well established and optimized pre-image (inverse mapping) approximation techniques, such as the fixed-point algorithm of [87] or the gradient optimization technique of [39].

4.2 Markerless Motion Capture: A Mapping Problem

As stated previously, Kernel Subspace Mapping (KSM) views markerless motion capture as a mapping problem. Given silhouettes of a person as input, the algorithm maps the pixel data to a pose space defined by the articulated joints of the human body. The static pose of a person (in the pose space) is encoded using the normalized relative joint centers (RJC) format (section 3.4.1), where a pose (at a time instance) is denoted by

x = [\, p_1, p_2, \, \ldots \, p_M \,]^T, \quad x \in R^{3M}, \qquad (4.1)

where p_k represents the normalized spherical surface point encoding of the k-th joint relative to its parent joint in the skeletal hierarchy (appendix A.1). Regression is not performed from silhouette to Euler pose vectors as in [2, 6] because the mapping from pose to Euler joint coordinates is non-linear and multi-valued. Any technique based on standard linear algebra and convex optimization, like KPCA and regression, will therefore eventually break down when applied to vectors consisting of Euler angles (as it may potentially map the same 3D joint rotation to different locations in vector space). To avoid these problems, the KPCA de-noising subspace is learnt from (training) pose vectors in the relative joint center format (section 3.4.1).

Figure 4.2: Scatter plot of the RJC projections onto the first 4 kernel principal components in M_p of a walk motion which is fully rotated about the vertical axis. The 4th dimension is represented by the intensity of each point.

The pose feature subspace M_p (figure 4.2) is learnt from the set of training poses. Similarly, for the silhouette space, synchronized and concatenated silhouettes (synthesized from the poses x_i) are preprocessed into hierarchical shape descriptors Ψ_i (section 4.2.3) using a technique similar to the pyramid match kernel of [44]. The preprocessed hierarchical training set is embedded into KPCA, and the silhouette subspace M_s learnt (figure 4.1 [bottom left]). The system is tuned to minimize the error of the LLE non-parametric mapping [85] from M_s to M_p (section 4.3). During capture, novel silhouettes of unseen actors (figure 4.1 [top left]) are projected through the two subspaces, before mapping to the output pose space using pre-image approximation techniques [87, 39]. The crucial steps are explained fully in the remainder of this chapter.

4.2.1 Learning the Pose Subspace via KPCA

From a motion capture perspective, pose subspace learning is the same as learning the subspace of human motion for de-noising (chapter 3). For the latter (motion de-noising), a noisy pose vector is projected onto the (kernel) principal components for de-noising in the feature subspace M_p. Thereafter, the pre-image of the projected vector is approximated, hence mapping back to the original space (figure 4.3 [KPCA De-noiser: black rectangle]).

Figure 4.3: Diagram to highlight the relationship between human motion de-noising (chapter 3), Kernel Principal Components Analysis (KPCA) and Kernel Subspace Mapping (KSM).

For markerless motion capture (figure 4.3 [KSM: red rectangle]), KSM initially learns the projection subspace M_p as in KPCA de-noising. However, instead of beginning in the normalized RJC (pose) space, the input is derived from concatenated silhouettes (figure 4.3 [left]) and mapped (via a silhouette feature subspace M_s) to the pose feature subspace M_p. Thereafter, KSM and KPCA de-noising follow the same path in pre-image approximation. To update the notation for KSM: for a motion sequence, the training set of RJC pose vectors is denoted as X_tr. The KPCA projection of a novel pose vector x onto the k-th

principal axis V_p^k in the pose feature space can be expressed implicitly via the kernel trick as

V_p^k \cdot \Phi(x) = \sum_{i=1}^{N} \alpha_i^k \, \Phi(x_i) \cdot \Phi(x) = \sum_{i=1}^{N} \alpha_i^k \, k_p(x_i, x),    (4.2)

where α refers to the Eigenvectors of the centered RJC kernel matrix (equation 3.16). In this case, the radial basis Gaussian kernel

k_p(x_i, x) = \exp\left( -\gamma_p \, (x_i - x)^T (x_i - x) \right)    (4.3)

is used because of the availability of a well established and tested fixed point pre-image approximation algorithm [87] (the reason for this selection will become clear later in section 4.2.2). To avoid confusion, the symbol k_p(·,·) will be used explicitly for the pose space kernel and k_s(·,·) for the silhouette kernel [the kernel k_s(·,·) will be discussed in section 4.2.3]. As in section 3.3.2, there are two free parameters of k_p(·,·) in need of tuning, these being γ_p, the Euclidian distance scale factor, and η_p, the optimal number of principal axis projections to retain in the pose feature space. The KPCA projection (onto the first η_p principal axes) of x_i is denoted as v_i^p, where

v_i^p = \left[\, V_p^1 \cdot \Phi(x_i), \ldots, V_p^{\eta_p} \cdot \Phi(x_i) \,\right]^T, \quad v^p \in \mathbb{R}^{\eta_p}.    (4.4)

For a novel input pose x, the KPCA projection is simply signified by v^p.

4.2.2 Pose Parameter Tuning via Pre-image Approximation

In order to understand how to optimally tune the KPCA parameters γ_p and η_p for the pose feature subspace M_p, the context of how M_p is applied in KSM must be highlighted (figure 4.1 [bottom right]). Ideally, a tuned system should minimize the inverse mapping error from M_p to the original pose space in normalized RJC format (figure 4.1 [top right]). In the context of KPCA, if we encode each pose as x and its corresponding KPCA

projected vector as v^p, the inverse mapping from v^p to x is commonly referred to as the pre-image [87] mapping. Since a novel input will first be mapped from M_s to M_p, the system needs to determine its inverse mapping (pre-image) from M_p to the original pose space. Therefore, for this specific case, the Gaussian kernel parameters γ_p and η_p are tuned using cross-validation to minimize the pre-image reconstruction error (as in equation 3.21). Cross validation prevents over-fitting and ensures that the pre-image mapping generalizes well to unseen poses, which may be projected from the silhouette subspace M_s. An interesting advantage of using pre-image approximation for mapping is its implicit ability to de-noise if the input vector (which is perturbed by noise) lies outside the learnt subspace M_p. This is essentially the same problem as that encountered by Elgammal and Lee in [37], which they solved by fitting each separate manifold to a spline and rescaling before performing a one dimensional search (for the closest point) on each separate manifold. For Kernel Subspace Mapping, however, it is usual that the noisy input (from M_s) lies outside the clean subspace, in which case the corresponding pre-image of v^p usually does not exist [91]. In such a scenario, the algorithm will implicitly locate the closest point in the clean subspace corresponding to such an input, without the need to explicitly search for the optimal point.

4.2.3 Learning the Silhouette Subspace

This section shows how to optimally learn the silhouette subspace M_s, which is a more complicated problem than learning the pose subspace M_p (section 4.2.1). Efficiently embedding a silhouette distance into KPCA is more complex and expensive because silhouettes exist in a much higher dimensional image space. The use of the Euclidian distance between vectorized images, which is common but highly inefficient, is therefore avoided. For KPCA, an important factor that must be taken into account when deciding on a silhouette kernel (to define the distance between silhouettes) is whether the kernel is positive definite and satisfies Mercer's condition [87]. In KSM, we use a modified version of the pyramid match kernel [44] (to define silhouette distances), which has already been proven positive definite. The main difference (between the original pyramid match kernel [44] and

the one used in KSM) is that instead of using feature points sampled from the silhouette's edges (as in [44]), KSM considers all the silhouette foreground pixels as feature points. This has the advantage of skipping the edge segmentation and contour sampling steps. During training (figure 4.4), virtual cameras (with the same extrinsic parameters as the real cameras to be used during capture) are set up to capture synthetic training silhouettes. Note that more cameras can be added by concatenating more silhouettes into the image.

Figure 4.4: Example of a training pose and its corresponding concatenated synthetic image using 2 synchronized cameras.

Each segmented silhouette is normalized by first rotating the silhouette's principal axis to align with the vertical axis, before cropping and concatenation (figure 4.4). Thereafter, a simplified recursive multi-resolution approach is applied to encode the concatenated silhouette (figure 4.5). At each resolution (level), the silhouette area ratio is registered in the silhouette descriptor Ψ.
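As an illustration of the area-ratio encoding just described, the following sketch computes a five level descriptor by recursively splitting the concatenated silhouette into a quad-tree grid. The exact subdivision scheme used in the thesis is an assumption here, but a quad-tree is consistent with the 341-dimensional descriptor reported below (image dimensions are assumed divisible by 16).

```python
import numpy as np

def pyramid_descriptor(silhouette, levels=5):
    # silhouette: 2D binary array (concatenated, normalized views).
    # At level l the image is split into a 2**l x 2**l grid and the
    # foreground-area ratio of every cell is recorded, giving
    # 1 + 4 + 16 + 64 + 256 = 341 features for a five level pyramid.
    h, w = silhouette.shape
    features = []
    for level in range(levels):
        cells = 2 ** level
        ch, cw = h // cells, w // cells
        for r in range(cells):
            for c in range(cells):
                cell = silhouette[r*ch:(r+1)*ch, c*cw:(c+1)*cw]
                features.append(cell.mean())   # foreground area ratio
    return np.array(features)
```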

Figure 4.5: Diagram to summarize the silhouette encoding for 2 synchronized cameras. Each concatenated image is preprocessed to Ψ before projection onto M_s.

A five level pyramid is implemented, which results in a 341 dimensional silhouette descriptor. To compare the difference between two concatenated images, their respective silhouette descriptors Ψ_i and Ψ_j are compared using the weighted distance

D_\Psi(\Psi_i, \Psi_j) = \sum_{f=1}^{F} \frac{1}{L(f)} \left\{ \left| \Psi_i(f) - \Psi_j(f) \right| - \gamma_{L(f)+1} \right\}.    (4.5)

The counter L(f) denotes the current level of the sub-image f in the pyramid, with the smallest sub-images located at the bottom of the pyramid and the original image at the top. In order to minimize segmentation and silhouette noise (located mainly at the top levels), the lower resolution images are biased by scaling each level comparison by 1/L(f). As the encoding process moves downwards from the top of the pyramid to the bottom, it must continually update the cumulative mean area difference \gamma_{L(f)+1} at each level. This is because at any current level L(f), only the differences in features that have not already been recorded at the levels above it are recorded, hence the subtraction of \gamma_{L(f)+1}. To embed D_\Psi(\Psi_i, \Psi_j) into KPCA, the Euclidian distance in k_p is replaced with the weighted distance, resulting in the silhouette kernel

k_s(\Psi_i, \Psi) = \exp\left( -\gamma_s \, D_\Psi(\Psi_i, \Psi)^2 \right).    (4.6)

For a five level pyramid, the feature vector dimension is determined by the following formula: \sum_{l=0}^{4} 4^l = 1 + 4 + 16 + 64 + 256 = 341.
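The weighted distance of equation 4.5 and the kernel of equation 4.6 might be implemented as sketched below. The exact bookkeeping of the cumulative mean area difference γ_{L(f)+1} admits more than one reading, so this is one plausible interpretation, with level 1 taken as the full image and descriptor entries ordered coarse to fine as in the sketch above.

```python
import numpy as np

def pyramid_distance(psi_i, psi_j, levels=5):
    # One plausible reading of equation 4.5: sub-images of level l
    # occupy a contiguous block of 4**(l-1) descriptor entries; gamma
    # holds the cumulative mean area difference already credited at
    # the previous (coarser) level.
    diff = np.abs(psi_i - psi_j)
    start, gamma, dist = 0, 0.0, 0.0
    for l in range(1, levels + 1):
        n = 4 ** (l - 1)
        block = diff[start:start + n]
        dist += np.sum(block - gamma) / l   # 1/L(f) level weighting
        gamma = block.mean()                # cumulative mean for next level
        start += n
    return dist

def silhouette_kernel(psi_i, psi_j, gamma_s):
    # Gaussian embedding of the pyramid distance (equation 4.6).
    return np.exp(-gamma_s * pyramid_distance(psi_i, psi_j) ** 2)
```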

Using the same implicit technique as that of equation 4.2, KPCA silhouette projection is achieved by using k_s(·,·), and the projection (onto the first η_s principal axes) of Ψ_i is denoted as

v_i^s = \left[\, V_s^1 \cdot \Phi(\Psi_i), \ldots, V_s^{\eta_s} \cdot \Phi(\Psi_i) \,\right]^T, \quad v^s \in \mathbb{R}^{\eta_s},    (4.7)

where V_s^k \cdot \Phi(\Psi_i) = \sum_{j=1}^{N} \gamma_j^k \, k_s(\Psi_j, \Psi_i), with γ representing the Eigenvectors of the corresponding centered silhouette kernel matrix.

4.3 Locally Linear Subspace Mapping

Having obtained M_s and M_p, markerless motion capture can now be viewed as the problem of mapping from the silhouette subspace M_s to the pose subspace M_p. Using P_tr to denote the KPCA projected set of training poses (where P_tr = [v_1^p, v_2^p, ..., v_M^p]) and, similarly, letting S_tr denote the projected set of training silhouettes (where S_tr = [v_1^s, v_2^s, ..., v_M^s]), the subspace mapping can now be summarized as follows: Given S_tr (the set of training silhouettes projected onto M_s via KPCA using the pyramid kernel) and its corresponding pose subspace training set P_tr in M_p, how do we learn a mapping from M_s to M_p such that it generalizes well to previously unseen silhouettes projected onto M_s at run-time? Referring to figure 4.6, the projection of the (novel) input silhouette during capture (onto M_s) is denoted by s_in, and the corresponding pose subspace representation (in M_p) is denoted by p_out. The captured (output) pose vector x_out (representing the joint positions of a generic mesh in normalized RJC format) is approximated by determining the pre-image of p_out. The (non-parametric) LLE mapping [85] is used to transfer projected vectors from M_s to M_p. This only requires the first 2 (efficient) steps of LLE:

1. Neighborhood selection in M_s and M_p.
2. Computation of the weights for neighborhood reconstruction.

Figure 4.6: Markerless motion capture re-expressed as the problem of mapping from the silhouette subspace M_s to the pose subspace M_p. Only the projections of the training sets (S_tr and P_tr) onto the first 4 principal axes in their respective feature spaces are shown (point intensity is the 4th dimension). Note that the subspaces displayed here are those of a walk motion fully rotated about the vertical axis. Different sets of motion will obviously have different subspace samples; however, the underlying concept of KSM remains the same.

The goal of LLE mapping for KSM is to map s_in (in M_s) to p_out (in M_p), whilst trying to preserve the local isometry of its nearest K neighbors in both subspaces. Note that, in this case, we are not trying to learn a complete embedding of the data, which would require the inefficient third step of LLE, of complexity O(dN^2), where d is the dimension of the embedding and N is the training size. Crucial details regarding the first two steps of LLE, as used in KSM, are now summarized.

Figure 4.7: Diagram to summarize the mapping of s_in from the silhouette subspace M_s to p_out in the pose subspace M_p via non-parametric LLE mapping [85]. Note that the reduced subsets S_lle and P_lle are used in mapping novel input silhouette feature vectors.

4.3.1 Neighborhood Selection for LLE Mapping

From initial visual inspection of the two projected training sets S_tr and P_tr in figure 4.6, there may not appear to be any similarity. Therefore, the selection of the nearest K neighbors (of an input silhouette captured at run-time) by simply using Euclidian distances in M_s is not ideal. This is because the local neighborhood relationships between the training silhouettes themselves do not appear to be preserved between M_s and M_p, let alone the neighborhood relationships of a novel silhouette. For example, to understand how the distortion in M_s may have been generated, the silhouette kernel k_s is used to embed two different RJC pose subspace vectors p_A and p_B which have similar silhouettes (figure 4.8). The silhouette kernel k_s, in this case, will map both silhouettes to positions close together (represented as s_A and s_B) in M_s, even though they are far apart in M_p.

Figure 4.8: Diagram to show how two different poses p_A and p_B may have similar concatenated silhouettes in image space.

As a result, an extended neighborhood selection criterion, which takes into account temporal constraints, must be enforced. That is, given the unseen silhouette projected onto M_s, training exemplars that are neighbors in both M_s and M_p must be identified. Euclidian distances between the training vectors in S_tr and s_in can still identify neighbors in M_s. The problem lies in finding local neighbors in P_tr, since the system does not yet know the projected output pose vector p_out (this is what KSM is trying to determine

in the first place). However, a rough estimate of p_out can be predicted by tracking in the subspace M_p via a predictive tracker, such as the Kalman Filter [53]. From experiments, it was concluded that tracking using linear extrapolation is sufficient in KSM for motion capture. This is because tracking is only used to eliminate potential neighbors which may be close in M_s but far apart (from the tracked vector) in M_p. Therefore, no accurate Euclidian distance between the training exemplars in P_tr and the predicted pose needs to be calculated. In summary, to select the K neighbors needed for LLE mapping, the expected pose is predicted using linear extrapolation in M_p. A subset of training exemplars nearest to the predicted pose is then selected (from P_tr) to form the reduced subsets P_lle (in M_p) and S_lle (in M_s). From the reduced subsets, the closest K neighbors to s_in (using Euclidian distances in M_s) can be identified and p_out reconstructed from the linked neighbors in M_p.

4.3.2 LLE Weight Calculation

Based on the filtered neighbor subset S_lle (with K exemplars identified as in the previous section), the weight vector w^s is calculated so as to minimize the following reconstruction cost function:

\varepsilon(s_{in}) = \left\| s_{in} - \sum_{j=1}^{\kappa} w_j^s \, s_j^{lle} \right\|^2.    (4.8)

LLE mapping also enforces that \sum_{j=1}^{\kappa} w_j^s = 1. Saul and Roweis [85] showed that w^s can be determined by initially calculating the symmetric (and semi-positive definite) local Gram matrix H, where

H_{ij} = (s_{in} - s_i^{lle}) \cdot (s_{in} - s_j^{lle}).    (4.9)

From experiments, subsets consisting of 10% to 40% of the entire training set are usually sufficient for filtering out silhouette outliers. The subset size also does not affect the output pose (as visually determined by the naked eye) whilst in this range, and it is therefore not included as a parameter in the tuning process (section 4.3.3).

Solving for w_j^s from H is a constrained least squares problem, which has the following closed-form solution:

w_j^s = \frac{\sum_k H_{jk}^{-1}}{\sum_{lm} H_{lm}^{-1}}.    (4.10)

To avoid explicit inversion of the Gram matrix H, w_j^s can efficiently be determined by solving the linear system of equations \sum_k H_{jk} w_k^s = 1 and re-scaling the weights to sum to one [86]. From w^s, the projected pose subspace representation can be calculated as

p_{out} = \sum_{j=1}^{\kappa} w_j^s \, p_j^{lle}, \quad p_{out} \in M_p,    (4.11)

where p_j^{lle} is an instance of P_lle (the pose subspace representation of S_lle in M_p). From p_out (which exists in M_p), the captured (pre-image) pose x_out (in the normalized RJC pose space) is approximated via the fixed point algorithm of Schölkopf et al [87].

4.3.3 Silhouette Parameter Tuning via LLE Optimization

Similar to M_p, the two free parameters γ_s and η_s (in equation 4.6 and equation 4.7) need to be tuned for the silhouette subspace M_s. In addition, an optimal value for K, the number of neighbors for LLE mapping, must also be selected. Referring back to figure 4.1, the parameters should be tuned to optimize the non-parametric LLE mapping from M_s to M_p. The same concept as in section 4.2.2 is applied, but instead of using pre-image approximations in tuning, the parameters γ_s, η_s and K are tuned to optimize the LLE silhouette-to-pose mapping. Note that due to the use of an unconventional kernel in silhouette KPCA projection, it is unlikely that a good pre-image can be approximated. However, in KSM for motion capture, there is no need to map back to the silhouette input space, and hence no requirement to determine the pre-image in silhouette space. The silhouette tuning is achieved by minimizing the LLE reconstruction cost function

C_s = \frac{1}{N} \sum_{i=1}^{N} \left\| v_i^p - \sum_{j=1}^{K} w_{ij}^s \, v_j^p \right\|^2,    (4.12)

where j indexes the K neighbors of v_i^p. The mapping weight w_{ij}^s, in this case, is the weight factor of v_i^s that can be encoded in v_j^s (as determined by the first two steps of LLE in [85] and the reduced training set from M_p) using the Euclidian distance in M_s. To ensure that the tuned parameters generalize well to unseen novel inputs, training is again performed using cross validation on the training set. Once the optimal parameters are selected for both M_s and M_p, the system is ready for capture by projecting the input silhouettes through the learnt feature subspaces.
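A minimal sketch of the weight computation and subspace mapping of equations 4.9 to 4.11 is given below (numpy; the small regularization term is an assumption added to guard against a singular Gram matrix).

```python
import numpy as np

def lle_map(s_in, S_lle, P_lle, reg=1e-8):
    # s_in  : projected input silhouette (in M_s)            shape (eta_s,)
    # S_lle : K filtered silhouette neighbors (rows, in M_s) shape (K, eta_s)
    # P_lle : their linked pose-subspace exemplars (in M_p)  shape (K, eta_p)
    D = s_in - S_lle                 # differences to each neighbor
    H = D @ D.T                      # local Gram matrix (equation 4.9)
    H += reg * np.trace(H) * np.eye(len(S_lle))   # guard against singular H
    # Solve H w = 1 and rescale, avoiding explicit inversion (cf. eq. 4.10).
    w = np.linalg.solve(H, np.ones(len(S_lle)))
    w /= w.sum()
    # Reconstruct the pose-subspace representation (equation 4.11).
    return w @ P_lle
```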

4.4 Experiments & Results

Section 4.4.1 presents experiments which compare the Pyramid Match kernel [44] (adopted in KSM) and the Shape Context [11] (which is a popular choice for silhouette-based markerless motion capture [75, 6]) in defining correspondence between human silhouettes. Motion capture results for Kernel Subspace Mapping (KSM) from synthetic and real data are presented in sections 4.4.2 and 4.4.3 respectively. The system is trained with a generic mesh model (figure 4.4) using motion captured data from the Carnegie Mellon University motion capture database [32]. Note that even though the system is trained with a generic model, the technique is still model free, as no prior knowledge nor manual labelling of the test model is required. All concatenated silhouettes are preprocessed and resized to a fixed pixel resolution.

Figure 4.9: Illustration to summarize markerless motion capture and the training and testing procedures. The generic mesh model (figure 4.10 [center]) is used in training, whilst a previously unseen mesh model is used for testing.

4.4.1 Pyramid Match Kernel for Motion Capture

This section aims to show that the pyramid kernel for human silhouette comparison (section 4.2.3) is as descriptive as the shape context [11] (which has already been shown successful in silhouette-based markerless motion capture techniques such as [6, 75]). A set of synthetic silhouettes of a generic mesh model walking (figure 4.11 [right]) is used to test

Figure 4.10: Selected images of the different models used to test Kernel Subspace Mapping (KSM) for markerless motion capture. [left] Biped model used to create deformable meshes for training and testing. [center] Generic model used to generate synthetic training silhouettes. [right] Example of a previously unseen mesh model, which can be used for synthetic testing purposes (appendix B).

the efficacy of the modified pyramid match kernel (section 4.2.3) on human silhouettes. For comparison, the Shape Context of the silhouettes is also calculated and the silhouette correspondence recorded (figure 4.11 [left]). From the experiments, our pyramid kernel is as expressive as the Shape Context for human silhouette comparison [11, 75], and its cost is (only) linear in the number of silhouette features (i.e. the set's cardinality).

Figure 4.11: Comparison between the shape context descriptor (341 sample points) and the optimized pyramid match kernel (341 features) for walking human silhouettes. The silhouette distances are given relative to the first silhouette at the bottom left of each silhouette bar.

To further test the kernel, a synthetic sequence of a walk motion fully rotated about the vertical [yaw] axis is used. In this case, the corresponding normalized RJC data (section 3.4.1) is used as the clean data (for comparison). From a markerless motion capture perspective, the goal is to infer the RJC vectors from the noisy input silhouettes. Therefore, a good silhouette descriptor must be able to define shape correspondences that show a visual similarity with the RJC equivalent. The kernel matrix (using a Gaussian RBF kernel) of the corresponding RJC data (equation 3.19) is shown in figure 4.12 [left]. The kernel matrix generated by the pyramid kernel from the corresponding silhouettes is shown in figure 4.12 [right]. The silhouettes are captured with two synchronized virtual cameras (orthogonal to each other) and concatenated into a single feature image before processing. Note that a single synthetic camera can be used. However, due to ambiguities in the 3D to 2D projection of silhouettes, it is difficult to generate a visually similar kernel matrix (to the RJC matrix) via a single camera, let alone infer accurate pose and yaw direction for motion capture. In figure 4.12, the similarities in the diagonal intensity bands indicate a correlation between the two tuned kernels.

Figure 4.12: Intensity images of the normalized RJC pose space kernel (left) and the Pyramid Match kernel of silhouettes captured from synchronized cameras (right), for a training walk sequence (fully rotated about the vertical axis). Similarities in the diagonal intensity bands indicate a correlation between the two tuned kernels.

4.4.2 Quantitative Experiments with Synthetic Data

In order to test KSM quantitatively, novel motions (similar to the training set) are used to animate a previously unseen mesh model of the author (appendix B), and their corresponding synthetic silhouettes are captured for use as control test images. Using a walking training set of 323 exemplars, KSM is able to infer novel poses at an average speed of … seconds per frame on a Pentium™ 4 with a 2.8 GHz processor. The captured pre-image pose is compared with the original pose that was used to generate the synthetic silhouettes (figure 4.13). At this point, it should be highlighted that the test mesh model is different to the training mesh model, and all our test images are from an unseen viewing angle or pose (though the relative camera-to-camera positions should be the same as used in training).

Figure 4.13: Visual comparison of the captured pose (red with O joints) and the original ground truth pose (blue with * joints) used to generate the synthetic test silhouettes.

For 1260 unseen test silhouettes of a walking sequence, which is captured from different yaw orientations, KSM (with prior knowledge of the starting pose) can achieve accurate

[footnote] For motion capture results of synthetic walking sequences via KSM, please refer to the attached file: videos/syntheticmotioncapture.mp4.

pose reconstructions with an average error of 2.78° per joint.

Figure 4.14: KSM capture error (degrees per joint) for a synthetic walk motion sequence. The entire sequence has a mean error of 2.78° per joint.

For comparison with other related work [75], this error may reduce to less than 2° of error per Euler degree of freedom [d.o.f.] (each 3D rotation is represented by a set of 3 Euler rotations). Agarwal and Triggs in [2] were able to achieve a mean angular error of 4.1° per d.o.f., but their approach requires only a single camera. We have intentionally recorded our errors in terms of degrees per joint, and not in Euler degrees of freedom (as in [2]), because for each 3D rotation there are many possible sets of Euler rotation encodings. Therefore, using the Euler degree of freedom to encode error can result in scenarios where the same 3D rotation can be interpreted as different, hence inducing false positives. Our technique (which uses a reduced training set of 343 silhouettes and 2 synchronized cameras) also shows visually comparable results to the technique proposed by Grauman et al in [45], which uses a synthetic training set of 20,000 silhouettes and 4 synchronized cameras. In [45], the pose error is recorded using the Euclidian distance between the joint centers in real world scale. We believe that this error measurement is not normalized, in the sense that, for a similar motion sequence, a taller person (with longer limbs) will more likely record a larger average error than a shorter person (due to the larger variance of the joint located at the end of each limb). Nevertheless, to enable future comparison, we

[footnote] We note that the error relationship between a 3D rotation and the set of 3 Euler rotations (encoding a 3D rotation) is complicated. In this case, only an approximation has been made to allow error comparison with other markerless motion capture approaches.

have presented our real world distance results (as in [45]) in combination with the test model's height. For a test model with a height of 1.66 m, KSM can estimate pose with a mean (real world) error of approximately 2.02 cm per joint (figure 4.15). The technique proposed by Grauman et al in [45] reported an average pose error of 3 cm per joint when using the full system of 4 cameras. A fairer comparison can be achieved when the two approaches (KSM and [45]) both use 2 synchronized cameras, in which case the approach in [45] reported an average pose estimation error of over 10 cm per joint.

Figure 4.15: KSM capture error (cm/joint) for a synthetic walk motion sequence. The entire sequence has a mean error of 2.02 cm per joint.

To further test the robustness of KSM, binary salt & pepper noise with different noise densities is added to the original synthetic data set (as in figure 4.14). Noise densities of 0.2, 0.4 and 0.6 were added to the test silhouettes, and mean errors of 2.99°, 4.45° and …° per joint were attained respectively. At this point, we stress that KSM is now being tested for its ability to handle a combination of silhouette noise (i.e. the test silhouettes are different to the training silhouettes) and pixel noise (which is used to simulate scenarios with poor silhouette segmentation). An interesting point to note is that the increase in noise level does not equally affect the inference error for each silhouette; rather, the noise increases the error significantly for a minority of the poses (i.e. the peaks in figure 4.16). This indicates that the robustness of KSM can be improved substantially by introducing some form of temporal smoothing to minimize these peaks in error.
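For reference, the salt & pepper corruption used in these robustness tests might be generated as follows; the convention of flipping a `density` fraction of pixels, half to foreground and half to background, is an assumption about how the densities above were realized.

```python
import numpy as np

def add_salt_and_pepper(silhouette, density, rng=None):
    # Corrupt a binary silhouette: a `density` fraction of pixels is
    # overwritten, half with foreground ("salt"), half with background
    # ("pepper").
    rng = np.random.default_rng() if rng is None else rng
    noisy = silhouette.copy()
    mask = rng.random(silhouette.shape) < density
    salt = rng.random(silhouette.shape) < 0.5
    noisy[mask & salt] = 1
    noisy[mask & ~salt] = 0
    return noisy
```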

Figure 4.16: KSM capture error for a walking motion (fully rotated about the vertical axis) with different noise densities (Salt & Pepper noise). The errors are recorded in degrees per joint.
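Since errors are reported in degrees per joint, the metric presumably measures the angle between corresponding relative joint directions in the RJC encoding (section 3.4.1); the following is a sketch under that assumption, with illustrative names.

```python
import numpy as np

def mean_joint_angle_error(x_est, x_true, n_joints):
    # x_est, x_true : RJC pose vectors of length 3 * n_joints, assumed to
    # hold one direction per joint (relative to its parent joint).
    a = x_est.reshape(n_joints, 3)
    b = x_true.reshape(n_joints, 3)
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    cosang = np.clip(np.sum(a * b, axis=1), -1.0, 1.0)
    return np.degrees(np.arccos(cosang)).mean()   # degrees per joint
```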

4.4.3 Qualitative Experiments with Real Data

For testing with real data, silhouettes of a spiral walk sequence were captured using simple background subtraction. Two perpendicular cameras were set up (with the same extrinsic parameters as the training cameras) without precise measurements. Due to the simplicity of the setup, segmentation noise (as a result of shadows and varying light) was prominent in our test sequences. Selected results are shown in figure 4.17. Synthetic salt & pepper noise was also added to the concatenated real silhouettes to test the robustness of the system (figure 4.18). The ability to capture motion under these varying conditions demonstrates the robustness of KSM.

Figure 4.17: Kernel Subspace Mapping motion capture on real data using 2 synchronized un-calibrated cameras. All captured poses are mapped to a generic mesh model and rendered from the same angles as the cameras.

[footnote] For motion capture results of real walking sequences (noisy and clean silhouettes) via KSM, please refer to the attached files: videos/realmotioncapture.mp4 and videos/realmotioncapturenoisy.mp4.

For animation, the captured poses were mapped to a generic mesh model, and the technique was able to create smooth realistic animation in most parts of the sequence. In other parts (approximately 12% of the captured animation's time frame), the animation was unrealistic as inaccurate captured poses were being appended, the presence of which is further exaggerated in animation. However, even though an incorrect pose may lead to unrealistic animation, it still remains within the subspace of realistic poses when viewed statically (by itself). This is because all output poses are constrained to lie within the subspace spanned by the set of realistic training poses in the first place.

Figure 4.18: Selected motion capture results to illustrate the robustness of KSM. The system was able to infer pose (at lower accuracy) in the presence of Gaussian noise in the segmented images. The algorithm was able to capture visually accurate poses (approximately 80% of the test set comprising 500 exemplars) from most concatenated silhouette images (top two images and bottom left). In some cases, KSM outputs a visually incorrect pose vector (bottom right). However, because the pose was generated in the subspace of possible walking motion, even though the pose is visually inaccurate when compared to the silhouettes, it is still a pose of a person in a walking motion.

4.5 Conclusions & Future Directions

This chapter introduces Kernel Subspace Mapping (KSM), a novel markerless motion capture technique that can capture 3D human pose using un-calibrated cameras. The result

shows that neither detailed measurements nor initialization of the cameras are required. The main advantages of the approach are its simplicity of setup, its speed and its robustness. Furthermore, the technique generalizes well to unseen examples and is able to generate realistic poses that are not in the original database, even when using a single generic model for training. The motion capture system, which is model free, does not require prior knowledge of the actor nor manual labelling of body parts. This makes our un-calibrated system well suited for real-time and low cost human computer interaction (HCI), as no accurate initialization is required. Furthermore, the technique is still able to accurately track and estimate full pose from silhouettes with high levels of noise (figure 4.16). To the author's knowledge, there have not been any previously proposed silhouette-based motion capture approaches which can robustly handle these levels of noise. From the motion capture results (where KSM has shown the ability to effectively infer pose from both synthetic and real silhouettes, and also in the presence of noise), we believe KSM to be a motion capture technique that works very well at estimating 3D pose from binary (human) silhouettes. It is important to elaborate on other interesting aspects of KSM, such as the ability of KSM to handle gray scale images (as opposed to only binary images), as well as how KSM can take into account pixel relationships along the image plane. By taking into account the relationship of pixels along the image plane, a better distance kernel can be defined (for KSM), which allows improvements in the areas of training set reduction and more accurate mapping results. In chapter 5, we investigate this concept on the problem of view-point estimation of 3D rigid objects from gray scale images. There is a good reason why KSM has not been applied to human pose inference from gray scale images. This is because, by taking into account the photometric information of an actor at training, the system will be constrained to the capture of only that actor at run-time. Furthermore, during capture, the actor will need to be wearing the same clothing that he/she was wearing at the time the training sequences were captured. The background pattern would also need to remain the same between training and during

motion capture. This limits the ability of KSM to generalize to the capture of other unseen actors, as well as preventing KSM from being trained off-site (i.e. KSM cannot be trained in one place and used at a later stage to capture pose in an environment with a different background). Off-site training is possible for silhouettes because normalized silhouettes (i.e. after alignment, cropping and resizing [section 4.2.3]) are relatively similar irrespective of their environment (provided that acceptable silhouette segmentation can be attained). Specifically with the goal of improving the capture rate of KSM in mind, an interesting aspect to further investigate is how the size of the training set affects the computational cost of pose estimation. As highlighted earlier, because the cost of KPCA projection (a component in KSM) is dependent on the cardinality of the training set (section 3.3), it may be possible to further improve the KPCA (motion) de-noising rate by using a smaller training set (than the original training set). Furthermore, if it is possible to find a way to select only the best data that represent the de-noising subspace (and discard the remainder), then an increase in capture rate can be achieved, perhaps at the expense of a relatively minor decrease in de-noising and (motion) capture quality. Chapter 6 investigates this concept of training set reduction (for human motion de-noising) via the Greedy KPCA algorithm [41].

Chapter 5

Image Euclidian Distance (IMED) embedded KSM

The previous chapter shows how Kernel Subspace Mapping (KSM) (chapter 4) can be applied in markerless motion capture to infer high dimensional pose from concatenated binary images. This chapter further confirms the flexibility of KSM by extending its application to the analysis of monocular intensity images (via a 3D object view-point estimation problem [figure 5.1]). The novel application of the Image Euclidian Distance (IMED) [116] in KPCA and KSM is introduced. We show how IMED (which takes into account the spatial relationship of pixels and their intensities) allows KSM to accurately estimate viewpoints (less than 4° error) via the use of a sparse training set (as low as 30 training images). This chapter aims to demonstrate that KSM embedded with the Image Euclidian Distance (IMED) [116] is a more accurate technique (and one with a sparser solution set) than KSM using the vectorized Euclidian distance (i.e. raster scan an image into a vector and calculate the standard Euclidian distance) and other previously proposed approaches in 3D viewpoint estimation [80, 119, 47, 78].

[footnote] This chapter is based on the conference paper [106] T. Tangkuampien and D. Suter: 3D Object Pose Inference via Kernel Principal Component Analysis with Image Euclidian Distance (IMED): British Machine Vision Conference (BMVC) 2006, pages …, Edinburgh, UK.

5.1 Introduction

Kernel techniques such as Kernel Principal Component Analysis (KPCA) [87] and Support Vector Machines (SVM) [114] have both been shown to be powerful non-linear techniques in the analysis of pixel patterns in images [90, 91, 40]. In supervised SVM image classification [39], images are usually vectorized before embedding (via a kernel function) for discrete classification in a high dimensional feature space. In unsupervised KPCA [90, 87], vectorized images are also embedded into a high dimensional feature space, but instead of determining optimal separating hyperplanes, linear Principal Component Analysis (PCA) is performed. Both KPCA and SVM take advantage of the kernel trick (in order to avoid explicitly mapping input vectors), which involves defining the distance between two vectors. In both cases, the traditional vectorized Euclidian distance is used for embedding. Each pixel, in this case, is usually considered as an independent dimension, and therefore these approaches do not take into account the spatial relationship of nearby pixels along the image plane. This chapter presents results indicating that the Image Euclidian Distance (IMED) [116] (which takes into account spatial pixel relationships on the image plane) is a better distance criterion (than the vectorized Euclidian distance), especially for 3D object pose estimation (figure 5.1). IMED can be embedded into KPCA via the use of the Standardizing Transform (ST) [116], and it can also be implemented efficiently via a combination of the Kronecker product and Eigenvector projections [60]. A major significance of ST is that it can alternatively be viewed as a pre-processing transformation (on the pixel intensities), after which the traditional vectorized Euclidian distance can be applied [116]. Effectively, this means that all desirable properties (such as positive definiteness, non-linear de-noising [91] and pre-image approximation using gradient descent [87]) of using traditional Euclidian distances in KPCA still apply. A minor disadvantage of ST is that it can be expensive in its memory consumption. For an image of size M × N, the full ST matrix needs to be of size MN × MN. Fortunately, for the case of Gaussian kernel embedding, this matrix is separable [116] and can be stored as the Kronecker product [60] of two reduced matrices

of size M × M and N × N respectively. A significant contribution of this chapter is the demonstration of a practically viable embedding of IMED (with a Gaussian kernel) into Kernel Principal Components Analysis (KPCA) [90] and Kernel Subspace Mapping (KSM) (chapter 4). By separating the originally proposed Standardizing Transform (ST) into the Kronecker product of two identical reduced transforms [116], this chapter shows how IMED is a better image distance criterion than the traditional vectorized Euclidian distance. To support this, results using IMED embedded KSM for 3D object pose estimation (compared to [80]) are presented in section 5.5.

5.2 Problem Definition and Related Work

The problem of 3D object pose estimation using machine learning can be summarized as follows: Given images of the object, from different known orientations, as the training set, how can a system optimally learn a mapping to accurately determine the orientation of unseen images of the same object from a static camera? Alternatively, instead of calculating the object's orientation, the system could determine the orientation and position of a moving camera that the static object is viewed from. To test the system's robustness, the unseen input images can be deliberately corrupted with noise, in which case image de-noising techniques, such as KPCA, can be applied. Zhao et al [119] used KPCA with a neural network architecture to determine the orientation of objects from the Columbia Object Image Library (COIL-20) database [78]. In that case, however, the pose is restricted to rotations about a single vertical axis. This chapter considers the more complex pose estimation problem, that of determining the camera position where an object is viewed from anywhere on the upper hemisphere of an object's viewing sphere.

Figure 5.1: Diagram to summarize the 3D pose estimation problem of an object viewed from the upper viewing hemisphere.

Peters et al [80] introduced the notion of view-bubbles, which are area views over which the object remains visually the same (up to within a pre-defined criterion). The problem of hemispherical object pose estimation can then be re-formulated as the selection of the correct view bubble and interpolation from within the bubble. A disadvantage of this technique, however, is the need for a large and well-sampled training set (up to 2500 images in [80]) to select the view bubble training images from. In this chapter, IMED embedded KSM presents comparable results using only 30 randomly selected training images of the same object (i.e. we do not select the best 30 training images by scanning through all 2500 images).

5.3 Image Euclidian Distance (IMED)

For notation purposes, a summary of the Image Euclidian Distance as introduced by Wang et al [116] is now presented. Each M × N input image must first be vectorized to x, where x \in \mathbb{R}^{MN}. The intensity at the (m, n) pixel is then represented as the (mN + n)-th dimension of x. The standard vectorized Euclidian distance d_E(x, y) between vectorized

[footnote] The data set used in this chapter to test IMED embedded KSM (for 3D viewpoint estimation) is the same set described in [81]. Each data set consists of 2500 images of a 3D object taken at regular intervals from the top hemisphere of the object's viewing sphere.

images x and y is then defined as

d_E^2(x, y) = \sum_{k=1}^{MN} (x_k - y_k)^2 = (x - y)^T (x - y).    (5.1)

Alternatively, the IMED between images introduces the notion of a metric matrix G of size MN × MN, where the element g_{ij} represents how the dimension x_i affects the dimension x_j. To avoid confusion, it is important to note that i, j, k \in [1, MN], whereas m \in [1, M] and n \in [1, N]. Provided that the metric matrix G is known, the Image Euclidian Distance can then be calculated as

d_{IM}^2(x, y) = \sum_{i,j=1}^{MN} g_{ij} (x_i - y_i)(x_j - y_j) = (x - y)^T G (x - y).    (5.2)

The metric matrix G solely defines how the IMED deviates from the standard (vectorized) Euclidian distance (which is induced when G is replaced by the identity matrix). As highlighted in [116], the main constraints for IMED are that the element g_{ij} is dependent on the distance between pixels P_i and P_j, that is g_{ij} = f(|P_i - P_j|), and that g_{ij} monotonically decreases as |P_i - P_j| increases. A constraint on f is that it must be a continuous positive definite function [116], thereby ensuring that G is positive definite and excluding the traditional Euclidian distance as a subset of IMED. To avoid confusion, the reader should note that f refers to a continuous positive definite function encoding pixel spatial relationships along the image plane; f should not be confused with the positive definite kernel function required in KPCA (section 3.3). Note that at this stage the calculation of IMED is not memory efficient, due to the need to store the metric G, which is of size MN × MN. Section 5.3.2 shows how to improve on this, but first the Standardizing Transform (ST) [116] (which allows IMED to be embedded into more powerful learning algorithms such as SVMs, KPCA and KSM) will be summarized.

[footnote] Note that even though the typical vectorized Euclidian distance can be induced by replacing G with the identity matrix, the Euclidian distance is not a subset of IMED as it is not continuous across the image plane.
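Taken literally, equation 5.2 can be evaluated as in the sketch below; this direct form is shown only to fix notation, since storing G is exactly the memory problem that section 5.3.2 removes.

```python
import numpy as np

def imed_squared(x, y, G):
    # Direct evaluation of equation 5.2 on vectorized images x and y
    # with metric matrix G of size (MN, MN). Memory-hungry; the
    # Kronecker factorization of section 5.3.2 avoids forming G.
    d = x - y
    return d @ G @ d
```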

5.3.1 Standardizing Transform (ST)

As suggested in [116], the calculation of IMED can be simplified by decomposing G into A^T A, leading to

d_{IM}^2(x, y) = (x - y)^T A^T A (x - y) = (u - v)^T (u - v),    (5.3)

where u = Ax and v = Ay. The Standardizing Transform (ST) is then merely a special case of A, where

G = A^T A = G^{1/2} G^{1/2}.    (5.4)

This reveals symmetric, positive definite and unique solutions for both G and G^{1/2}:

G = \Gamma \Lambda \Gamma^T,    (5.5)

G^{1/2} = \Gamma \Lambda^{1/2} \Gamma^T,    (5.6)

where Γ is the orthogonal column matrix of the Eigenvectors of G, and Λ the diagonal matrix of the corresponding Eigenvalues. The Standardizing Transform (ST) is then the transformation G^{1/2}(\cdot). For example, to embed IMED into learning algorithms based on the standard Euclidian norm, the technique simply applies the transformation G^{1/2}(\cdot) to the vectorized images x and y, in order to obtain u and v respectively. From (5.3), the IMED is then simply the traditional Euclidian distance between the transformed vectors u and v.

5.3.2 Kronecker Product IMED

Up until now, only the constraints of IMED and its corresponding Standardizing Transform G^{1/2}(\cdot) have been summarized, but not how to construct the matrix G itself. This section reveals how to construct G efficiently using the Kronecker product (as suggested in [116]),

concentrating specifically on the Gaussian function, where

g_{ij} = \frac{1}{2\pi\sigma^2} \exp\left\{ -\frac{|P_i - P_j|^2}{2\sigma^2} \right\}.    (5.7)

In this case, σ controls the spread of the Gaussian signal, which acts like a bandlimited filter. For large values of σ, the influence of, let's say, P_i on its neighboring pixel P_j on the image plane is more significant (compared to when σ is small). Therefore, G acts like a low pass filter, since the signal induced is smoother and flatter and can be constructed from low frequency signals. As σ decreases, the Gaussian signal induced becomes steeper and thinner, reducing its influence on neighboring pixels and requiring higher frequency signals for reconstruction in the Fourier domain. For the extreme case, where σ → 0, the Dirac delta signal is induced, which requires infinite bandwidth, leading to the traditional Euclidian distance.

Figure 5.2: Images of the Standardizing Transform G^{1/2} with different σ values. Note how the images tend to the diagonal matrix as σ → 0.

By letting P_i and P_j be the pixels at locations (m_i, n_i) and (m_j, n_j) respectively, the corresponding squared pixel distance between the two pixels on the image plane is then given by

|P_i - P_j|^2 = (m_i - m_j)^2 + (n_i - n_j)^2.    (5.8)

As highlighted in [116], by substituting the above equation into equation 5.7, a separable and reducible metric representation can be obtained:

g_{ij} = \frac{1}{2\pi\sigma^2} \exp\left\{ -\frac{(m_i - m_j)^2 + (n_i - n_j)^2}{2\sigma^2} \right\} = \frac{1}{2\pi\sigma^2} \exp\left\{ -\frac{(m_i - m_j)^2}{2\sigma^2} \right\} \exp\left\{ -\frac{(n_i - n_j)^2}{2\sigma^2} \right\}.    (5.9)

As previously mentioned, m \in [1, M] and n \in [1, N], hence leading to a re-formulation of equation 5.9 as the Kronecker product [60] of two smaller matrices \Psi_M and \Psi_N of size M × M and N × N respectively, where

\Psi_M(m_i, m_j) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left\{ -\frac{(m_i - m_j)^2}{2\sigma^2} \right\}, \quad \Psi_N(n_i, n_j) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left\{ -\frac{(n_i - n_j)^2}{2\sigma^2} \right\}.    (5.10)

This leads to a simplified and memory efficient version of the metric matrix [116], where

G = \Psi_M \otimes \Psi_N.    (5.11)

For the case of square images, where M = N, \Psi_M and \Psi_N are identical (henceforth both \Psi_M and \Psi_N will be denoted as Ψ) and only one matrix copy of size M × M is required. Hence, Ψ can be decomposed into its corresponding matrix of Eigenvectors Ω and diagonal matrix of Eigenvalues Θ, where

\Psi = \Omega \Theta \Omega^T.    (5.12)

Using established properties of the Kronecker product [60] compatible with standard matrix multiplication, the metric matrix becomes

G = (\Omega \Theta \Omega^T) \otimes (\Omega \Theta \Omega^T) = (\Omega \otimes \Omega)(\Theta \otimes \Theta)(\Omega \otimes \Omega)^T = \Gamma \Lambda \Gamma^T.    (5.13)

[footnote] For the remainder of this chapter, only square images will be considered, as this can effectively halve the current memory consumption. Note that this does not exclude all non-square images, as they can simply be pre-processed to square images before calculating IMED.
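A sketch of the reduced factor of equation 5.10 and the Kronecker reconstruction of equation 5.11 follows; the normalization constant assumes the 1/sqrt(2πσ²) reading of the factor, which is what makes kron(Ψ, Ψ) reproduce the 1/(2πσ²) prefactor of equation 5.7.

```python
import numpy as np

def gaussian_metric_factor(M, sigma):
    # Reduced M x M factor Psi of the separable Gaussian metric
    # (equation 5.10); the full metric is G = kron(Psi, Psi) (eq. 5.11).
    idx = np.arange(M)
    d2 = (idx[:, None] - idx[None, :]) ** 2
    return np.exp(-d2 / (2.0 * sigma**2)) / np.sqrt(2.0 * np.pi * sigma**2)

# For a small square image the factorization can be checked directly:
M, sigma = 8, 1.0
Psi = gaussian_metric_factor(M, sigma)
G = np.kron(Psi, Psi)   # (M*M, M*M) metric matrix, equation 5.11
```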

From this and equation 5.6, the Standardizing Transform G^{1/2}(\cdot) can be derived as

G^{1/2} = \Gamma \Lambda^{1/2} \Gamma^T = (\Omega \otimes \Omega)(\Theta \otimes \Theta)^{1/2} (\Omega \otimes \Omega)^T.    (5.14)

Furthermore, the reduced Eigenvalue matrix Θ is diagonal and can be diagonally vectorized as \vec{\Theta} (i.e. take only the diagonal values), where \vec{\Theta}_m represents Θ(m, m). The corresponding Kronecker product of the Eigenvalue matrices, (\Theta \otimes \Theta)^{1/2}, can efficiently be represented via the outer product

(\Theta \otimes \Theta)^{1/2} = \left( R[\,\vec{\Theta}\,\vec{\Theta}^T\,] \right)^{1/2},    (5.15)

where R is the vectorization function followed by the diagonalization function. Further improvement of the calculation of the Standardizing Transform is possible by realizing that G^{1/2} is a series of linear Eigenvector projections and scalings [116]. Effectively, this means that a specified minimum Eigenvalue scaling threshold can be pre-selected, and the Eigenvalue projection ignored for Eigenvalues below the selected threshold. Letting τ be the minimum Eigenvalue scaling and \vec{\Theta} the Eigenvalues re-ordered in descending order, the reduced Eigenvector column matrix can be rewritten as

\Omega = [\, \mathcal{J} \;\; \mathcal{K} \,],    (5.16)

where \mathcal{J} represents the column matrix of the J Eigenvectors corresponding to the Eigenvalues larger than τ, and \mathcal{K} the remaining Eigenvectors. The approximate Standardizing Transform then becomes

G^{1/2}_\tau = (\mathcal{J} \otimes \mathcal{J}) \left( R[\,\vec{\Theta}_{(1:J)} \vec{\Theta}_{(1:J)}^T\,] \right)^{1/2} (\mathcal{J} \otimes \mathcal{J})^T,    (5.17)

where \mathcal{J} is only of dimension M × J (with J < M), and \vec{\Theta}_{(1:J)} is a one dimensional vector of the highest J Eigenvalues. Therefore, for high resolution images where M is large, it is possible to use the approximate Standardizing Transform G^{1/2}_\tau instead of G^{1/2}.
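In practice G^{1/2} need never be formed explicitly: because G^{1/2} = \Psi^{1/2} \otimes \Psi^{1/2} (formalized in equation 5.18 below), the identity kron(A, B) vec(X) = vec(A X B^T) reduces the Standardizing Transform of an M × M image to two small matrix products. The following is a sketch under the assumption of row-major vectorization.

```python
import numpy as np

def standardizing_transform(image, Psi):
    # Apply u = G^{1/2} x without forming the (M*M, M*M) matrix G^{1/2}.
    # With G^{1/2} = kron(P, P) for P = Psi^{1/2} and the identity
    # kron(A, B) vec(X) = vec(A X B^T) (row-major vec), the transform
    # reduces to two M x M multiplications.
    lam, V = np.linalg.eigh(Psi)
    P = V @ np.diag(np.sqrt(np.maximum(lam, 0.0))) @ V.T   # Psi^{1/2}
    return P @ image @ P.T   # equals G^{1/2} applied to vec(image)
```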

On the other hand, if the image resolution is low and speed is a factor, it is possible to construct the full G^{1/2} matrix using a single Kronecker product. From equation 5.14, the following can be derived:

G^{1/2} = (\Omega \otimes \Omega)(\Theta \otimes \Theta)^{1/2}(\Omega \otimes \Omega)^T = (\Omega \Theta^{1/2} \Omega^T) \otimes (\Omega \Theta^{1/2} \Omega^T) = \Psi^{1/2} \otimes \Psi^{1/2},    (5.18)

with the constraints \Psi = (\Psi^{1/2})^T \Psi^{1/2} = \Psi^{1/2} \Psi^{1/2}.

Visually, G^{1/2} is a transform domain smoothing [116], where the Eigenvectors with the higher Eigenvalues represent the low frequency basis signals. Hence, by leaving out the Eigenvectors represented by \mathcal{K}, the higher frequency signals are suppressed. Referring back to figure 5.3 and equation 5.7, the larger the value of σ, the smoother the Standardizing Transform image will be, and therefore the higher the Eigenvalues corresponding to the low frequency basis. The sharpest that the image can ever be (including noise) after the Standardizing Transformation is when σ → 0, where only the original image itself remains.

Figure 5.3: Images of a 3D object after applying the Standardizing Transform G^{1/2} with different σ values.

5.4 IMED embedded Kernel PCA

Since KSM is based on KPCA, a factor to initially consider is the embedding of IMED into unsupervised Kernel Principal Components Analysis (KPCA) [90]. A summary of KPCA is presented in section 3.3. KSM will be used in the 3D view-point estimation problem

(which is summarized in section 5.2). To embed IMED into KPCA, the Standardizing Transform G^{1/2} is initially applied to the vectorized training set X, to get the standardized training set U, where

U = G^{1/2} X.    (5.19)

KPCA is then performed using the standard RBF kernel, which leads to the overall effect of

k_{im}(x_i, x_j) = \exp\left( -\gamma (x_i - x_j)^T G (x_i - x_j) \right) = \exp\left( -\gamma (u_i - u_j)^T (u_i - u_j) \right) = k_v(u_i, u_j),    (5.20)

which we refer to as IMED embedded KPCA. Considering the relationship in equation 5.20, we are assured that all desirable properties of the standard RBF kernel, such as positive definiteness and gradient descent pre-image approximations, are held by IMED. The free parameters are then tuned on U, and the IMED embedded KPCA projection becomes

V_{im}^k \cdot \Phi(x) = \sum_{i=1}^{N} \alpha_i^k \, k_{im}(x_i, x) = \sum_{i=1}^{N} \alpha_i^k \, k_v(u_i, u).    (5.21)

The pre-image approximation using IMED embedded KPCA (trained using the set U) is then straightforward. For example, the fixed point algorithm of [87] can be used to determine the standardized pre-image u_p, and the inverse Standardizing Transform (G^{1/2})^{-1} applied to obtain the vectorized pre-image x_p, where

x_p = (G^{1/2})^{-1} u_p.    (5.22)

The inverse transform (G^{1/2})^{-1} can be efficiently computed (without storing an explicit version of the inverse matrix) by realizing that it is also a projection onto the same Eigenvectors as G^{1/2}, but with the inverse scaling (by the corresponding Eigenvalues) applied,

(G^{1/2})^{-1} = \Gamma (\Lambda^{1/2})^{-1} \Gamma^T = \Gamma \left( R^{-1}[\,\vec{\Theta}\,\vec{\Theta}^T\,] \right) \Gamma^T,    (5.23)

where R^{-1}[\,\vec{\Theta}\,\vec{\Theta}^T\,] is defined as the vectorization function of the inverse scaling of each element in \vec{\Theta}\,\vec{\Theta}^T, followed by the diagonalization function. It is important to note that the matrix G^{1/2} may have an Eigenvalue \Theta_m = 0 (i.e. it may only be positive semi-definite in practice). The Standardizing Transform G^{1/2} is then singular, and the inverse (G^{1/2})^{-1} does not exist [116]. However, (G^{1/2})^{-1} can be approximated using the same concept as in equation 5.17, ignoring the Eigenvectors whose corresponding Eigenvalues are close to zero.

5.4.1 IMED embedded Kernel Subspace Mapping

Recall the basic structure and properties of KSM: Kernel Subspace Mapping can be used to map data between two high dimensional spaces (chapter 4). KPCA is initialized to learn the subspaces of the high dimensional data, for, let's say, the input set X and the output set Y, and LLE non-parametric mapping [85] is used to map between the two feature subspaces. Given novel inputs, the data is projected through the two subspaces and the pre-image calculated to determine the input's representation in the output space. If any of the training vectors are derived from images, then IMED embedded KPCA can be applied. For generality, this section will consider both cases, where the input set X is constructed from images of a 3D object, and the output set Y consists of three dimensional camera positions on the upper hemisphere of the object's viewing sphere. Note that this is a simple case for KSM (as the output space is only 3 dimensional), which can map to higher dimensional output spaces as in human markerless motion capture (chapter 4). KSM is initialized by learning the subspace representations of X and Y using IMED embedded KPCA and standard (RBF kernel) KPCA respectively. From these, the KPCA projected training sets V_{im}^X and V_v^Y are obtained, where the corresponding instances in the sets have the following forms:

Figure 5.4: Diagram to summarize Kernel Subspace Mapping for 3D object pose estimation. Selected members of the image training set X are highlighted with black bounding boxes, whereas the output training set Y is represented as blue circles on the viewing hemisphere.

v_i^X = \left[\, V_{im}^1 \cdot \Phi(x_i), \ldots, V_{im}^{\eta} \cdot \Phi(x_i) \,\right]^T, \quad v^X \in \mathbb{R}^{\eta},
v_i^Y = \left[\, V_v^1 \cdot \Phi(y_i), \ldots, V_v^{\psi} \cdot \Phi(y_i) \,\right]^T, \quad v^Y \in \mathbb{R}^{\psi},    (5.24)

where η and ψ are the corresponding tuned projected feature dimensions of the training sets (using the tuning techniques in sections 4.2.2 and 4.3.3). During run time, given a new unseen vectorized image x of the same 3D object, projection via IMED embedded KPCA can be used to obtain v^X. The projected vector is then mapped between the two feature subspaces using the LLE neighborhood reconstruction weights w, which are calculated by minimizing the reconstruction cost function (as shown in [85] and section 4.3)

\varepsilon(v^X) = \left\| v^X - \sum_{i \in I} w_i \, v_i^X \right\|^2,    (5.25)

where I is the set of neighborhood indices of the projected vector v^X in the tuned feature space. This leads to the mapping

v^Y = \sum_{i \in I} w_i \, v_i^Y,    (5.26)

from which the corresponding output hemispherical position y_p (which is the pre-image of v^Y) can be determined. Note that there is now another free parameter to tune, this being card(I), the cardinality of the index set I. For KSM, this is tuned using cross validation as in section 4.2.2, but instead of minimizing the pre-image reconstruction error, the mapping error is minimized (section 4.3.3).

5.5 Experiments & Results

To test the efficacy of IMED embedded KSM on 3D object viewpoint estimation, training images of the object Tom (figure 5.5 [left]) are selected from the data set described in [81]. From the original image set of 2500 images, reduced sets of 30, 60, 90 and 145 training images are randomly selected. The training sizes of 30 and 145 were selected to match the experiments in [80]. A test set of 600 unseen images is then filtered from the remaining images at regular intervals, so as to maximize the distribution around the viewing hemisphere.

Table 5.1: 3D pose inference comparison using the mean angular error for 600 unseen views of the object Tom. Rows: KSM via standard KPCA and IMED embedded KSM, each evaluated on the clean data set, the Gaussian noise set and the salt/pepper noise set; columns: training sizes of 30, 60, 90 and 145 images.

[footnote] In [80], figure 1, the parameters are given in terms of the tracking threshold and their corresponding view bubbles, each consisting of 5 training images. View bubble counts of 6 and 29 correspond to 30 and 145 training images respectively.

[footnote] For 3D object pose estimation results, please refer to the attached files: videos/[IMEDposeEst.mp4, IMEDposeEstNoisy.mp4, IMEDposeEstNoisy2.mp4, IMEDposeEstNoisySP.mp4].
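The mean angular error reported in these tables presumably measures the angle between the estimated and ground-truth viewing directions from the object centre; the following is a sketch under that assumption (array names are illustrative).

```python
import numpy as np

def mean_viewpoint_error_deg(Y_est, Y_true):
    # Y_est, Y_true : (N, 3) arrays of camera positions on the viewing
    # hemisphere. The error of each view is the angle between the two
    # directions from the object centre, averaged over all test views.
    a = Y_est / np.linalg.norm(Y_est, axis=1, keepdims=True)
    b = Y_true / np.linalg.norm(Y_true, axis=1, keepdims=True)
    cosang = np.clip(np.sum(a * b, axis=1), -1.0, 1.0)
    return np.degrees(np.arccos(cosang)).mean()
```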

Figure 5.5: Selected images of the object Tom [left] and the object Dwarf [right] used in the 3D pose estimation problem. The images consist of the original clean image (1st & 4th images), the image corrupted with Gaussian noise (2nd & 5th images) and with salt/pepper noise (3rd & 6th images).

IMED embedded KSM is tested with clean images as well as with images corrupted with zero mean Gaussian noise (variance of 0.01) and salt/pepper noise (noise density of 0.05), as shown in figure 5.5. Table 5.1 shows that in all cases, IMED embedded KSM provides more accurate viewpoint estimation (than standard KSM) from both clean and noisy images. Furthermore, IMED embedded KSM is also more robust to noise, and it shows the least percentage increase in error for both Gaussian and salt/pepper noise. As expected with any exemplar-based learning algorithm, the mean error gradually decreases as more training images are included in the set. For completeness, another set of pose inference results for the same tuned parameters as in table 5.1 is presented. In this case, however, 400 test images are selected at random from the original hemispherical data set (2500 images), and the selected images are not constrained to be novel (i.e. the test images may also include images that were used in training). This is done because it gives a more realistic representation of the input data in practice, where there is no exclusion of the training set, let alone any idea of which images were used in training. As expected, IMED embedded KSM performs even better in this case (table 5.2), as KSM will simply project any training image to the correct position in the feature space, and this has zero error to begin with. Results from [80] are also included for comparison in table 5.2, even though only 30 test images were used. It must be noted that the accuracy of the pose inference is dependent on the complexity of the object. For example, it is not possible to determine the hemispherical pose using

[footnote] Note that the test images in [80] have not been considered as novel/unseen. This is because a sparse representation of the object is built from the original set of 2500 images using a greedy approach, which is then used for pose estimation. Since the 30 test images are also derived from the original 2500 images, they cannot be considered novel.

Table 5.2: 3D pose inference comparison using the mean angular error for 400 randomly selected views of the object Tom, comparing the view bubble method of [80], KSM via standard KPCA and IMED embedded KSM on the clean, Gaussian-noise and salt/pepper-noise image sets, across the training sizes.

For example, it is not possible to determine the hemispherical pose of a sphere with uniform colour (i.e. all images of the sphere look the same from any viewpoint). To test the efficacy of IMED embedded KSM on a more symmetrical object, the object Dwarf (which also appears in [80]) is used. All experiments are performed in exactly the same way as before, and the results are summarized in tables 5.3 & 5.4. Training sizes of 30, 60, 120 and 145 are again used to allow comparison with the object Tom, in which case table 5.3 should be compared with table 5.1, and table 5.4 with table 5.2. For cross comparison with [80], IMED embedded KSM achieves more accurate results from a training set of only 30 images (3.26°) than the view bubble sparse set of 130 images (4.2°). A surprising result is that roughly the same level of accuracy (even higher in some cases) is achievable for the Dwarf object as for the object Tom. This was not the case using the view bubble method in [80].

5.6 Discussions and Concluding Remarks

For IMED embedded KSM, pose inference results with mean errors of 3.22° were achieved using a training set of only 30 images. This is quite accurate, as it is unlikely that a human would be able to achieve the same level of accuracy [80]. Our technique is also robust to noise (especially Gaussian noise) and, in such cases, shows only a minor percentage increase in mean angular error.

Table 5.3: 3D pose inference comparison using the mean angular error for 600 unseen views of the object Dwarf, comparing KSM via standard KPCA against IMED embedded KSM on the clean, Gaussian-noise and salt/pepper-noise image sets, across the training sizes.

Furthermore, this was achieved without explicit knowledge of the full training set of 2500 images, as is the case in [80]. This is because a greedy approach is not used to filter training data out of the original set of 2500 images in order to build a sparse representation; instead, training images are simply selected from it at random. The constraint in this case is that the training set is relatively evenly spread over the probability distribution of the data (as opposed to the spatial configuration of the data). The test images used are also novel (tables 5.1 and 5.3) and are regularly spread over the entire hemisphere, whereas in [80], only 30 test images were used, drawn from three different patches of the viewing hemisphere. Results with more than 145 training images are not presented because, for industrial applications, it becomes impractical to capture such a large training set; in that case the hemispherical data would be so dense that linear interpolation would be just as accurate.

This chapter presents a technique that potentially allows automatic parameter selection (i.e. without human intervention). This leads to the ability to tune KSM for a novel 3D object by simply placing it on a rotating table (which rotates about the vertical axis). For the other degree of freedom, a synchronized camera that moves up and down along the longitudinal arc of the hemisphere can be installed, automatically capturing images of the object from different orientations. From the captured images and corresponding orientations, the system can automatically tune the parameters, which can then be loaded into a single camera pose inference system in situ.

Table 5.4: 3D pose inference comparison using the mean angular error for 400 randomly selected views of the object Dwarf, comparing KSM via standard KPCA against IMED embedded KSM on the clean, Gaussian-noise and salt/pepper-noise image sets, across the training sizes.

Note that in this case an assumption was made that the backgrounds of the training and test images are the same. In reality, it is usually the case that the training images are captured in a controlled environment, whereas the test images are not. To solve this problem, a robust background segmentation algorithm can be appended as a pre-processing step to mask out the varying backgrounds in both the controlled training images and the run-time images on site. Learning and pose inference can then be performed on the masked images, hence mitigating the problem of uncontrolled backgrounds. The more common problem of determining the relative yaw orientation (rotation about the vertical axis only) of an object on a (factory) conveyor belt using static cameras is even simpler, because it reduces to constraining the space of possible orientations from three dimensional space onto a two dimensional plane.

Kernel Subspace Mapping (KSM) was initially developed for markerless motion capture (chapter 4). In that case, KSM was used to learn the mapping between concatenated images of the human silhouette and the normalized relative joint centers (section 4.2). As a future direction, IMED embedded KPCA can be integrated into the core of markerless motion capture using KSM. Results have been presented indicating that significant improvements can be achieved by embedding IMED into KSM (for viewpoint estimation).

It is likely that a similar increase in accuracy can be achieved by applying the same concept to markerless motion capture. The problem to overcome is how to effectively integrate IMED into the pyramid kernel (section 4.2.3) for computationally efficient silhouette comparison. Another area to consider is applying IMED to local gradients found with SIFT-like algorithms [64], possibly improving on previous techniques for human pose estimation in cluttered environments [5].


Chapter 6

Greedy KPCA for Human Motion Capture

This chapter presents the novel concept of applying Greedy KPCA (GKPCA) [41] as a preprocessing filter (for training set reduction) in Kernel Subspace Mapping (KSM). A human motion de-noising comparison between linear PCA, standard KPCA (using all poses in the original sequence) and Greedy KPCA (using the reduced set) is presented at the end of the chapter. The chapter advocates the use of Greedy KPCA in KSM by showing that both KPCA and Greedy KPCA have superior de-noising qualities over PCA, whilst KSM with Greedy KPCA delivers pose estimation quality similar to standard KSM (chapter 4) at a lower evaluation cost (due to the reduced training set).

6.1 Introduction

Due to the high degree of correlation in human motion, unsupervised learning techniques like Principal Components Analysis (PCA) [51, 98] and its non-linear extension, Kernel Principal Components Analysis (KPCA) [90, 91], are commonly used to learn subspaces of human motion. As highlighted in section 3.2.1, PCA, being a linear technique, is not well suited to the de-noising of non-linearly correlated human motion.

This chapter is based on the conference paper [107] T. Tangkuampien and D. Suter: Human Motion Denoising via Greedy Kernel Principal Component Analysis Filtering, International Conference on Pattern Recognition (ICPR) 2006, Hong Kong, China.
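As a concrete baseline for the comparisons in this chapter, the sketch below de-noises a noisy non-linear point set with both linear PCA and KPCA pre-image approximation. It is illustrative only, not the thesis's Matlab implementation: scikit-learn's KernelPCA is used, and its learned inverse transform stands in for the fixed-point pre-image approximation of [87]; the toy data, kernel width and component counts are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA, KernelPCA

rng = np.random.default_rng(0)

# Toy non-linear data: points on a half circle, plus Gaussian noise.
t = rng.uniform(0, np.pi, 400)
clean = np.c_[np.cos(t), np.sin(t)]
noisy = clean + rng.normal(0, 0.1, clean.shape)

# Linear PCA de-noising: project to 1 component and reconstruct.
pca = PCA(n_components=1).fit(noisy)
pca_denoised = pca.inverse_transform(pca.transform(noisy))

# KPCA de-noising: Gaussian kernel, reconstruct via a learned pre-image map.
kpca = KernelPCA(n_components=4, kernel="rbf", gamma=5.0,
                 fit_inverse_transform=True, alpha=1e-3).fit(noisy)
kpca_denoised = kpca.inverse_transform(kpca.transform(noisy))

mse = lambda a, b: float(np.mean(np.sum((a - b) ** 2, axis=1)))
print("PCA mse:", mse(pca_denoised, clean))
print("KPCA mse:", mse(kpca_denoised, clean))
```

On such non-linearly correlated data, the KPCA reconstruction typically sits much closer to the clean curve than the linear PCA projection, which is the behaviour this chapter exploits.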

The hypothesis that KPCA is better suited to this task is further supported by the human motion de-noising results in section 3.5. KPCA shows significant improvement over PCA, for both cyclical human motion (e.g. walking and running) and more complex non-cyclical motion (e.g. dancing and boxing). A drawback of KPCA, however, is that the training and evaluation costs are dependent on the size of the training set (equations 3.13 and 3.16). During training, the kernel matrix (which grows quadratically with the number of exemplars in the training set) needs to be calculated before standard PCA can be applied in feature space. As for de-noising (projection and pre-image approximation [91]) via KPCA, the cost is linear in the number of exemplars because each projection requires a kernel comparison with every vector in the training set. As a result, the cardinality of the training set is vital in any real system incorporating KPCA. The goal of KPCA with greedy filtering (GKPCA) [41] is to filter from the original motion sequences a reduced training subset that can optimally represent the original de-noising subspace, given a specific prior constraint (e.g. using 70% of the original training data).

Figure 6.1: Illustration to geometrically summarize Greedy Kernel Principal Components Analysis (GKPCA) [41] in relation to Kernel Principal Components Analysis (KPCA) [90].

Novel captured motions similar to the original training sequence can then be de-noised using this reduced set. To this end, the recently proposed GKPCA algorithm of [40] is investigated for the goal of filtering the reduced training set. To the author's knowledge, there is currently no previous work on the practical application of Greedy Kernel Principal Components Analysis (GKPCA) to human motion de-noising.

Similarly, KPCA with greedy algorithm filtering is a relatively novel approach to the problem of markerless motion capture based on subspace learning. Results supporting the integration of GKPCA into the pre-processing of KSM (chapter 4) are presented in section 6.3. The experiments aim to show that GKPCA filtering (for training set reduction) can reduce the pose estimation cost whilst still retaining the de-noising and pose estimation qualities of the original training set.

6.2 Greedy KPCA filtering on Motion Capture Data

To understand the concept of GKPCA filtering, imagine that the non-linear toy data set in figure 3.3 were cloned and the training set doubled in size (i.e. there are now two instances of each point in the training set). In this case, there should not be any substantial change in the de-noising quality, as the feature subspace defined by the training set should remain relatively the same as for the original set. From a performance perspective, however (due to the linear complexity of KPCA projections), the computation cost would have approximately doubled. Most likely, there are redundant training poses which, when removed, would increase performance whilst minimizing the reduction in de-noising quality. Put more generally, the objective of GKPCA filtering is to remove training vectors which do not contribute substantially to the definition of the de-noising subspace.

GKPCA is highly compatible with human motion capture data because most animation data (a concatenation of poses captured from a marker-based system) is repetitive and usually sampled at a high frame rate (between 60 and 120 Hz). The higher the sampling rate, the more likely it is that there will be redundant poses when it comes to defining subspaces for human motion de-noising. In most motion capture training data (where there is a high percentage of redundant poses), the removal of any redundant pose will increase the speed of KSM for markerless motion capture. GKPCA can act as a preprocessing filter that selects, from the original training set, a reduced training set $P_{gk}$ which defines a de-noising pose subspace similar to $P_{tr}$ (section 4.3).

The remainder of this chapter is structured as follows: in section 6.2.1 the Greedy KPCA algorithm is summarized; in section 6.3, experiments are conducted to show how a reduced training set can play an important role in controlling the capture rate and pose estimation quality of KSM for markerless motion capture (as well as to compare the motion de-noising quality of GKPCA, KPCA and PCA).

6.2.1 Training Set Filtering via Greedy KPCA

Using the same notation as section 3.3, recall that standard KPCA maps the training set $X^{tr}$ of size $N$ non-linearly to a feature space $F$, before performing the equivalent of linear PCA. Greedy KPCA [41] aims to filter $X^{tr}$ to a reduced set $X^{gk}$ of size $M$ (where $M \leq N$), such that the linear span of $F^{gk}$ is similar to the linear span of $F$, where

$$\Phi: X^{gk} \mapsto F^{gk}. \quad (6.1)$$

Assuming that $X^{gk}$ can be found, it is then possible to express every vector in $F$ as a linear combination of the filtered set in $F^{gk}$ [41]. To summarize Greedy KPCA as in [41, 40]: let $J = \{j_1, j_2, \ldots, j_M\}$ be the set of $M$ indices, a subset of $I$, where $I = \{i_1, i_2, \ldots, i_N\}$ holds the original indices of $X^{tr}$. The approximate feature space representation of the original training exemplars can be expressed as follows:

$$\tilde{\Phi}(x_i) = \sum_{j \in J} \omega_{ij}\, \Phi(x_j), \quad i \in I. \quad (6.2)$$

The reduced set's objective function to minimize is the mean square error

$$\varepsilon_{MS}(H^{tr}_J) = \frac{1}{N} \sum_{i \in I} \Big\| \Phi(x_i) - \sum_{j \in J} \omega^{g}_{ij}\, \Phi(x_j) \Big\|^2. \quad (6.3)$$

It is important to note that, given the subset of $X^{tr}$ indexed by $J$, it is possible to compute $\omega^{g}$ optimally to minimize $\varepsilon_{MS}(H^{tr}_J)$.

Therefore $\omega^{g}$ can be eliminated from the error function $\varepsilon_{MS}(H^{tr}_J)$, and the greedy approximation problem re-expressed as the problem of determining $J$, where

$$J = \mathop{\mathrm{argmin}}_{card(J) = M}\; \varepsilon_{MS}(H^{tr}_J), \quad (6.4)$$

with $card(J)$ denoting the cardinality of the subset $J$. Furthermore, as shown in [40, 87], it is possible to avoid explicitly mapping to the feature space $F^{tr}$ by re-expressing equation (6.3) using the kernel trick as

$$\varepsilon_{MS}(F^{tr}_J) = \frac{1}{N} \sum_{i \in I} \Big( k_p(x_i, x_i) - 2\,\big\langle \mathbf{k}^{gk}(x_i),\, (\mathbf{K}^{gk}_c)^{-1} \mathbf{k}^{gk}(x_i) \big\rangle + \big\langle (\mathbf{K}^{gk}_c)^{-1} \mathbf{k}^{gk}(x_i),\, \mathbf{K}^{gk}_c\, (\mathbf{K}^{gk}_c)^{-1} \mathbf{k}^{gk}(x_i) \big\rangle \Big). \quad (6.5)$$

Here $\mathbf{K}^{gk}_c$ is the centered kernel matrix of the filtered set $X^{gk}$, and

$$\mathbf{k}^{gk}(x_i) = \big[ k_p(x_{j_1}, x_i), \ldots, k_p(x_{j_M}, x_i) \big]^{T} \quad (6.6)$$

is the projection of $X^{tr}$ onto the reduced set's feature space. Greedy KPCA therefore only needs to determine the optimal subset $J$ of $I$ [41]. We could try all possible combinations, but this would be impractical, as there exist $\binom{N}{M}$ of them. Instead, as shown in [40], we can choose to minimize the upper bound

$$\varepsilon_{MS}(F^{tr}_J) \leq \frac{(N-M)}{N} \max_{i \in I \setminus J} \big\| \Phi(x_i) - \tilde{\Phi}(x_i) \big\|^2. \quad (6.7)$$

Intuitively, the upper bound can be viewed as finding the maximum feature space error (between the approximate reconstruction and the original set $X$) and multiplying it by $(N-M)$. The reason $(N-M)$, and not $N$, is chosen as the scale factor is that $M$ vectors in $X^{tr}$ can already be represented with zero error [41]. Equation (6.7) holds because the mean error $\varepsilon_{MS}(F^{tr}_J)$ cannot be higher than the mean of the maximum error. Further discussion of the Greedy KPCA algorithm is beyond the scope of this chapter, but can be found in [40]. Given the original motion sequence $X^{tr}$, it is thus possible to use greedy KPCA to filter out $X^{gk} = X^{tr}(J)$, a subset of $X^{tr}(I)$, which has a similar linear span in the pose feature space.
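The greedy selection itself can be sketched in a few lines: starting from a single seed index, repeatedly add the exemplar with the largest feature-space reconstruction error onto the span of the already-selected set (the quantity bounded in equation 6.7). The following NumPy sketch is an illustrative reconstruction of this idea, not the implementation of [40, 41]; kernel centering is omitted for brevity, and a small ridge term is added for numerical stability.

```python
import numpy as np

def gaussian_kernel(A, B, gamma=1.0):
    """Pairwise Gaussian kernel between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def greedy_select(X, M, gamma=1.0, ridge=1e-8):
    """Greedily pick M training indices J, driving down the bound of (6.7)."""
    N = X.shape[0]
    K = gaussian_kernel(X, X, gamma)
    J = [0]  # arbitrary seed (the diagonal is constant for a Gaussian kernel)
    for _ in range(M - 1):
        K_JJ = K[np.ix_(J, J)] + ridge * np.eye(len(J))
        K_Jx = K[np.ix_(J, range(N))]          # kernel of selected set vs all
        # Reconstruction error of each Phi(x_i) onto span{Phi(x_j): j in J}:
        #   k(x_i, x_i) - k_J(x_i)^T K_JJ^{-1} k_J(x_i)
        proj = np.einsum("ji,jk,ki->i", K_Jx, np.linalg.inv(K_JJ), K_Jx)
        err = np.diag(K) - proj
        err[J] = -np.inf                       # selected points have zero error
        J.append(int(np.argmax(err)))
    return np.array(J)
```

Standard KPCA can then be trained on `X[J]` alone, so each subsequent projection costs M rather than N kernel evaluations.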

Figure 6.2 compares the de-noising quality, on a toy example from figure 3.3, between GKPCA (using 50% of the original training set) and KPCA (using the full training set).

Figure 6.2: De-noising comparison of a toy example between GKPCA (using 50% of the original training set) [top] and standard KPCA (using the full training set) [bottom].

6.3 Experiments & Results

There are three factors that should be considered in concluding whether Greedy KPCA can filter out a reduced set in a manner that enhances the performance of KSM. These are:

1. Does a reduced training size lead to a reduction in motion capture time (i.e. an increase in motion capture rate)?
2. What is the effect of a reduced training set on the de-noising quality of human motion, compared with using the full set?
3. What is the effect of a reduced training set on the quality of pose estimation via KSM?

Experiments regarding the motion capture rate and the de-noising qualities are summarized in sections 6.3.1 and 6.3.2 respectively. Section 6.3.3 shows the results of using Greedy KPCA as a preprocessing filter for KSM in markerless motion capture.

6.3.1 Capture Rate Control for KSM

To understand how a reduced training set can control the capture rate of KSM motion capture, the reader should refer to equation 3.16. The outer iteration over the cardinality of the set confirms that the cost of any data projection via KPCA is dependent on the training size. Therefore, training KPCA using the reduced set $X^{gk}$ results in a reduced kernel matrix $\mathbf{K}^{gk}$ and a lower computational load in equation 3.16, and hence a reduction in the kernel subspace mapping cost. As previously discussed, figure 3.12 [bottom] highlights the linear relationship between the de-noising cost of KPCA and the training size. Table 6.1 summarizes how the capture rate of KSM for motion capture can be controlled via training size modification. The dominant cost is the recursive pyramid cost of calculating the silhouette feature vector Ψ(f), which is currently implemented in un-optimized Matlab™ code. This cost is not shown in the table as it is independent of the training size, but it can be determined by taking the difference between the KSM total cost and the LLE mapping and pre-image cost.

Table 6.1: Comparison of capture rate for varying training sizes (columns: training size; LLE mapping & pre-image cost in seconds; KSM total cost in seconds; capture rate in Hz).

The cost of calculating Ψ(f) using the pyramid kernel depends on the number of silhouette features and not on the size of the training set; for a five level pyramid of 431 features it is a fixed average cost per frame.
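The linear dependence of the projection cost on training size is easy to see in code: each KPCA projection evaluates the kernel once against every retained exemplar. Below is a minimal sketch (illustrative only), assuming a Gaussian kernel and a precomputed eigenvector matrix `alphas` from training; full test-kernel centering is simplified for brevity.

```python
import numpy as np

def kpca_project(x, X_train, alphas, gamma=1.0):
    """Project one sample onto the leading KPCA components.

    Cost is O(len(X_train)) kernel evaluations, so halving the retained
    training set roughly halves the per-frame projection cost.
    alphas: (N, n_components) eigenvector matrix from training (assumed given).
    """
    k = np.exp(-gamma * ((X_train - x) ** 2).sum(axis=1))  # one kernel eval per exemplar
    k_centered = k - k.mean()                              # simplified centering
    return alphas.T @ k_centered
```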

6.3.2 Comparison between Greedy KPCA and KPCA De-noising

This section begins by comparing the mean square error (mse) of human motion de-noising (in normalized RJC format [section 3.4.1]) using PCA, standard KPCA trained with the entire set $X^{tr}$, and Greedy KPCA trained with the reduced set $X^{gk}$. All experiments were performed on a Pentium™ 4 with a 2.8 GHz processor. In figure 6.3, GKPCA selects 30% of the original sequence to build the reduced set subspace. Synthetic Gaussian white noise is added to the motion sequence $X$ in pose space, and the de-noising qualities are compared quantitatively (figure 6.3) and qualitatively in a 3D animation playback.

Figure 6.3: Pose space mse comparison between PCA, KPCA and GKPCA de-noising for walking and boxing sequences.

Figure 6.3 highlights the superiority of both KPCA and GKPCA over linear PCA in motion de-noising. GKPCA tends towards the KPCA de-noising limit as a greater percentage of the original sequence is included in the reduced set. Specifically for the walking and running motion sequences, the frame by frame comparison in figures 6.4 and 6.5 further emphasizes the similarity of de-noising quality between KPCA (blue line) and GKPCA (red line). Both the GKPCA and KPCA error plots are well below that of PCA de-noising (black line). For human animation, the GKPCA algorithm is able to generate realistic and smooth animation comparable with KPCA when the de-noised motion is mapped to a skeleton model and played back in real time.

For motion de-noising results via PCA, KPCA and GKPCA, please refer to the attached files: videos/motiondenoisingrun.mp4 & videos/motiondenoisingwalk.mp4.

Figure 6.4: Frame by frame error comparison between PCA, KPCA and GKPCA de-noising of a human walk sequence. GKPCA selects 30% of the original sequence.

Figure 6.5: Frame by frame error comparison between PCA, KPCA and GKPCA de-noising of a human run sequence. GKPCA selects 30% of the original sequence.

To analyze the ability of GKPCA to implicitly de-noise feature subspace noise, we add synthetic noise in the feature space of a walk motion sequence. We are interested in this aspect of de-noising because, in KSM, noise may be induced in the process of mapping from one feature subspace to another (i.e. from $M_s$ to $M_p$). Figure 6.6 shows the relationship between feature space noise and pose space noise for both KPCA and GKPCA de-noising of a walk sequence from various yaw orientations (GKPCA uses a reduced set consisting of 30% of the original set).

Figure 6.6: Comparison of the feature and pose space mse relationship for KPCA and GKPCA. GKPCA uses a reduced set consisting of 30% of the original set.

Surprisingly, the ratio of pose space noise to feature subspace noise (i.e. pose space noise/feature subspace noise) is lower for GKPCA de-noising than for KPCA de-noising. The lower GKPCA plot indicates its superiority over KPCA at minimizing feature space noise, which may be induced in the process of pose inference via KSM (i.e. the same level of noise in the GKPCA feature subspace will more likely map to a lower level of noise in the RJC pose space than with KPCA). However, it is important to note that the noise analysis presented here is only for human motion de-noising; the effect of a reduced set (filtered via GKPCA) on markerless motion capture has not yet been analyzed. The following section investigates this relationship between the size of the reduced training set and the quality of pose estimation via KSM.

6.3.3 Greedy KPCA for Kernel Subspace Mapping

To test the efficacy of KSM motion capture with a GKPCA pre-processing filter, the training set of 323 exemplars (as previously applied in section 4.4.2) is used as the original (starting) set. Using the distance in the pose feature subspace for filtering, the GKPCA algorithm [39, 41] is initialized to extract reduced training sets from the original set. The reduced sets are then tuned independently using the process described in chapter 4. For comparison with previous results of motion capture via KSM, the same synthetic set of 1260 unseen silhouettes (as used in section 4.4.2) is used for testing.

For pose inference from clean silhouettes (i.e. clean silhouette segmentation, but still using silhouettes of different models between testing and training), and using reduced training sets consisting of 90%, 80% and 70% of the original set, KSM can infer pose with average errors of 3.17°, 4.40° and 5.86° per joint respectively. The frame by frame error in pose estimation from clean silhouettes is plotted in figure 6.8. Figure 6.9 shows the frame by frame estimation error from noisy silhouettes. For training sets consisting of 90%, 80% and 70% of the original set, the mean pose inference errors from noisy silhouettes (with salt & pepper noise density of 0.2) are recorded as 3.71°, 4.80° and 6.51° per joint respectively (figure 6.9). The average errors per joint for all the tested reduced sets are summarized in figure 6.7. As expected, the mean pose estimation error for KSM, when inferring from either clean or noisy silhouettes, increases as the training size is reduced. The important factor to note is the non-linear relationship between the training size and the pose inference error. As with KSM pose estimation in the presence of noise (section 4.4.2), the reduction in training size does not affect the inference error equally for every silhouette; rather, the size reduction increases the error significantly for a minority of the poses (i.e. the peaks in figure 6.9).

Figure 6.7: Average errors per joint for the reduced training sets filtered via GKPCA.
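The per-joint angular errors reported above can be computed directly from pairs of normalized RJC pose vectors, in which each joint is a unit vector on a sphere (section 3.4.1, and appendix A.2.2). The sketch below is an illustrative reconstruction of such a metric, not the thesis's evaluation code.

```python
import numpy as np

def mean_angular_error_deg(pred, truth, n_joints=19):
    """Mean per-joint angle between two RJC pose vectors (57-D = 19 joints x 3)."""
    p = pred.reshape(n_joints, 3)
    t = truth.reshape(n_joints, 3)
    # Each joint is a unit vector, so the angle is arccos of the dot product.
    cos = np.clip(np.sum(p * t, axis=1), -1.0, 1.0)
    return float(np.degrees(np.arccos(cos)).mean())
```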

Figure 6.8: Frame by frame error comparison (degrees per joint) for a clean walk sequence with different levels of GKPCA filtering in KSM.

Figure 6.9: Frame by frame error comparison (degrees per joint) for a noisy walk sequence with different levels of GKPCA filtering in KSM.

6.4 Conclusions & Future Directions

This chapter investigates the novel approach of applying Greedy KPCA [41] to the minimization of noise in non-linearly correlated human motion. Section 3.5 has already shown that non-linear KPCA is superior to linear PCA in the de-noising of Gaussian noise in human motion capture data, and GKPCA shares this superiority over linear PCA. In terms of its advantage over KPCA, Greedy KPCA can create realistic animation from noisy motion sequences, comparable to KPCA but at a fraction of KPCA's processing cost. GKPCA therefore enables KSM to be extended to capture complex sequences defined by a large training set. Currently, KSM learns pose and silhouette subspaces via KPCA and approaches motion capture as the problem of mapping from the projected silhouette subspace $M_s$ to the projected pose subspace $M_p$ (section 4.2). GKPCA can be applied, in this case, to control the capture rate by filtering out a reduced training set that defines de-noising subspaces relatively similar to those of the original set, thereby reducing the de-noising cost of KPCA (equation 3.16) and the pose inference cost of KSM.

An area that should be further investigated is GKPCA's superiority over KPCA in the reduction of pose (feature) subspace noise (figure 6.6). A possible explanation for this improvement is GKPCA's reduced set constraint, which results in a reduced space of valid pre-images in pose space and, favorably, a reduced space for unwanted noise as well. This improvement provided by GKPCA filtering, however, does not necessarily mean that GKPCA leads to superior pose estimation results for KSM. As expected, a reduction in training data leads to an increase in pose estimation error. An interesting point to note is the pattern of the error increase (which occurs in peaks [figure 6.9]) as the training size is reduced. This pattern arises because some silhouettes (of a walk motion) are significantly more ambiguous than others. A relatively similar pattern of error increase has also been recorded (in section 4.4.2) for KSM pose estimation in the presence of noise. This indicates that the robustness of KSM (to both noise and training set reduction) may be improved by applying a better tracking and neighbor selection criterion, allowing the integration of more complex temporal smoothing to minimize the error due to ambiguous silhouettes located at the error peaks (figure 6.9).

Figure 6.10: Diagram to illustrate the most likely relationship between using the original training set (used to capture the results of figure 4.14) without GKPCA filtering to train KSM, and using training sequences obtained directly from a motion capture database to train KSM. The vertical axis indicates the probable mean error for different sizes of the training set.

An important point to note is that the original training set (of 323 exemplars) is already sparse, in the sense that there is no redundant information and the training poses are spread out (relatively) uniformly in the normalized RJC space (i.e. redundant exemplars have already been removed from it to begin with). In practice, marker-based motion capture training sets (which can be downloaded from motion capture databases such as [32]) are usually sampled at high frame rates of between 60 Hz and 120 Hz. Therefore, when these motion sequences are used directly as the original training set (before GKPCA filtering), they will contain a high level of redundant information. For these scenarios, we believe GKPCA filtering will be extremely useful, because it may be able to remove a majority of the training exemplars before causing a significant increase in the pose estimation error of KSM (figure 6.10).


Chapter 7

Conclusions & Future Work

7.1 Concluding Remarks

This thesis introduces a novel markerless motion capture technique called Kernel Subspace Mapping (KSM) [chapter 4], which can estimate full body (human) pose without the need for any intrinsic camera calibration. The technique uses the well established unsupervised learning algorithm Kernel Principal Components Analysis (KPCA) [90] to learn the de-noising subspace of human motion in the normalized relative joint center space (which is consistent for all actors) [section 3.4.1]. After training, novel silhouettes are projected into the pose feature subspace before pose inference via pre-image approximation [87]. This has the advantageous effect of implicitly de-noising the input silhouettes onto the subspace defined by the generic (clean) training data, hence allowing pose inference (of motion similar to the training set) from silhouettes of unseen actors and of unseen poses.

To begin, the thesis advocates the use of non-linear KPCA in human motion de-noising by showing that even the simplest forms of human motion (encoded in the normalized RJC format [section 3.4.1]) are non-linear. Thereafter, to test our hypothesis of a de-noising framework for markerless motion capture, KSM was trained using synthetic silhouettes of a walk sequence (fully rotated about the yaw axis) generated from a generic model. For testing, a substantially different mesh model was used to generate a large set of silhouettes in unseen poses and from previously unseen viewing orientations. The pose inference results (average error of 2.78°/joint or 2.02 cm/joint) show that KSM can estimate pose with accuracy similar to other state of the art approaches [2, 45].

KSM is also one of the few 2D learning-based pose estimation approaches (others are [6, 45, 83]) that can accurately infer pose irrespective of yaw orientation. Furthermore, KSM works robustly in the presence of synthetic binary noise, which is used to simulate scenarios with poor silhouette segmentation.

As KPCA [90] is a well established machine learning technique, there is a large community of researchers developing novel optimization algorithms for it. We believe that, because KSM is based on this popular technique, most improvements to KPCA can be transferred to practical improvements of KSM with relatively minor modifications. To illustrate this ease of integration (of novel improved algorithms) and to elaborate further on some important aspects of KSM, two recently proposed techniques were embedded into KSM: the Image Euclidian Distance (IMED) [116] and the greedy KPCA (GKPCA) algorithm [41]. For IMED embedded KSM specifically, we concentrate on the problem of 3D object viewpoint estimation from intensity images [81]. In this case, IMED embedded KSM shows an improvement in the accuracy of viewpoint estimation over KSM (using vectorized Euclidian distance) and other previously proposed approaches [80, 119, 47, 78]. For greedy KPCA integration, we concentrate specifically on the problem of human pose inference (from synchronized silhouettes) via the use of a reduced training set. The greedy KPCA algorithm is used as a preprocessing filter to select a reduced training subset that can optimally represent the original de-noising subspace, given a specific prior constraint (e.g. using 70% of the original training data). The experiments show that Greedy KPCA filtering for KSM results in lower evaluation cost (for both KPCA de-noising and KSM in motion capture) due to the reduced training set. More importantly, KSM with a GKPCA filter retains most of the pose inference quality compared to training KSM with the original set.

7.2 Future Directions

Kernel Subspace Mapping (KSM) has been shown to be accurate in human pose estimation from binary human silhouettes. An interesting area to explore is the possibility of extending KSM to take photometric information into account without limiting its flexibility. Using silhouettes is advantageous because normalized silhouettes (section 4.2.3) of the same pose are, for most actors, relatively similar irrespective of the environment (provided that there is acceptable silhouette segmentation). However, silhouettes encode less information due to the loss of foreground photometric detail. The use of pixel intensities and colors in KSM could potentially lead to more accurate pose estimation, because photometric data can be used to disambiguate inconclusive silhouettes (i.e. the ones located mostly at the error peaks in figure 4.16). However, photometric data is also model specific, in the sense that two different persons will most likely produce extremely different patterns and color imagery for the same stance (pose). The problem to overcome is how to standardize the color/intensity information in such a way that training data (using the photometric patterns generated from one person) can be generalized to a different person for computationally efficient pose estimation. A possible solution may be to use local gradients from SIFT-like algorithms [64] (as in the work of Agarwal and Triggs [5]) to first encode the photometric data before training and pose estimation via KSM.

The integration of IMED [116] into KSM (for viewpoint estimation) yields improved estimation results over using vectorized Euclidian distance in KSM. An interesting area to explore further is whether IMED can be embedded into KSM to improve human pose estimation from silhouettes. The integration of IMED into the pyramid silhouette kernel (of the original KSM) is substantially harder (than into the vectorized Euclidian distance) due to the hierarchical structure of the silhouette descriptor. Furthermore, the (silhouette) pyramid kernel with embedded IMED would also need to be positive definite to ensure its efficacy with techniques based on convex optimization, such as KPCA and KSM.

Greedy KPCA filtering for reduced set selection in KSM is a research area which should be further investigated. Interestingly, KSM with GKPCA filtering achieves a superior reduction in pose (feature) subspace noise (i.e. a lower ratio of pose space noise to feature subspace noise) than KSM trained with the full set. As previously mentioned, an explanation for this improvement may be GKPCA's reduced set constraint, which leads to a reduced space of valid pre-images in the output space and, favorably, a reduced space for unwanted noise as well. The frame by frame analysis of the pose estimation error when using GKPCA filtering is also interesting (figure 6.9). The error plot shows that as the training size reduces, the errors do not increase (relatively) equally for all the test silhouettes; rather, the errors increase substantially for a small group of silhouettes. A similar pattern of error increase is also observed for pose estimation from noisy silhouettes (figure 4.16). Therefore, an important future area of research for KSM would be to explore techniques aimed at minimizing the error peaks due to ambiguous silhouettes. We believe that such an improvement would increase the robustness of KSM in pose estimation, both in the presence of noise and when learning from reduced training sets.

Finally, most learning based human pose estimation approaches do not rely on 3D volumetric reconstruction of the human body, mainly because 3D approaches mostly require an expensive array of synchronized and calibrated cameras. With respect to previously proposed 3D pose estimation techniques (e.g. [99, 28]), an interesting area to investigate would be the integration of KSM into 3D pose estimation approaches, by learning the mapping from 3D feature points to pose vectors. Effectively, this may lead to a reduction in the computational cost of pose estimation, as a full 3D volumetric reconstruction of the actor would not be required for each frame (to constrain the skeletal structure for pose vector estimation). Instead, the pose vectors can be inferred directly from 3D feature points, which can be tracked and reconstructed more efficiently.

Appendix A

Motion Capture Formats and Converters

A.1 Motion Capture Formats in KSM

A deformable mesh model can be animated by simply controlling its inner hierarchical biped/skeleton structure (figure A.1). As a result, most motion formats encode only the pose information needed to animate these structures. For information on how to create a deformable skinned mesh in DirectX (i.e. attach a static mesh model to its skeleton), the reader should refer to [66]. This section does not aim to present a complete review of all motion capture formats, but rather a summary of the relevant formats applied in the proposed technique, Kernel Subspace Mapping (KSM). In KSM, three different formats are used: the Acclaim Motion Capture (AMC) format (section A.1.1) [59], the DirectX format (section A.1.2) [66] and the Relative Joint Center (RJC) format (section 3.4.1). Two motion capture format converters are used in this thesis (Appendix A.2), one converting AMC to the DirectX format and the other AMC to the RJC format. Figure A.2 summarizes the use of these converters in the proposed markerless motion capture system. KSM learns the mapping from the synthetic silhouette space to the corresponding RJC space (figure A.2 [right: dotted arrow]) for a specific motion set (e.g. walking, running). For training, a database of accurate human motion is required, which can be downloaded in AMC format from the CMU motion capture database [32].

Figure A.1: Diagram to summarize the hierarchical relationship of the bones of the inner biped structure.

From this, synthetic silhouettes for machine learning purposes can be generated via the DirectX skinned mesh model [66]. From our experiments, the interpolation and reconstruction of realistic new poses is best performed in the normalized RJC format (section 3.4.1).

A.1.1 Acclaim Motion Capture (AMC) Format

The Acclaim motion capture (AMC) format, developed by the game maker Acclaim, stores human motion as concatenated Euler rotations. The full motion capture format consists of two types of file: the Acclaim skeleton file (ASF) and the AMC file [59]. The ASF file stores the attributes of the skeleton, such as the number of bones, their geometric dimensions, their hierarchical relationships and the base pose, which is usually the t-pose (figure A.3 [right]). Where a single skeletal structure is used, the ASF file can be kept constant and ignored. The AMC file, on the other hand (which encodes human motion as joint rotations over a sequence of frames), is different for every motion set and is the file generated during motion capture.

Figure A.2: Diagram to summarize the proposed markerless motion capture system and how the reviewed motion formats combine to create a novel pose inference technique called Kernel Subspace Mapping (KSM).

Concentrating principally on the AMC file format: human motion is stored by encoding each 3D local joint rotation as a set of consecutive Euler rotations. This is highlighted in figure A.3 [left humerus bone], where the left shoulder joint rotation (which corresponds to a ball & socket joint) is represented by 3 concatenated Euler rotations. Note that it is not necessary for a joint to be encoded using 3 Euler transformations. For example, the left elbow joint, which is a hinge joint (represented by the left radius bone in figure A.3), has a single degree of freedom and hence is represented by only one Euler rotation. A full human pose can be encoded as a set of Euler pose (stance) vectors representing the actor's joint orientations at a specific point in time (as used in [1, 84, 6, 24, 83]). Skeleton animation is achieved by concatenating these pose vectors over time and mapping them sequentially to a skeleton embedded inside a human mesh model. The AMC format is the simplest form of joint encoding and is one of the formats adopted by the VICON marker based motion capture system [103]. On the other hand, encoding pose with Euler rotations presents numerous problems, because the mapping of pose to Euler joint coordinates is non-linear and multivalued. Any technique like Kernel Principal Components Analysis (KPCA) [90, 91, 87] (the core component of KSM) will eventually break down when applied to vectors of Euler angles, because it may map the same 3D joint rotation to different locations in vector space.

Figure A.3: Example of the Acclaim motion capture (AMC) format, with an example rotation of the left humerus bone (i.e. the left shoulder joint).

Furthermore, linear pose interpolation using Euler pose vectors also presents a problem, because Euler rotations are not commutative and suffer from gimbal lock (Appendix A.2 [figure A.7]).

A.1.2 DirectX Animation Format

The human motion capture format in DirectX is similar to the AMC format in that there are two parts to the file structure: the skeleton/mesh attribute section and the human motion section. The skeleton/mesh attribute section encodes the biped hierarchical structure, its geometric attributes and additional information such as its relationship to the mesh model (i.e. the weight matrix of the mesh vertices [66]). This information can again be considered constant and can be ignored in the pose inference process. As with the AMC format (section A.1.1), we concentrate principally on the file section that encodes the motion data. The DirectX encoding is similar to the AMC format in that the relative 3D joint rotations are recorded in a sequential frame structure.

However, instead of encoding the 3D rotations as Euler rotations, the 4×4 homogeneous matrix is adopted. Note that the 4×4 matrix can also encode translation and scaling data (in addition to the rotation). However, for motion capture, where the geometric attributes of the model (e.g. arm lengths, height, etc.) are known and considered constant, pose inference can be constrained to inferring a set of 3×3 rotation matrices, which encode only the rotation information. The main advantage of the DirectX format is its optimized skinned mesh animation and tooling support [66], which enables the efficient rendering and capture of synthetic silhouettes for training and testing purposes. As there are major similarities between the AMC and DirectX formats, AMC data from the marker based VICON system can easily be converted to its DirectX equivalent by converting each set of xyz Euler rotations to its corresponding 3×3 rotation matrix as follows:

$$L_{[1:3,1:3]} = R_x(\Theta^x_i)\, R_y(\Theta^y_i)\, R_z(\Theta^z_i), \quad (A.1)$$

where $\Theta^x_i$, $\Theta^y_i$ and $\Theta^z_i$ are the Euler rotations about the x, y and z axes respectively, and $L_{[1:3,1:3]}$ denotes the first 3 rows and columns of the DirectX homogeneous matrix (see Appendix A.2 for more details regarding format conversion). Unfortunately, the algebra of rotations using matrices is non-commutative and its corresponding manifold is non-linear [9]. Therefore, use of the DirectX format for the synthesis of novel poses is prone to much the same problems as Euler angles.

Figure A.4: Comparison between the 3D model viewed using the AMC viewer [32] (yellow figure) and the DirectX mesh viewer of a generic skinned mesh model (grey mesh), for a golf swing [top], a boxing motion sequence [center] and a salsa dancing motion sequence [bottom].

Figure A.5: Comparison between the 3D model viewed using the AMC viewer [32] (yellow figure) and the connected stick figure of the normalized RJC format, for a running motion sequence [top], a boxing motion sequence [center] and a dancing motion sequence [bottom].

Figure A.6: Comparison between the 3D model viewed using the AMC viewer [32] (yellow figure) and the DirectX mesh viewer of an RBF mesh model of the author (blue background).

A.2 Motion Capture Format Converters

A.2.1 The AMC to DirectX converter

The AMC and DirectX formats both store pose as a set of 3D rotations, just in different forms. Conversion between the two can therefore be simplified to converting one 3D rotation format to another; in this case, the converter simply converts Euler rotations to a 4×4 homogeneous matrix. A 3D Euler rotation is characterized by three concatenated rotations, usually about the x, y and z axes, commonly referred to as the yaw, pitch and roll respectively. The order of this concatenation plays a significant role in how the object is orientated after applying the rotations. Geometrically, if we apply the Euler rotations in the order yaw (x), pitch (y) and roll (z), as in figure A.7, the pitch axis is rotated by the yaw rotation and the roll axis by both the yaw and the pitch rotations, whereas the yaw axis is affected by neither the roll nor the pitch rotation.

Figure A.7: Images to illustrate Euler rotations in terms of the yaw, pitch and roll. The image on the right shows the negative effect of gimbal lock, where a degree of rotation is lost due to the alignment of the roll and yaw axes. Notice how the pitch is affected by the yaw rotation, and the roll by both the yaw and pitch rotations, whereas the yaw is independent of any other rotation.

We can represent a rotation $\Theta^x_i$ about the yaw axis as $R_x(\Theta^x_i)$, where

$$R_x(\Theta^x_i) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos(\Theta^x_i) & -\sin(\Theta^x_i) \\ 0 & \sin(\Theta^x_i) & \cos(\Theta^x_i) \end{bmatrix}, \quad (A.2)$$

and the remaining pitch and roll rotations as $R_y(\Theta^y_i)$ and $R_z(\Theta^z_i)$ respectively:

$$R_y(\Theta^y_i) = \begin{bmatrix} \cos(\Theta^y_i) & 0 & \sin(\Theta^y_i) \\ 0 & 1 & 0 \\ -\sin(\Theta^y_i) & 0 & \cos(\Theta^y_i) \end{bmatrix}, \qquad R_z(\Theta^z_i) = \begin{bmatrix} \cos(\Theta^z_i) & -\sin(\Theta^z_i) & 0 \\ \sin(\Theta^z_i) & \cos(\Theta^z_i) & 0 \\ 0 & 0 & 1 \end{bmatrix}. \quad (A.3)$$

The 3×3 rotation matrix of the xyz (yaw-pitch-roll) Euler rotation can be generated by concatenating the separate rotation matrices in reverse order, as in equation A.1. The reversal in matrix application may initially appear incorrect; however, closer examination of figure A.7 reveals that, from the object's point of view, the roll is performed first (z axis), followed by the pitch (y axis) and finally the yaw (x axis) rotation. For each pose, the set of 3×3 rotation matrices for the i-th joint is converted to its corresponding 4×4 homogeneous matrix $L_i$ (which also encodes the bone's translational displacement relative to its parent) as follows:

$$L_i = \begin{bmatrix} R_x(\Theta^x_i) & O \\ O^T & 1 \end{bmatrix} \begin{bmatrix} R_y(\Theta^y_i) & O \\ O^T & 1 \end{bmatrix} \begin{bmatrix} R_z(\Theta^z_i) & O \\ O^T & 1 \end{bmatrix} \begin{bmatrix} I & b_i \\ O^T & 1 \end{bmatrix}. \quad (A.4)$$

In equation A.4, $O$ represents a 3×1 zero vector, $I$ the 3×3 identity matrix, and $b_i$ the length vector of the i-th bone, whose x element stores the actual length of the bone (the y and z elements are both zero).
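As an illustration of equations A.1–A.4 (a sketch of the conversion, not the thesis's converter code), the following builds the local homogeneous matrix $L_i$ from a bone's Euler angles and length:

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def homog(R=np.eye(3), t=np.zeros(3)):
    """Embed a 3x3 rotation and a 3-vector translation into a 4x4 matrix."""
    H = np.eye(4)
    H[:3, :3] = R
    H[:3, 3] = t
    return H

def local_matrix(tx, ty, tz, bone_length):
    """Build L_i of equation A.4: three Euler rotation blocks, then the bone offset."""
    b = np.array([bone_length, 0.0, 0.0])  # b_i: length stored in the x element
    return homog(rot_x(tx)) @ homog(rot_y(ty)) @ homog(rot_z(tz)) @ homog(t=b)
```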

A.2.2 The AMC to RJC converter

The conversion from the AMC format to the RJC format (section 3.4.1) can be summarized as the problem of transforming Euler rotations to 3D points on a unit sphere. As the local homogeneous 4×4 matrix $L_i$ already encodes the rotation information for the i-th joint, the simplest form of conversion is to transform the appended zero vector $[0, 0, 0, 1]^T$ as follows:

$$p_i = \frac{L_i\, [0, 0, 0, 1]^T}{\big\| L_i\, [0, 0, 0, 1]^T \big\|}. \quad (A.5)$$

Each joint rotation is now encoded as a point on a unit sphere, denoted $p_i$. Specifically in our work, a column-wise concatenation of the separate joint rotations forms the full pose vector $p$, which requires 57 dimensions (19 joints). Geometrically, for each frame, the normalized RJC encoding simply stores the positions of the joints relative to each other along the skeletal hierarchy, with the pelvis node as the root.
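A sketch of this converter is shown below. It is illustrative only: the 3D part of the transformed vector is normalized, and the fixed joint traversal order is an assumption.

```python
import numpy as np

def amc_frame_to_rjc(local_matrices):
    """Convert one frame's local joint matrices L_i to a normalized RJC vector.

    local_matrices: list of 19 4x4 matrices, one per joint, in a fixed
    hierarchy order (assumed here). Returns a 57-D pose vector p.
    """
    points = []
    for L in local_matrices:
        v = L @ np.array([0.0, 0.0, 0.0, 1.0])  # equation A.5, numerator
        p = v[:3] / np.linalg.norm(v[:3])        # unit-sphere joint direction
        points.append(p)
    return np.concatenate(points)                # column-wise concatenation
```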


Appendix B

Accurate Mesh Acquisition

Figure B.1: Selected textured mesh models of the author in varying boxing poses.

This section focuses on the capture, synthesis and texturing of accurate human mesh models for realistic skinned mesh animation in the DirectX format (section A.1.2). Human model acquisition was achieved via a laser scanner integrated with a synchronized high-resolution camera. In section B.1, the surface fitting and re-sampling technique of Carr et al. [20, 21] is applied in a novel way to the scanned data in order to create accurate human mesh models. Thereafter, images captured by the digital camera during scanning are exported to the DirectX format and textured onto the mesh. This enables the generation of synthetic test silhouettes of real people, hence allowing the quantitative analysis of any markerless motion capture technique by comparing the captured pose with the pose used to generate the synthetic images. This appendix summarizes one of many available solutions for capturing an accurate human mesh model for synthetic testing purposes; other techniques can also be used for accurate human model acquisition and texturing.

B.1 Model Acquisition for Surface Fitting & Re-sampling

The model acquisition was performed using a Riegl LMS-Z420i terrestrial laser scanner equipped with a calibrated Nikon D100 6-megapixel digital camera. The laser is positioned approximately five metres from the actor, and two scans, of the front and back of the actor, are captured. Two stands are positioned to the left and right of the actor for hand placement, to ensure ease of biped and mesh alignment. Thereafter, the front and back point cloud data are filtered and merged using the Riscan Pro software, and the combined point cloud data exported for surface fitting.

Figure B.2: Example images of the front and back scans of the author [left] and the merged, filtered scan data [right], which is used for surface fitting and re-sampling purposes.

B.2 Radial Basis Fitting & Re-sampling

From the merged point cloud data of the actor (figure B.2 [right]), a radial basis function (RBF) can be fitted to the data set for re-sampling, after which a smooth mesh surface can be attained by joining the sampled points. For 3D point cloud data, the original scanned points (also referred to as on-surface points) are each assigned a 4th-dimension density value of zero [20, 21]. Letting $x$ denote a 3D vector, a smooth surface in $\mathbb{R}^3$ can be obtained by fitting an RBF function $S(x)$ to the labelled scanned points and re-sampling the function at the density value $S(x) = 0$.

It is obvious that if only data points with density values of 0 are available, RBF fitting to this set will produce the trivial solution (i.e. $S(x) = 0$ for all $x \in \mathbb{R}^3$). To avoid this, a signed-distance function, which encodes the relationship between off-surface and on-surface points, can be adopted (figure B.3).

Figure B.3: Diagram to illustrate the generation of off-surface points (blue and red) from the input on-surface points (green) before fitting an RBF function.

An off-surface point, in this case, is a synthetic point $x^s_m$ whose density is assigned the signed value $d_m$, proportional to the distance to its closest on-surface point. Off-surface points are created along the projected normals, and can be created inside (blue points: $d_m < 0$) or outside (red points: $d_m > 0$) the surface defined by the on-surface (green) points. Normals are determined from local patches of on-surface points; in our case this is performed using an evaluation version of the FastRBF™ Matlab toolbox [20, 21].
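A sketch of this off-surface point generation, assuming per-point unit normals are already available, is shown below; it is an illustrative reconstruction, not the FastRBF toolbox.

```python
import numpy as np

def make_off_surface(points, normals, eps=0.01):
    """Create off-surface points at signed distance +/- eps along each normal.

    points:  (N, 3) on-surface scan points.
    normals: (N, 3) unit normals at those points.
    Returns (3N, 3) sample locations and (3N,) density values d.
    """
    outside = points + eps * normals   # d > 0, outside the surface
    inside = points - eps * normals    # d < 0, inside the surface
    X = np.vstack([points, outside, inside])
    d = np.concatenate([np.zeros(len(points)),
                        np.full(len(points), +eps),
                        np.full(len(points), -eps)])
    return X, d
```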

The surface fitting problem can now be re-formulated as the problem of determining an RBF function $S(x)$ such that

$$S(x_n) = 0 \;\; \text{for } n = 1, 2, \ldots, N \quad \text{and} \quad S(x^s_m) = d_m \;\; \text{for } m = 1, 2, \ldots, M, \quad (B.1)$$

where $N$ and $M$ are the numbers of on-surface and off-surface points respectively. Using the notation of [20], an RBF $S(x)$ can be represented as follows:

$$S(x) = p(x) + \sum_{i=1}^{M+N} \lambda_i\, \Phi(\| x - x_i \|), \quad (B.2)$$

with $\lambda_i$ denoting the RBF coefficients of $\Phi$, the real-valued basis function, and $p$ representing a linear polynomial function. By additionally constraining the RBF to the Beppo-Levi space in $\mathbb{R}^3$ [20], the following side conditions can be implicitly guaranteed:

$$\sum_{i=1}^{M+N} \lambda_i = \sum_{i=1}^{M+N} \lambda_i x_i = \sum_{i=1}^{M+N} \lambda_i y_i = \sum_{i=1}^{M+N} \lambda_i z_i = 0. \quad (B.3)$$

Amalgamating the side constraints with the RBF representation in (B.2), the problem of RBF fitting can effectively be reduced to solving the following system of linear equations:

$$\begin{bmatrix} A & P \\ P^T & 0 \end{bmatrix} \begin{bmatrix} \lambda \\ c \end{bmatrix} = \begin{bmatrix} d \\ 0 \end{bmatrix}, \quad (B.4)$$

where $A$ is the matrix defining the relationship between $x_i$ and $x_j$ (i.e. $A_{ij} = \phi(\| x_i - x_j \|)$ for $i, j \in [1, M+N]$), $P_{ij} = p_j(x_i)$, with $j$ indexing the polynomial basis, and $d$ collects the density values prescribed in (B.1). By solving for the unknown coefficients $c$ and $\lambda$, the RBF function $S(x)$ can be evaluated at any point within the range of the on-surface and off-surface training data. More importantly, it is possible to sample the RBF at $S(x) = 0$, which corresponds to the surface defined by the on-surface points. Furthermore, it is possible to sample this RBF at regular grid intervals, adding to the simplicity and efficiency of the mesh construction algorithm. A disadvantage of RBF fitting, as highlighted in [20, 21], is the complexity of solving for the unknown coefficients $c$ and $\lambda$, which is $O((M+N)^3)$. For an accurate mesh model derived from approximately 8000 on-surface points (as in figure B.2 [right]), this complexity makes the technique impractical on a home computer. By applying fast approximation techniques [20, 21], it is possible to reduce this complexity to $O((M+N) \log(M+N))$.
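A direct (dense) sketch of fitting and evaluating such an RBF is shown below, assuming the biharmonic basis φ(r) = r and a linear polynomial part; it illustrates equations B.2–B.4 only and omits the fast approximation of [20, 21], so it is practical only for small point sets.

```python
import numpy as np

def fit_rbf(X, d):
    """Solve the dense system (B.4) for the coefficients lambda and c.

    X: (K, 3) on- and off-surface points; d: (K,) prescribed density values.
    """
    K = X.shape[0]
    A = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # phi(r) = r
    P = np.c_[np.ones(K), X]                                     # basis: 1, x, y, z
    M = np.zeros((K + 4, K + 4))
    M[:K, :K], M[:K, K:], M[K:, :K] = A, P, P.T
    rhs = np.r_[d, np.zeros(4)]
    sol = np.linalg.solve(M, rhs)
    return sol[:K], sol[K:]            # lambda, polynomial coefficients c

def eval_rbf(lam, c, X, query):
    """Evaluate S at query points (equation B.2)."""
    r = np.linalg.norm(query[:, None, :] - X[None, :, :], axis=-1)
    return r @ lam + np.c_[np.ones(len(query)), query] @ c
```

Evaluating `eval_rbf` on a regular grid and extracting the zero iso-surface then yields the mesh, mirroring the regular grid sampling mentioned above.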

Figure B.4 shows the resultant mesh model of the author generated via the RBF fast approximation and re-sampling technique.

Figure B.4: Selected examples of the accurate mesh model of the author, created via the RBF fast approximation & sampling technique of Carr et al. [20, 21].

For increased realism, images of the person captured by the synchronized digital camera can be textured onto the plain model. Selected poses of textured mesh models of the author (using this simple solution) are shown in figure B.1.


More information

IMPROVED FACE RECOGNITION USING ICP TECHNIQUES INCAMERA SURVEILLANCE SYSTEMS. Kirthiga, M.E-Communication system, PREC, Thanjavur

IMPROVED FACE RECOGNITION USING ICP TECHNIQUES INCAMERA SURVEILLANCE SYSTEMS. Kirthiga, M.E-Communication system, PREC, Thanjavur IMPROVED FACE RECOGNITION USING ICP TECHNIQUES INCAMERA SURVEILLANCE SYSTEMS Kirthiga, M.E-Communication system, PREC, Thanjavur R.Kannan,Assistant professor,prec Abstract: Face Recognition is important

More information

The Role of Manifold Learning in Human Motion Analysis

The Role of Manifold Learning in Human Motion Analysis The Role of Manifold Learning in Human Motion Analysis Ahmed Elgammal and Chan Su Lee Department of Computer Science, Rutgers University, Piscataway, NJ, USA {elgammal,chansu}@cs.rutgers.edu Abstract.

More information

CHAPTER 6 MODIFIED FUZZY TECHNIQUES BASED IMAGE SEGMENTATION

CHAPTER 6 MODIFIED FUZZY TECHNIQUES BASED IMAGE SEGMENTATION CHAPTER 6 MODIFIED FUZZY TECHNIQUES BASED IMAGE SEGMENTATION 6.1 INTRODUCTION Fuzzy logic based computational techniques are becoming increasingly important in the medical image analysis arena. The significant

More information

Accurate 3D Face and Body Modeling from a Single Fixed Kinect

Accurate 3D Face and Body Modeling from a Single Fixed Kinect Accurate 3D Face and Body Modeling from a Single Fixed Kinect Ruizhe Wang*, Matthias Hernandez*, Jongmoo Choi, Gérard Medioni Computer Vision Lab, IRIS University of Southern California Abstract In this

More information

Dynamic Human Shape Description and Characterization

Dynamic Human Shape Description and Characterization Dynamic Human Shape Description and Characterization Z. Cheng*, S. Mosher, Jeanne Smith H. Cheng, and K. Robinette Infoscitex Corporation, Dayton, Ohio, USA 711 th Human Performance Wing, Air Force Research

More information

Human pose estimation using Active Shape Models

Human pose estimation using Active Shape Models Human pose estimation using Active Shape Models Changhyuk Jang and Keechul Jung Abstract Human pose estimation can be executed using Active Shape Models. The existing techniques for applying to human-body

More information

Articulated Structure from Motion through Ellipsoid Fitting

Articulated Structure from Motion through Ellipsoid Fitting Int'l Conf. IP, Comp. Vision, and Pattern Recognition IPCV'15 179 Articulated Structure from Motion through Ellipsoid Fitting Peter Boyi Zhang, and Yeung Sam Hung Department of Electrical and Electronic

More information

arxiv: v1 [cs.cv] 28 Sep 2018

arxiv: v1 [cs.cv] 28 Sep 2018 Camera Pose Estimation from Sequence of Calibrated Images arxiv:1809.11066v1 [cs.cv] 28 Sep 2018 Jacek Komorowski 1 and Przemyslaw Rokita 2 1 Maria Curie-Sklodowska University, Institute of Computer Science,

More information

Semi-Supervised Hierarchical Models for 3D Human Pose Reconstruction

Semi-Supervised Hierarchical Models for 3D Human Pose Reconstruction Semi-Supervised Hierarchical Models for 3D Human Pose Reconstruction Atul Kanaujia, CBIM, Rutgers Cristian Sminchisescu, TTI-C Dimitris Metaxas,CBIM, Rutgers 3D Human Pose Inference Difficulties Towards

More information

Markerless human motion capture through visual hull and articulated ICP

Markerless human motion capture through visual hull and articulated ICP Markerless human motion capture through visual hull and articulated ICP Lars Mündermann lmuender@stanford.edu Stefano Corazza Stanford, CA 93405 stefanoc@stanford.edu Thomas. P. Andriacchi Bone and Joint

More information

SIFT: SCALE INVARIANT FEATURE TRANSFORM SURF: SPEEDED UP ROBUST FEATURES BASHAR ALSADIK EOS DEPT. TOPMAP M13 3D GEOINFORMATION FROM IMAGES 2014

SIFT: SCALE INVARIANT FEATURE TRANSFORM SURF: SPEEDED UP ROBUST FEATURES BASHAR ALSADIK EOS DEPT. TOPMAP M13 3D GEOINFORMATION FROM IMAGES 2014 SIFT: SCALE INVARIANT FEATURE TRANSFORM SURF: SPEEDED UP ROBUST FEATURES BASHAR ALSADIK EOS DEPT. TOPMAP M13 3D GEOINFORMATION FROM IMAGES 2014 SIFT SIFT: Scale Invariant Feature Transform; transform image

More information

Monocular Human Motion Capture with a Mixture of Regressors. Ankur Agarwal and Bill Triggs GRAVIR-INRIA-CNRS, Grenoble, France

Monocular Human Motion Capture with a Mixture of Regressors. Ankur Agarwal and Bill Triggs GRAVIR-INRIA-CNRS, Grenoble, France Monocular Human Motion Capture with a Mixture of Regressors Ankur Agarwal and Bill Triggs GRAVIR-INRIA-CNRS, Grenoble, France IEEE Workshop on Vision for Human-Computer Interaction, 21 June 2005 Visual

More information

Articulated Pose Estimation with Flexible Mixtures-of-Parts

Articulated Pose Estimation with Flexible Mixtures-of-Parts Articulated Pose Estimation with Flexible Mixtures-of-Parts PRESENTATION: JESSE DAVIS CS 3710 VISUAL RECOGNITION Outline Modeling Special Cases Inferences Learning Experiments Problem and Relevance Problem:

More information

A novel approach to motion tracking with wearable sensors based on Probabilistic Graphical Models

A novel approach to motion tracking with wearable sensors based on Probabilistic Graphical Models A novel approach to motion tracking with wearable sensors based on Probabilistic Graphical Models Emanuele Ruffaldi Lorenzo Peppoloni Alessandro Filippeschi Carlo Alberto Avizzano 2014 IEEE International

More information

Object detection using non-redundant local Binary Patterns

Object detection using non-redundant local Binary Patterns University of Wollongong Research Online Faculty of Informatics - Papers (Archive) Faculty of Engineering and Information Sciences 2010 Object detection using non-redundant local Binary Patterns Duc Thanh

More information

Final Exam Study Guide

Final Exam Study Guide Final Exam Study Guide Exam Window: 28th April, 12:00am EST to 30th April, 11:59pm EST Description As indicated in class the goal of the exam is to encourage you to review the material from the course.

More information

Structured Light II. Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov

Structured Light II. Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov Structured Light II Johannes Köhler Johannes.koehler@dfki.de Thanks to Ronen Gvili, Szymon Rusinkiewicz and Maks Ovsjanikov Introduction Previous lecture: Structured Light I Active Scanning Camera/emitter

More information

Prof. Fanny Ficuciello Robotics for Bioengineering Visual Servoing

Prof. Fanny Ficuciello Robotics for Bioengineering Visual Servoing Visual servoing vision allows a robotic system to obtain geometrical and qualitative information on the surrounding environment high level control motion planning (look-and-move visual grasping) low level

More information

Chapter 7. Conclusions and Future Work

Chapter 7. Conclusions and Future Work Chapter 7 Conclusions and Future Work In this dissertation, we have presented a new way of analyzing a basic building block in computer graphics rendering algorithms the computational interaction between

More information

Automatic Colorization of Grayscale Images

Automatic Colorization of Grayscale Images Automatic Colorization of Grayscale Images Austin Sousa Rasoul Kabirzadeh Patrick Blaes Department of Electrical Engineering, Stanford University 1 Introduction ere exists a wealth of photographic images,

More information

Inferring 3D Body Pose from Silhouettes using Activity Manifold Learning

Inferring 3D Body Pose from Silhouettes using Activity Manifold Learning CVPR 4 Inferring 3D Body Pose from Silhouettes using Activity Manifold Learning Ahmed Elgammal and Chan-Su Lee Department of Computer Science, Rutgers University, New Brunswick, NJ, USA {elgammal,chansu}@cs.rutgers.edu

More information

An Adaptive Eigenshape Model

An Adaptive Eigenshape Model An Adaptive Eigenshape Model Adam Baumberg and David Hogg School of Computer Studies University of Leeds, Leeds LS2 9JT, U.K. amb@scs.leeds.ac.uk Abstract There has been a great deal of recent interest

More information

5LSH0 Advanced Topics Video & Analysis

5LSH0 Advanced Topics Video & Analysis 1 Multiview 3D video / Outline 2 Advanced Topics Multimedia Video (5LSH0), Module 02 3D Geometry, 3D Multiview Video Coding & Rendering Peter H.N. de With, Sveta Zinger & Y. Morvan ( p.h.n.de.with@tue.nl

More information

The Isometric Self-Organizing Map for 3D Hand Pose Estimation

The Isometric Self-Organizing Map for 3D Hand Pose Estimation The Isometric Self-Organizing Map for 3D Hand Pose Estimation Haiying Guan, Rogerio S. Feris, and Matthew Turk Computer Science Department University of California, Santa Barbara {haiying,rferis,mturk}@cs.ucsb.edu

More information

Digital Image Processing

Digital Image Processing Digital Image Processing Third Edition Rafael C. Gonzalez University of Tennessee Richard E. Woods MedData Interactive PEARSON Prentice Hall Pearson Education International Contents Preface xv Acknowledgments

More information

Modeling 3D Human Poses from Uncalibrated Monocular Images

Modeling 3D Human Poses from Uncalibrated Monocular Images Modeling 3D Human Poses from Uncalibrated Monocular Images Xiaolin K. Wei Texas A&M University xwei@cse.tamu.edu Jinxiang Chai Texas A&M University jchai@cse.tamu.edu Abstract This paper introduces an

More information

3D Computer Vision. Structured Light II. Prof. Didier Stricker. Kaiserlautern University.

3D Computer Vision. Structured Light II. Prof. Didier Stricker. Kaiserlautern University. 3D Computer Vision Structured Light II Prof. Didier Stricker Kaiserlautern University http://ags.cs.uni-kl.de/ DFKI Deutsches Forschungszentrum für Künstliche Intelligenz http://av.dfki.de 1 Introduction

More information

TEXTURE OVERLAY ONTO NON-RIGID SURFACE USING COMMODITY DEPTH CAMERA

TEXTURE OVERLAY ONTO NON-RIGID SURFACE USING COMMODITY DEPTH CAMERA TEXTURE OVERLAY ONTO NON-RIGID SURFACE USING COMMODITY DEPTH CAMERA Tomoki Hayashi 1, Francois de Sorbier 1 and Hideo Saito 1 1 Graduate School of Science and Technology, Keio University, 3-14-1 Hiyoshi,

More information

Evaluating Example-based Pose Estimation: Experiments on the HumanEva Sets

Evaluating Example-based Pose Estimation: Experiments on the HumanEva Sets Evaluating Example-based Pose Estimation: Experiments on the HumanEva Sets Ronald Poppe Human Media Interaction Group, Department of Computer Science University of Twente, Enschede, The Netherlands poppe@ewi.utwente.nl

More information

Fundamentals of Digital Image Processing

Fundamentals of Digital Image Processing \L\.6 Gw.i Fundamentals of Digital Image Processing A Practical Approach with Examples in Matlab Chris Solomon School of Physical Sciences, University of Kent, Canterbury, UK Toby Breckon School of Engineering,

More information

Gaze interaction (2): models and technologies

Gaze interaction (2): models and technologies Gaze interaction (2): models and technologies Corso di Interazione uomo-macchina II Prof. Giuseppe Boccignone Dipartimento di Scienze dell Informazione Università di Milano boccignone@dsi.unimi.it http://homes.dsi.unimi.it/~boccignone/l

More information

Data-driven Approaches to Simulation (Motion Capture)

Data-driven Approaches to Simulation (Motion Capture) 1 Data-driven Approaches to Simulation (Motion Capture) Ting-Chun Sun tingchun.sun@usc.edu Preface The lecture slides [1] are made by Jessica Hodgins [2], who is a professor in Computer Science Department

More information

Last week. Multi-Frame Structure from Motion: Multi-View Stereo. Unknown camera viewpoints

Last week. Multi-Frame Structure from Motion: Multi-View Stereo. Unknown camera viewpoints Last week Multi-Frame Structure from Motion: Multi-View Stereo Unknown camera viewpoints Last week PCA Today Recognition Today Recognition Recognition problems What is it? Object detection Who is it? Recognizing

More information

Deep Tracking: Biologically Inspired Tracking with Deep Convolutional Networks

Deep Tracking: Biologically Inspired Tracking with Deep Convolutional Networks Deep Tracking: Biologically Inspired Tracking with Deep Convolutional Networks Si Chen The George Washington University sichen@gwmail.gwu.edu Meera Hahn Emory University mhahn7@emory.edu Mentor: Afshin

More information

Shape Classification and Cell Movement in 3D Matrix Tutorial (Part I)

Shape Classification and Cell Movement in 3D Matrix Tutorial (Part I) Shape Classification and Cell Movement in 3D Matrix Tutorial (Part I) Fred Park UCI icamp 2011 Outline 1. Motivation and Shape Definition 2. Shape Descriptors 3. Classification 4. Applications: Shape Matching,

More information

Adaptive Fingerprint Image Enhancement Techniques and Performance Evaluations

Adaptive Fingerprint Image Enhancement Techniques and Performance Evaluations Adaptive Fingerprint Image Enhancement Techniques and Performance Evaluations Kanpariya Nilam [1], Rahul Joshi [2] [1] PG Student, PIET, WAGHODIYA [2] Assistant Professor, PIET WAGHODIYA ABSTRACT: Image

More information

Capturing Skeleton-based Animation Data from a Video

Capturing Skeleton-based Animation Data from a Video Capturing Skeleton-based Animation Data from a Video Liang-Yu Shih, Bing-Yu Chen National Taiwan University E-mail: xdd@cmlab.csie.ntu.edu.tw, robin@ntu.edu.tw ABSTRACT This paper presents a semi-automatic

More information

Nonrigid Surface Modelling. and Fast Recovery. Department of Computer Science and Engineering. Committee: Prof. Leo J. Jia and Prof. K. H.

Nonrigid Surface Modelling. and Fast Recovery. Department of Computer Science and Engineering. Committee: Prof. Leo J. Jia and Prof. K. H. Nonrigid Surface Modelling and Fast Recovery Zhu Jianke Supervisor: Prof. Michael R. Lyu Committee: Prof. Leo J. Jia and Prof. K. H. Wong Department of Computer Science and Engineering May 11, 2007 1 2

More information

Eye Detection by Haar wavelets and cascaded Support Vector Machine

Eye Detection by Haar wavelets and cascaded Support Vector Machine Eye Detection by Haar wavelets and cascaded Support Vector Machine Vishal Agrawal B.Tech 4th Year Guide: Simant Dubey / Amitabha Mukherjee Dept of Computer Science and Engineering IIT Kanpur - 208 016

More information

PERFORMANCE CAPTURE FROM SPARSE MULTI-VIEW VIDEO

PERFORMANCE CAPTURE FROM SPARSE MULTI-VIEW VIDEO Stefan Krauß, Juliane Hüttl SE, SoSe 2011, HU-Berlin PERFORMANCE CAPTURE FROM SPARSE MULTI-VIEW VIDEO 1 Uses of Motion/Performance Capture movies games, virtual environments biomechanics, sports science,

More information

Image Coding with Active Appearance Models

Image Coding with Active Appearance Models Image Coding with Active Appearance Models Simon Baker, Iain Matthews, and Jeff Schneider CMU-RI-TR-03-13 The Robotics Institute Carnegie Mellon University Abstract Image coding is the task of representing

More information

Thiruvarangan Ramaraj CS525 Graphics & Scientific Visualization Spring 2007, Presentation I, February 28 th 2007, 14:10 15:00. Topic (Research Paper):

Thiruvarangan Ramaraj CS525 Graphics & Scientific Visualization Spring 2007, Presentation I, February 28 th 2007, 14:10 15:00. Topic (Research Paper): Thiruvarangan Ramaraj CS525 Graphics & Scientific Visualization Spring 2007, Presentation I, February 28 th 2007, 14:10 15:00 Topic (Research Paper): Jinxian Chai and Jessica K. Hodgins, Performance Animation

More information

2. LITERATURE REVIEW

2. LITERATURE REVIEW 2. LITERATURE REVIEW CBIR has come long way before 1990 and very little papers have been published at that time, however the number of papers published since 1997 is increasing. There are many CBIR algorithms

More information

Stacked Denoising Autoencoders for Face Pose Normalization

Stacked Denoising Autoencoders for Face Pose Normalization Stacked Denoising Autoencoders for Face Pose Normalization Yoonseop Kang 1, Kang-Tae Lee 2,JihyunEun 2, Sung Eun Park 2 and Seungjin Choi 1 1 Department of Computer Science and Engineering Pohang University

More information

International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. XXXIV-5/W10

International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. XXXIV-5/W10 BUNDLE ADJUSTMENT FOR MARKERLESS BODY TRACKING IN MONOCULAR VIDEO SEQUENCES Ali Shahrokni, Vincent Lepetit, Pascal Fua Computer Vision Lab, Swiss Federal Institute of Technology (EPFL) ali.shahrokni,vincent.lepetit,pascal.fua@epfl.ch

More information

CAP 6412 Advanced Computer Vision

CAP 6412 Advanced Computer Vision CAP 6412 Advanced Computer Vision http://www.cs.ucf.edu/~bgong/cap6412.html Boqing Gong April 21st, 2016 Today Administrivia Free parameters in an approach, model, or algorithm? Egocentric videos by Aisha

More information

Last update: May 4, Vision. CMSC 421: Chapter 24. CMSC 421: Chapter 24 1

Last update: May 4, Vision. CMSC 421: Chapter 24. CMSC 421: Chapter 24 1 Last update: May 4, 200 Vision CMSC 42: Chapter 24 CMSC 42: Chapter 24 Outline Perception generally Image formation Early vision 2D D Object recognition CMSC 42: Chapter 24 2 Perception generally Stimulus

More information

International Journal of Computer Engineering and Applications, Volume XII, Issue I, Jan. 18, ISSN

International Journal of Computer Engineering and Applications, Volume XII, Issue I, Jan. 18,   ISSN International Journal of Computer Engineering and Applications, Volume XII, Issue I, Jan. 18, www.ijcea.com ISSN 2321-3469 SURVEY ON OBJECT TRACKING IN REAL TIME EMBEDDED SYSTEM USING IMAGE PROCESSING

More information

Dynamic Time Warping for Binocular Hand Tracking and Reconstruction

Dynamic Time Warping for Binocular Hand Tracking and Reconstruction Dynamic Time Warping for Binocular Hand Tracking and Reconstruction Javier Romero, Danica Kragic Ville Kyrki Antonis Argyros CAS-CVAP-CSC Dept. of Information Technology Institute of Computer Science KTH,

More information

Estimating Human Pose in Images. Navraj Singh December 11, 2009

Estimating Human Pose in Images. Navraj Singh December 11, 2009 Estimating Human Pose in Images Navraj Singh December 11, 2009 Introduction This project attempts to improve the performance of an existing method of estimating the pose of humans in still images. Tasks

More information

Computational Foundations of Cognitive Science

Computational Foundations of Cognitive Science Computational Foundations of Cognitive Science Lecture 16: Models of Object Recognition Frank Keller School of Informatics University of Edinburgh keller@inf.ed.ac.uk February 23, 2010 Frank Keller Computational

More information

Model-free Viewpoint Invariant Human Activity Recognition

Model-free Viewpoint Invariant Human Activity Recognition Model-free Viewpoint Invariant Human Activity Recognition Zaw Zaw Htike, Simon Egerton, Kuang Ye Chow Abstract The viewpoint assumption is becoming an obstacle in human activity recognition systems. There

More information

Differential Structure in non-linear Image Embedding Functions

Differential Structure in non-linear Image Embedding Functions Differential Structure in non-linear Image Embedding Functions Robert Pless Department of Computer Science, Washington University in St. Louis pless@cse.wustl.edu Abstract Many natural image sets are samples

More information

Non-linear dimension reduction

Non-linear dimension reduction Sta306b May 23, 2011 Dimension Reduction: 1 Non-linear dimension reduction ISOMAP: Tenenbaum, de Silva & Langford (2000) Local linear embedding: Roweis & Saul (2000) Local MDS: Chen (2006) all three methods

More information

Object Tracking using HOG and SVM

Object Tracking using HOG and SVM Object Tracking using HOG and SVM Siji Joseph #1, Arun Pradeep #2 Electronics and Communication Engineering Axis College of Engineering and Technology, Ambanoly, Thrissur, India Abstract Object detection

More information

Video Surveillance System for Object Detection and Tracking Methods R.Aarthi, K.Kiruthikadevi

Video Surveillance System for Object Detection and Tracking Methods R.Aarthi, K.Kiruthikadevi IJISET - International Journal of Innovative Science, Engineering & Technology, Vol. 2 Issue 11, November 2015. Video Surveillance System for Object Detection and Tracking Methods R.Aarthi, K.Kiruthikadevi

More information

Unsupervised learning in Vision

Unsupervised learning in Vision Chapter 7 Unsupervised learning in Vision The fields of Computer Vision and Machine Learning complement each other in a very natural way: the aim of the former is to extract useful information from visual

More information

Visual Motion Analysis and Tracking Part II

Visual Motion Analysis and Tracking Part II Visual Motion Analysis and Tracking Part II David J Fleet and Allan D Jepson CIAR NCAP Summer School July 12-16, 16, 2005 Outline Optical Flow and Tracking: Optical flow estimation (robust, iterative refinement,

More information

Short Survey on Static Hand Gesture Recognition

Short Survey on Static Hand Gesture Recognition Short Survey on Static Hand Gesture Recognition Huu-Hung Huynh University of Science and Technology The University of Danang, Vietnam Duc-Hoang Vo University of Science and Technology The University of

More information

A Statistical Image-Based Shape Model for Visual Hull Reconstruction and 3D Structure Inference. Kristen Lorraine Grauman

A Statistical Image-Based Shape Model for Visual Hull Reconstruction and 3D Structure Inference. Kristen Lorraine Grauman A Statistical Image-Based Shape Model for Visual Hull Reconstruction and 3D Structure Inference by Kristen Lorraine Grauman Submitted to the Department of Electrical Engineering and Computer Science in

More information

Face Recognition At-a-Distance Based on Sparse-Stereo Reconstruction

Face Recognition At-a-Distance Based on Sparse-Stereo Reconstruction Face Recognition At-a-Distance Based on Sparse-Stereo Reconstruction Ham Rara, Shireen Elhabian, Asem Ali University of Louisville Louisville, KY {hmrara01,syelha01,amali003}@louisville.edu Mike Miller,

More information

Model-based Motion Capture for Crash Test Video Analysis

Model-based Motion Capture for Crash Test Video Analysis Model-based Motion Capture for Crash Test Video Analysis Juergen Gall 1, Bodo Rosenhahn 1, Stefan Gehrig 2, and Hans-Peter Seidel 1 1 Max-Planck-Institute for Computer Science, Campus E1 4, 66123 Saarbrücken,

More information

Introduction to behavior-recognition and object tracking

Introduction to behavior-recognition and object tracking Introduction to behavior-recognition and object tracking Xuan Mo ipal Group Meeting April 22, 2011 Outline Motivation of Behavior-recognition Four general groups of behaviors Core technologies Future direction

More information

Model-based Visual Tracking:

Model-based Visual Tracking: Technische Universität München Model-based Visual Tracking: the OpenTL framework Giorgio Panin Technische Universität München Institut für Informatik Lehrstuhl für Echtzeitsysteme und Robotik (Prof. Alois

More information

Human detection using local shape and nonredundant

Human detection using local shape and nonredundant University of Wollongong Research Online Faculty of Informatics - Papers (Archive) Faculty of Engineering and Information Sciences 2010 Human detection using local shape and nonredundant binary patterns

More information

Visuelle Perzeption für Mensch- Maschine Schnittstellen

Visuelle Perzeption für Mensch- Maschine Schnittstellen Visuelle Perzeption für Mensch- Maschine Schnittstellen Vorlesung, WS 2009 Prof. Dr. Rainer Stiefelhagen Dr. Edgar Seemann Institut für Anthropomatik Universität Karlsruhe (TH) http://cvhci.ira.uka.de

More information

Tri-modal Human Body Segmentation

Tri-modal Human Body Segmentation Tri-modal Human Body Segmentation Master of Science Thesis Cristina Palmero Cantariño Advisor: Sergio Escalera Guerrero February 6, 2014 Outline 1 Introduction 2 Tri-modal dataset 3 Proposed baseline 4

More information

Detecting motion by means of 2D and 3D information

Detecting motion by means of 2D and 3D information Detecting motion by means of 2D and 3D information Federico Tombari Stefano Mattoccia Luigi Di Stefano Fabio Tonelli Department of Electronics Computer Science and Systems (DEIS) Viale Risorgimento 2,

More information

Intrinsic3D: High-Quality 3D Reconstruction by Joint Appearance and Geometry Optimization with Spatially-Varying Lighting

Intrinsic3D: High-Quality 3D Reconstruction by Joint Appearance and Geometry Optimization with Spatially-Varying Lighting Intrinsic3D: High-Quality 3D Reconstruction by Joint Appearance and Geometry Optimization with Spatially-Varying Lighting R. Maier 1,2, K. Kim 1, D. Cremers 2, J. Kautz 1, M. Nießner 2,3 Fusion Ours 1

More information

Automatic Generation of Animatable 3D Personalized Model Based on Multi-view Images

Automatic Generation of Animatable 3D Personalized Model Based on Multi-view Images Automatic Generation of Animatable 3D Personalized Model Based on Multi-view Images Seong-Jae Lim, Ho-Won Kim, Jin Sung Choi CG Team, Contents Division ETRI Daejeon, South Korea sjlim@etri.re.kr Bon-Ki

More information

Object Classification Using Tripod Operators

Object Classification Using Tripod Operators Object Classification Using Tripod Operators David Bonanno, Frank Pipitone, G. Charmaine Gilbreath, Kristen Nock, Carlos A. Font, and Chadwick T. Hawley US Naval Research Laboratory, 4555 Overlook Ave.

More information

Preparation Meeting. Recent Advances in the Analysis of 3D Shapes. Emanuele Rodolà Matthias Vestner Thomas Windheuser Daniel Cremers

Preparation Meeting. Recent Advances in the Analysis of 3D Shapes. Emanuele Rodolà Matthias Vestner Thomas Windheuser Daniel Cremers Preparation Meeting Recent Advances in the Analysis of 3D Shapes Emanuele Rodolà Matthias Vestner Thomas Windheuser Daniel Cremers What You Will Learn in the Seminar Get an overview on state of the art

More information

Illumination invariant face detection

Illumination invariant face detection University of Wollongong Research Online University of Wollongong Thesis Collection 1954-2016 University of Wollongong Thesis Collections 2009 Illumination invariant face detection Alister Cordiner University

More information

Head Frontal-View Identification Using Extended LLE

Head Frontal-View Identification Using Extended LLE Head Frontal-View Identification Using Extended LLE Chao Wang Center for Spoken Language Understanding, Oregon Health and Science University Abstract Automatic head frontal-view identification is challenging

More information

TEXTURE OVERLAY ONTO NON-RIGID SURFACE USING COMMODITY DEPTH CAMERA

TEXTURE OVERLAY ONTO NON-RIGID SURFACE USING COMMODITY DEPTH CAMERA TEXTURE OVERLAY ONTO NON-RIGID SURFACE USING COMMODITY DEPTH CAMERA Tomoki Hayashi, Francois de Sorbier and Hideo Saito Graduate School of Science and Technology, Keio University, 3-14-1 Hiyoshi, Kohoku-ku,

More information

Face Tracking. Synonyms. Definition. Main Body Text. Amit K. Roy-Chowdhury and Yilei Xu. Facial Motion Estimation

Face Tracking. Synonyms. Definition. Main Body Text. Amit K. Roy-Chowdhury and Yilei Xu. Facial Motion Estimation Face Tracking Amit K. Roy-Chowdhury and Yilei Xu Department of Electrical Engineering, University of California, Riverside, CA 92521, USA {amitrc,yxu}@ee.ucr.edu Synonyms Facial Motion Estimation Definition

More information

Probabilistic Tracking and Reconstruction of 3D Human Motion in Monocular Video Sequences

Probabilistic Tracking and Reconstruction of 3D Human Motion in Monocular Video Sequences Probabilistic Tracking and Reconstruction of 3D Human Motion in Monocular Video Sequences Presentation of the thesis work of: Hedvig Sidenbladh, KTH Thesis opponent: Prof. Bill Freeman, MIT Thesis supervisors

More information

Face Recognition using SURF Features and SVM Classifier

Face Recognition using SURF Features and SVM Classifier International Journal of Electronics Engineering Research. ISSN 0975-6450 Volume 8, Number 1 (016) pp. 1-8 Research India Publications http://www.ripublication.com Face Recognition using SURF Features

More information

Predicting 3D People from 2D Pictures

Predicting 3D People from 2D Pictures Predicting 3D People from 2D Pictures Leonid Sigal Michael J. Black Department of Computer Science Brown University http://www.cs.brown.edu/people/ls/ CIAR Summer School August 15-20, 2006 Leonid Sigal

More information

Determinant of homography-matrix-based multiple-object recognition

Determinant of homography-matrix-based multiple-object recognition Determinant of homography-matrix-based multiple-object recognition 1 Nagachetan Bangalore, Madhu Kiran, Anil Suryaprakash Visio Ingenii Limited F2-F3 Maxet House Liverpool Road Luton, LU1 1RS United Kingdom

More information

THE preceding chapters were all devoted to the analysis of images and signals which

THE preceding chapters were all devoted to the analysis of images and signals which Chapter 5 Segmentation of Color, Texture, and Orientation Images THE preceding chapters were all devoted to the analysis of images and signals which take values in IR. It is often necessary, however, to

More information

Hierarchical GEMINI - Hierarchical linear subspace indexing method

Hierarchical GEMINI - Hierarchical linear subspace indexing method Hierarchical GEMINI - Hierarchical linear subspace indexing method GEneric Multimedia INdexIng DB in feature space Range Query Linear subspace sequence method DB in subspace Generic constraints Computing

More information

Hand Tracking Miro Enev UCDS Cognitive Science Department 9500 Gilman Dr., La Jolla CA

Hand Tracking Miro Enev UCDS Cognitive Science Department 9500 Gilman Dr., La Jolla CA Hand Tracking Miro Enev UCDS Cognitive Science Department 9500 Gilman Dr., La Jolla CA menev@ucsd.edu Abstract: Tracking the pose of a moving hand from a monocular perspective is a difficult problem. In

More information

Human Activity Recognition Using Multidimensional Indexing

Human Activity Recognition Using Multidimensional Indexing Human Activity Recognition Using Multidimensional Indexing By J. Ben-Arie, Z. Wang, P. Pandit, S. Rajaram, IEEE PAMI August 2002 H. Ertan Cetingul, 07/20/2006 Abstract Human activity recognition from a

More information