The Laplacian Eigenmaps Latent Variable Model
1 The Laplacian Eigenmaps Latent Variable Model with applications to articulated pose tracking Miguel Á. Carreira-Perpiñán EECS, UC Merced
2 Articulated pose tracking
We want to extract the 3D pose of a moving person (e.g. 3D positions of several markers located on body joints) from monocular video. From the CMU motion-capture database, http://mocap.cs.cmu.edu.
Idea: model patterns of human motion using motion-capture data (also useful in psychology, biomechanics, etc.).
3 Articulated pose tracking (cont.)
Some applications: recognising orientation (e.g. front/back), activities (running, walking, ...), identity, sex; computer graphics: rendering a graphics model of the person (from different viewpoints); entertainment: realistic animation of cartoon characters in movies and computer games.
Difficult because: ambiguity of perspective projection, 3D → 2D (depth loss); self-occlusion of body parts; noise in the image, clutter; high-dimensional space of poses, which makes it hard to track (e.g. in a Bayesian framework).
4 Articulated pose tracking (cont.)
Pose = 3D positions of several markers located on body joints: vector y ∈ R^D.
Intrinsic pose x ∈ R^L with L ≪ D: marker positions are correlated because of physical constraints (e.g. elbow and wrist are always a fixed distance apart), so the poses y_1, y_2, ... live in a low-dimensional manifold of dimension L ≪ D.
5 Articulatory inversion
The problem of recovering the sequence of vocal tract shapes (lips, tongue, etc.) that produce a given acoustic utterance.
[Diagram: acoustic signal → ?? → articulatory configurations]
A long-standing problem in speech research (but solved effortlessly by humans).
6 Articulatory inversion (cont.)
Applications: speech coding; speech recognition; real-time visualisation of the vocal tract (e.g. for speech production studies or for language learning); speech therapy (e.g. assessment of dysarthria); etc.
Difficult because: different vocal tract shapes can produce the same acoustics; high-dimensional space of vocal-tract shapes; but, again, a low-dimensional intrinsic manifold because of physical constraints.
7 Articulatory inversion (cont.)
Data collection: electromagnetic articulography (EMA) or X-ray microbeam: record 2D positions along the midsagittal plane of several pellets located on the tongue, lips, velum, etc.
X-ray microbeam database (U. Wisconsin); MOCHA database (U. Edinburgh & QMUC).
Other techniques being developed: ultrasound, MRI, etc.
8 Visualisation of blood test analytes
One blood sample yields many analytes (glucose, albumin, Na+, LDL, ...). The 2D map represents normal vs. abnormal samples in different regions. Extreme values of certain analytes are potentially associated with diseases (glucose: diabetes; urea nitrogen and creatinine: kidney disease; total bilirubin: liver).
[Figure panels: all data; inpatients; outpatients; normal samples; high glucose; high urea nitrogen; high creatinine; high total bilirubin]
9 Visualisation of blood test analytes (cont.)
The temporal trajectories (over a period of days) for different patients indicate their evolution. Also useful to identify bad samples, e.g. due to machine malfunction.
[Figure: trajectories for an inpatient and an outpatient]
Kazmierczak, Leen, Erdogmus & Carreira-Perpiñán, Clinical Chemistry and Laboratory Medicine 2007
10 Dimensionality reduction (manifold learning)
Given a high-dimensional data set Y = {y_1, ..., y_N} ⊂ R^D, assumed to lie near a low-dimensional manifold of dimension L ≪ D, learn (estimate):
Dimensionality reduction mapping F: y → x
Reconstruction mapping f: x → y
[Figure: manifold with mappings F(y) and f(x) between the latent low-dimensional space R^L and the observed high-dimensional space R^D]
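To make the two mappings concrete, here is a minimal sketch using PCA as the simplest (linear) special case; the function names and the toy data are illustrative, not from the talk.

```python
import numpy as np

def fit_pca(Y, L):
    """Fit a linear dimensionality reduction y -> x and reconstruction x -> y.

    Y: N x D data matrix; L: latent dimension. Returns (F, f)."""
    mu = Y.mean(axis=0)
    # Principal directions = top-L right singular vectors of the centred data.
    _, _, Vt = np.linalg.svd(Y - mu, full_matrices=False)
    W = Vt[:L].T                      # D x L basis of the linear "manifold"
    F = lambda y: (y - mu) @ W        # dimensionality reduction mapping
    f = lambda x: x @ W.T + mu        # reconstruction mapping
    return F, f

# Toy usage: 3D points lying near a 2D plane.
rng = np.random.default_rng(0)
Y = rng.normal(size=(500, 2)) @ rng.normal(size=(2, 3)) + 0.01 * rng.normal(size=(500, 3))
F, f = fit_pca(Y, L=2)
X = F(Y)                              # latent coordinates
print(np.abs(f(X) - Y).max())         # small reconstruction error
```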
11 Dimensionality reduction (cont.)
Two large classes of nonlinear methods:
Latent variable models: probabilistic, mappings, local optima, scale badly with dimension.
Spectral methods: not probabilistic, no mappings, global optimum, scale well with dimension.
They have developed separately so far, and have complementary advantages and disadvantages. Our new method, LELVM, shares the advantages of both.
12 Latent variable models (LVMs)
Probabilistic methods that learn a joint density model p(x, y) from the training data Y. This yields:
Marginal densities p(y) = ∫ p(y|x) p(x) dx and p(x)
Mapping F(y) = E{x|y}, the mean of p(x|y) = p(y|x) p(x) / p(y) (Bayes' theorem)
Mapping f(x) = E{y|x}
Several types:
Linear LVMs: probabilistic PCA, factor analysis, ICA, ...
Nonlinear LVMs: Generative Topographic Mapping (GTM) (Bishop et al., NECO 1998), Generalised Elastic Net (GEN) (Carreira-Perpiñán et al. 2005)
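For the simplest linear LVM, probabilistic PCA, these quantities are available in closed form; the following display is standard material (Tipping & Bishop), added here only to make the definitions above concrete, not part of the talk:

```latex
% Probabilistic PCA as a latent variable model (standard results, not from the slides)
\begin{align*}
p(\mathbf{x}) &= \mathcal{N}(\mathbf{x};\mathbf{0},\mathbf{I}), &
p(\mathbf{y}\mid\mathbf{x}) &= \mathcal{N}(\mathbf{y};\mathbf{W}\mathbf{x}+\boldsymbol{\mu},\sigma^2\mathbf{I}), &
p(\mathbf{y}) &= \mathcal{N}(\mathbf{y};\boldsymbol{\mu},\mathbf{W}\mathbf{W}^T+\sigma^2\mathbf{I}),\\
F(\mathbf{y}) &= E\{\mathbf{x}\mid\mathbf{y}\}
              = (\mathbf{W}^T\mathbf{W}+\sigma^2\mathbf{I})^{-1}\mathbf{W}^T(\mathbf{y}-\boldsymbol{\mu}), &
f(\mathbf{x}) &= E\{\mathbf{y}\mid\mathbf{x}\} = \mathbf{W}\mathbf{x}+\boldsymbol{\mu}.
\end{align*}
```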
13 Latent variable models (cont.)
Nonlinear LVMs are very powerful in principle: they can represent nonlinear mappings, can represent multimodal densities, and can deal with missing data.
But in practice they have disadvantages: the objective function has many local optima, most of which yield very poor manifolds; the computational cost grows exponentially with the latent dimension L, which limits L ≤ 3. Reason: one needs to discretise the latent space to compute p(y) = ∫ p(y|x) p(x) dx.
This has limited their use in certain applications.
14 Spectral methods
Very popular recently in machine learning: multidimensional scaling, Isomap, LLE, Laplacian eigenmaps, etc.
Essentially, they find latent points x_1, ..., x_N such that distances in X correlate well with distances in Y. Example: draw a map of the US given city-to-city distances, i.e. find x_1, ..., x_N whose pairwise distances ‖x_m − x_n‖ match distance(y_m, y_n) (a minimal sketch follows below).
We focus on Laplacian eigenmaps.
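The city-map example can be reproduced with classical (metric) MDS; a minimal numpy sketch, with a made-up distance matrix standing in for real city-to-city distances:

```python
import numpy as np

def classical_mds(Dist, L=2):
    """Embed points in R^L from a matrix of pairwise distances (classical MDS)."""
    N = Dist.shape[0]
    J = np.eye(N) - np.ones((N, N)) / N        # centring matrix
    B = -0.5 * J @ (Dist ** 2) @ J             # double-centred squared distances
    evals, evecs = np.linalg.eigh(B)           # eigenvalues in ascending order
    idx = np.argsort(evals)[::-1][:L]          # keep the L largest
    return evecs[:, idx] * np.sqrt(np.maximum(evals[idx], 0))

# Hypothetical distances between 4 "cities" (units arbitrary).
Dist = np.array([[0, 3, 4, 5],
                 [3, 0, 5, 4],
                 [4, 5, 0, 3],
                 [5, 4, 3, 0]], dtype=float)
X = classical_mds(Dist)                        # N x 2 map coordinates
print(X)
```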
15 Spectral methods: Laplacian eigenmaps (Belkin & Niyogi, NECO 2003)
1. Neighborhood graph on the dataset y_1, ..., y_N with weighted edges w_mn = exp(−(‖y_m − y_n‖/σ)²).
2. Set up the quadratic optimisation problem
   min_X tr(X L X^T)  s.t.  X D X^T = I, X D 1 = 0
   with X = (x_1, ..., x_N), affinity matrix W = (w_mn), degree matrix D = diag(Σ_{n=1}^N w_nm), graph Laplacian L = D − W.
   Intuition: tr(X L X^T) = ½ Σ_{n,m} w_nm ‖x_n − x_m‖² ⇒ place x_n, x_m nearby if y_n and y_m are similar. The constraints fix the location and scale of X.
3. Solution: eigenvectors V = (v_2, ..., v_{L+1}) of D^{−1/2} W D^{−1/2}, which yield the low-dimensional points X = V^T D^{−1/2}.
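A minimal numpy/scipy sketch of the three steps above, using dense Gaussian affinities and the generalized eigenproblem L v = λ D v (a practical implementation would use a sparse nearest-neighbor graph); this is an illustration, not the authors' implementation.

```python
import numpy as np
from scipy.linalg import eigh

def laplacian_eigenmaps(Y, L=2, sigma=1.0):
    """Steps 1-3: Gaussian affinities, graph Laplacian, generalized eigenproblem."""
    sqd = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1)    # pairwise squared distances
    W = np.exp(-sqd / sigma**2)                              # affinities w_mn
    np.fill_diagonal(W, 0.0)
    D = np.diag(W.sum(axis=1))                               # degree matrix
    Lap = D - W                                              # graph Laplacian
    # Solve Lap v = lambda D v; the smallest eigenvalue (~0) gives the constant
    # eigenvector, so keep eigenvectors 2..L+1 as the embedding coordinates.
    evals, evecs = eigh(Lap, D)
    return evecs[:, 1:L + 1]                                 # N x L latent points

# Toy usage on a noisy 1D curve embedded in 3D.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 3, size=200))
Y = np.c_[np.cos(t), np.sin(t), t] + 0.01 * rng.normal(size=(200, 3))
X = laplacian_eigenmaps(Y, L=1, sigma=0.5)
```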
16 Spectral methods: Laplacian eigenmaps (cont.)
Example: Swiss roll (from Belkin & Niyogi, NECO 2003).
[Figure: high-dimensional space Y = (y_1, ..., y_N) and low-dimensional space X = (x_1, ..., x_N)]
17 Spectral methods: Laplacian eigenmaps (cont.)
Advantages: no local optima (unique solution); yet succeeds with nonlinear, convoluted manifolds (if the neighborhood graph is good); computational cost O(N³), lower for sparse graphs; can use any latent space dimension L (just use L eigenvectors).
Disadvantages: no mapping for points not in Y = (y_1, ..., y_N) or X = (x_1, ..., x_N) (out-of-sample mapping); no density p(x, y).
What should the mappings and densities be for unseen points (not in the training set)?
18 The Laplacian Eigenmaps Latent Variable Model (Carreira-Perpiñán & Lu, AISTATS 2007)
Natural way to embed unseen points Y_u = (y_{N+1}, ..., y_{N+M}) without perturbing the points Y_s = (y_1, ..., y_N) previously embedded:
   min_{X_u ∈ R^{L×M}} tr( (X_s X_u) (L_ss L_su; L_us L_uu) (X_s X_u)^T )
That is, solve the LE problem but subject to keeping X_s fixed.
Semi-supervised learning point of view: labelled data (X_s, Y_s) (real-valued labels), unlabelled data Y_u, graph prior on Y = (Y_s, Y_u).
Solution: X_u = −X_s L_su L_uu^{−1}. In particular, to embed a single unseen point y = Y_u ∈ R^D, we obtain
   x = F(y) = Σ_{n=1}^N [ K((y − y_n)/σ) / Σ_{n'=1}^N K((y − y_{n'})/σ) ] x_n.
This gives an out-of-sample mapping F(y) for Laplacian eigenmaps.
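A sketch of this out-of-sample computation under the same Gaussian affinities as above; the variable names and the single-point helper are mine, not from the paper.

```python
import numpy as np

def embed_unseen(Ys, Xs, Yu, sigma=1.0):
    """Out-of-sample LE embedding X_u = -X_s L_su L_uu^{-1} (points stored as rows).

    Ys: N x D seen points, Xs: N x L their fixed embedding, Yu: M x D unseen points."""
    d2_su = ((Ys[:, None, :] - Yu[None, :, :]) ** 2).sum(-1)   # N x M squared distances
    d2_uu = ((Yu[:, None, :] - Yu[None, :, :]) ** 2).sum(-1)   # M x M
    W_su = np.exp(-d2_su / sigma**2)                            # seen-unseen affinities
    W_uu = np.exp(-d2_uu / sigma**2)
    np.fill_diagonal(W_uu, 0.0)
    deg_u = W_su.sum(axis=0) + W_uu.sum(axis=0)                 # degrees of unseen points
    L_uu = np.diag(deg_u) - W_uu                                # M x M Laplacian block
    # With row-stored coordinates this is X_u = L_uu^{-1} W_su^T X_s,
    # algebraically the same as X_u = -X_s L_su L_uu^{-1} above.
    return np.linalg.solve(L_uu, W_su.T @ Xs)

def F_out_of_sample(y, Ys, Xs, sigma=1.0):
    """Single unseen point: the Nadaraya-Watson estimate x = F(y)."""
    w = np.exp(-((Ys - y) ** 2).sum(-1) / sigma**2)             # affinities to training points
    return (w[:, None] * Xs).sum(0) / w.sum()
```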
19 LELVM (cont.)
Further, we can define a joint probability model on x and y (thus an LVM) that is consistent with that mapping:
   p(x, y) = (1/N) Σ_{n=1}^N K_y((y − y_n)/σ_y) K_x((x − x_n)/σ_x)
   p(y) = (1/N) Σ_{n=1}^N K_y((y − y_n)/σ_y)        p(x) = (1/N) Σ_{n=1}^N K_x((x − x_n)/σ_x)
   F(y) = Σ_{n=1}^N [ K_y((y − y_n)/σ_y) / Σ_{n'=1}^N K_y((y − y_{n'})/σ_y) ] x_n = Σ_{n=1}^N p(n|y) x_n = E{x|y}
   f(x) = Σ_{n=1}^N [ K_x((x − x_n)/σ_x) / Σ_{n'=1}^N K_x((x − x_{n'})/σ_x) ] y_n = Σ_{n=1}^N p(n|x) y_n = E{y|x}
The densities are kernel density estimates; the mappings are Nadaraya-Watson estimators (all nonparametric).
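A minimal sketch of these densities and mappings with isotropic Gaussian kernels (bandwidths σ_x, σ_y as above); a faithful LELVM implementation would tie this to the graph used for the embedding, so treat this only as an illustration of the formulas.

```python
import numpy as np

class LELVMSketch:
    """Kernel densities p(x), p(y) and Nadaraya-Watson mappings F, f built from (Xs, Ys)."""

    def __init__(self, Xs, Ys, sigma_x=0.1, sigma_y=1.0):
        self.Xs, self.Ys = Xs, Ys
        self.sx, self.sy = sigma_x, sigma_y

    def _resp(self, z, Zs, sigma):
        """Responsibilities p(n|z) under an isotropic Gaussian kernel density estimate."""
        w = np.exp(-((Zs - z) ** 2).sum(-1) / (2 * sigma**2))
        return w / w.sum()

    def F(self, y):
        """E{x|y}: map an observed point to the latent space."""
        return self._resp(y, self.Ys, self.sy) @ self.Xs

    def f(self, x):
        """E{y|x}: reconstruct an observed point from a latent point."""
        return self._resp(x, self.Xs, self.sx) @ self.Ys

    def p_y(self, y):
        """Kernel density estimate p(y) with isotropic Gaussian kernels."""
        N, D = self.Ys.shape
        w = np.exp(-((self.Ys - y) ** 2).sum(-1) / (2 * self.sy**2))
        return w.mean() / (2 * np.pi * self.sy**2) ** (D / 2)
```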
20 LELVM (cont.)
All the user needs to do is set: the graph parameters for Laplacian eigenmaps (as usual); σ_x, σ_y to control the smoothness of the mappings and densities.
Advantages, those of latent variable models and spectral methods: yields mappings (nonlinear, infinitely differentiable and based on a global coordinate system); yields densities (potentially multimodal); no local optima; succeeds with convoluted manifolds; can use any dimension; computational efficiency O(N³), lower for sparse graphs.
Disadvantages: it relies on the success of Laplacian eigenmaps (which depends on the graph).
21 LELVM example: spiral
Dataset: spiral in 2D; reduction to 1D.
[Figure: mappings f(x) and densities p(x), p(y) for several values of σ_x and σ_y, compared with GTM]
22 LELVM example: motion-capture dataset
[Figure: latent spaces learned by LELVM, GTM, GPLVM, and GPLVM with back-constraints]
23 LELVM example: mocap dataset (cont.)
Smooth interpolation (e.g. for animation).
24 LELVM example: mocap dataset (cont.)
Reconstruction of missing patterns (e.g. due to occlusion) using p(x|y_obs) and the mode-finding algorithms of Carreira-Perpiñán, PAMI 2007.
[Figure: observed pose y_obs and reconstructed poses y (mode 1), y (mode 2)]
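A hedged sketch of that reconstruction step: weight each training pair by how well its observed coordinates match y_obs, then climb to a mode of the resulting Gaussian mixture over x with a mean-shift-style fixed-point iteration; this is a simplification of the exact mode-finding algorithms cited above, and the names are mine.

```python
import numpy as np

def reconstruct_missing(y_obs, obs_idx, Xs, Ys, sigma_x=0.1, sigma_y=1.0, n_iter=50):
    """Reconstruct a pose from partially observed dimensions via a mode of p(x|y_obs).

    y_obs: observed values; obs_idx: indices of the observed dimensions of y."""
    # Mixture weights p(n|y_obs), computed from the observed coordinates only.
    w = np.exp(-((Ys[:, obs_idx] - y_obs) ** 2).sum(-1) / (2 * sigma_y**2))
    w = w / w.sum()
    # Treat p(x|y_obs) as the Gaussian mixture sum_n w_n N(x; x_n, sigma_x^2 I)
    # and iterate the mean-shift fixed point x <- sum_n q_n(x) x_n to reach a mode.
    x = w @ Xs                                              # start from the conditional mean
    for _ in range(n_iter):
        q = w * np.exp(-((Xs - x) ** 2).sum(-1) / (2 * sigma_x**2))
        x = (q / q.sum()) @ Xs
    # Fill in the full pose with the mapping f(x).
    r = np.exp(-((Xs - x) ** 2).sum(-1) / (2 * sigma_x**2))
    y_full = (r / r.sum()) @ Ys
    return x, y_full
```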
25 LELVM application: people tracking (Lu, Carreira-Perpiñán & Sminchisescu, NIPS 2007)
The probabilistic nature of LELVM allows seamless integration in a Bayesian framework for nonlinear, non-Gaussian tracking (particle filters).
At time t: observation z_t, unobserved state s_t = (d_t, x_t), with rigid motion d and intrinsic pose x.
   Prediction: p(s_t | z_{1:t−1}) = ∫ p(s_t | s_{t−1}) p(s_{t−1} | z_{1:t−1}) ds_{t−1}
   Correction: p(s_t | z_{1:t}) ∝ p(z_t | s_t) p(s_t | z_{1:t−1})
We use the Gaussian mixture sigma-point particle filter (v. d. Merwe & Wan, ICASSP 2003).
Dynamics: p(s_t | s_{t−1}) ∝ p_d(d_t | d_{t−1}) · p_x(x_t | x_{t−1}) · p(x_t), where p_d and p_x are Gaussian and p(x_t) is the LELVM prior.
Observation model: p(z_t | s_t) given by a 2D tracker with Gaussian noise, and mapping from state to observations: x ∈ R^L → (LELVM f) → y ∈ R^{3M} → (perspective projection, with d ∈ R^3) → z ∈ R^{2M}.
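The paper uses the Gaussian-mixture sigma-point particle filter; the sketch below is only a plain bootstrap (sampling-importance-resampling) filter with the same state decomposition and an LELVM-style pose prior, with a caller-supplied proj() standing in for the mapping f plus perspective projection.

```python
import numpy as np

def track(z_seq, Xs, proj, n_particles=500, sig_d=0.05, sig_x=0.05, sig_obs=1.0, rng=None):
    """Bootstrap particle filter over the state s_t = (d_t, x_t): rigid motion d, intrinsic pose x.

    z_seq: sequence of image observations; Xs: latent training points (for the pose prior p(x));
    proj(d, x): caller-supplied map from state to predicted observations."""
    if rng is None:
        rng = np.random.default_rng(0)
    N = len(Xs)
    d = np.zeros((n_particles, 3))                          # rigid-motion particles
    x = Xs[rng.integers(0, N, n_particles)]                 # pose particles drawn from the prior
    for z in z_seq:
        # Prediction: Gaussian random walks on d and x ...
        d = d + sig_d * rng.normal(size=d.shape)
        x = x + sig_x * rng.normal(size=x.shape)
        # ... reweighted by the pose prior p(x_t) (kernel density on the training embedding).
        prior = np.exp(-((x[:, None, :] - Xs[None, :, :]) ** 2).sum(-1) / (2 * sig_x**2)).mean(1)
        # Correction: observation likelihood p(z_t|s_t) as a Gaussian image-space error.
        err = np.array([((proj(di, xi) - z) ** 2).sum() for di, xi in zip(d, x)])
        w = prior * np.exp(-err / (2 * sig_obs**2))
        w = w / w.sum()
        est_d, est_x = w @ d, w @ x                          # posterior mean estimate at time t
        idx = rng.choice(n_particles, n_particles, p=w)      # resample for the next step
        d, x = d[idx], x[idx]
        yield est_d, est_x
```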
26 LELVM application: people tracking (cont.)
[Videos: 3D mocap with a small training set; 3D mocap with occlusion; 3D mocap, front view; CMU mocap database; Fred turning; Fred walking]
27 LELVM: summary
Probabilistic method for dimensionality reduction. A natural, principled way of combining two large classes of methods (latent variable models and spectral methods), sharing the advantages of both. We think it is asymptotically consistent (N → ∞). The same idea is applicable to out-of-sample extensions for LLE, Isomap, etc. Very simple to implement in practice: training set + eigenvalue problem + kernel density estimate.
Useful for applications: priors for articulated pose tracking with multiple motions (walking, dancing, ...) and multiple people; low-dimensional representation of state spaces in reinforcement learning; low-dimensional representation of degrees of freedom in humanoid robots; visualisation of high-dimensional datasets, with uncertainty estimates.
More information