Warped Mixture Models
Tomoharu Iwata, David Duvenaud, Zoubin Ghahramani
Cambridge University, Computational and Biological Learning Lab
March 11, 2013

OUTLINE
- Motivation
- Gaussian Process Latent Variable Model
- Warped Mixtures: generative model, inference, results
- Future Work: variational inference, semi-supervised learning, other latent priors
- Life of a Bayesian Model
MOTIVATION I: MANIFOLD SEMI-SUPERVISED LEARNING
Most manifold learning algorithms start by constructing a graph locally. Most don't update the original topology to account for long-range structure or label information, and it is often hard to recover from a bad connectivity graph.
MOTIVATION II: INFINITE GAUSSIAN MIXTURE MODEL
A Dirichlet process prior on cluster weights recovers the number of clusters automatically. But since each cluster must be Gaussian, the inferred number of clusters is often inappropriate. How can we create nonparametric cluster shapes?
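As a concrete illustration of the stick-breaking construction behind the iGMM prior, a minimal NumPy sketch of sampling from a truncated Dirichlet process mixture (the truncation level, cluster spread, and all other parameters here are illustrative assumptions, not values from the talk):

```python
import numpy as np

def sample_dp_mixture(n, alpha=1.0, dim=2, trunc=50, seed=0):
    """Draw n points from a (truncated) Dirichlet process mixture of
    Gaussians via stick-breaking. All parameters are illustrative."""
    rng = np.random.default_rng(seed)
    # Stick-breaking weights: beta_k ~ Beta(1, alpha),
    # w_k = beta_k * prod_{j<k} (1 - beta_j), renormalized after truncation.
    betas = rng.beta(1.0, alpha, size=trunc)
    sticks = np.concatenate(([1.0], np.cumprod(1.0 - betas)[:-1]))
    weights = betas * sticks
    weights /= weights.sum()
    means = rng.normal(0.0, 3.0, size=(trunc, dim))  # cluster centres
    z = rng.choice(trunc, size=n, p=weights)         # assignments
    x = means[z] + 0.5 * rng.normal(size=(n, dim))   # Gaussian clusters
    return x, z

x, z = sample_dp_mixture(500, alpha=2.0)
```

Small concentration `alpha` tends to reuse few sticks, so only a handful of clusters appear in a sample even though infinitely many are available in principle.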
GAUSSIAN PROCESS LATENT VARIABLE MODEL
Suppose observations Y = (y_1, ..., y_N), where y_n \in R^D, have latent coordinates X = (x_1, ..., x_N), where x_n \in R^Q, and y_d = f_d(x), where each f_d(x) ~ GP(0, K).
Mapping marginal likelihood:
p(Y | X, \theta) = (2\pi)^{-DN/2} |K|^{-D/2} \exp\left( -\tfrac{1}{2} \mathrm{tr}(K^{-1} Y Y^\top) \right)
Prior on x, treated mainly as a regularizer: p(x) = N(x | 0, I).
Can be interpreted as a density model, and can give warped densities; how do we get clusters?
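The marginal likelihood above can be evaluated directly; a minimal NumPy sketch, assuming a squared-exponential kernel with unit hyperparameters and a small jitter term added for numerical stability:

```python
import numpy as np

def rbf_kernel(X, lengthscale=1.0, variance=1.0, jitter=1e-6):
    # Squared-exponential kernel on the latent coordinates X (N x Q).
    sq = np.sum(X**2, 1)[:, None] + np.sum(X**2, 1)[None, :] - 2.0 * X @ X.T
    return variance * np.exp(-0.5 * sq / lengthscale**2) + jitter * np.eye(len(X))

def gplvm_log_marginal(Y, X, lengthscale=1.0, variance=1.0):
    """log p(Y | X, theta) = -(DN/2) log 2pi - (D/2) log|K|
                             - (1/2) tr(K^{-1} Y Y^T)."""
    N, D = Y.shape
    K = rbf_kernel(X, lengthscale, variance)
    _, logdet = np.linalg.slogdet(K)
    alpha = np.linalg.solve(K, Y)                 # K^{-1} Y
    return (-0.5 * D * N * np.log(2.0 * np.pi)
            - 0.5 * D * logdet
            - 0.5 * np.sum(Y * alpha))            # tr(K^{-1} Y Y^T)
```

Using `solve` and `slogdet` rather than an explicit inverse and determinant keeps the evaluation numerically stable; the trace term reduces to an elementwise sum because tr(K^{-1} Y Y^T) = sum of Y * (K^{-1} Y).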
WARPED MIXTURE MODEL
A sample from the iWMM prior: sample a latent mixture of Gaussians, then warp the latent mixture through GPs to produce non-Gaussian manifolds in observed space. The result has some areas with almost no density, and some edges and peaks.
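The two-step prior sample can be sketched in a few lines of NumPy (cluster locations, spreads, and kernel settings are toy assumptions, and the warp draws a single observed dimension rather than a full manifold):

```python
import numpy as np

rng = np.random.default_rng(1)

# Step 1: sample a latent mixture of Gaussians
# (two components in a 1-D latent space; toy parameters).
n = 200
z = rng.choice(2, size=n)
mu = np.array([-2.0, 2.0])
x = mu[z] + 0.3 * rng.normal(size=n)

# Step 2: warp the latent points through a GP draw, y = f(x),
# with f ~ GP(0, K), squared-exponential kernel plus jitter.
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2) + 1e-6 * np.eye(n)
y = np.linalg.cholesky(K) @ rng.normal(size=n)
```

Because the warp is smooth, the two latent Gaussian blobs map to two curved, locally dense regions in observed space, with little mass in between.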
WARPED MIXTURE MODEL
An extension of the GP-LVM where p(x) is a mixture of Gaussians. Or: an extension of the iGMM where the mixture is warped. Given mixture assignments, the likelihood has only two parts, GP-LVM and GMM:
p(Y, X | Z, \theta) = \underbrace{(2\pi)^{-DN/2} |K|^{-D/2} \exp\left( -\tfrac{1}{2} \mathrm{tr}(K^{-1} Y Y^\top) \right)}_{\text{GP-LVM likelihood}} \; \underbrace{\prod_i \prod_c \left[ \lambda_c \, \mathcal{N}(x_i \mid \mu_c, R_c^{-1}) \right]^{I(z_i = c)}}_{\text{mixture of Gaussians likelihood}}
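Given assignments, the two-part likelihood factorizes cleanly in code. A hedged sketch of the log joint (the `gplvm_ll` callable and all parameter names are placeholders for illustration, not the paper's implementation):

```python
import numpy as np

def log_gauss(x, mu, prec):
    # log N(x | mu, prec^{-1}) with a full precision matrix prec.
    d = x - mu
    _, logdet = np.linalg.slogdet(prec)
    return 0.5 * (logdet - len(x) * np.log(2.0 * np.pi) - d @ prec @ d)

def wmm_log_joint(Y, X, z, lam, mus, precs, gplvm_ll):
    """log p(Y, X | Z, theta): the GP-LVM term plus, for each point i,
    log lambda_{z_i} + log N(x_i | mu_{z_i}, R_{z_i}^{-1}).
    `gplvm_ll` is any callable returning the GP-LVM log marginal."""
    total = gplvm_ll(Y, X)
    for i, c in enumerate(z):
        total += np.log(lam[c]) + log_gauss(X[i], mus[c], precs[c])
    return total
```

This separation is what makes the alternating inference scheme possible: the assignments z touch only the Gaussian-mixture factor, while the latent positions X appear in both factors.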
INFERENCE
Find the posterior over the latent X. There are many ways to do inference; the high dimension of the latent space means derivatives are helpful. Our scheme alternates:
1. Sampling latent cluster assignments.
2. Updating latent positions and GP hyperparameters with HMC.
No cross-validation, but HMC parameters are annoying to set. Show demo!
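A toy version of the alternation, with the GP warp replaced by a known linear map and fixed mixture parameters (the paper samples assignments and runs HMC on positions and hyperparameters; the plain gradient step here is a stand-in, and every numeric value is an assumption for the toy):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: two latent clusters, and a linear "warp" y = 2x + noise
# standing in for the GP mapping (the real model infers the warp too).
n = 100
x_true = np.concatenate([rng.normal(-2, 0.3, n // 2),
                         rng.normal(2, 0.3, n // 2)])
y = 2.0 * x_true + 0.1 * rng.normal(size=n)

mus, sig, lam = np.array([-2.0, 2.0]), 0.3, np.array([0.5, 0.5])
x = rng.normal(size=n)  # initial latent positions

def log_norm(v, mu, s):
    return -0.5 * ((v - mu) / s) ** 2 - np.log(s) - 0.5 * np.log(2.0 * np.pi)

for _ in range(300):
    # (1) Gibbs-sample cluster assignments from mixture responsibilities.
    logr = np.stack([np.log(lam[c]) + log_norm(x, mus[c], sig)
                     for c in range(2)], axis=1)
    logr -= logr.max(axis=1, keepdims=True)
    r = np.exp(logr)
    r /= r.sum(axis=1, keepdims=True)
    z = (rng.random(n) < r[:, 1]).astype(int)
    # (2) Update latent positions given assignments; gradient of the
    # log joint (observation term + assigned-cluster prior term).
    grad = 2.0 * (y - 2.0 * x) / 0.1**2 + (mus[z] - x) / sig**2
    x += 1e-4 * grad
```

Even this crude alternation recovers the latent cluster structure here, because the observation term pulls each x toward y/2 while the assigned cluster's prior keeps points grouped.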
INFERENCE: MIXING
Changing the number of clusters helps mixing.
DENSITY RESULTS
Automatically reduces the latent dimensionality, separately per cluster! The Wishart prior may be causing problems.
LATENT VISUALIZATION
The posterior is symmetric and hard to summarize; we average for now. Variational Bayes might address the problem of summarizing the posterior.
LATENT VISUALIZATION: UMIST FACES DATASET
Captures the number, dimensionality, and relationships between manifolds.
THE WARPED DENSITY MODEL
What if we take the density model of the GP-LVM seriously? Why not just warp one Gaussian? Even one latent Gaussian can be made fairly flexible.
THE WARPED DENSITY MODEL
Even one latent Gaussian can be made fairly flexible, but it must place some mass between clusters. It is also easier to interpret separate latent clusters.
RESULTS
We evaluated the iWMM as a density model as well as a clustering model.
LIMITATIONS
O(N^3) runtime. A stationary kernel means difficulty modeling clusters of different sizes.
FUTURE WORK: VARIATIONAL INFERENCE
Joint work with James Hensman. Optimization instead of integration; SVI could allow large datasets. Non-convex optimization is hard, but is it harder than mixing?
FUTURE WORK: SEMI-SUPERVISED LEARNING
Spread labels along regions of high density.
OTHER PRIORS ON LATENT DENSITIES
The density model is separate from the warping model. Other possible latent priors: hierarchical clustering (biological applications), deep Gaussian processes.
LIFE OF A BAYESIAN MODEL
Write down a generative model. Sample from it to see if it looks reasonable. Fiddle with the sampler for a month. Maybe years later, a decent inference scheme comes out. Modeling decisions are in principle separate from the inference scheme, and approximate inference schemes can be verified on examples. Modeling sophistication is far ahead of inference sophistication.
Thanks!
More informationComputer vision: models, learning and inference. Chapter 10 Graphical Models
Computer vision: models, learning and inference Chapter 10 Graphical Models Independence Two variables x 1 and x 2 are independent if their joint probability distribution factorizes as Pr(x 1, x 2 )=Pr(x
More informationDay 3 Lecture 1. Unsupervised Learning
Day 3 Lecture 1 Unsupervised Learning Semi-supervised and transfer learning Myth: you can t do deep learning unless you have a million labelled examples for your problem. Reality You can learn useful representations
More informationMachine Learning A W 1sst KU. b) [1 P] Give an example for a probability distributions P (A, B, C) that disproves
Machine Learning A 708.064 11W 1sst KU Exercises Problems marked with * are optional. 1 Conditional Independence I [2 P] a) [1 P] Give an example for a probability distribution P (A, B, C) that disproves
More informationAuto-Encoding Variational Bayes
Auto-Encoding Variational Bayes Diederik P (Durk) Kingma, Max Welling University of Amsterdam Ph.D. Candidate, advised by Max Durk Kingma D.P. Kingma Max Welling Problem class Directed graphical model:
More informationImplicit generative models: dual vs. primal approaches
Implicit generative models: dual vs. primal approaches Ilya Tolstikhin MPI for Intelligent Systems ilya@tue.mpg.de Machine Learning Summer School 2017 Tübingen, Germany Contents 1. Unsupervised generative
More informationGraphical Models for Computer Vision
Graphical Models for Computer Vision Pedro F Felzenszwalb Brown University Joint work with Dan Huttenlocher, Joshua Schwartz, Ross Girshick, David McAllester, Deva Ramanan, Allie Shapiro, John Oberlin
More informationMachine Learning. Unsupervised Learning. Manfred Huber
Machine Learning Unsupervised Learning Manfred Huber 2015 1 Unsupervised Learning In supervised learning the training data provides desired target output for learning In unsupervised learning the training
More informationScene Segmentation in Adverse Vision Conditions
Scene Segmentation in Adverse Vision Conditions Evgeny Levinkov Max Planck Institute for Informatics, Saarbrücken, Germany levinkov@mpi-inf.mpg.de Abstract. Semantic road labeling is a key component of
More informationMixture Models and EM
Table of Content Chapter 9 Mixture Models and EM -means Clustering Gaussian Mixture Models (GMM) Expectation Maximiation (EM) for Mixture Parameter Estimation Introduction Mixture models allows Complex
More informationMesh segmentation. Florent Lafarge Inria Sophia Antipolis - Mediterranee
Mesh segmentation Florent Lafarge Inria Sophia Antipolis - Mediterranee Outline What is mesh segmentation? M = {V,E,F} is a mesh S is either V, E or F (usually F) A Segmentation is a set of sub-meshes
More informationBioimage Informatics
Bioimage Informatics Lecture 13, Spring 2012 Bioimage Data Analysis (IV) Image Segmentation (part 2) Lecture 13 February 29, 2012 1 Outline Review: Steger s line/curve detection algorithm Intensity thresholding
More informationGraphical Models, Bayesian Method, Sampling, and Variational Inference
Graphical Models, Bayesian Method, Sampling, and Variational Inference With Application in Function MRI Analysis and Other Imaging Problems Wei Liu Scientific Computing and Imaging Institute University
More informationGenerative and discriminative classification techniques
Generative and discriminative classification techniques Machine Learning and Category Representation 2014-2015 Jakob Verbeek, November 28, 2014 Course website: http://lear.inrialpes.fr/~verbeek/mlcr.14.15
More informationOverview. Monte Carlo Methods. Statistics & Bayesian Inference Lecture 3. Situation At End Of Last Week
Statistics & Bayesian Inference Lecture 3 Joe Zuntz Overview Overview & Motivation Metropolis Hastings Monte Carlo Methods Importance sampling Direct sampling Gibbs sampling Monte-Carlo Markov Chains Emcee
More informationMachine Learning A W 1sst KU. b) [1 P] For the probability distribution P (A, B, C, D) with the factorization
Machine Learning A 708.064 13W 1sst KU Exercises Problems marked with * are optional. 1 Conditional Independence a) [1 P] For the probability distribution P (A, B, C, D) with the factorization P (A, B,
More informationMachine Learning. B. Unsupervised Learning B.1 Cluster Analysis. Lars Schmidt-Thieme
Machine Learning B. Unsupervised Learning B.1 Cluster Analysis Lars Schmidt-Thieme Information Systems and Machine Learning Lab (ISMLL) Institute for Computer Science University of Hildesheim, Germany
More informationSmoothing Dissimilarities for Cluster Analysis: Binary Data and Functional Data
Smoothing Dissimilarities for Cluster Analysis: Binary Data and unctional Data David B. University of South Carolina Department of Statistics Joint work with Zhimin Chen University of South Carolina Current
More informationProbabilistic multi-resolution scanning for cross-sample differences
Probabilistic multi-resolution scanning for cross-sample differences Li Ma (joint work with Jacopo Soriano) Duke University 10th Conference on Bayesian Nonparametrics Li Ma (Duke) Multi-resolution scanning
More informationContent-based image and video analysis. Machine learning
Content-based image and video analysis Machine learning for multimedia retrieval 04.05.2009 What is machine learning? Some problems are very hard to solve by writing a computer program by hand Almost all
More informationRecap: The E-M algorithm. Biostatistics 615/815 Lecture 22: Gibbs Sampling. Recap - Local minimization methods
Recap: The E-M algorithm Biostatistics 615/815 Lecture 22: Gibbs Sampling Expectation step (E-step) Given the current estimates of parameters λ (t), calculate the conditional distribution of latent variable
More informationGenerative and discriminative classification techniques
Generative and discriminative classification techniques Machine Learning and Category Representation 013-014 Jakob Verbeek, December 13+0, 013 Course website: http://lear.inrialpes.fr/~verbeek/mlcr.13.14
More informationProbabilistic Graphical Models
Overview of Part Two Probabilistic Graphical Models Part Two: Inference and Learning Christopher M. Bishop Exact inference and the junction tree MCMC Variational methods and EM Example General variational
More informationThorsten Joachims Then: Universität Dortmund, Germany Now: Cornell University, USA
Retrospective ICML99 Transductive Inference for Text Classification using Support Vector Machines Thorsten Joachims Then: Universität Dortmund, Germany Now: Cornell University, USA Outline The paper in
More information