Cut-And-Stitch: Efficient Parallel Learning of Linear Dynamical Systems on SMPs


1 Cut-And-Stitch: Efficient Parallel Learning of Linear Dynamical Systems on SMPs. Lei Li, Wenjie Fu, Fan Guo, Todd C. Mowry, Christos Faloutsos. School of Computer Science, Carnegie Mellon University.

2 Outline: Motivation; Background; Proposed Method: Cut-And-Stitch; Experiments; Conclusion.

3 Motivation: motion capture and human motion modeling; financial data modeling and prediction; sensor data and pattern mining. Common tool: the Linear Dynamical System (LDS). Challenge: learning an LDS is slow, so we need speed.

4 Related Work
- Parallel mining of closed sequential patterns [KDD 05, S. Cong, J. Han, D. Padua]
- Mining terabytes of data for frequent itemsets [PPoPP 07, G. Buehrer, S. Parthasarathy, S. Tatikonda, T. Kurc, J. Saltz]
- Parallelizing the discovery of frequent patterns in large graphs [IPDPS 07, S. Reinhardt, G. Karypis]
- Parallel algorithms for SVM:
  - Cascade SVM [NIPS 04, H. Graf, E. Cosatto, L. Bottou, I. Durdanovic, V. Vapnik]
  - Parallel mixture of SVMs [NIPS 02, R. Collobert, S. Bengio, Y. Bengio]
  - PSVM (open source) [NIPS 07, E. Chang, K. Zhu, H. Wang, H. Bai, J. Li, Z. Qiu, H. Cui]
- Multicore map-reduce approach [NIPS 06, C. Chu, S. Kim, Y. Lin, Y. Yu, G. Bradski, A. Ng, K. Olukotun]

5 Problem Definition. Problem: given a sequence of data, find the best model parameters for a Linear Dynamical System. Traditional method: maximum likelihood estimation via the Expectation-Maximization (EM) algorithm. Objective: parallelize the learning algorithm. Assumption: a shared-memory architecture.

6 Linear Dynamical System, a.k.a. Kalman filter. Parameters: θ = (u_0, V_0, A, Γ, C, Σ). Observations: y_1, ..., y_n. Hidden variables: z_1, ..., z_n. (Figure: chain graphical model; z_1 ~ N(u_0, V_0), each transition z_i → z_{i+1} governed by (A, Γ), and each observation y_i emitted from z_i via (C, Σ).)
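Written out in the slide's notation, this is the standard LDS generative model (a reference restatement, not copied from the paper):

    z_1 ~ N(u_0, V_0)
    z_{i+1} | z_i ~ N(A z_i, Γ)     for i = 1, ..., n-1
    y_i | z_i ~ N(C z_i, Σ)         for i = 1, ..., n

Learning means estimating θ = (u_0, V_0, A, Γ, C, Σ) from the observations y_1, ..., y_n alone.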

7 Example: given observed positions (e.g., the position of the left elbow over time), estimate the dynamics, i.e., the parameters. (Figure: chain z_1..z_6 over observations y_1..y_6; plot of left-elbow position vs. time.)

8 Traditional: how to learn an LDS?

9 Sequential Learning (EM): compute P(z_1 | y_1). (Figure: chain z_1..z_6 over y_1..y_6; plot of measured vs. estimated left-elbow position over time.)

10 Sequential Learning (EM): from P(z_1 | y_1), compute P(z_2 | y_1, y_2). Intuition: z_2 should be close to what the dynamics predict from z_1.

11 Sequential Learning (EM): from P(z_2 | y_1, y_2), compute P(z_3 | y_1, y_2, y_3).

12 Sequential Learning (EM): from P(z_3 | y_1, y_2, y_3), compute P(z_4 | y_1, ..., y_4).

13 Sequential Learning (EM): from P(z_4 | y_1, ..., y_4), compute P(z_5 | y_1, ..., y_5).

14 Sequential Learning (EM): from P(z_5 | y_1, ..., y_5), compute P(z_6 | y_1, ..., y_6).
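The forward sweep on slides 9-14 is the standard Kalman filter recursion. Below is a minimal numpy sketch under the notation of slide 6; the function and variable names are mine, and this is an illustrative restatement of the textbook filter, not the authors' C++/OpenMP code.

import numpy as np

def kalman_forward(y, u0, V0, A, Gamma, C, Sigma):
    """Filtering pass: returns the means mu[t] and covariances V[t] of
    P(z_t | y_1..y_t), plus the one-step predicted covariances P[t]
    (needed later by the smoother)."""
    T, H = len(y), len(u0)
    mu = np.zeros((T, H))
    V = np.zeros((T, H, H))
    P = np.zeros((T, H, H))
    for t in range(T):
        if t == 0:
            m_pred, P_pred = u0, V0                      # prior on z_1
        else:
            m_pred = A @ mu[t - 1]                       # predict from previous filtered state
            P_pred = A @ V[t - 1] @ A.T + Gamma
        S = C @ P_pred @ C.T + Sigma                     # innovation covariance
        K = P_pred @ C.T @ np.linalg.inv(S)              # Kalman gain
        mu[t] = m_pred + K @ (y[t] - C @ m_pred)         # correct with observation y_t
        V[t] = P_pred - K @ C @ P_pred
        P[t] = P_pred
    return mu, V, P

Note that mu[t] depends on mu[t-1], so the T steps cannot be reordered; this is the sequential dependency that slide 23 identifies as the bottleneck.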

15 Sequential Learning (EM): from P(z_6 | y_1, ..., y_6), compute P(z_5 | y_1, ..., y_6). Intuition: carry information from the future backward.

16 Sequential Learning (EM): from P(z_5 | y_1, ..., y_6), compute P(z_4 | y_1, ..., y_6).

17 Sequential Learning (EM): from P(z_4 | y_1, ..., y_6), compute P(z_3 | y_1, ..., y_6).

18 Sequential Learning (EM): from P(z_3 | y_1, ..., y_6), compute P(z_2 | y_1, ..., y_6).

19 Sequential Learning (EM): from P(z_2 | y_1, ..., y_6), compute P(z_1 | y_1, ..., y_6).
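The backward sweep on slides 15-19 is Rauch-Tung-Striebel smoothing. A matching sketch that consumes the output of kalman_forward above (again standard textbook equations under my naming, not the paper's code):

def kalman_backward(mu, V, P, A):
    """Smoothing pass: returns the means and covariances of P(z_t | y_1..y_T),
    plus the smoother gains J[t], which are reused for the cross terms."""
    T, H = mu.shape
    mu_s, V_s = mu.copy(), V.copy()                      # t = T-1 is already smoothed
    J = np.zeros((T, H, H))
    for t in range(T - 2, -1, -1):                       # walk backward through time
        J[t] = V[t] @ A.T @ np.linalg.inv(P[t + 1])      # smoother gain
        mu_s[t] = mu[t] + J[t] @ (mu_s[t + 1] - A @ mu[t])
        V_s[t] = V[t] + J[t] @ (V_s[t + 1] - P[t + 1]) @ J[t].T
    return mu_s, V_s, J

Like the forward pass, each step needs the result of the step after it, so the backward sweep is sequential as well.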

20 Sequential Learning (EM): from all the smoothed posteriors P(z_1 | y_1, ..., y_6), ..., P(z_6 | y_1, ..., y_6), compute the sufficient statistics E[z_i], E[z_i z_i^T], and E[z_{i-1} z_i^T].
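Given the smoothed means mu_s, covariances V_s, and gains J from the sketch above, the three sufficient statistics have closed forms; the cross term uses the standard RTS identity Cov(z_{i-1}, z_i | y_1..y_T) = J[i-1] V_s[i]. This is a hedged restatement of the textbook E step, not the paper's exact code:

def sufficient_stats(mu_s, V_s, J):
    """E[z_t], E[z_t z_t^T] and E[z_{t-1} z_t^T] under the smoothed posterior."""
    T = len(mu_s)
    Ez = mu_s                                            # E[z_t]
    Ezz = V_s + np.einsum('ti,tj->tij', mu_s, mu_s)      # E[z_t z_t^T]
    Ezz1 = np.array([J[t - 1] @ V_s[t] + np.outer(mu_s[t - 1], mu_s[t])
                     for t in range(1, T)])              # E[z_{t-1} z_t^T]
    return Ez, Ezz, Ezz1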

21 Sequential Learning (EM): with the sufficient statistics, compute θ = argmax_θ likelihood(θ), i.e., the M step. (Figure: measured vs. reconstructed left-elbow position over time.)
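The maximization has a closed form in those statistics. A sketch of the standard LDS M-step updates (as in Ghahramani and Hinton's EM derivation for linear dynamical systems); the slides do not spell these out, so treat the exact expressions as a textbook restatement rather than the paper's:

def m_step(y, Ez, Ezz, Ezz1):
    """Closed-form parameter updates for theta = (u0, V0, A, Gamma, C, Sigma)."""
    T = len(y)
    u0_new = Ez[0]
    V0_new = Ezz[0] - np.outer(Ez[0], Ez[0])
    A_new = sum(Ezz1).T @ np.linalg.inv(sum(Ezz[:-1]))              # dynamics
    Gamma_new = (sum(Ezz[1:]) - A_new @ sum(Ezz1)) / (T - 1)        # transition noise
    C_new = sum(np.outer(y[t], Ez[t]) for t in range(T)) @ np.linalg.inv(sum(Ezz))
    Sigma_new = sum(np.outer(y[t], y[t]) - C_new @ np.outer(Ez[t], y[t])
                    for t in range(T)) / T                          # observation noise
    # In practice Gamma_new and Sigma_new are symmetrized to absorb round-off.
    return u0_new, V0_new, A_new, Gamma_new, C_new, Sigma_new

One EM iteration is then: forward pass, backward pass, sufficient statistics, M step; repeat until the likelihood stops improving.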

22 Sequential Learning (EM): from P(z_6 | y_1, ..., y_6), compute P(z_5 | y_1, ..., y_6). (Figure: measured vs. estimated position over time.)

23 Speed bottleneck: the sequential computation of the posteriors. How to parallelize it? (Figure: chain z_1..z_6 over y_1..y_6.)

24 Leap of faith: start the computation without feedback from the previous node (cut), and reconcile later (stitch).

25 Proposed Method: Cut-And-Stitch. Start the computation without feedback from the previous node (cut), then reconcile later (stitch). (Figure: the chain z_1..z_6 is cut into blocks; each block k carries its own parameters υ_k, Φ_k, η_k, Ψ_k, with duplicated boundary nodes z'_2 and z'_4 standing in for the neighboring block's state.)
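A minimal sketch of the cut. As a simplification of the figure, each block here keeps only a forward boundary prior (υ_k, Φ_k) standing in for the feedback from the previous block, initialized from the global prior; the paper's blocks additionally carry backward boundary parameters (η_k, Ψ_k) and a duplicated boundary node z'_k. Names are mine:

def cut(T, n_blocks, u0, V0):
    """Split 0..T-1 into contiguous blocks and give each block its own prior
    (upsilon_k, Phi_k), which the stitch step will later refine."""
    bounds = np.linspace(0, T, n_blocks + 1).astype(int)
    blocks = list(zip(bounds[:-1], bounds[1:]))          # (start, end) index pairs
    upsilon = [np.array(u0, dtype=float) for _ in range(n_blocks)]
    Phi = [np.array(V0, dtype=float) for _ in range(n_blocks)]
    return blocks, upsilon, Phi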

26 Cut-And-Stitch. Cut step: estimate the posteriors (E) inside every block, e.g., P(z_1 | y_1), P(z_3 | y_3), P(z_5 | y_5). Intuition: compute all three blocks at once. (Figure: blocks with parameters υ_k, Φ_k, η_k, Ψ_k; plot of measured vs. estimated left-elbow position over time.)

27 Cut-And-Stitch. Cut step: estimate the posteriors (E). (Figure: block diagram with per-block parameters; measured vs. estimated left-elbow position over time.)

28 Cut-And-Stitch. Cut step: estimate the posteriors (E). Intuition: run the backward adjustment in all blocks at once. (Figure: block diagram; measured left-elbow position over time.)

29 Cut-And-Stitch. Stitch step: collect the sufficient statistics for each block (C): E[z_i], E[z_i z_i^T], E[z_{i-1} z_i^T]. (Figure: block diagram.)

30 Cut-And-Stitch. Stitch step: collect the sufficient statistics (C), then maximize over the parameters (M). (Figure: block diagram; measured left-elbow position over time.)

31 Cut-And-Stitch. Stitch step: re-estimate the block parameters (R). Intuition: exchange messages across blocks. Then iterate. (Figure: block diagram; measured vs. reconstructed signal over time.)
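Putting the four phases together (Cut/E, Collect/C, Maximize/M, Re-estimate/R), here is a simplified sequential sketch that reuses the helper functions above. It is illustrative only: the paper's version keeps the backward boundary parameters (η_k, Ψ_k) and the duplicated boundary nodes z'_k, and it runs the per-block loops on parallel threads (the released code is C++ with OpenMP). Everything below, including the simple boundary re-estimation rule, is my approximation rather than the paper's exact equations.

def cut_and_stitch_iteration(y, blocks, upsilon, Phi, theta):
    """One Cut-And-Stitch style iteration, written sequentially for clarity.
    Cut (E): forward + backward inside each block, independently -- this loop
             is the part that runs on separate cores in the real algorithm.
    Collect (C): per-block sufficient statistics.
    Maximize (M): one global parameter update from the pooled statistics.
    Re-estimate (R): refresh each block's prior from its left neighbor."""
    _, _, A, Gamma, C, Sigma = theta
    stats, tails = [], []
    for k, (s, e) in enumerate(blocks):                  # parallelizable per-block loop
        mu, V, P = kalman_forward(y[s:e], upsilon[k], Phi[k], A, Gamma, C, Sigma)
        mu_s, V_s, J = kalman_backward(mu, V, P, A)
        stats.append(sufficient_stats(mu_s, V_s, J))
        tails.append((mu[-1], V[-1]))                    # boundary message for block k+1
    Ez = np.concatenate([st[0] for st in stats])
    Ezz = np.concatenate([st[1] for st in stats])
    # NB: cross terms that straddle block boundaries are dropped in this sketch;
    # the paper's duplicated boundary nodes z'_k are what recover them.
    Ezz1 = np.concatenate([st[2] for st in stats])
    theta_new = m_step(y, Ez, Ezz, Ezz1)
    u0n, V0n, An, Gn, Cn, Sn = theta_new
    upsilon[0], Phi[0] = u0n, V0n
    for k in range(1, len(blocks)):                      # stitch: pass boundary messages
        m_prev, V_prev = tails[k - 1]
        upsilon[k] = An @ m_prev                         # predicted first state of block k
        Phi[k] = An @ V_prev @ An.T + Gn
    return theta_new, upsilon, Phi

The per-block loop is where nearly all the work is, and its iterations are independent once the block priors are fixed, so it maps directly onto threads; only the global M step is inherently serial, which roughly matches the timeline on slide 40.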

32 Experiments. Q1: how much speedup can we get? Q2: how good is the reconstruction accuracy?

33 Experiments. Dataset: 58 human motion sequences; each frame has 93 bone positions in body-local coordinates. Setup: a supercomputer (SGI Altix, distributed shared-memory architecture) and a multi-core desktop (4 Intel Xeon cores, shared memory). Task: learn the dynamics and hidden variables, and reconstruct the motion.

34 Q1: How much speedup? Supercomputer result. (Figure: speedup vs. number of processors; ideal speedup vs. the average over the 58 sequences.)

35 Q1: How much speedup? Multi-core result. (Figure: speedup vs. number of cores; ideal speedup vs. the average over the 58 sequences.)

36 Q2: How good? (Figure: normalized reconstruction error on a 0%-2.5% scale for the sequential algorithm vs. Cut-And-Stitch on walking (#22), jumping (#1), and running (#45).) Result: approximately identical accuracy.

37 Conclusion & Contributions
- A general, approximate parallel learning algorithm for LDS
- Near-linear speedup
- Accuracy (NRE): nearly identical to sequential learning
- Easily extended to HMM and other chain Markovian models
- Software (C++ with OpenMP) and datasets available

38 Thank you. Questions? Welcome to the poster tonight. Contact: Wenjie Fu (wenjief@cs.cmu.edu), Lei Li (leili@cs.cmu.edu), Christos Faloutsos (christos@cs.cmu.edu), Fan Guo (fanguo@cs.cmu.edu), Todd Mowry (tcm@cs.cmu.edu).

39 Promising Extensions: extend to HMMs and other Markov models (similar chain graphical model). Open problem: can we prove an error bound?

40 Timeline. (Figure: per-CPU timeline, cpu1-cpu4, across iterations: in each iteration the Cut phase runs the E steps on all CPUs in parallel, then the Stitch phase runs the C steps in parallel, a single M step, and the R steps in parallel; the pattern repeats from the initial iteration through iteration 1 and beyond.)

41 Approximate Parallel Learning. (Figure: schedule over iterations 0 (warm-up) through 3 and steps 1-6.)
