MultiDimensional Signal Processing - Master Degree in Ingegneria delle Telecomunicazioni, A.A. 2015-2016

MultiDimensional Signal Processing, Master Degree in Ingegneria delle Telecomunicazioni, A.A. 2015-2016. Pietro Guccione, PhD, DEI - DIPARTIMENTO DI INGEGNERIA ELETTRICA E DELL'INFORMAZIONE, POLITECNICO DI BARI. Pietro Guccione, Assistant Professor in Signal Processing (pietro.guccione@poliba.it, http://dee.poliba.it/guccioneweb/index.html)

Lecture 7 - Summary: Hierarchical Clustering and DBSCAN. Topics: hierarchical clustering; density-based clustering and DBSCAN.

Hierarchical Clustering. Produces a set of nested clusters organized as a hierarchical tree. Can be visualized as a dendrogram, a tree-like diagram that records the sequences of merges or splits. [Figure: example of nested clusters and the corresponding dendrogram.]

Strengths of Hierarchical Clustering. No assumptions on the number of clusters: any desired number of clusters can be obtained by cutting the dendrogram at the proper level. Hierarchical clusterings may correspond to meaningful taxonomies, for example in the biological sciences (e.g., phylogeny reconstruction) or on the web (e.g., product catalogs).

Hierarchical Clustering. Two main types of hierarchical clustering. Agglomerative: start with the points as individual clusters; at each step, merge the closest pair of clusters until only one cluster (or k clusters) is left. Divisive: start with one, all-inclusive cluster; at each step, split a cluster until each cluster contains a single point (or there are k clusters). Traditional hierarchical algorithms use a similarity or distance matrix and merge or split one cluster at a time.

Complexity of hierarchical clustering. The distance matrix is used for deciding which clusters to merge/split. At least quadratic in the number of data points, so not usable for large datasets.

Agglomerative clustering algorithm. The most popular hierarchical clustering technique. Basic algorithm:
1. Compute the distance matrix between the input data points.
2. Let each data point be a cluster.
3. Repeat:
4. Merge the two closest clusters.
5. Update the distance matrix.
6. Until only a single cluster remains.
The key operation is the computation of the distance between two clusters; different definitions of the distance between clusters lead to different algorithms.
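As an illustration of the basic algorithm above, here is a minimal NumPy sketch (function and variable names are my own, not from the slides) that repeatedly merges the two closest clusters until k clusters remain, using the single-link or complete-link distance between clusters.

```python
import numpy as np

def naive_agglomerative(X, k, linkage="single"):
    """Merge the two closest clusters until only k clusters remain.
    Roughly O(n^3); an illustration of the loop above, not for real use."""
    n = len(X)
    # step 1: pairwise distance matrix between the input points
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    # step 2: every point starts as its own cluster
    clusters = [[i] for i in range(n)]

    def cluster_dist(a, b):
        d = D[np.ix_(a, b)]
        return d.min() if linkage == "single" else d.max()

    # steps 3-6: repeatedly merge the closest pair of clusters
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                dij = cluster_dist(clusters[i], clusters[j])
                if best is None or dij < best[0]:
                    best = (dij, i, j)
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

# toy usage: two tight pairs of points -> two clusters
X = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.1, 4.9]])
print(naive_agglomerative(X, k=2))
```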

Input / Initial setting. Start with clusters of individual points and a distance/proximity matrix. [Figure: the initial points and the corresponding distance/proximity matrix.]

Intermediate State. After some merging steps, we have some clusters C1, ..., C5. [Figure: the current clusters and the distance/proximity matrix between them.]

Intermediate State. Merge the two closest clusters (C2 and C5) and update the distance matrix. [Figure: the two clusters to be merged highlighted in the distance/proximity matrix.]

After Merging. How do we update the distance matrix? [Figure: distance matrix in which the row and column of the merged cluster C2 U C5 are marked with question marks.]

Distance between two clusters. Each cluster is a set of points. How do we define the distance between two sets of points? There are lots of alternatives, and it is not an easy task.

Distance between two clusters. The single-link distance between clusters C_i and C_j is the minimum distance between any object in C_i and any object in C_j; the distance is defined by the two most similar objects: $D_{sl}(C_i, C_j) = \min_{x \in C_i,\, y \in C_j} d(x, y)$.

Single-link clustering: example. Determined by one pair of points, i.e., by one link in the proximity graph. Proximity matrix for five points I1, ..., I5:
I1: 1.00 0.90 0.10 0.65 0.20
I2: 0.90 1.00 0.70 0.60 0.50
I3: 0.10 0.70 1.00 0.40 0.30
I4: 0.65 0.60 0.40 1.00 0.80
I5: 0.20 0.50 0.30 0.80 1.00

Single-link clustering: example. [Figure: nested clusters and the corresponding single-link dendrogram for six points.]

Strengths of single-link clustering. It can handle non-elliptical shapes. [Figure: original points and the two clusters found by single link.]

Limitations of single-link clustering. Sensitive to noise and outliers; it produces long, elongated clusters. [Figure: original points and the two clusters found by single link.]

Distance between two clusters. The complete-link distance between clusters C_i and C_j is the maximum distance between any object in C_i and any object in C_j; the distance is defined by the two most dissimilar objects: $D_{cl}(C_i, C_j) = \max_{x \in C_i,\, y \in C_j} d(x, y)$.

Complete-link clustering: example. The distance between clusters is determined by the two most distant points in the different clusters (using the same proximity matrix over I1, ..., I5 as in the single-link example).

Complete-link clustering: example. [Figure: nested clusters and the corresponding complete-link dendrogram for the six points.]

Strengths of complete-link clustering. More balanced clusters (with equal diameter); less susceptible to noise. [Figure: original points and the two clusters found by complete link.]

Limitations of complete-link clustering. Tends to break large clusters; all clusters tend to have the same diameter; small clusters are merged with larger ones. [Figure: original points and the two clusters found by complete link.]

Distance between two clusters. The group average distance between clusters C_i and C_j is the average distance between any object in C_i and any object in C_j: $D_{avg}(C_i, C_j) = \frac{1}{|C_i|\,|C_j|} \sum_{x \in C_i,\, y \in C_j} d(x, y)$.

Average-link clustering: example. The proximity of two clusters is the average of the pairwise proximities between points in the two clusters (again using the proximity matrix over I1, ..., I5 from the single-link example).

Average-link clustering: example. [Figure: nested clusters and the corresponding average-link dendrogram for the six points.]

Average-link clustering: discussion. A compromise between single and complete link. Strengths: less susceptible to noise and outliers. Limitations: biased towards globular clusters.
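To make the three definitions concrete, here is a small NumPy/SciPy sketch (helper names are my own, not from the slides): cdist computes all pairwise distances between the two point sets, and the three linkages simply take the minimum, maximum, or mean of those distances.

```python
import numpy as np
from scipy.spatial.distance import cdist

def single_link(Ci, Cj):
    """D_sl: distance of the two closest objects across the clusters."""
    return cdist(Ci, Cj).min()

def complete_link(Ci, Cj):
    """D_cl: distance of the two most dissimilar objects."""
    return cdist(Ci, Cj).max()

def average_link(Ci, Cj):
    """D_avg: average of all |Ci|*|Cj| pairwise distances."""
    return cdist(Ci, Cj).mean()

Ci = np.array([[0.0, 0.0], [0.0, 1.0]])
Cj = np.array([[3.0, 0.0], [4.0, 1.0]])
print(single_link(Ci, Cj), complete_link(Ci, Cj), average_link(Ci, Cj))
```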

Distance between two clusters. The centroid distance between clusters C_i and C_j is the distance between the centroid r_i of C_i and the centroid r_j of C_j: $D_{centroids}(C_i, C_j) = d(r_i, r_j)$.

Distance between two clusters. Ward's distance between clusters C_i and C_j is the increase in the total within-cluster sum of squares obtained when the two clusters are merged into the cluster C_ij, compared with keeping them separate: $D_w(C_i, C_j) = \sum_{x \in C_{ij}} \|x - r_{ij}\|^2 - \sum_{x \in C_i} \|x - r_i\|^2 - \sum_{x \in C_j} \|x - r_j\|^2$, where r_i, r_j and r_ij are the centroids of C_i, C_j and C_ij, respectively.

Ward's distance for clusters. Similar to group average and centroid distance. Less susceptible to noise and outliers. Biased towards globular clusters. Hierarchical analogue of k-means: it can be used to initialize k-means (at a given step of the hierarchy we stop, take the clusters determined so far, and use their centroids as the initial positions for k-means).
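Similarly, a hedged NumPy sketch of the centroid and Ward distances defined above (helper names are mine); Ward is written as the increase in within-cluster sum of squares caused by the merge.

```python
import numpy as np

def centroid_distance(Ci, Cj):
    """D_centroids: Euclidean distance between the two cluster centroids."""
    return np.linalg.norm(Ci.mean(axis=0) - Cj.mean(axis=0))

def ward_distance(Ci, Cj):
    """D_w: within-cluster SSE of the merged cluster minus the SSE of the
    two clusters kept separate (i.e. the increase caused by the merge)."""
    sse = lambda C: ((C - C.mean(axis=0)) ** 2).sum()
    return sse(np.vstack([Ci, Cj])) - sse(Ci) - sse(Cj)
```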

Hierarchical Clustering: Comparison. [Figure: the same six points clustered with MIN (single link), MAX (complete link), group average, and Ward's method, with the corresponding nested clusters.]
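In practice the four criteria compared above can be tried directly with SciPy's hierarchical clustering routines; the following usage sketch is illustrative (the dataset and the cut level are my own choices).

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# two well-separated Gaussian blobs
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])

D = pdist(X)  # condensed pairwise distance matrix
for method in ("single", "complete", "average", "ward"):
    # pass raw observations for Ward, which assumes Euclidean geometry
    Z = linkage(X if method == "ward" else D, method=method)
    labels = fcluster(Z, t=2, criterion="maxclust")  # cut into 2 clusters
    print(method, np.bincount(labels)[1:])           # cluster sizes
```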

Hierarchical Clustering: time and space requirements. For a dataset X consisting of n points: O(n^2) space, since the distance matrix must be stored; O(n^3) time in most of the cases, since there are n steps and at each step the n^2-sized distance matrix must be updated and searched; the complexity can be reduced to O(n^2 log n) time for some approaches by using appropriate data structures.

Divisive hierarchical clustering. Start with a single cluster composed of all data points, split it into components, and continue recursively. Monothetic divisive methods split clusters using one variable/dimension at a time; polythetic divisive methods make splits on the basis of all variables together. Any inter-cluster distance measure can be used. Computationally intensive (they use an exhaustive search, O(2^n)), so less widely used than agglomerative methods.

Model-based clustering. Assume the data are generated from k probability distributions. Goal: find the distribution parameters. Algorithm: Expectation Maximization (EM) (i.e. we do not know the parameters, nor the structure of the model, namely the fraction of data assigned to each pdf). Output: the distribution parameters and a soft assignment of points to clusters (i.e. each point is assigned the probability of belonging to a given cluster).

Model-based clustering. Assume k probability distributions with parameters (θ_1, ..., θ_k). Given data X, compute (θ_1, ..., θ_k) such that the likelihood Pr(X | θ_1, ..., θ_k), or the log-likelihood log Pr(X | θ_1, ..., θ_k), is maximized. A point x ∈ X need not be generated by a single distribution: it can be generated by multiple distributions, each with some probability [soft clustering]. A remark: if we knew which observations belong to which group or class, then we could divide the data by class and estimate the parameters of each component density separately. Not knowing the class labels means that the labels and the parameters have to be estimated at the same time.

EM Algorithm. Initialize the k distribution parameters (θ_1, ..., θ_k); each parameter set characterizes a cluster (it may contain the cluster center, diameter, ...). Then iterate between two steps. Expectation step: (probabilistically) assign points to clusters (i.e. to a given pdf). Maximization step: estimate the model parameters that maximize the likelihood for the given assignment of points. With those parameters, repeat the assignment (E) step.

Mixtures of Gaussians -- notes. [Figure: two-dimensional data (first feature vs. second feature) coloured by posterior class probability, i.e. by the labeling induced by the responsibilities $w_{ik}$.]

EM algorithm for mixture of Gaussians. What is a mixture of K Gaussians? $p(x \mid \Theta) = \sum_{k=1}^{K} \pi_k F(x \mid \Theta_k)$, with $\sum_{k=1}^{K} \pi_k = 1$, where $F(x \mid \Theta)$ is the Gaussian distribution with parameters $\Theta = \{\mu, \Sigma\}$.

EM algorithm for mixture of Gaussians. If all points x ∈ X are generated by a mixture of K Gaussians, then $p(X \mid \Theta) = \prod_{i=1}^{n} p(x_i \mid \Theta) = \prod_{i=1}^{n} \sum_{k=1}^{K} \pi_k F(x_i \mid \Theta_k)$. Goal: find $\pi_1, \dots, \pi_K$ and $\Theta_1, \dots, \Theta_K$ such that $P(X)$ is maximized, or equivalently such that the log-likelihood $L(\Theta) = \ln P(X \mid \Theta) = \sum_{i=1}^{n} \ln \sum_{k=1}^{K} \pi_k F(x_i \mid \Theta_k)$ is maximized.

Mixtures of Gaussians -- notes. Every point x_i is probabilistically assigned to (generated by) the k-th Gaussian. The probability that point x_i is generated by the k-th Gaussian is $w_{ik} = \frac{\pi_k F(x_i \mid \Theta_k)}{\sum_{j=1}^{K} \pi_j F(x_i \mid \Theta_j)}$.

Mixtures of Gaussians -- notes. Every Gaussian (cluster) C_k has an effective number of points assigned to it, $N_k = \sum_{i=1}^{n} w_{ik}$, with mean $\mu_k = \frac{1}{N_k} \sum_{i=1}^{n} w_{ik} x_i$ and variance $\Sigma_k = \frac{1}{N_k} \sum_{i=1}^{n} w_{ik} (x_i - \mu_k)(x_i - \mu_k)^T$.

EM for Gaussian Mixtures. Initialize the means μ_k, the variances Σ_k (Θ_k = (μ_k, Σ_k)) and the mixing coefficients π_k, and evaluate the initial value of the log-likelihood. Expectation step: evaluate the weights $w_{ik} = \frac{\pi_k F(x_i \mid \Theta_k)}{\sum_{j=1}^{K} \pi_j F(x_i \mid \Theta_j)}$.

EM for Gaussian Mixtures. Maximization step: re-evaluate the parameters $\mu_k^{new} = \frac{1}{N_k} \sum_{i=1}^{n} w_{ik} x_i$, $\Sigma_k^{new} = \frac{1}{N_k} \sum_{i=1}^{n} w_{ik} (x_i - \mu_k^{new})(x_i - \mu_k^{new})^T$, $\pi_k^{new} = \frac{N_k}{n}$. Evaluate $L(\Theta^{new})$ and stop if converged.
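The E and M steps above translate almost directly into NumPy. The following is a minimal sketch under my own naming conventions (a fixed number of iterations is used instead of the convergence check on L(Θ), and a small diagonal term keeps the covariances invertible); it is an illustration, not the lecturer's implementation.

```python
import numpy as np

def gaussian_pdf(X, mu, Sigma):
    """Multivariate normal density F(x | mu, Sigma) evaluated at each row of X."""
    d = X.shape[1]
    diff = X - mu
    inv = np.linalg.inv(Sigma)
    norm = np.sqrt((2 * np.pi) ** d * np.linalg.det(Sigma))
    return np.exp(-0.5 * np.einsum("ij,jk,ik->i", diff, inv, diff)) / norm

def em_gmm(X, K, n_iter=100, seed=0):
    """Minimal EM for a mixture of K Gaussians (fixed number of iterations)."""
    n, d = X.shape
    rng = np.random.default_rng(seed)
    mu = X[rng.choice(n, K, replace=False)]          # initial means
    Sigma = np.array([np.cov(X.T) + 1e-6 * np.eye(d) for _ in range(K)])
    pi = np.full(K, 1.0 / K)                         # mixing coefficients
    for _ in range(n_iter):
        # E step: responsibilities w_ik = pi_k F(x_i|Theta_k) / sum_j pi_j F(x_i|Theta_j)
        dens = np.column_stack([pi[k] * gaussian_pdf(X, mu[k], Sigma[k]) for k in range(K)])
        W = dens / dens.sum(axis=1, keepdims=True)
        # M step: re-estimate N_k, means, covariances and mixing coefficients
        Nk = W.sum(axis=0)
        mu = (W.T @ X) / Nk[:, None]
        for k in range(K):
            diff = X - mu[k]
            Sigma[k] = (W[:, k, None] * diff).T @ diff / Nk[k] + 1e-6 * np.eye(d)
        pi = Nk / n
    # log-likelihood at the last E step (before the final M update)
    loglik = np.log(dens.sum(axis=1)).sum()
    return pi, mu, Sigma, W, loglik

# toy usage: two well-separated 2-D Gaussian blobs
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(6, 1, (100, 2))])
pi, mu, Sigma, W, loglik = em_gmm(X, K=2)
print(pi, mu, loglik)
```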

Density-Based Clustering Methods Clustering based on density (local cluster criterion), such as density-connected points Major features: o Discover clusters of arbitrary shape o Handle noise o One scan o Need density parameters as termination condition Several interesting studies: o DBSCAN, o OPTICS (an algorithm for finding density-based clusters in spatial data), o others

Classification of points in density-based clustering. Core points: interior points of a density-based cluster. A point p is a core point if, for a given distance Eps, $|N_{Eps}(p)| = |\{\, q \mid dist(p, q) \le Eps \,\}| \ge MinPts$ [i.e.: a point p is a core point if at least MinPts points are within distance Eps of it]. Border points: not core points, but within the neighborhood of a core point (a border point can be in the neighborhoods of many core points). Noise points: neither core nor border points.
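A small NumPy sketch (mine, for illustration) that labels each point as core, border or noise for given Eps and MinPts; here a point is counted as its own neighbor, which is one common convention.

```python
import numpy as np
from scipy.spatial.distance import cdist

def classify_points(X, eps, min_pts):
    """Return an array with 'core', 'border' or 'noise' for each point."""
    D = cdist(X, X)
    neighbors = D <= eps                      # a point counts as its own neighbor
    core = neighbors.sum(axis=1) >= min_pts
    # border: not core, but inside the Eps-neighborhood of some core point
    border = ~core & (neighbors & core[None, :]).any(axis=1)
    return np.where(core, "core", np.where(border, "border", "noise"))
```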

Core, border and noise points. [Figure: example of core, border and noise points for a given Eps.]

DBSCAN: The Algorithm. Density-based spatial clustering of applications with noise. Given a set of points, DBSCAN groups together points that are closely packed together (points with many nearby neighbors), marking as outliers the points that lie alone in low-density regions (whose nearest neighbors are too far away). DBSCAN requires two parameters: ε (eps) and the minimum number of points required to form a dense region (minpts). It starts from an arbitrary point that has not been visited. This point's ε-neighborhood is retrieved, and if it contains sufficiently many points, a cluster is started; otherwise, the point is labeled as noise. [This point might later be found in the sufficiently sized ε-neighborhood of a different point and hence be made part of a cluster.] If a point is found to be a dense part of a cluster, its ε-neighborhood is also part of that cluster. Hence, all points that are found within the ε-neighborhood are added, as is their own ε-neighborhood when they are also dense. This process continues until the density-connected cluster is completely found. Then, a new unvisited point is retrieved and processed, leading to the discovery of a further cluster or of noise.
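The algorithm described above is implemented, for example, in scikit-learn; a usage sketch with illustrative data and parameter values (the choice of eps and min_samples is mine).

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal(0, 0.2, (50, 2)),     # first dense blob
    rng.normal(3, 0.2, (50, 2)),     # second dense blob
    rng.uniform(-2, 5, (10, 2)),     # scattered low-density points
])

db = DBSCAN(eps=0.5, min_samples=5).fit(X)
labels = db.labels_                  # -1 marks noise points
print("clusters:", len(set(labels)) - (1 if -1 in labels else 0))
print("noise points:", np.sum(labels == -1))
```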

Time and space complexity of DBSCAN. For a dataset X consisting of n points, the time complexity of DBSCAN is O(n × time to find the points in an Eps-neighborhood). Worst case: O(n^2). In low-dimensional spaces it can be O(n log n): efficient data structures (e.g., kd-trees) allow all points within a given distance of a specified point to be retrieved efficiently.
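The O(n log n) behaviour mentioned above relies on spatial indices such as kd-trees, which return all points within Eps of a query point without scanning the whole dataset; a minimal sketch with SciPy's cKDTree (illustrative parameters).

```python
import numpy as np
from scipy.spatial import cKDTree

X = np.random.default_rng(0).random((10_000, 2))
tree = cKDTree(X)                            # built once
# all points within Eps = 0.05 of the first point
idx = tree.query_ball_point(X[0], r=0.05)
print(len(idx), "points in the Eps-neighborhood of X[0]")
```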

Strengths and weaknesses of DBSCAN. Strengths: resistant to noise; finds clusters of arbitrary shapes and sizes. Weaknesses: difficulty in identifying clusters with varying densities; problems in high-dimensional spaces, where the notion of density is unclear; can be computationally expensive when the computation of nearest neighbors is expensive.