Lecture Notes for Chapter 7, Introduction to Data Mining, 2nd Edition, by Tan, Steinbach, Karpatne, Kumar

Data Mining Cluster Analysis: Basic Concepts and Algorithms
Lecture Notes for Chapter 7, Introduction to Data Mining, 2nd Edition
by Tan, Steinbach, Karpatne, Kumar

Hierarchical Clustering
- Produces a set of nested clusters organized as a hierarchical tree
- Can be visualized as a dendrogram: a tree-like diagram that records the sequences of merges or splits
[Figure: nested clusters of six points and the corresponding dendrogram]

Strengths of Hierarchical Clustering
- Do not have to assume any particular number of clusters: any desired number of clusters can be obtained by cutting the dendrogram at the proper level
- The clusters may correspond to meaningful taxonomies: examples in the biological sciences (e.g., animal kingdom, phylogeny reconstruction, ...)

Hierarchical Clustering
Two main types of hierarchical clustering:
- Agglomerative: start with the points as individual clusters; at each step, merge the closest pair of clusters until only one cluster (or k clusters) is left
- Divisive: start with one, all-inclusive cluster; at each step, split a cluster until each cluster contains an individual point (or there are k clusters)
Traditional hierarchical algorithms use a similarity or distance matrix and merge or split one cluster at a time.

Agglomerative Clustering Algorithm
Most popular hierarchical clustering technique. The basic algorithm is straightforward:
1. Compute the proximity matrix
2. Let each data point be a cluster
3. Repeat
4.     Merge the two closest clusters
5.     Update the proximity matrix
6. Until only a single cluster remains
The key operation is the computation of the proximity of two clusters. Different approaches to defining the distance between clusters distinguish the different algorithms.
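
As a concrete illustration of these six steps, here is a minimal Python sketch (not from the textbook); it assumes Euclidean distances and, for concreteness, uses single-link (MIN) as the cluster proximity, which later slides define.

```python
# Minimal sketch of the basic agglomerative algorithm above.
# Assumptions: Euclidean distance, single-link (MIN) cluster proximity;
# names such as `agglomerative` are illustrative, not from the book.
import numpy as np

def agglomerative(points, k=1):
    # Steps 1-2: compute the proximity matrix and let each point be its own cluster
    clusters = [[i] for i in range(len(points))]
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)

    merges = []
    # Steps 3-6: repeatedly merge the two closest clusters until k remain
    while len(clusters) > k:
        best = (0, 1, np.inf)
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # single-link (MIN) proximity: closest pair of points across clusters
                d = min(dist[i, j] for i in clusters[a] for j in clusters[b])
                if d < best[2]:
                    best = (a, b, d)
        a, b, d = best
        merges.append((list(clusters[a]), list(clusters[b]), d))
        clusters[a] = clusters[a] + clusters[b]   # merge cluster b into cluster a
        del clusters[b]                           # "update" the matrix by dropping cluster b
    return clusters, merges

pts = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [1.1, 0.9], [5.0, 5.0]])
print(agglomerative(pts, k=2)[0])    # e.g. [[0, 1, 2, 3], [4]]
```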

Starting Situation
Start with clusters of individual points and a proximity matrix.
[Figure: points p1 ... p12 and the initial proximity matrix with one row and column per point]

Intermediate Situation
After some merging steps, we have some clusters.
[Figure: clusters C1-C5 and the corresponding proximity matrix]

Intermediate Situation
We want to merge the two closest clusters (C2 and C5) and update the proximity matrix.
[Figure: clusters C1-C5 with C2 and C5 highlighted, and the corresponding proximity matrix]

After Merging
The question is: how do we update the proximity matrix?
[Figure: clusters C1, C2 U C5, C3, C4; in the proximity matrix, the row and column for the merged cluster C2 U C5 are marked with "?"]

How to Define Inter-Cluster Distance
- MIN
- MAX
- Group Average
- Distance Between Centroids
- Other methods driven by an objective function (Ward's Method uses squared error)
[Figure: two clusters of points p1 ... p5, their proximity matrix, and the question "Similarity?"]
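
For reference, here is a hedged sketch of the first four definitions in Python (names are illustrative): `dist` is a precomputed point-to-point distance matrix, `points` the data matrix, and clusters are given as lists of point indices.

```python
# Sketch of the four inter-cluster distance definitions listed above.
import numpy as np

def d_min(dist, A, B):               # MIN / single link: closest pair across clusters
    return min(dist[i, j] for i in A for j in B)

def d_max(dist, A, B):               # MAX / complete link: most distant pair across clusters
    return max(dist[i, j] for i in A for j in B)

def d_group_average(dist, A, B):     # group average: mean over all cross-cluster pairs
    return sum(dist[i, j] for i in A for j in B) / (len(A) * len(B))

def d_centroid(points, A, B):        # distance between the two cluster centroids
    return np.linalg.norm(points[A].mean(axis=0) - points[B].mean(axis=0))
```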

MIN or Single Link
Proximity of two clusters is based on the two closest points in the different clusters, i.e., it is determined by one pair of points, by one link in the proximity graph.
[Example and distance matrix figure not transcribed]
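
The slide's example distance matrix was not transcribed, so as an illustration here is single-link clustering of a small stand-in 6-point distance matrix using SciPy (the values below are made up for demonstration).

```python
# Single-link (MIN) hierarchical clustering of an illustrative 6x6 distance matrix.
import numpy as np
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage, dendrogram

D = np.array([
    [0.00, 0.24, 0.22, 0.37, 0.34, 0.23],
    [0.24, 0.00, 0.15, 0.20, 0.14, 0.25],
    [0.22, 0.15, 0.00, 0.15, 0.28, 0.11],
    [0.37, 0.20, 0.15, 0.00, 0.29, 0.22],
    [0.34, 0.14, 0.28, 0.29, 0.00, 0.39],
    [0.23, 0.25, 0.11, 0.22, 0.39, 0.00],
])

Z = linkage(squareform(D), method='single')
print(Z)            # each row records one merge: (cluster i, cluster j, distance, size)
# dendrogram(Z)     # with matplotlib available, this draws the tree of merges
```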

Hierarchical Clustering: MIN
[Figure: nested clusters and dendrogram produced by single link on the six example points]

Strength of MIN
Can handle non-elliptical shapes.
[Figure: original points vs. the six clusters found by single link]

Limitations of MIN
Sensitive to noise and outliers.
[Figure: original points vs. the two-cluster and three-cluster results from single link]

MAX or Complete Linkage
Proximity of two clusters is based on the two most distant points in the different clusters; it is determined by all pairs of points in the two clusters.
[Example and distance matrix figure not transcribed]

Hierarchical Clustering: MAX
[Figure: nested clusters and dendrogram produced by complete link on the six example points]

Strength of MAX
Less susceptible to noise and outliers.
[Figure: original points vs. the two clusters found by complete link]

Limitations of MAX
- Tends to break large clusters
- Biased towards globular clusters
[Figure: original points vs. the two clusters found by complete link]

Group Average
Proximity of two clusters is the average of pairwise proximity between points in the two clusters:

\[ \text{proximity}(\text{Cluster}_i, \text{Cluster}_j) = \frac{\sum_{p_i \in \text{Cluster}_i} \sum_{p_j \in \text{Cluster}_j} \text{proximity}(p_i, p_j)}{|\text{Cluster}_i| \times |\text{Cluster}_j|} \]

Need to use average connectivity for scalability, since total proximity favors large clusters.
[Example and distance matrix figure not transcribed]
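
A tiny worked example of the formula, with made-up distances: if \( \text{Cluster}_i = \{p_1, p_2\} \), \( \text{Cluster}_j = \{p_3, p_4\} \), and \( d(p_1,p_3)=0.2 \), \( d(p_1,p_4)=0.4 \), \( d(p_2,p_3)=0.3 \), \( d(p_2,p_4)=0.5 \), then

\[ \text{proximity}(\text{Cluster}_i, \text{Cluster}_j) = \frac{0.2 + 0.4 + 0.3 + 0.5}{2 \times 2} = \frac{1.4}{4} = 0.35. \]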

Hierarchical Clustering: Group Average
[Figure: nested clusters and dendrogram produced by group average on the six example points]

Hierarchical Clustering: Group Average
Compromise between single and complete link.
Strengths: less susceptible to noise and outliers.
Limitations: biased towards globular clusters.

Cluster Similarity: Ward's Method
- Similarity of two clusters is based on the increase in squared error when the two clusters are merged
- Similar to group average if the distance between points is distance squared
- Less susceptible to noise and outliers
- Biased towards globular clusters
- Hierarchical analogue of K-means; can be used to initialize K-means
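
A sketch of the last point, assuming SciPy and scikit-learn are available (names and data are illustrative, not from the lecture): run Ward's method first, then use the resulting cluster centroids to seed K-means.

```python
# Use Ward's hierarchical clustering to initialize K-means (illustrative sketch).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.cluster import KMeans

X = np.random.rand(100, 2)                       # toy data
k = 3

Z = linkage(X, method='ward')                    # Ward's method on Euclidean distances
labels = fcluster(Z, t=k, criterion='maxclust')  # cut the dendrogram into (at most) k clusters

# centroids of the hierarchical clusters become the K-means starting points
centroids = np.array([X[labels == c].mean(axis=0) for c in np.unique(labels)])
km = KMeans(n_clusters=len(centroids), init=centroids, n_init=1).fit(X)
```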

Hierarchical Clustering: Comparison
[Figure: nested clusterings of the same six points produced by MIN, MAX, Group Average, and Ward's Method]

MST: Divisive Hierarchical Clustering
Build an MST (Minimum Spanning Tree):
- Start with a tree that consists of any point
- In successive steps, look for the closest pair of points (p, q) such that one point (p) is in the current tree but the other (q) is not
- Add q to the tree and put an edge between p and q

MST: Divisive Hierarchical Clustering
Use the MST for constructing a hierarchy of clusters.
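
One common way to turn the MST into a divisive hierarchy, sketched below with my own naming (not the textbook's code): grow the MST with Prim's algorithm as described on the previous slide, then break the longest edges; removing the k-1 longest MST edges leaves k clusters.

```python
# MST-based divisive clustering sketch: Prim's MST, then cut the longest edges.
import numpy as np

def mst_clusters(points, k=2):
    n = len(points)
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)

    # Prim's algorithm: repeatedly add the closest outside point to the tree
    in_tree, edges = {0}, []
    while len(in_tree) < n:
        p, q = min(((p, q) for p in in_tree for q in range(n) if q not in in_tree),
                   key=lambda e: dist[e])
        in_tree.add(q)
        edges.append((dist[p, q], p, q))

    # Divisive step: keep only the n-k shortest MST edges (i.e. cut the k-1 longest)
    edges.sort()
    keep = edges[:n - k]

    # Connected components of the remaining edges, via a simple union-find
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for _, a, b in keep:
        parent[find(a)] = find(b)
    return [find(i) for i in range(n)]

pts = np.array([[0, 0], [0, 1], [5, 5], [5, 6], [10, 0]], dtype=float)
print(mst_clusters(pts, k=3))    # e.g. [1, 1, 3, 3, 4]: clusters {0,1}, {2,3}, {4}
```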

Hierarchical Clustering: Time and Space Requirements
- O(N^2) space, since it uses the proximity matrix (N is the number of points)
- O(N^3) time in many cases: there are N steps, and at each step the proximity matrix, which has size N^2, must be updated and searched
- Complexity can be reduced to O(N^2 log(N)) time with some cleverness

Hierarchical Clustering: Problems and Limitations
- Once a decision is made to combine two clusters, it cannot be undone
- No global objective function is directly minimized
- Different schemes have problems with one or more of the following:
  - Sensitivity to noise and outliers
  - Difficulty handling clusters of different sizes and non-globular shapes
  - Breaking large clusters