Clustering Part 3. Hierarchical Clustering


Clustering Part 3
Dr. Sanjay Ranka, Professor
Computer and Information Science and Engineering
University of Florida, Gainesville

Hierarchical Clustering
Two main types:
- Agglomerative: start with the points as individual clusters and merge clusters until only one is left.
- Divisive: start with all the points as one cluster and split clusters until only singleton clusters remain.
Agglomerative is the more popular approach. Traditional hierarchical algorithms use a similarity or distance matrix and merge or split one cluster at a time.

Hierarchical Clustering
Produces a set of nested clusters organized as a hierarchical tree. The result can be visualized as a dendrogram: a tree-like diagram that records the sequences of merges or splits.
[Figure: example points with nested clusters and the corresponding dendrogram]
The dendrogram can be cut at a chosen level to obtain a partitional clustering.

Basic Agglomerative Clustering Algorithm
The algorithm is straightforward:
1. Compute the proximity matrix, if necessary.
2. Let each data point be a cluster.
3. Repeat:
   - Merge the two closest clusters.
   - Update the proximity matrix.
   Until only a single cluster remains.
The key operation is the computation of the proximity of two clusters. Different approaches to defining the distance between clusters distinguish the different algorithms.
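The procedure above translates almost line for line into a short program. Below is a minimal sketch (not from the lecture), assuming Euclidean distance between points and single-link (MIN) proximity between clusters; the function names and the toy data are illustrative only.

import numpy as np

def agglomerative(points, num_clusters=1):
    """Basic agglomerative clustering: repeatedly merge the two closest clusters."""
    # Compute the point-to-point proximity (here: Euclidean distance) matrix
    # and let each data point start as its own cluster.
    diff = points[:, None, :] - points[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=2))
    clusters = [[i] for i in range(len(points))]

    def cluster_distance(a, b):
        # Single link (MIN): distance between the two closest points across clusters.
        return min(dist[i, j] for i in a for j in b)

    # Repeat: merge the two closest clusters until the requested number remains
    # (num_clusters=1 reproduces "until only a single cluster remains").
    while len(clusters) > num_clusters:
        pairs = [(i, j) for i in range(len(clusters))
                        for j in range(i + 1, len(clusters))]
        i, j = min(pairs, key=lambda p: cluster_distance(clusters[p[0]], clusters[p[1]]))
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

# Toy example: five 2-D points merged down to two clusters.
pts = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1], [5.0, 5.0], [5.1, 4.9]])
print(agglomerative(pts, num_clusters=2))   # [[0, 1, 2], [3, 4]]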

Agglomerative Hierarchical Clustering: Starting Situation
For agglomerative hierarchical clustering, we start with clusters of individual points and a proximity matrix.
[Figure: individual points p1, p2, ... and their point-to-point proximity matrix]

Agglomerative Hierarchical Clustering: Intermediate Situation
After some merging steps, we have some clusters.
[Figure: clusters C1, C2, ... and the cluster-to-cluster proximity matrix]

Agglomerative Hierarchical Clustering: Intermediate Situation
We want to merge the two closest clusters and update the proximity matrix.
[Figure: proximity matrix with the two closest clusters highlighted]

Agglomerative Hierarchical Clustering: After Merging
The question is: how do we update the proximity matrix?
[Figure: proximity matrix after the merge; entries involving the new merged cluster are marked with "?"]

How to Define Inter-Cluster Similarity
Given two clusters, how should their similarity (proximity) be defined? The common choices, each sketched in code below, are:
- MIN
- MAX
- Group Average
- Distance Between Centroids
- Other methods driven by an objective function (Ward's Method uses squared error)
[Figure: the same pair of clusters and proximity matrix, repeated for each definition to show which pairs of points determine the inter-cluster proximity]
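As a companion to these definitions, the snippet below sketches how each inter-cluster proximity could be computed for two clusters stored as NumPy arrays of points; it assumes Euclidean distance and is illustrative rather than the lecture's code.

import numpy as np

def pairwise(a, b):
    """All Euclidean distances between points of cluster a and cluster b."""
    return np.sqrt(((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=2))

def proximity_min(a, b):        # MIN (single link)
    return pairwise(a, b).min()

def proximity_max(a, b):        # MAX (complete link)
    return pairwise(a, b).max()

def proximity_group_avg(a, b):  # Group Average: mean of all |a|*|b| pairwise distances
    return pairwise(a, b).mean()

def proximity_centroid(a, b):   # Distance Between Centroids
    return np.linalg.norm(a.mean(axis=0) - b.mean(axis=0))

# Illustrative clusters.
c1 = np.array([[0.0, 0.0], [1.0, 0.0]])
c2 = np.array([[4.0, 0.0], [5.0, 0.0]])
print(proximity_min(c1, c2), proximity_max(c1, c2),
      proximity_group_avg(c1, c2), proximity_centroid(c1, c2))
# -> 3.0 5.0 4.0 4.0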

Cluster Similarity: MIN or Single Link
The similarity of two clusters is based on the two closest points in the different clusters, so it is determined by one pair of points, i.e., by one link in the proximity graph.
- Can handle non-elliptical shapes.
- Sensitive to noise and outliers.

Hierarchical Clustering: MIN
[Figure: nested clusters produced by single link and the corresponding dendrogram]
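For completeness, here is a brief sketch (not part of the lecture) showing how a single-link hierarchy can be built with SciPy and then cut to obtain a partitional clustering, as described above; the data are made up for illustration.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

X = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
              [5.0, 5.0], [5.1, 4.9], [5.2, 5.1]])

Z = linkage(X, method='single')                  # MIN / single link ('complete' gives MAX)
labels = fcluster(Z, t=2, criterion='maxclust')  # cut the dendrogram into 2 clusters
print(labels)                                    # e.g. [1 1 1 2 2 2]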

Strength of MIN
[Figure: original points vs. the two clusters found; single link handles non-elliptical shapes]

Limitations of MIN
[Figure: original points vs. the two clusters found; single link is sensitive to noise and outliers]

Cluster Similarity: MAX or Complete Linkage
The similarity of two clusters is based on the two most distant points in the different clusters, so it is determined by all pairs of points in the two clusters.
- Less susceptible to noise and outliers.
- Tends to break large clusters.
- Biased towards globular clusters.

Hierarchical Clustering: MAX
[Figure: nested clusters produced by complete link and the corresponding dendrogram]

Strength of MAX
[Figure: original points vs. the two clusters found; complete link is less susceptible to noise and outliers]

Limitations of MAX
[Figure: original points vs. the two clusters found; complete link tends to break large clusters]

Cluster Similarity: Group Average
The proximity of two clusters is the average of the pairwise proximities between points in the two clusters:

    proximity(C_i, C_j) = ( Σ_{p_i ∈ C_i, p_j ∈ C_j} proximity(p_i, p_j) ) / ( |C_i| · |C_j| )

- A compromise between single and complete link.
- Need to use average connectivity for scalability, since total connectivity favors large clusters.
- Less susceptible to noise and outliers.
- Biased towards globular clusters.

Hierarchical Clustering: Group Average
[Figure: nested clusters produced by group average and the corresponding dendrogram]
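As a small worked example (the numbers are illustrative, not from the lecture): for one-dimensional clusters A = {1, 2} and B = {4, 6}, the 2 × 2 = 4 pairwise distances are 3, 5, 2 and 4, so the group-average proximity is (3 + 5 + 2 + 4) / 4 = 3.5. The same calculation in code:

# Group-average proximity for two tiny 1-D clusters (illustrative numbers).
A, B = [1.0, 2.0], [4.0, 6.0]
avg = sum(abs(a - b) for a in A for b in B) / (len(A) * len(B))
print(avg)   # 3.5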

Cluster Similarity: Ward's Method
The similarity of two clusters is based on the increase in squared error when the two clusters are merged. This is similar to group average if the distance between points is the squared distance.
- Less susceptible to noise and outliers.
- Biased towards globular clusters.
- Hierarchical analogue of K-means, although the result does not correspond to a local minimum; it can be used to initialize K-means.

Hierarchical Clustering: Ward's Method
[Figure: nested clusters produced by Ward's method and the corresponding dendrogram]
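To make the merge criterion concrete, the following sketch (not from the lecture) computes the increase in squared error caused by merging two clusters, which is the quantity Ward's method uses as the distance between them; the clusters and function names are illustrative.

import numpy as np

def sse(cluster):
    """Sum of squared distances of points to their cluster centroid."""
    c = cluster.mean(axis=0)
    return ((cluster - c) ** 2).sum()

def ward_increase(a, b):
    """Increase in total squared error if clusters a and b are merged."""
    return sse(np.vstack([a, b])) - sse(a) - sse(b)

# Merging nearby clusters costs much less squared error than merging distant ones.
c1 = np.array([[0.0, 0.0], [1.0, 0.0]])
c2 = np.array([[2.0, 0.0], [3.0, 0.0]])
c3 = np.array([[10.0, 0.0], [11.0, 0.0]])
print(ward_increase(c1, c2))   # 4.0   (small increase)
print(ward_increase(c1, c3))   # 100.0 (much larger increase)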

Hierarchical Clustering: Comparison
[Figure: the same data set clustered with MIN, MAX, Group Average, and Ward's Method, side by side]

Hierarchical Clustering: Time and Space Requirements
- O(N^2) space, since the method uses the proximity matrix (N is the number of points).
- O(N^3) time in many cases: there are N steps, and at each step the proximity matrix (of size N^2) must be updated and searched.
- By being careful, the complexity can be reduced to O(N^2 log N) time for some approaches.

Hierarchical Clustering: Problems and Limitations
- Once a decision is made to combine two clusters, it cannot be undone.
- No objective function is directly minimized.
- Different schemes have problems with one or more of the following: sensitivity to noise and outliers; difficulty handling different-sized clusters and convex shapes; breaking large clusters.