
1 Notes. Reminder: HW2 is due today by 11:59 PM. TA's note: please provide a detailed ReadMe.txt file explaining how to run the program on STDLINUX. If you installed or upgraded any package on STDLINUX, mention it (with version number) in the ReadMe.txt file as well. If you do not type your answers, make sure your handwriting is legible for the TA. Review session on Thursday. Midterm next Tuesday (10/10/2017).

CSE 5243 INTRO. TO DATA MINING Cluster Analysis: Basic Concepts and Methods Huan Sun, CSE@The Ohio State University 10/03/2017 Slides adapted from UIUC CS412, Fall 2017, by Prof. Jiawei Han

3 Chapter 10. Cluster Analysis: Basic Concepts and Methods. Outline: Cluster Analysis: An Introduction; Partitioning Methods; Hierarchical Methods; Density- and Grid-Based Methods; Evaluation of Clustering; Summary.

4 Clustering Algorithms: K-means and its variants; hierarchical clustering; density-based clustering.

5 Hierarchical Clustering. Two main types of hierarchical clustering. Agglomerative: start with the points as individual clusters; at each step, merge the closest pair of clusters until only one cluster (or k clusters) is left; builds a bottom-up hierarchy of clusters. Divisive: start with one, all-inclusive cluster; at each step, split a cluster until each cluster contains a single point (or there are k clusters); builds a top-down hierarchy of clusters. (Figure: dendrogram over points a, b, c, d, e showing the agglomerative merges across steps 0-4 and the reverse divisive splits.)

6 Agglomerative Clustering Algorithm. The more popular hierarchical clustering technique. The basic algorithm is straightforward: 1. Compute the proximity matrix. 2. Let each data point be a cluster. 3. Repeat: 4. Merge the two closest clusters. 5. Update the proximity matrix. 6. Until only a single cluster remains. The key operation is the computation of the proximity of two clusters; different approaches to defining the distance/similarity between clusters distinguish the different algorithms.
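As a concrete illustration of this procedure, here is a minimal sketch using SciPy's hierarchical clustering routines on a few made-up 2-D points (the data and the choice of k are purely illustrative); the method argument selects the inter-cluster proximity definition discussed on the following slides.

```python
# Sketch: agglomerative clustering of a few illustrative 2-D points with SciPy.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

X = np.array([[1.0, 1.0], [1.5, 1.2], [5.0, 5.0],
              [5.2, 4.8], [9.0, 1.0]])            # hypothetical points

D = pdist(X, metric="euclidean")                  # step 1: proximity (distance) matrix
Z = linkage(D, method="single")                   # steps 3-6: repeatedly merge the two
                                                  # closest clusters; "single" = MIN,
                                                  # "complete" = MAX, "average" = group average
labels = fcluster(Z, t=2, criterion="maxclust")   # cut the hierarchy into k = 2 clusters
print(labels)
```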

7 Intermediate Situation. After some merging steps, we have some clusters C1, C2, C3, C4, C5. (Figure: the current clusters over points p1...p12 and the corresponding proximity matrix indexed by C1-C5.)

8 Intermediate Situation. We want to merge the two closest clusters (C2 and C5) and update the proximity matrix. (Figure: the current clusters over points p1...p12, with C2 and C5 highlighted in the proximity matrix.)

9 After Merging. How do we update the proximity matrix? (Figure: the proximity matrix after merging C2 and C5 into C2 U C5; the entries between C2 U C5 and C1, C3, C4 are marked with question marks.)

10 How to Define Inter-Cluster Similarity. Possible definitions: MIN, MAX, Group Average, Distance Between Centroids. (Figure: two clusters over points p1...p5 and their proximity matrix, with "Similarity?" marked between the clusters.)

11 Cluster Similarity: MIN or Single Link. The similarity of two clusters is based on the two most similar (closest) points in the different clusters; it is determined by one pair of points, i.e., by one link in the proximity graph. Let us define the distance between two points using Euclidean distance. Using single link, the distance between two clusters Ci and Cj is then: Dist(Ci, Cj) = min{dist(x, y) : x in Ci, y in Cj}. The name comes from the observation that if we choose a line with the minimum distance to connect two points in two clusters, typically only a single link would exist.

12 Cluster Similarity: MIN or Single Link. The similarity of two clusters is based on the two most similar (closest) points in the different clusters; it is determined by one pair of points, i.e., by one link in the proximity graph. What if we define the similarity (not the distance) between two points? Using single link, the similarity between two clusters Ci and Cj is then: Sim(Ci, Cj) = max{sim(x, y) : x in Ci, y in Cj}. The name comes from the observation that if we choose a line with the minimum distance to connect two points in two clusters, typically only a single link would exist.

13 Cluster Similarity: MAX or Complete Linkage. The similarity of two clusters is based on the two least similar (most distant) points in the different clusters; it is determined by all pairs of points in the two clusters. Let us define the distance between two points using Euclidean distance. Using MAX link, the distance between two clusters Ci and Cj is then: Dist(Ci, Cj) = max{dist(x, y) : x in Ci, y in Cj}.

14 Cluster Similarity: MAX or Complete Linkage. The similarity of two clusters is based on the two least similar (most distant) points in the different clusters; it is determined by all pairs of points in the two clusters. What if we define the similarity (not the distance) between two points? Using MAX link, the similarity between two clusters Ci and Cj is then: Sim(Ci, Cj) = min{sim(x, y) : x in Ci, y in Cj}.
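To make the MIN and MAX definitions concrete, here is a small sketch (with made-up point coordinates) that computes both the single-link and complete-link distances between two clusters.

```python
# Sketch: single-link (MIN) and complete-link (MAX) distances between two clusters.
import numpy as np

Ci = np.array([[0.0, 0.0], [1.0, 0.0]])    # hypothetical cluster i
Cj = np.array([[3.0, 4.0], [4.0, 4.0]])    # hypothetical cluster j

# All Euclidean distances between points of Ci and points of Cj.
d = np.linalg.norm(Ci[:, None, :] - Cj[None, :, :], axis=2)

single_link   = d.min()    # Dist(Ci, Cj) = min over all cross-cluster pairs
complete_link = d.max()    # Dist(Ci, Cj) = max over all cross-cluster pairs
print(single_link, complete_link)          # approx. 4.47 and 5.66 for these points
```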

15 Clustering Algorithms: K-means and its variants; hierarchical clustering; density-based clustering.

16 Density-Based Clustering Methods. Clustering based on density (a local cluster criterion), such as density-connected points. Major features: discovers clusters of arbitrary shape; handles noise; one scan (only examines the local region to justify density); needs density parameters as a termination condition. Several interesting studies: DBSCAN: Ester, et al. (KDD'96); OPTICS: Ankerst, et al. (SIGMOD'99); DENCLUE: Hinneburg & Keim (KDD'98); CLIQUE: Agrawal, et al. (SIGMOD'98) (also grid-based).

17 DBSCAN (M. Ester, H.-P. Kriegel, J. Sander, and X. Xu, KDD'96): Density-Based Spatial Clustering of Applications with Noise. Discovers clusters of arbitrary shape. DBSCAN is a density-based algorithm: density = number of points within a specified radius (Eps). Two parameters: Eps and MinPts. A point is a core point if it has more than a specified number of points (MinPts) within Eps; these are points in the interior of a cluster. A border point has fewer than MinPts within Eps, but is in the neighborhood of a core point. A noise point is any point that is neither a core point nor a border point.

18 DBSCAN: Core, Border, and Noise Points 1. A point is a core point if it has more than a specified number of points (MinPts) within Eps 2. A border point has fewer than MinPts within Eps, but is in the neighborhood of a core point 3. A noise point is any point that is not a core point or a border point.

19 DBSCAN Algorithm. Eliminate noise points, then perform clustering on the remaining points. The Eps-neighborhood of a point q: N_Eps(q) = {p in D | dist(p, q) <= Eps}.
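A short sketch of how the Eps-neighborhood query and the core/border/noise classification from the previous slides could be implemented (made-up data and parameter values; a complete DBSCAN would then grow clusters by connecting core points).

```python
# Sketch: classify points as core, border, or noise given Eps and MinPts.
import numpy as np

def eps_neighborhood(X, q, eps):
    # N_Eps(q) = {p in D | dist(p, q) <= Eps}; includes q itself here.
    return np.where(np.linalg.norm(X - X[q], axis=1) <= eps)[0]

def classify(X, eps, min_pts):
    nbrs = [eps_neighborhood(X, i, eps) for i in range(len(X))]
    # Core: at least MinPts points in the Eps-neighborhood (a common convention).
    core = {i for i in range(len(X)) if len(nbrs[i]) >= min_pts}
    labels = []
    for i in range(len(X)):
        if i in core:
            labels.append("core")
        elif any(j in core for j in nbrs[i]):
            labels.append("border")      # not core, but in a core point's neighborhood
        else:
            labels.append("noise")
    return labels

X = np.random.rand(200, 2)               # hypothetical data
print(classify(X, eps=0.1, min_pts=4))
```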

20 DBSCAN: Core, Border and Noise Points. (Figure: the original points and the resulting point types, core, border, and noise, with Eps = 10 and MinPts = 4.)

21 When DBSCAN Works Well. (Figure: the original points and the clusters found.) Resistant to noise; can handle clusters of different shapes and sizes.

22 When DBSCAN Does NOT Work Well. Sensitive to parameters! Problems arise with varying densities and high-dimensional data. (Figure: the original points and the clusterings obtained with MinPts = 4, Eps = 9.75 and MinPts = 4, Eps = 9.92.)

23 Cluster Validity. For supervised classification we have a variety of measures to evaluate how good our model is: accuracy, precision, recall. For cluster analysis, the analogous question is: how do we evaluate the goodness of the resulting clusters?

24 Cluster Validity. For supervised classification we have a variety of measures to evaluate how good our model is: accuracy, precision, recall. For cluster analysis, the analogous question is: how do we evaluate the goodness of the resulting clusters? But clusters are in the eye of the beholder! Then why do we want to evaluate them? To avoid finding patterns in noise; to compare clustering algorithms; to compare two sets of clusters; to compare two clusters.

25 Clusters found in Random Data. (Figure: scatter plot of random points in the unit square.)

26 Clusters found in Random Data. (Figure: four panels of the same random points, labeled Random Points, DBSCAN, K-means, and Complete Link, showing the clusters each algorithm finds.)

27 Different Aspects of Cluster Validation 1. Determining the clustering tendency of a set of data, i.e., distinguishing whether non-random structure actually exists in the data.

28 Different Aspects of Cluster Validation 1. Determining the clustering tendency of a set of data, i.e., distinguishing whether non-random structure actually exists in the data. 2. Comparing the results of a cluster analysis to externally known results, e.g., to externally given class labels.

29 Different Aspects of Cluster Validation 1. Determining the clustering tendency of a set of data, i.e., distinguishing whether non-random structure actually exists in the data. 2. Comparing the results of a cluster analysis to externally known results, e.g., to externally given class labels. 3. Evaluating how well the results of a cluster analysis fit the data without reference to external information. - Use only the data

30 Different Aspects of Cluster Validation 1. Determining the clustering tendency of a set of data, i.e., distinguishing whether non-random structure actually exists in the data. 2. Comparing the results of a cluster analysis to externally known results, e.g., to externally given class labels. 3. Evaluating how well the results of a cluster analysis fit the data without reference to external information. - Use only the data 4. Comparing the results of two different sets of cluster analyses to determine which is better. 5. Determining the correct number of clusters. For 2, 3, and 4, we can further distinguish whether we want to evaluate the entire clustering or just individual clusters.

31 Using Similarity Matrix for Cluster Validation. Order the similarity matrix with respect to cluster labels and inspect visually. (Figure: a scatter plot of the clustered points and the corresponding similarity matrix ordered by cluster label.)

32 Using Similarity Matrix for Cluster Validation. Clusters in random data are not so crisp. (Figure: the ordered similarity matrix for the DBSCAN clustering of random data, contrasted with what good clustering results look like.)

33 Measures of Cluster: Centroid, Radius and Diameter. Centroid x0: the middle of a cluster, x0 = (1/n) * sum_{i=1..n} x_i, where n is the number of points in the cluster and x_i is the i-th point. Radius R: the average distance from member objects to the centroid, i.e., the square root of the average squared distance from any point of the cluster to its centroid, R = sqrt( (1/n) * sum_{i=1..n} (x_i - x0)^2 ). Diameter D: the average pairwise distance within a cluster, i.e., the square root of the average squared distance between all pairs of points in the cluster, D = sqrt( sum_{i} sum_{j != i} (x_i - x_j)^2 / (n(n-1)) ).
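A short sketch computing these three measures for one hypothetical cluster of points, following the definitions above.

```python
# Sketch: centroid, radius, and diameter of one cluster.
import numpy as np

X = np.array([[3.0, 4.0], [2.0, 6.0], [4.0, 5.0],
              [4.0, 7.0], [3.0, 8.0]])                        # hypothetical cluster
n = len(X)

x0 = X.mean(axis=0)                                           # centroid
R = np.sqrt(((X - x0) ** 2).sum(axis=1).mean())               # radius
diffs = X[:, None, :] - X[None, :, :]                         # all pairwise differences
D = np.sqrt((diffs ** 2).sum(axis=2).sum() / (n * (n - 1)))   # diameter
print(x0, R, D)
```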

34 Cohesion and Separation. A proximity-graph-based approach can also be used for cohesion and separation. Cluster cohesion is the sum of the weights of all links within a cluster. Cluster separation is the sum of the weights of the links between nodes in the cluster and nodes outside the cluster. (Figure: illustration of cohesion and separation.)

35 Silhouette Coefficient. The silhouette coefficient combines ideas of both cohesion and separation, but for individual points, as well as for clusters and clusterings. For an individual point i: calculate a = average distance of i to the points in its own cluster; calculate b = min over other clusters of the average distance of i to the points in that cluster. The silhouette coefficient for the point is then s = 1 - a/b if a < b (or s = b/a - 1 if a >= b, not the usual case). Typically between 0 and 1; the closer to 1 the better. One can calculate the average silhouette width for a cluster or a clustering.
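A minimal sketch of the per-point silhouette computation just described (made-up data; in practice a library routine such as scikit-learn's silhouette_score could be used instead, if available).

```python
# Sketch: silhouette coefficient per point, s = 1 - a/b when a < b.
import numpy as np

def silhouette(X, labels):
    s = np.zeros(len(X))
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        same = labels == labels[i]
        same[i] = False                      # exclude the point itself
        a = d[same].mean()                   # cohesion: avg distance within own cluster
        b = min(d[labels == c].mean()        # separation: closest other cluster
                for c in set(labels) if c != labels[i])
        s[i] = 1 - a / b if a < b else b / a - 1
    return s

X = np.vstack([np.random.rand(20, 2), np.random.rand(20, 2) + 3.0])  # hypothetical data
labels = np.array([0] * 20 + [1] * 20)
print(silhouette(X, labels).mean())          # average silhouette width of the clustering
```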

36 Other Measures of Cluster Validity. Entropy/Gini (please review how to calculate them): if there is a class label, you can use the entropy/Gini of the class label within each cluster, similar to what we did for classification (check problem III in the sample midterm). If there is no class label, one can compute the entropy with respect to each attribute (dimension) and sum these up, or take a weighted average, to measure the disorder within a cluster. Classification error: if there is a class label, one can compute this in a similar manner.
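For instance, a short sketch of the entropy and Gini computations for the class-label distribution inside a single cluster (the label counts are made up).

```python
# Sketch: entropy and Gini of the class-label distribution within one cluster.
import numpy as np

def entropy(counts):
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()                 # class proportions (zero counts dropped)
    return float(-(p * np.log2(p)).sum())

def gini(counts):
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    return float(1.0 - (p ** 2).sum())

# Hypothetical cluster with 8 points of class A and 2 points of class B.
print(entropy([8, 2]), gini([8, 2]))       # lower values indicate a purer cluster
```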

37 Extensions: Clustering Large Databases. Most clustering algorithms assume a large data structure which is memory resident. Clustering may be performed first on a sample of the database and then applied to the entire database. Algorithms: BIRCH, DBSCAN (already covered), CURE.

38 Desired Features for Large Databases. One scan (or less) of the DB; online; suspendable, stoppable, resumable; incremental; works with limited main memory; uses different techniques to scan (e.g., sampling); processes each tuple once.

39 More on Hierarchical Clustering Methods. Major weaknesses of agglomerative clustering methods: they do not scale well (time complexity of at least O(n^2), where n is the total number of objects), and they can never undo what was done previously. Integration of hierarchical with distance-based clustering: BIRCH (1996) uses a CF-tree and incrementally adjusts the quality of sub-clusters; CURE (1998) selects well-scattered points from the cluster and then shrinks them towards the center of the cluster by a specified fraction.

40 BIRCH: Balanced Iterative Reducing and Clustering using Hierarchies. Incremental, hierarchical, one scan. Saves clustering information in a tree; each entry in the tree contains information about one cluster; new nodes are inserted into the closest entry in the tree.

41 BIRCH (1996). Incrementally constructs a CF (Clustering Feature) tree, a hierarchical data structure for multiphase clustering. Phase 1: scan the DB to build an initial in-memory CF tree (a multi-level compression of the data that tries to preserve its inherent clustering structure). Phase 2: use an arbitrary clustering algorithm to cluster the leaf nodes of the CF-tree. Scales linearly: finds a good clustering with a single scan and improves the quality with a few additional scans. Weakness: handles only numeric data and is sensitive to the order of the data records.

42 Clustering Feature CF. Triple (N, LS, SS): N is the number of points in the cluster; LS is the linear sum of the N points, LS = sum_{i=1..N} X_i; SS is the sum of squares of the points, SS = sum_{i=1..N} X_i^2. Example: the points (3,4), (2,6), (4,5), (4,7), (3,8) give CF = (5, (16,30), (54,190)). (Figure: the five points plotted on a 10 x 10 grid.)
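A small sketch that reproduces the CF triple on this slide and illustrates the additivity property BIRCH relies on when merging sub-clusters (the split into two sub-clusters is arbitrary, for illustration only).

```python
# Sketch: CF = (N, LS, SS) for the slide's five points, plus CF additivity.
import numpy as np

X = np.array([[3, 4], [2, 6], [4, 5], [4, 7], [3, 8]], dtype=float)

def cf(points):
    # N, linear sum, and per-dimension sum of squares.
    return (len(points), points.sum(axis=0), (points ** 2).sum(axis=0))

print(cf(X))                        # (5, [16. 30.], [54. 190.]) as on the slide

def merge(cf1, cf2):
    # Merging two sub-clusters just adds their CF components.
    return (cf1[0] + cf2[0], cf1[1] + cf2[1], cf1[2] + cf2[2])

print(merge(cf(X[:2]), cf(X[2:]))) # same CF as computing it over all five points
```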

43 Clustering Feature CF. Triple (N, LS, SS): N is the number of points in the cluster; LS = sum_{i=1..N} X_i is the linear sum of the points; SS = sum_{i=1..N} X_i^2 is the sum of squares of the points. The clustering feature is a summary of the statistics for a given sub-cluster: the 0th, 1st, and 2nd moments of the sub-cluster from the statistical point of view. It registers crucial measurements for computing clusters and utilizes storage efficiently.

44 Clustering Feature CF. Triple (N, LS, SS): N is the number of points in the cluster; LS = sum_{i=1..N} X_i; SS = sum_{i=1..N} X_i^2. CF Tree: a balanced search tree. Each node holds a CF triple for each of its children. A leaf node represents a cluster and holds a CF value for each sub-cluster in it; each such sub-cluster is constrained by a maximum diameter.

45 CF Tree. (Figure: a CF tree with branching factor B = 7 and leaf factor L = 6; the root and non-leaf nodes store CF entries CF1, CF2, ... with child pointers, and the leaf nodes store CF entries for sub-clusters and are chained together with prev/next pointers.)

46 BIRCH Algorithm

47 Improve Clusters

48 CURE: Clustering Using Representatives. Stops the creation of a cluster hierarchy when a level consists of k clusters. Uses many points to represent a cluster instead of only one: multiple well-scattered representative points are used to evaluate the distance between clusters, which adjusts well to arbitrarily shaped clusters and avoids the single-link effect. Drawbacks of square-error-based clustering methods: they consider only one point as the representative of a cluster, and are good only for clusters of convex shape and similar size and density, and when k can be reasonably estimated.

49 CURE Approach

50 CURE for Large Databases

51 Cure: The Algorithm. Draw a random sample s. Partition the sample into p partitions of size s/p. Partially cluster each partition into s/(pq) clusters. Eliminate outliers: by random sampling, and if a cluster grows too slowly, eliminate it. Cluster the partial clusters. Label the data on disk.

52 Data Partitioning and Clustering. Example parameters: s = 50, p = 2, s/p = 25, s/pq = 5. (Figure: the sample split into two partitions and the partial clusters found in each.)

53 Cure: Shrinking Representative Points Shrink the multiple representative points towards the gravity center by a fraction of α. Multiple representatives capture the shape of the cluster
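A minimal sketch of this shrinking step (hypothetical representative points, cluster center, and α).

```python
# Sketch: shrink representative points toward the cluster's gravity center by alpha.
import numpy as np

def shrink(representatives, center, alpha):
    # Each representative moves a fraction alpha of the way toward the center.
    return representatives + alpha * (center - representatives)

reps = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 3.0]])   # hypothetical well-scattered reps
center = np.array([2.0, 1.0])                           # hypothetical gravity center
print(shrink(reps, center, alpha=0.3))
```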

54 Clustering Categorical Data: ROCK. ROCK: Robust Clustering using links, by S. Guha, R. Rastogi, K. Shim (ICDE'99). Uses links to measure similarity/proximity; not distance based. Example: Pt1 = (1,0,0,0,0,0), Pt2 = (0,1,1,1,1,0), Pt3 = (0,1,1,0,1,1), Pt4 = (0,0,0,0,1,0,1). A Euclidean-distance-based approach would cluster Pt2 with Pt3, and Pt1 with Pt4. Problem? Pt1 and Pt4 have nothing in common.

55 Rock: Algorithm. Links: the number of common neighbors of two points; neighbors are determined using Jaccard similarity. Algorithm: draw a random sample; cluster with links; label the data on disk. For the example: sim(Pt1,Pt4) = 0, sim(Pt1,Pt2) = 0, sim(Pt1,Pt3) = 0, sim(Pt2,Pt3) = 0.6, sim(Pt2,Pt4) = 0.2, sim(Pt3,Pt4) = 0.2. Using 0.2 as the threshold for neighbors: Pt2 and Pt3 have 3 common neighbors, Pt3 and Pt4 have 3 common neighbors, Pt2 and Pt4 have 3 common neighbors. Resulting clusters: (1), (2,3,4), which makes more sense.
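A small sketch of this computation, reading the item sets off the slide's binary vectors and treating each point as a neighbor of itself so that the counts match the slide (both choices are assumptions of this sketch).

```python
# Sketch: ROCK-style links = number of common Jaccard neighbors.
def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

# Item sets read off the slide's binary vectors (positions of the 1s).
points = {1: {1}, 2: {2, 3, 4, 5}, 3: {2, 3, 5, 6}, 4: {5, 7}}
theta = 0.2                                   # neighbor threshold from the slide

# A point's neighbors: all points (including itself) with similarity >= theta.
neighbors = {i: {j for j in points if jaccard(points[i], points[j]) >= theta}
             for i in points}

def link(i, j):
    return len(neighbors[i] & neighbors[j])   # number of common neighbors

print(jaccard(points[2], points[3]))          # 0.6
print(link(2, 3), link(3, 4), link(2, 4))     # 3 3 3, as on the slide
```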

56 Another example. Links: the number of common neighbors of two points. Algorithm: draw a random sample; cluster with links; label the data on disk. Data: {1,2,3}, {1,2,4}, {1,2,5}, {1,3,4}, {1,3,5}, {1,4,5}, {2,3,4}, {2,3,5}, {2,4,5}, {3,4,5}. (Figure: a table of link counts between the itemsets, e.g., the entry for {1,2,3} and {1,2,4}; the table is truncated in the transcription.)

57 Backup slides

58 MST: Divisive Hierarchical Clustering. Build an MST (Minimum Spanning Tree): start with a tree that consists of any point; in successive steps, look for the closest pair of points (p, q) such that one point (p) is in the current tree but the other (q) is not; add q to the tree and put an edge between p and q.
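A short sketch of this MST construction (Prim-style growth over made-up points); the divisive hierarchy is then obtained by repeatedly removing the longest remaining MST edge.

```python
# Sketch: build an MST by repeatedly attaching the closest point outside the tree.
import numpy as np

def build_mst(X):
    n = len(X)
    in_tree = {0}                              # start the tree from an arbitrary point
    edges = []
    while len(in_tree) < n:
        best = None
        for p in in_tree:                      # p already in the tree
            for q in range(n):                 # q not yet in the tree
                if q in in_tree:
                    continue
                d = float(np.linalg.norm(X[p] - X[q]))
                if best is None or d < best[0]:
                    best = (d, p, q)
        d, p, q = best
        edges.append((p, q, d))                # add edge (p, q), bring q into the tree
        in_tree.add(q)
    return edges

X = np.random.rand(10, 2)                      # hypothetical points
for p, q, d in sorted(build_mst(X), key=lambda e: -e[2]):
    print(p, q, round(d, 3))                   # longest edges would be cut first
```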

59 MST: Divisive Hierarchical Clustering Use MST for constructing hierarchy of clusters