Data Mining Algorithms


Thanks to Jörg Sander and Martin Ester, and to Jiawei Han and Micheline Kamber, for the original version.

Data Management and Exploration, Prof. Dr. Thomas Seidl
Data Mining Algorithms: Lecture Course with Tutorials, Wintersemester 2003/04

Chapter 6: Clustering (Part 1)
Outline:
- Introduction to clustering
- Partitioning Methods: K-Means, K-Medoid, Initialization of Partitioning Methods
- Density-based Methods: DBSCAN
- Hierarchical Methods: Agglomerative Hierarchical Clustering (Single-Link + Variants), Density-based hierarchical clustering: OPTICS

What is Clustering?
Clustering groups a set of data objects into clusters. A cluster is a collection of data objects that are similar to one another within the same cluster and dissimilar to the objects in other clusters.
Clustering = unsupervised classification (no predefined classes).
Typical usage:
- as a stand-alone tool to get insight into the data distribution
- as a preprocessing step for other algorithms

Measuring Similarity
To measure similarity, a distance function dist is often used. It measures the dissimilarity between pairs of objects x and y:
- small distance dist(x, y): objects x and y are more similar
- large distance dist(x, y): objects x and y are less similar
Properties of a distance function:
- dist(x, y) ≥ 0
- dist(x, y) = 0 iff x = y (definite; iff = if and only if)
- dist(x, y) = dist(y, x) (symmetry)
- if dist is a metric, which is often the case: dist(x, z) ≤ dist(x, y) + dist(y, z) (triangle inequality)
The definition of a distance function is highly application dependent and may require standardization/normalization of attributes. Different definitions are used for interval-scaled, boolean, categorical, ordinal and ratio variables.

Example distance functions I
For standardized numerical attributes, i.e., vectors x = (x_1, ..., x_d) and y = (y_1, ..., y_d) from a d-dimensional vector space:
- General L_p metric (Minkowski distance): $dist(x, y) = \left( \sum_{i=1}^{d} |x_i - y_i|^p \right)^{1/p}$
- Euclidean distance (p = 2): $dist(x, y) = \sqrt{\sum_{i=1}^{d} (x_i - y_i)^2}$
- Manhattan distance (p = 1): $dist(x, y) = \sum_{i=1}^{d} |x_i - y_i|$
- Maximum metric (p = ∞): $dist(x, y) = \max\{|x_i - y_i| \mid 1 \le i \le d\}$
For sets x and y: $dist(x, y) = \dfrac{|x \cup y| - |x \cap y|}{|x \cup y|}$

Example distance functions II
For categorical attributes: $dist(x, y) = \sum_{i=1}^{d} \delta(x_i, y_i)$, where $\delta(x_i, y_i) = 0$ if $x_i = y_i$ and $\delta(x_i, y_i) = 1$ otherwise.
For text documents: a document D is represented by a vector r(D) of frequencies of the terms occurring in D, e.g. $r(D) = (\log f(t_i, D))_{t_i \in T}$, where f(t_i, D) is the frequency of term t_i in document D.
The distance between two documents D_1 and D_2 is defined via the cosine of the angle between the two vectors x = r(D_1) and y = r(D_2): $dist(x, y) = \dfrac{\langle x, y \rangle}{|x| \cdot |y|}$, where ⟨·,·⟩ is the inner product and |·| is the length of a vector.
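To make these definitions concrete, here is a small NumPy sketch of the Minkowski family, the set distance, and the document distance. The function names, the log(1 + f) smoothing, and the use of 1 - cos as the returned distance value are illustrative choices, not prescribed by the slides.

```python
import numpy as np

def minkowski(x, y, p=2.0):
    """General L_p metric; p=2 gives the Euclidean, p=1 the Manhattan distance."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return float((np.abs(x - y) ** p).sum() ** (1.0 / p))

def maximum_metric(x, y):
    """L_infinity metric: the largest coordinate-wise difference."""
    return float(np.abs(np.asarray(x, dtype=float) - np.asarray(y, dtype=float)).max())

def set_distance(x, y):
    """dist(x, y) = (|x union y| - |x intersect y|) / |x union y| for finite sets."""
    x, y = set(x), set(y)
    union = x | y
    return (len(union) - len(x & y)) / len(union) if union else 0.0

def document_distance(freq1, freq2):
    """Distance between two documents given as term-frequency dicts, based on
    the cosine of the angle between their log-frequency vectors."""
    terms = sorted(set(freq1) | set(freq2))
    v1 = np.array([np.log(1.0 + freq1.get(t, 0)) for t in terms])
    v2 = np.array([np.log(1.0 + freq2.get(t, 0)) for t in terms])
    denom = np.linalg.norm(v1) * np.linalg.norm(v2)
    return 1.0 if denom == 0.0 else float(1.0 - (v1 @ v2) / denom)
```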

General Applications of Clustering
- Pattern recognition and image processing
- Spatial data analysis: create thematic maps in GIS by clustering feature spaces; detect spatial clusters and explain them in spatial data mining
- Economic science (especially market research)
- WWW: documents, web logs
- Biology: clustering of gene expression data

A Typical Application: Thematic Maps
Satellite images of a region are taken in different wavelengths (bands). Each point on the surface maps to a high-dimensional feature vector p = (x_1, ..., x_d), where x_i is the recorded intensity at that surface point in band i.
Assumption: each land use reflects and emits light of different wavelengths in a characteristic way.
[Figure: pixels on the surface of the earth and the corresponding feature space spanned by Band 1 and Band 2, in which three clusters are visible.]

Application: Web Usage Mining
Goal: determine groups of web users.
Sample content of a web log file (host, user, timestamp, request, status, bytes):
romblon.informatik.uni-muenchen.de  lopa     [04/Mar/1997:...] "GET /~lopa/ HTTP/1.0" 200 ...
romblon.informatik.uni-muenchen.de  lopa     [04/Mar/1997:...] "GET /~lopa/x/ HTTP/1.0" 200 ...
fixer.sega.co.jp                    unknown  [04/Mar/1997:...] "GET /dbs/porada.html HTTP/1.0" 200 ...
scooter.pa-x.dec.com                unknown  [04/Mar/1997:...] "GET /dbs/kriegel_e.html HTTP/1.0" 200 ...
Generation of sessions: Session ::= <IP_address, user_id, [URL_1, ..., URL_k]>; which entries form a single session?
Distance function for sessions (as sets of URLs): $d(x, y) = \dfrac{|x \cup y| - |x \cap y|}{|x \cup y|}$

Major Clustering Approaches
- Partitioning algorithms: find k partitions, minimizing some objective function
- Hierarchical algorithms: create a hierarchical decomposition of the set of objects
- Density-based: find clusters based on connectivity and density functions
- Other methods: grid-based, neural networks (SOMs), graph-theoretical methods, ...

Partitioning Algorithms: Basic Concept
Goal: construct a partition of a database D of n objects into a set of k clusters that minimizes an objective function. Exhaustively enumerating all possible partitions into k sets in order to find the global minimum is too expensive.
Heuristic methods:
- Choose k representatives for the clusters, e.g., randomly.
- Improve these initial representatives iteratively:
  - Assign each object to the cluster it fits best in the current clustering.
  - Compute new cluster representatives based on these assignments.
  - Repeat until the change in the objective function from one iteration to the next drops below a threshold.
Types of cluster representatives:
- k-means: each cluster is represented by the center of the cluster
- k-medoid: each cluster is represented by one of its objects
- EM: each cluster is represented by a probability distribution

K-Means Clustering: Basic Idea
Objective: for a given k, form k groups so that the sum of the (squared) distances between the means of the groups and their elements is minimal.
[Figure: the same data set with a poor clustering and an optimal clustering; the crosses mark the centroids.]

K-Means Clustering: Basic Notions
Objects $p = (x_1^p, ..., x_d^p)$ are points in a d-dimensional vector space (the mean of a set of points must be defined).
Centroid $\mu_C$: mean of all points in cluster C.
Measure for the compactness of a cluster C: $TD^2(C) = \sum_{p \in C} dist(p, \mu_C)^2$
Measure for the compactness of a clustering: $TD^2 = \sum_{i=1}^{k} TD^2(C_i)$

K-Means Clustering: Algorithm
Given k, the k-means algorithm is implemented in four steps:
1. Partition the objects into k nonempty subsets.
2. Compute the centroids of the clusters of the current partition. The centroid is the center (mean point) of the cluster.
3. Assign each object to the cluster with the nearest representative.
4. Go back to Step 2; stop when the representatives no longer change.
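The four steps above translate directly into a short NumPy sketch. The random choice of initial centroids, the convergence check on unchanged assignments, and the function name are implementation choices not spelled out on the slides.

```python
import numpy as np

def kmeans(points, k, max_iter=100, seed=0):
    """Minimal k-means: random initial centroids, then alternate assignment
    (step 3) and centroid recomputation (step 2) until the assignment is stable."""
    rng = np.random.default_rng(seed)
    points = np.asarray(points, dtype=float)
    centroids = points[rng.choice(len(points), size=k, replace=False)].copy()
    labels = None
    for _ in range(max_iter):
        # step 3: assign each object to the cluster with the nearest centroid
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        new_labels = dists.argmin(axis=1)
        if labels is not None and np.array_equal(new_labels, labels):
            break                      # step 4: representatives no longer change
        labels = new_labels
        # step 2: recompute each centroid as the mean of its assigned points
        for j in range(k):
            members = points[labels == j]
            if len(members) > 0:
                centroids[j] = members.mean(axis=0)
    # TD^2: sum of squared distances of all points to their cluster centroid
    td2 = float(((points - centroids[labels]) ** 2).sum())
    return labels, centroids, td2
```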

K-Means Clustering: Example
[Figure: four iterations of k-means on a small 2D data set, showing how the assignments and the centroids change until convergence.]

K-Means Clustering: Discussion
Strengths:
- Relatively efficient: O(tkn), where n is the number of objects, k the number of clusters, and t the number of iterations; normally k, t << n.
- Easy to implement.
Weaknesses:
- Applicable only when a mean is defined.
- The number of clusters k must be specified in advance.
- Sensitive to noisy data and outliers.
- Clusters are forced to have convex shapes.
- Result and runtime depend strongly on the initial partition; the algorithm often terminates at a local optimum (however, methods for a good initialization exist).
Several variants of the k-means method exist, e.g. ISODATA, which extends k-means by methods to eliminate very small clusters and to merge and split clusters; the user has to specify additional parameters.

K-Medoid Clustering: Basic Idea
Objective: for a given k, find k representatives (medoids) in the data set so that, when each object is assigned to the closest representative, the sum of the distances between the representatives and the objects assigned to them is minimal.
[Figure: a data set with a poor clustering and an optimal clustering; the marked objects are the medoids.]

K-Medoid Clustering: Basic Notions
Requires only arbitrary objects and a distance function.
Medoid $m_C$: a representative object in cluster C.
Measure for the compactness of a cluster C: $TD(C) = \sum_{p \in C} dist(p, m_C)$
Measure for the compactness of a clustering: $TD = \sum_{i=1}^{k} TD(C_i)$

K-Medoid Clustering: PAM Algorithm [Kaufman and Rousseeuw, 1990]
Given k, the k-medoid algorithm PAM proceeds in five steps:
1. Select k objects arbitrarily as medoids (representatives); assign each remaining (non-medoid) object to the cluster with the nearest representative, and compute TD_current.
2. For each pair (medoid M, non-medoid N), compute the value TD_{N↔M}, i.e., the value of TD for the partition that results from swapping M with N.
3. Select the non-medoid N for which TD_{N↔M} is minimal.
4. If TD_{N↔M} is smaller than TD_current: swap N with M, set TD_current := TD_{N↔M}, and go back to Step 2.
5. Otherwise: stop.
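A minimal sketch of the PAM swap loop described above, assuming a precomputed n x n distance matrix; the helper name td and the greedy loop structure are illustrative.

```python
import numpy as np

def pam(dist, k, seed=0):
    """PAM sketch: start with random medoids, then repeatedly perform the best
    improving (medoid, non-medoid) swap until no swap lowers TD."""
    dist = np.asarray(dist, dtype=float)
    rng = np.random.default_rng(seed)
    n = len(dist)
    medoids = list(rng.choice(n, size=k, replace=False))

    def td(meds):
        # each object contributes its distance to the nearest medoid
        return float(dist[:, meds].min(axis=1).sum())

    td_current = td(medoids)                      # step 1
    while True:
        best_swap, best_td = None, td_current
        for i in range(k):                        # step 2: evaluate all swaps
            for cand_obj in range(n):
                if cand_obj in medoids:
                    continue
                cand = medoids[:i] + [cand_obj] + medoids[i + 1:]
                cand_td = td(cand)
                if cand_td < best_td:             # step 3: remember the best one
                    best_swap, best_td = cand, cand_td
        if best_swap is None:                     # step 5: no improvement, stop
            break
        medoids, td_current = best_swap, best_td  # step 4: perform the swap
    return medoids, td_current
```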

K-Medoid Clustering: CLARA and CLARANS
CLARA [Kaufman and Rousseeuw, 1990]
- Additional parameter: numlocal
- Draws numlocal samples of the data set, applies PAM to each sample, and returns the best of these sets of medoids as output.
CLARANS [Ng and Han, 1994]
- Two additional parameters: maxneighbor and numlocal
- At most maxneighbor pairs (medoid M, non-medoid N) are evaluated per local search.
- The first pair (M, N) for which TD_{N↔M} is smaller than TD_current is swapped (instead of the pair with the minimal value of TD_{N↔M}).
- The search for a local minimum with this procedure is repeated numlocal times.
Efficiency: runtime(CLARANS) < runtime(CLARA) < runtime(PAM)

CLARANS: Selection of Representatives
CLARANS(objects DB, Integer k, Real dist, Integer numlocal, Integer maxneighbor)
  for r from 1 to numlocal do
    randomly select k objects as medoids; i := 0;
    while i < maxneighbor do
      randomly select a pair (medoid M, non-medoid N);
      compute changeOfTD := TD_{N↔M} - TD;
      if changeOfTD < 0 then
        substitute M by N; TD := TD_{N↔M}; i := 0;
      else
        i := i + 1;
    if TD < TD_best then
      TD_best := TD; store the current medoids;
  return the best medoids;
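For comparison, a sketch of the CLARANS search following the pseudocode above: random (medoid, non-medoid) pairs are tried and the first improving swap is taken, with numlocal restarts. A precomputed distance matrix is assumed and the parameter defaults are illustrative.

```python
import numpy as np

def clarans(dist, k, numlocal=2, maxneighbor=50, seed=0):
    """CLARANS sketch: randomized local search over medoid sets, restarted
    numlocal times; returns the best medoids found and their TD value."""
    dist = np.asarray(dist, dtype=float)
    rng = np.random.default_rng(seed)
    n = len(dist)
    td = lambda meds: float(dist[:, meds].min(axis=1).sum())

    best_medoids, best_td = None, np.inf
    for _ in range(numlocal):
        medoids = list(rng.choice(n, size=k, replace=False))
        current_td = td(medoids)
        i = 0
        while i < maxneighbor:
            # randomly pick one medoid position and one non-medoid object
            pos = int(rng.integers(k))
            non_medoids = [o for o in range(n) if o not in medoids]
            cand = medoids.copy()
            cand[pos] = int(rng.choice(non_medoids))
            cand_td = td(cand)
            if cand_td < current_td:          # first improving swap is taken
                medoids, current_td, i = cand, cand_td, 0
            else:
                i += 1
        if current_td < best_td:              # keep the best local minimum
            best_medoids, best_td = medoids, current_td
    return best_medoids, best_td
```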

K-Medoid Clustering: Discussion
Strengths:
- Applicable to arbitrary objects plus a distance function.
- Not as sensitive to noisy data and outliers as k-means.
Weaknesses:
- Inefficient.
- Like k-means: the number of clusters k must be specified in advance, and clusters are forced to have convex shapes.
- Result and runtime of CLARA and CLARANS may vary largely due to the randomization.
- Quality: TD(CLARANS) is comparable to TD(PAM), usually slightly worse, since only a subset of the possible swaps is evaluated.

Initialization of Partitioning Clustering Methods [Fayyad, Reina and Bradley 1998]
- Draw m different (small) samples of the data set.
- Cluster each sample to obtain m estimates for the k representatives: A = (A_1, A_2, ..., A_k), B = (B_1, ..., B_k), ..., M = (M_1, ..., M_k).
- Then cluster the set DS = A ∪ B ∪ ... ∪ M m times, using each of the sets A, B, ..., M in turn as the initial partitioning.
- Use the best of these m clusterings as the initialization for the partitioning clustering of the whole data set.

Initialization of Partitioning Clustering Methods: Example
[Figure: the whole data set with k = 3 true cluster centers, m = 4 samples, and the combined set DS containing the sample-based estimates A1-A3, B1-B3, C1-C3, D1-D3.]
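A sketch of this sampling-based initialization, assuming k-means as the underlying partitioning method; the helper run_kmeans and the sample size are illustrative, not part of the original scheme.

```python
import numpy as np

def run_kmeans(points, centroids, max_iter=100):
    """One k-means run from the given initial centroids; returns (centroids, TD^2)."""
    points = np.asarray(points, dtype=float)
    centroids = np.asarray(centroids, dtype=float).copy()
    for _ in range(max_iter):
        d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([points[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(len(centroids))])
        if np.allclose(new, centroids):
            break
        centroids = new
    td2 = float(((points - centroids[labels]) ** 2).sum())
    return centroids, td2

def sample_based_init(points, k, m=4, sample_size=100, seed=0):
    """Cluster m small samples, pool the m centroid estimates into DS, cluster DS
    once per estimate, and return the best resulting centroids as initialization."""
    rng = np.random.default_rng(seed)
    points = np.asarray(points, dtype=float)
    estimates = []
    for _ in range(m):
        idx = rng.choice(len(points), size=min(sample_size, len(points)), replace=False)
        start = points[rng.choice(idx, size=k, replace=False)]   # random start on the sample
        centroids, _ = run_kmeans(points[idx], start)
        estimates.append(centroids)
    ds = np.vstack(estimates)                                    # DS = A ∪ B ∪ ... ∪ M
    results = [run_kmeans(ds, est) for est in estimates]         # m clusterings of DS
    best_centroids, _ = min(results, key=lambda r: r[1])
    return best_centroids
```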

Choice of the Parameter k
Idea for a method: determine a clustering for each k = 2, ..., n-1 and choose the best one.
But how can we measure the quality of a clustering? Such a measure has to be independent of k; the compactness measures TD^2 and TD decrease monotonically with increasing k.
Silhouette coefficient [Kaufman & Rousseeuw 1990]: a measure for the quality of a k-means or k-medoid clustering that is independent of k.

The Silhouette Coefficient
a(o): average distance between object o and the objects in its own cluster A.
b(o): average distance between object o and the objects in its second-closest cluster B.
The silhouette of o is defined as $s(o) = \dfrac{b(o) - a(o)}{\max\{a(o), b(o)\}}$ and measures how good the assignment of o to its cluster is:
- s(o) = -1: bad, o is on average closer to the members of B
- s(o) = 0: o lies in between A and B
- s(o) = +1: good assignment of o to its cluster A
Silhouette coefficient s_C of a clustering: the average silhouette over all objects.
Interpretation:
- 0.7 < s_C ≤ 1.0: strong structure
- 0.5 < s_C ≤ 0.7: medium structure
- 0.25 < s_C ≤ 0.5: weak structure
- s_C ≤ 0.25: no structure
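A direct sketch of the silhouette computation from a precomputed distance matrix; treating objects in singleton clusters as having s(o) = 0 is a common convention and an assumption here, not stated on the slides.

```python
import numpy as np

def silhouette_coefficient(dist, labels):
    """Average silhouette s(o) over all objects, given an n x n distance matrix
    and one cluster label per object."""
    dist = np.asarray(dist, dtype=float)
    labels = np.asarray(labels)
    clusters = np.unique(labels)
    s_values = []
    for o in range(len(dist)):
        own = labels[o]
        same = (labels == own)
        same[o] = False                          # exclude o itself
        if not same.any():                       # singleton cluster: s(o) = 0 (assumed convention)
            s_values.append(0.0)
            continue
        a = dist[o, same].mean()                 # a(o): average distance within own cluster
        # b(o): smallest average distance to any other cluster ("second-closest" cluster)
        b = min(dist[o, labels == c].mean() for c in clusters if c != own)
        s_values.append((b - a) / max(a, b))
    return float(np.mean(s_values))
```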

Density-Based Clustering
Basic idea: clusters are dense regions in the data space, separated by regions of lower object density.
Why density-based clustering?
[Figure: results of a k-medoid algorithm for k = 4, motivating the need for density-based clustering.]
Different density-based approaches exist (see textbook and papers); here we discuss the ideas underlying the DBSCAN algorithm.

Density-Based Clustering: Basic Concept
Intuition for the formalization of the basic idea:
- For any point in a cluster, the local point density around that point has to exceed some threshold.
- The set of points belonging to one cluster is spatially connected.
The local point density at a point p is defined by two parameters:
- ε: radius of the neighborhood of p, N_ε(p) := {q in data set D | dist(p, q) ≤ ε}
- MinPts: minimum number of points required in the neighborhood N_ε(p)
q is called a core object (or core point) w.r.t. ε, MinPts if |N_ε(q)| ≥ MinPts.
[Figure: a point q whose ε-neighborhood contains at least MinPts points, so q is a core object.]

Density-Based Clustering: Basic Definitions
p is directly density-reachable from q w.r.t. ε, MinPts if
1) p ∈ N_ε(q), and
2) q is a core object w.r.t. ε, MinPts.
density-reachable: the transitive closure of directly density-reachable.
p is density-connected to a point q w.r.t. ε, MinPts if there is a point o such that both p and q are density-reachable from o w.r.t. ε, MinPts.

Density-Based Clustering: Basic Definitions (continued)
Density-based cluster: a non-empty subset S of database D satisfying
1) Maximality: if p is in S and q is density-reachable from p, then q is in S.
2) Connectivity: each object in S is density-connected to all other objects in S.
Density-based clustering of a database D: {S_1, ..., S_n; N}, where S_1, ..., S_n are all density-based clusters in D and N = D \ (S_1 ∪ ... ∪ S_n) is called the noise (objects not belonging to any cluster).
[Figure: core, border, and noise objects for ε = 1.0 and MinPts = 5.]

Density-Based Clustering: DBSCAN Algorithm
Basic theorem:
- Each object in a density-based cluster C is density-reachable from any of its core objects.
- Nothing else is density-reachable from core objects.

for each o ∈ D do
  if o is not yet classified then
    if o is a core object then
      collect all objects density-reachable from o and assign them to a new cluster
    else
      assign o to NOISE

Density-reachable objects are collected by performing successive ε-neighborhood queries.
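The pseudocode above can be fleshed out as follows. This sketch uses brute-force ε-neighborhood queries instead of an index, and the label encoding (-1 for noise, -2 for unclassified) is an implementation choice.

```python
import numpy as np

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: expand a new cluster from every unclassified core object;
    objects not density-reachable from any core object end up as noise."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    UNCLASSIFIED, NOISE = -2, -1
    labels = np.full(n, UNCLASSIFIED)

    def region_query(i):
        # N_eps(i): all points within distance eps of point i (including i itself)
        return np.where(np.linalg.norm(points - points[i], axis=1) <= eps)[0]

    cluster_id = 0
    for o in range(n):
        if labels[o] != UNCLASSIFIED:
            continue
        neighbors = region_query(o)
        if len(neighbors) < min_pts:              # o is not a core object
            labels[o] = NOISE                     # may later become a border object
            continue
        labels[o] = cluster_id                    # start a new cluster from core object o
        seeds = [q for q in neighbors if q != o]
        while seeds:
            q = seeds.pop()
            if labels[q] == NOISE:                # noise becomes a border object
                labels[q] = cluster_id
            if labels[q] != UNCLASSIFIED:
                continue
            labels[q] = cluster_id
            q_neighbors = region_query(q)
            if len(q_neighbors) >= min_pts:       # q is a core object: expand further
                seeds.extend(q_neighbors)
        cluster_id += 1
    return labels
```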

DBSCAN Algorithm: Example
Parameters: ε = 2.0, MinPts = 3.
[Figure: step-by-step execution of DBSCAN on a small 2D example; unclassified objects are visited one after another, clusters are expanded from core objects, and objects that are not density-reachable from any core object are marked as noise.]

DBSCAN Algorithm: Performance
Runtime complexity (N_ε-query / DBSCAN overall):
- without index support (worst case): O(n) per query, O(n^2) overall
- with tree-based index support (e.g. R*-tree): O(log n) per query, O(n log n) overall
- with direct access to the neighborhood: O(1) per query, O(n) overall

Runtime comparison, DBSCAN (with R*-tree) vs. CLARANS, times in seconds:
Number of points: 1,252; 2,503; 3,910; 5,213; 6,256; 7,820; 8,937; 10,426; 12,512; 62,584
DBSCAN: 3; 7; 11; 16; 18; 25; 28; 33; 42; 233
CLARANS: 758; 3,026; 6,845; 11,745; 18,029; 29,826; 39,265; 60,540; 80,638; ?

Determining the Parameters ε and MinPts
A cluster must have a point density higher than that specified by ε and MinPts.
Idea: use the point density of the least dense cluster in the data set to set the parameters; but how can this density be determined?
Heuristic: look at the distances to the k-nearest neighbors.
[Figure: two points p and q with their 3-distance, i.e., the distance to their respective 3rd-nearest neighbor.]
Function k-distance(p): distance from p to its k-nearest neighbor.
k-distance plot: the k-distances of all objects, sorted in decreasing order.

Determining the Parameters ε and MinPts: Example
[Figure: a 3-distance plot over all objects; the "first valley" marks the border object.]
Heuristic method:
- Fix a value for MinPts (default: 2·d, where d is the dimension of the data space).
- The user selects a "border object" o from the MinPts-distance plot; ε is set to MinPts-distance(o).
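A small sketch of the k-distance heuristic: compute, for every object, the distance to its k-nearest neighbor and return these values sorted in decreasing order; the user then inspects the resulting plot for the "first valley". The brute-force distance computation is an implementation shortcut.

```python
import numpy as np

def k_distances(points, k=3):
    """k-distance values of all objects, sorted in decreasing order (the values
    that would be shown in a k-distance plot)."""
    points = np.asarray(points, dtype=float)
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    dists.sort(axis=1)                 # row i: distances from object i, ascending
    k_dist = dists[:, k]               # column 0 is the distance of i to itself (0.0)
    return np.sort(k_dist)[::-1]       # decreasing order, as in the plot
```

Setting ε to the k-distance of the chosen border object then corresponds to the last step of the heuristic.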

Determining the Parameters ε and MinPts: Problematic Example
[Figure: a data set with clusters A to G of largely differing densities (including sub-clusters D1, D2 and G1, G2, G3) and its 3-distance plot; the plot groups the objects of A, B, C; B, D, E; B, D, F, G; and D1, D2, G1, G2, G3 at different distance levels.]

Density-Based Clustering: Discussion
Advantages:
- Clusters can have arbitrary shape and size.
- The number of clusters is determined automatically.
- Clusters can be separated from surrounding noise.
- Can be supported by spatial index structures.
Disadvantages:
- The input parameters may be difficult to determine.
- In some situations the result is very sensitive to the input parameter setting.

From Partitioning to Hierarchical Clustering
Global parameters that separate all clusters with a partitioning clustering method may not exist:
- hierarchical cluster structure
- largely differing densities and sizes
[Figure: data sets with nested clusters and with clusters of very different density and size.]
A hierarchical clustering algorithm is needed in these situations.

Hierarchical Clustering: Basic Notions
Hierarchical decomposition of the data set (with respect to a given similarity measure) into a set of nested clusters.
The result is represented by a so-called dendrogram:
- Nodes in the dendrogram represent possible clusters.
- The dendrogram can be constructed bottom-up (agglomerative approach) or top-down (divisive approach).
[Figure: dendrogram over the objects a, b, c, d, e; the agglomerative approach proceeds from step 0 to step 4 (merging a b, then d e, then c d e, then a b c d e), the divisive approach in the opposite direction.]

Hierarchical Clustering: Example
Interpretation of the dendrogram:
- The root represents the whole data set.
- A leaf represents a single object of the data set.
- An internal node represents the union of all objects in its subtree.
- The height of an internal node represents the distance between its two child nodes.
[Figure: a small 2D data set with nine objects and the corresponding dendrogram; the vertical axis shows the distance between clusters.]

Agglomerative Hierarchical Clustering
1. Initially, each object forms its own cluster.
2. Compute all pairwise distances between the initial clusters (objects).
3. Merge the closest pair (A, B) in the set of the current clusters into a new cluster C = A ∪ B.
4. Remove A and B from the set of current clusters and insert C into it.
5. If the set of current clusters contains only C (i.e., if C represents all objects from the database): stop.
6. Otherwise: determine the distance between the new cluster C and all other clusters in the set of current clusters; go to step 3.
Requires a distance function for clusters (sets of objects); a sketch of the loop follows directly below, and the common linkage choices are listed on the next slide.
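A minimal sketch of the agglomerative loop with a pluggable cluster-distance function; single-link is shown, and the complete- and average-link variants defined on the next slide can be passed in the same way. The quadratic search over all cluster pairs is kept for clarity rather than efficiency.

```python
import numpy as np

def single_link(dist, cluster_a, cluster_b):
    """dist_sl(X, Y) = min over x in X, y in Y of dist(x, y)."""
    return min(dist[x, y] for x in cluster_a for y in cluster_b)

def agglomerative(dist, cluster_dist=single_link):
    """Agglomerative hierarchical clustering over a precomputed n x n distance
    matrix; returns the merge history (clusters merged and merge distance),
    i.e., a dendrogram in list form."""
    dist = np.asarray(dist, dtype=float)
    clusters = [[i] for i in range(len(dist))]     # step 1: one cluster per object
    merges = []
    while len(clusters) > 1:
        best = None                                # steps 2/6: find the closest pair
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = cluster_dist(dist, clusters[i], clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        d, i, j = best
        merges.append((clusters[i], clusters[j], d))
        merged = clusters[i] + clusters[j]         # step 3: C = A ∪ B
        clusters = [c for t, c in enumerate(clusters) if t not in (i, j)] + [merged]
    return merges                                  # step 5: stop when one cluster remains
```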

Single-Link Method and Variants
Given: a distance function dist(p, q) for database objects.
The following distance functions for clusters (i.e., sets of objects) X and Y are commonly used for hierarchical clustering:
- Single-Link: $dist\_sl(X, Y) = \min_{x \in X,\, y \in Y} dist(x, y)$
- Complete-Link: $dist\_cl(X, Y) = \max_{x \in X,\, y \in Y} dist(x, y)$
- Average-Link: $dist\_al(X, Y) = \dfrac{1}{|X| \cdot |Y|} \sum_{x \in X,\, y \in Y} dist(x, y)$

Density-Based Hierarchical Clustering
Observation: dense clusters are completely contained in less dense clusters.
[Figure: with MinPts = 3, the dense clusters C1 and C2 are obtained for the smaller radius ε2, while the larger radius ε1 yields the containing cluster C next to cluster D.]
Idea: process the objects in the "right" order and keep track of the point density in their neighborhood.

Core Distance and Reachability Distance
Parameters: "generating" distance ε, fixed value MinPts.
core-distance_{ε,MinPts}(o): the smallest distance such that o is a core object (if that distance is ≤ ε; undefined ("?") otherwise).
reachability-distance_{ε,MinPts}(p, o): the smallest distance such that p is directly density-reachable from o (if that distance is ≤ ε; undefined ("?") otherwise).
[Figure: an object o with its core-distance and the reachability-distances reachability-distance(p, o) and reachability-distance(q, o).]

The Algorithm OPTICS
Basic data structure: ControlList
- Memorizes the shortest reachability distance seen so far for each point (the "distance of a jump to that point").
Visit each point, always making the shortest possible jump.
Output: the order of the points, their core-distances, and their reachability-distances.

The Algorithm OPTICS (pseudocode)
foreach o ∈ Database            // initially, o.processed = false for all objects o
  if o.processed = false then
    insert (o, "?") into ControlList;
  while ControlList is not empty do
    select the first element (o, r_dist) from ControlList;
    retrieve N_ε(o) and determine c_dist := core-distance(o);
    set o.processed := true;
    write (o, r_dist, c_dist) to file;
    if o is a core object at any distance ≤ ε then
      foreach p ∈ N_ε(o) not yet processed do
        determine r_dist_p := reachability-distance(p, o);
        if (p, _) ∉ ControlList then
          insert (p, r_dist_p) into ControlList;
        else if (p, old_r_dist) ∈ ControlList and r_dist_p < old_r_dist then
          update (p, r_dist_p) in ControlList;
The ControlList is kept ordered by reachability-distance; the output is a cluster-ordered file.
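A sketch of OPTICS following the pseudocode above: a min-heap stands in for the ControlList, undefined ("?") distances are represented by infinity, and brute-force neighborhood queries replace index support. Names and the output format are illustrative.

```python
import heapq
import numpy as np

def optics(points, eps, min_pts):
    """Produces the cluster ordering together with the reachability-distance and
    core-distance of every object."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    processed = np.zeros(n, dtype=bool)
    reach = np.full(n, np.inf)                     # reachability-distance ("?" = inf)
    core = np.full(n, np.inf)                      # core-distance ("?" = inf)
    ordering = []

    def eps_neighborhood(i):
        d = np.linalg.norm(points - points[i], axis=1)
        return np.where(d <= eps)[0], d

    def core_distance(i):
        d = np.sort(np.linalg.norm(points - points[i], axis=1))
        return d[min_pts - 1] if d[min_pts - 1] <= eps else np.inf

    for start in range(n):
        if processed[start]:
            continue
        seeds = [(np.inf, start)]                  # plays the role of the ControlList
        while seeds:
            r, o = heapq.heappop(seeds)            # always make the shortest jump
            if processed[o]:
                continue                           # stale entry, already written out
            processed[o] = True
            core[o] = core_distance(o)
            reach[o] = r
            ordering.append(o)                     # write (o, r_dist, c_dist) to the output
            if core[o] == np.inf:                  # o is not a core object: no expansion
                continue
            nbrs, d = eps_neighborhood(o)
            for p in nbrs:
                if processed[p]:
                    continue
                new_r = max(core[o], d[p])         # reachability-distance(p, o)
                if new_r < reach[p]:               # insert or update in the ControlList
                    reach[p] = new_r
                    heapq.heappush(seeds, (new_r, p))
    return ordering, reach, core
```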

OPTICS: Properties
"Flat" density-based clusters w.r.t. ε* ≤ ε and MinPts can be extracted afterwards:
- Start a new cluster at an object o with core-distance(o) ≤ ε* and reachability-distance(o) > ε*.
- Continue the cluster while the reachability-distance is ≤ ε*.
[Figure: a small example data set with its cluster ordering, showing the core-distance and reachability-distance of each object.]
Performance: approximately the runtime of DBSCAN(ε, MinPts), i.e., O(n · runtime of an ε-neighborhood query):
- without spatial index support (worst case): O(n^2)
- with tree-based spatial index support: O(n log n)

OPTICS: The Reachability Plot
- Represents the density-based clustering structure.
- Easy to analyze.
- Independent of the dimension of the data.
[Figure: two data sets and their reachability plots (reachability distance plotted over the cluster ordering); valleys in the plot correspond to clusters.]
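A sketch of extracting such a flat clustering for a threshold ε* ≤ ε from the cluster ordering; it assumes the (ordering, reachability, core-distance) output of the OPTICS sketch above and labels noise with -1.

```python
def extract_flat_clusters(ordering, reach, core, eps_star):
    """Walk through the cluster ordering once: a reachability jump above eps*
    starts a new cluster if the object is a core object w.r.t. eps*, otherwise
    the object is noise; objects with reachability <= eps* join the current cluster."""
    labels = [-1] * len(ordering)                  # -1 = noise
    cluster_id = -1
    for pos, o in enumerate(ordering):
        if reach[o] > eps_star:
            if core[o] <= eps_star:                # start of a new cluster
                cluster_id += 1
                labels[pos] = cluster_id
        elif cluster_id >= 0:
            labels[pos] = cluster_id               # continue the current cluster
    return labels
```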

OPTICS: Parameter Sensitivity
OPTICS is relatively insensitive to the parameter settings; the results are good as long as the parameters are "just large enough".
[Figure: reachability plots of the same data set (clusters 1, 2, 3) for MinPts = 10, ε = 10; MinPts = 10, ε = 5; and MinPts = 2, ε = 10.]

Hierarchical Clustering: Discussion
Advantages:
- Does not require the number of clusters to be known in advance.
- No parameters (standard methods) or very robust parameters (OPTICS).
- Computes a complete hierarchy of clusters.
- Good result visualizations are integrated into the methods.
- A flat partition can be derived afterwards (e.g. via a cut through the dendrogram or the reachability plot).
Disadvantages:
- May not scale well.
- Runtime for the standard methods: O(n^2 log n^2).
- Runtime for OPTICS without index support: O(n^2).