Knowledge Discovery in Databases

Ludwig-Maximilians-Universität München, Institut für Informatik, Lehr- und Forschungseinheit für Datenbanksysteme
Lecture notes: Knowledge Discovery in Databases, Summer Semester 2012
Lecture 8: Clustering II
Lecture: Dr. Eirini Ntoutsi. Tutorials: Erich Schubert
http://www.dbs.ifi.lmu.de/cms/knowledge_discovery_in_databases_i_(kdd_i)

Sources
- Previous KDD I lectures at LMU (Johannes Aßfalg, Christian Böhm, Karsten Borgwardt, Martin Ester, Eshref Januzaj, Karin Kailing, Peer Kröger, Jörg Sander, Matthias Schubert, Arthur Zimek)
- Tan P.-N., Steinbach M., Kumar V., Introduction to Data Mining, Addison-Wesley, 2006
- Han J., Kamber M., Pei J., Data Mining: Concepts and Techniques, 3rd ed., Morgan Kaufmann, 2011

Outline
- Introduction
- A categorization of major clustering methods
- Hierarchical methods
- Density-based methods
- Grid-based methods (next lecture)
- Model-based methods (next lecture)
- Things you should know
- Homework/tutorial

Major clustering methods I
- Partitioning approaches: construct various partitions and then evaluate them by some criterion, e.g., minimizing the sum of squared errors. Typical methods: k-means, k-medoids, CLARANS
- Hierarchical approaches: create a hierarchical decomposition of the set of data (or objects) using some criterion. Typical methods: DIANA, AGNES, BIRCH, ROCK, CHAMELEON
- Density-based approaches: based on connectivity and density functions. Typical methods: DBSCAN, OPTICS, DenClue

Major clustering methods II
- Grid-based approaches: based on a multiple-level granularity structure. Typical methods: STING, WaveCluster, CLIQUE
- Model-based approaches: a model is hypothesized for each cluster, and the goal is to find the best fit of the data to the given model. Typical methods: EM, SOM, COBWEB
- Frequent pattern-based approaches: based on the analysis of frequent patterns. Typical methods: pCluster
- User-guided or constraint-based approaches: clustering that takes user-specified or application-specific constraints into account. Typical methods: COD (obstacles), constrained clustering

Hierarchical methods: idea
- Produce a set of nested clusters organized as a hierarchical tree
- Can be visualized as a dendrogram: a tree-like diagram that records the sequences of merges or splits
- The height at which two clusters are merged in the dendrogram reflects their distance
[Figure: nested clusters and the corresponding dendrogram]
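
As an illustration, a minimal sketch of building and plotting a dendrogram with SciPy; the six 2D points are made up for the example (the slide deck's own example dataset did not survive extraction):

    import numpy as np
    from scipy.cluster.hierarchy import linkage, dendrogram
    import matplotlib.pyplot as plt

    # Six made-up 2D points, one row per point
    X = np.array([[0.40, 0.53], [0.22, 0.38], [0.35, 0.32],
                  [0.26, 0.19], [0.08, 0.41], [0.45, 0.30]])

    Z = linkage(X, method='single')  # single-link agglomerative clustering
    dendrogram(Z)                    # merge heights reflect cluster distances
    plt.show()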

Strengths of hierarchical clustering
- No need to assume any particular number of clusters: any desired number of clusters can be obtained by cutting the dendrogram at the proper level
- The nested clusters may correspond to meaningful taxonomies, e.g., in the biological sciences (animal kingdom, phylogeny reconstruction, ...)

Hierarchical vs partitioning clustering
[Figure: the same points p1 ... p4 shown as nested clusters with a dendrogram vs. as a flat partitioning]
- Partitioning algorithms typically have global objectives
- Hierarchical clustering algorithms typically have local objectives

Hierarchical clustering methods
Two main types of hierarchical clustering:
- Agglomerative (e.g., AGNES): start with the points as individual clusters; at each step, merge the closest pair of clusters until only one cluster (or k clusters) is left
- Divisive (e.g., DIANA): start with one, all-inclusive cluster; at each step, split a cluster until each cluster contains a single point (or there are k clusters)
Traditional hierarchical algorithms use a similarity or distance matrix and merge or split one cluster at a time.
[Figure: dendrograms illustrating agglomerative (bottom-up) and divisive (top-down) clustering]

Agglomerative clustering algorithm
The more popular hierarchical clustering technique. The basic algorithm is straightforward:
1. Compute the proximity matrix
2. Let each data point be a cluster
3. Repeat
4.   Merge the two closest clusters
5.   Update the proximity matrix
6. Until only a single cluster remains
Key points (see the sketch below):
- the computation of the proximity of two clusters: different definitions of the inter-cluster distance distinguish the different algorithms (single link, complete link, ...)
- the update of the proximity matrix after cluster merges
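
A from-scratch sketch of steps 1-6 above, assuming Euclidean distances and single or complete link; the function name and structure are our own, and the O(n^3) loop is for illustration, not efficiency:

    import numpy as np
    from scipy.spatial.distance import cdist

    def agglomerative(X, k, linkage='single'):
        """Naive agglomerative clustering down to k clusters."""
        D = cdist(X, X)                          # step 1: proximity matrix
        clusters = [[i] for i in range(len(X))]  # step 2: each point is a cluster
        while len(clusters) > k:                 # steps 3-6
            best = None
            for a in range(len(clusters)):
                for b in range(a + 1, len(clusters)):
                    pair = D[np.ix_(clusters[a], clusters[b])]
                    d = pair.min() if linkage == 'single' else pair.max()
                    if best is None or d < best[0]:
                        best = (d, a, b)
            _, a, b = best
            clusters[a] += clusters.pop(b)       # merge the two closest clusters
        return clusters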

Starting situation
Start with clusters of individual points and a proximity matrix.
[Figure: points p1 ... p12 and the initial p1 ... p12 proximity matrix]

Intermediate situation I
After some merging steps, we have some clusters C1, ..., C5.
[Figure: clusters C1 ... C5 and the corresponding C1 ... C5 proximity matrix]

Intermediate situation II
We want to merge the two closest clusters (C2 and C5) and update the proximity matrix.
[Figure: clusters C1 ... C5, with C2 and C5 about to be merged]

After merging
The question is: how do we update the proximity matrix? Or, in other words, what is the similarity between two clusters?
[Figure: proximity matrix with a new row and column for the merged cluster C2 U C5, entries marked "?"]

Measures of inter-cluster similarity I
A variety of different measures:
- Single link (or MIN)
- Complete link (or MAX)
- Group average
- Distance between centroids
- Distance between medoids
- Other methods driven by an objective function, e.g., Ward's method uses the squared error

Measures of inter-cluster similarity II
Single link (or MIN): the smallest distance between an element in one cluster and an element in the other, i.e.,
$dis_{sl}(C_i, C_j) = \min_{x \in C_i,\, y \in C_j} d(x, y)$

Measures of inter-cluster similarity III
Complete link (or MAX): the largest distance between an element in one cluster and an element in the other, i.e.,
$dis_{cl}(C_i, C_j) = \max_{x \in C_i,\, y \in C_j} d(x, y)$

Measures of inter-cluster similarity IV
Group average: the average distance between an element in one cluster and an element in the other, i.e.,
$dis_{avg}(C_i, C_j) = \frac{1}{|C_i|\,|C_j|} \sum_{x \in C_i,\, y \in C_j} d(x, y)$

Measures of inter-cluster similarity V
Centroid: the distance between the centroids of the two clusters, i.e.,
$dis_{centroids}(C_i, C_j) = d(c_i, c_j)$, where the centroid of a cluster with points $p_1, \ldots, p_n$ is $c = \frac{1}{n} \sum_{i=1}^{n} p_i$
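
The measures above translate almost one-to-one into code; a sketch assuming Euclidean distance and clusters given as NumPy arrays with one point per row (the function name is our own):

    import numpy as np
    from scipy.spatial.distance import cdist

    def inter_cluster_distance(Ci, Cj, measure='single'):
        D = cdist(Ci, Cj)                 # all pairwise distances d(x, y)
        if measure == 'single':           # MIN
            return D.min()
        if measure == 'complete':         # MAX
            return D.max()
        if measure == 'average':          # group average
            return D.mean()
        if measure == 'centroid':         # distance between the two centroids
            return np.linalg.norm(Ci.mean(axis=0) - Cj.mean(axis=0))
        raise ValueError(measure)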

Example
[Figure: an example dataset of six 2D points p1 ... p6 and its Euclidean distance matrix; the following slides cluster this dataset]

Single link distance (MIN): discussion
- Similarity of two clusters is based on the two most similar (closest) points in the different clusters
- Determined by one pair of points, i.e., by one link in the proximity graph
[Figure: nested clusters and dendrogram produced by single link on the six-point example]

Single link distance (MIN): strengths
Can handle non-elliptical shapes.
[Figure: original points vs. the two clusters found]

Single link distance (MIN): limitations
Sensitive to noise and outliers; produces chain-like clusters.
[Figure: original points vs. the two clusters found]

Complete link distance (MAX): discussion
- Similarity of two clusters is based on the two least similar (most distant) points in the different clusters
- Again determined by one pair of points, i.e., by one link in the proximity graph
[Figure: nested clusters and dendrogram produced by complete link on the six-point example]

Complete link distance (MAX): strengths
Less susceptible to noise and outliers.
[Figure: original points vs. the two clusters found]

Complete link distance (MAX): limitations
Tends to break large clusters; biased towards spherical clusters.
[Figure: original points vs. the two clusters found]

Group average: discussion
- Proximity of two clusters is the average of the pairwise proximities between points in the two clusters
- Determined by all pairs of points in the two clusters
[Figure: nested clusters and dendrogram produced by group average on the six-point example]

Group average: strengths and limitations
A compromise between single and complete link.
- Strengths: less susceptible to noise and outliers
- Limitations: biased towards spherical clusters

Ward's method
Ward's method (Ward's minimum variance method):
- The proximity between two clusters is measured in terms of the increase in SSE that results from merging the two clusters
- At each step, merge the pair of clusters that leads to the minimum increase in total within-cluster variance after merging
- Like k-means, it tries to minimize the sum of squared distances of points from their cluster centroids
- Similar to group average if the distance between points is the squared distance
- Less susceptible to noise and outliers; biased towards spherical clusters
[Figure: nested clusters produced by Ward's method on the six-point example]
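
For reference (a standard closed form, not shown on the slide): the increase in SSE caused by merging clusters $A$ and $B$ can be computed directly from the centroids,

    $\Delta SSE(A, B) = \frac{|A|\,|B|}{|A| + |B|}\, \lVert c_A - c_B \rVert^2$

so the merging cost grows with both the cluster sizes and the squared distance between the centroids $c_A$ and $c_B$.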

Comparison of the different methods
[Figure: clusterings of the six-point example under single link (MIN), complete link (MAX), group average, and Ward's method]

Hierarchical methods: complexity
- O(N^2) space, since the proximity matrix is stored (N is the number of points)
- O(N^3) time in many cases: there are N steps, and at each step a proximity matrix of size N^2 must be searched and updated
- The time complexity can be reduced to O(N^2 log N) for some approaches

How to get a clustering from a dendrogram
A dendrogram is a tree of clusters. A clustering of the data objects is obtained by cutting the dendrogram at the desired level; each connected component then forms a cluster.
[Figure: dendrogram with a horizontal cut producing a flat clustering]
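
In SciPy this cut is a single call; a sketch continuing the linkage example from earlier (the cut height 0.15 is an arbitrary choice for illustration):

    from scipy.cluster.hierarchy import fcluster

    # Cut the dendrogram Z at height 0.15: merges above the cut are undone,
    # and each remaining connected component becomes one flat cluster.
    labels = fcluster(Z, t=0.15, criterion='distance')
    print(labels)   # one cluster id per point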

Hierarchical clustering: overview
- No knowledge of the number of clusters is required
- Produces a hierarchy of clusters, not a flat clustering; a single clustering can be obtained from the dendrogram
- Merging decisions are final: once a decision is made to combine two clusters, it cannot be undone
- Lack of a global objective function: decisions are local, at each step
- Different schemes have problems with one or more of the following: sensitivity to noise and outliers; breaking large clusters; difficulty handling different-sized clusters and convex shapes; inefficiency, especially for large datasets

Bisecting k-means
A hybrid of k-means and hierarchical clustering.
Idea: first split the set of points into two clusters, select one of these clusters for further splitting, and so on, until k clusters are obtained.
Which cluster to split? E.g., the one with the largest SSE, or a choice based on both SSE and size. A sketch of the procedure is given below.
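
Since the slide's pseudocode and example figures did not survive extraction, here is a minimal sketch of the idea using scikit-learn's KMeans, splitting the cluster with the largest SSE (the function name is our own):

    import numpy as np
    from sklearn.cluster import KMeans

    def bisecting_kmeans(X, k):
        clusters = [X]
        while len(clusters) < k:
            # pick the cluster with the largest SSE for further splitting
            sse = [((C - C.mean(axis=0)) ** 2).sum() for C in clusters]
            C = clusters.pop(int(np.argmax(sse)))
            labels = KMeans(n_clusters=2, n_init=10).fit_predict(C)
            clusters += [C[labels == 0], C[labels == 1]]
        return clusters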

Density-based clustering
Clusters are regions of high density surrounded by regions of low density (noise). Clustering is based on a local, density-based cluster criterion, such as density-connected points.
Major features:
- Discovers clusters of arbitrary shape
- Handles noise
- Needs only one scan over the data
- Needs density parameters as a termination condition
Several interesting studies:
- DBSCAN: Ester et al. (KDD'96)
- OPTICS: Ankerst et al. (SIGMOD'99)
- DENCLUE: Hinneburg & Keim (KDD'98)
- CLIQUE: Agrawal et al. (SIGMOD'98) (more grid-based)

The notion of density
Density is measured locally in the Eps-neighborhood (or ε-neighborhood) of each point: density = the number of points within a specified radius Eps (the point itself included).
[Figure: the ε-neighborhood of a point p, containing 9 points]
Density depends on the specified radius:
- For an extremely small radius, all points have a density of 1 (only themselves)
- For an extremely large radius, all points have a density of N (the size of the dataset)

DBSCAN: basic concepts
Consider a dataset D of objects to be clustered. Two parameters:
- Eps (or ε): maximum radius of the neighborhood
- MinPts: minimum number of points in the Eps-neighborhood of a point
Eps-neighborhood of a point p in D: N_Eps(p) = { q ∈ D | dist(p, q) ≤ Eps }
[Figure: the Eps-neighborhood of a point p]

Core points vs border points vs noise points
Let D be a dataset. Given a radius parameter Eps and a density parameter MinPts, we distinguish (a sketch in code follows below):
- Core points: a point p is a core point if it has at least MinPts points within radius Eps, i.e., |N_Eps(p)| = |{ q | dist(p, q) ≤ Eps }| ≥ MinPts. These are points in the interior of a cluster.
- Border points: a border point has fewer than MinPts points within Eps, but lies in the neighborhood of a core point.
- Noise points: any point that is neither a core point nor a border point.
[Figure: a core point, a border point, and a noise point with their ε-neighborhoods; MinPts = 9]
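
These definitions translate directly into code; a brute-force sketch (the helper name point_types is our own; not an efficient implementation):

    import numpy as np
    from scipy.spatial.distance import cdist

    def point_types(X, eps, min_pts):
        D = cdist(X, X)
        neighbors = D <= eps                      # N_Eps includes the point itself
        core = neighbors.sum(axis=1) >= min_pts   # |N_Eps(p)| >= MinPts
        # border: not core, but within Eps of at least one core point
        border = ~core & (neighbors & core[None, :]).any(axis=1)
        return np.where(core, 'core', np.where(border, 'border', 'noise'))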

Example
[Figure: an example dataset with core, border, and noise points marked]

Core, border and noise points
[Figure: original points and their point types (core, border, noise) for Eps = 10, MinPts = 4]

Direct reachability
Directly density-reachable: a point p is directly density-reachable from a point q w.r.t. Eps, MinPts if
- p belongs to N_Eps(q), and
- q is a core point, i.e., |N_Eps(q)| ≥ MinPts

Reachability
Density-reachable: a point p is density-reachable from a point q w.r.t. Eps, MinPts if there is a chain of points p_1, ..., p_n with p_1 = q and p_n = p such that p_{i+1} is directly density-reachable from p_i.

Connectivity
Density-connected: a point p is density-connected to a point q w.r.t. Eps, MinPts if there is a point o such that both p and q are density-reachable from o w.r.t. Eps and MinPts.

Cluster
A cluster is a maximal set of density-connected points.

DBSCAN algorithm
- Arbitrarily select a point p
- Retrieve all points density-reachable from p w.r.t. Eps and MinPts
- If p is a core point, a cluster is formed
- If p is a border point, no points are density-reachable from p, and DBSCAN visits the next point of the database
- Continue the process until all of the points have been processed

DBSCAN pseudocode I

    DBSCAN(Dataset DB, Real Eps, Integer MinPts)
        // initially all objects are unclassified:
        // o.ClId = UNCLASSIFIED for all o in DB
        ClusterId := nextId(NOISE);
        for i from 1 to |DB| do
            Object := DB.get(i);
            if Object.ClId = UNCLASSIFIED then
                if ExpandCluster(DB, Object, ClusterId, Eps, MinPts) then
                    ClusterId := nextId(ClusterId);

DBSCAN pseudocode II

    ExpandCluster(DB, StartObject, ClusterId, Eps, MinPts): Boolean
        seeds := RQ(StartObject, Eps);    // range query: all points within Eps
        if |seeds| < MinPts then          // StartObject is not a core object
            StartObject.ClId := NOISE;
            return false;
        else                              // StartObject is a core object
            forall o in seeds do o.ClId := ClusterId;
            remove StartObject from seeds;
            while seeds != Empty do
                select an object o from the set of seeds;
                Neighborhood := RQ(o, Eps);
                if |Neighborhood| >= MinPts then   // o is a core object
                    for i from 1 to |Neighborhood| do
                        p := Neighborhood.get(i);
                        if p.ClId in {UNCLASSIFIED, NOISE} then
                            if p.ClId = UNCLASSIFIED then
                                add p to seeds;
                            p.ClId := ClusterId;
                    end for;
                end if;
                remove o from seeds;
            end while;
        end if;
        return true;
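
For readers who want to run the algorithm, a compact Python analogue of the two routines above (brute-force range queries via a full distance matrix; a sketch of the same Eps/MinPts semantics, not a tuned implementation; the function name and label constants are our own):

    import numpy as np
    from scipy.spatial.distance import cdist

    UNCLASSIFIED, NOISE = -2, -1

    def dbscan(X, eps, min_pts):
        """Return one cluster id per point; -1 marks noise."""
        D = cdist(X, X)
        labels = np.full(len(X), UNCLASSIFIED)
        cluster_id = 0
        for i in range(len(X)):
            if labels[i] != UNCLASSIFIED:
                continue
            seeds = list(np.flatnonzero(D[i] <= eps))   # RQ(i, Eps)
            if len(seeds) < min_pts:                    # i is not a core point
                labels[i] = NOISE                       # may become border later
                continue
            labels[seeds] = cluster_id                  # start a new cluster
            seeds.remove(i)
            while seeds:
                o = seeds.pop()
                neighborhood = np.flatnonzero(D[o] <= eps)
                if len(neighborhood) >= min_pts:        # o is a core point
                    for p in neighborhood:
                        if labels[p] in (UNCLASSIFIED, NOISE):
                            if labels[p] == UNCLASSIFIED:
                                seeds.append(p)
                            labels[p] = cluster_id
            cluster_id += 1
        return labels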

Complexity
For a dataset D consisting of n points, the time complexity of DBSCAN is O(n × time to find the points in the Eps-neighborhood).
- Worst case: O(n^2)
- In low-dimensional spaces: O(n log n), if efficient spatial data structures (e.g., kd-trees) are used for retrieving all points within a given distance of a specified point
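
For instance, the brute-force range query in the sketch above can be swapped for an indexed one; a sketch with scikit-learn's KDTree (X and eps as in the earlier sketch):

    from sklearn.neighbors import KDTree

    tree = KDTree(X)                             # build the spatial index once
    neighborhoods = tree.query_radius(X, r=eps)  # all Eps-neighborhoods at once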

When does DBSCAN work well?
- Resistant to noise
- Can handle clusters of different shapes and sizes
[Figure: original points and the clusters found by DBSCAN]

When does DBSCAN not work well?
- Varying densities
- High-dimensional data
[Figure: original points and the DBSCAN results for (MinPts=4, Eps=9.75) and (MinPts=4, Eps=9.92)]

DBSCAN: determining Eps and MinPts
The idea: for points in a cluster, the k-th nearest neighbors are at roughly the same distance, whereas noise points have their k-th nearest neighbor at a farther distance. So, plot the sorted distance of every point to its k-th nearest neighbor; the sharp bend ("knee") in this plot suggests a suitable Eps (see the sketch below).
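
A sketch of this k-distance plot with scikit-learn (X as in the earlier sketches; k = 4 follows the common MinPts = 4 heuristic for 2D data):

    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.neighbors import NearestNeighbors

    k = 4
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)  # +1: each point is its own 0-NN
    dist, _ = nn.kneighbors(X)
    k_dist = np.sort(dist[:, k])[::-1]               # k-th NN distance, sorted

    plt.plot(k_dist)
    plt.xlabel('points sorted by k-distance')
    plt.ylabel('distance to %d-th nearest neighbor' % k)
    plt.show()                                       # the knee suggests a good Eps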

OPTICS
We will discuss OPTICS next time.

Things you should know
- Hierarchical methods: agglomerative vs divisive, cluster comparison measures, bisecting k-means
- Density-based methods: DBSCAN

Homework / Tutorial
Tutorial: tutorial this Thursday on clustering.
Homework:
- Try hierarchical clustering in Weka and Elki; implement your own hierarchical clusterer; try the different cluster similarity measures
- Try density-based clustering in Elki and Weka; implement your own DBSCAN; experiment with different Eps, MinPts parameters
Suggested reading:
- Tan P.-N., Steinbach M., Kumar V., Introduction to Data Mining, Addison-Wesley, 2006 (Chapter 8)
- Han J., Kamber M., Pei J., Data Mining: Concepts and Techniques, 3rd ed., Morgan Kaufmann, 2011 (Chapter 10)