Finding Clusters


Types of clustering approaches:
Linkage based, e.g. hierarchical clustering
Clustering by partitioning, e.g. k-means
Density based clustering, e.g. DBSCAN
Grid based clustering

Hierarchical Clustering

Hierarchical clustering
[Figure: two-dimensional MDS (Sammon mapping) representation of the Iris data set; points are coloured by species (Iris setosa, Iris versicolor, Iris virginica).]
In this representation of the Iris data set, two clusters can be identified. (The colours, indicating the species of the flowers, are ignored here.)

Hierarchical clustering
Hierarchical clustering builds clusters step by step.
Usually a bottom-up strategy is applied: first each data object is considered a separate cluster, and then, step by step, clusters that are close to each other are joined. This approach is called agglomerative hierarchical clustering.
In contrast to agglomerative hierarchical clustering, divisive hierarchical clustering starts with the whole data set as a single cluster and then divides clusters step by step into smaller clusters.
In order to decide which data objects should belong to the same cluster, a (dis-)similarity measure is needed.
Note: We do not need to have access to the features themselves; all that is needed for hierarchical clustering is an n × n matrix [d_ij], where d_ij is the (dis-)similarity of data objects i and j (n is the number of data objects).

Hierarchical clustering: Dissimilarity matrix
The dissimilarity matrix [d_ij] should at least satisfy the following conditions:
d_ij ≥ 0, i.e. dissimilarity cannot be negative.
d_ii = 0, i.e. each data object is completely similar to itself.
d_ij = d_ji, i.e. data object i is (dis-)similar to data object j to the same degree as data object j is (dis-)similar to data object i.
It is often useful if the dissimilarity is a (pseudo-)metric, satisfying also the triangle inequality d_ik ≤ d_ij + d_jk.

Agglomerative hierarchical clustering: Algorithm
Input: n × n dissimilarity matrix [d_ij].
1. Start with n clusters: each data object forms its own cluster.
2. Reduce the number of clusters by joining the two clusters that are most similar (least dissimilar).
3. Repeat step 2 until only one cluster is left, containing all data objects.
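
To make the procedure concrete, here is a minimal Python sketch (not part of the original slides) that runs agglomerative clustering on a precomputed dissimilarity matrix with SciPy; the toy data set is the one used in the example later in this section, and SciPy is assumed to be available.

```python
# Minimal sketch: agglomerative clustering from an n x n dissimilarity matrix.
import numpy as np
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage

# Toy 1-dimensional data set (also used in a later example): {2, 12, 16, 25, 29, 45}.
x = np.array([2.0, 12.0, 16.0, 25.0, 29.0, 45.0])

# Symmetric dissimilarity matrix [d_ij], here simply the absolute differences.
D = np.abs(x[:, None] - x[None, :])

# SciPy expects the condensed (upper-triangular) form of the matrix.
condensed = squareform(D)

# Step-by-step merging of the two least dissimilar clusters.
# method can also be "complete", "average", "centroid" or "ward"
# (the latter two implicitly assume Euclidean distances).
Z = linkage(condensed, method="single")
print(Z)  # each row: the two merged clusters, their distance, size of the new cluster
```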

Measuring dissimilarity between clusters
The dissimilarity between two clusters that contain only one data object each is simply the dissimilarity of the two data objects as specified in the dissimilarity matrix [d_ij].
But how do we compute the dissimilarity between clusters that contain more than one data object?

Measuring dissimilarity between clusters
Centroid: distance between the centroids (mean value vectors) of the two clusters. (Requires that the mean vector can be computed!)
Average linkage: average dissimilarity between all pairs of points of the two clusters.
Single linkage: dissimilarity between the two most similar data objects of the two clusters.
Complete linkage: dissimilarity between the two most dissimilar data objects of the two clusters.


Measuring dissimilarity between clusters
Single linkage can follow chains in the data (which may be desirable in certain applications).
Complete linkage leads to very compact clusters.
Average linkage also tends clearly towards compact clusters.

[Figure: example clusterings obtained with single linkage and with complete linkage.]

Measuring dissimilarity between clusters
Ward's method is another strategy for merging clusters. In contrast to single, complete or average linkage, it takes the number of data objects in each cluster into account.

Measuring dissimilarity between clusters
The updated dissimilarity between the newly formed cluster C' ∪ C'' and a cluster C is computed in the following way:
single linkage:   d(C' ∪ C'', C) = min{ d(C', C), d(C'', C) }
complete linkage: d(C' ∪ C'', C) = max{ d(C', C), d(C'', C) }
average linkage:  d(C' ∪ C'', C) = ( |C'| d(C', C) + |C''| d(C'', C) ) / ( |C'| + |C''| )
Ward:             d(C' ∪ C'', C) = ( (|C'| + |C|) d(C', C) + (|C''| + |C|) d(C'', C) − |C| d(C', C'') ) / ( |C'| + |C''| + |C| )
centroid:         d(C' ∪ C'', C) = distance between the mean vectors of C' ∪ C'' and C. (If only a metric is given, the mean vectors usually have to be computed explicitly.)
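
To make the update rules tangible, the following small Python sketch (illustrative only, with made-up cluster sizes and distances) implements them for single, complete and average linkage and for Ward's method.

```python
# Sketch of the linkage update rules; d1 = d(C', C), d2 = d(C'', C), d12 = d(C', C''),
# n1 = |C'|, n2 = |C''|, n = |C| (all values below are made up).

def single_linkage(d1, d2):
    return min(d1, d2)

def complete_linkage(d1, d2):
    return max(d1, d2)

def average_linkage(d1, d2, n1, n2):
    return (n1 * d1 + n2 * d2) / (n1 + n2)

def ward(d1, d2, d12, n1, n2, n):
    return ((n1 + n) * d1 + (n2 + n) * d2 - n * d12) / (n1 + n2 + n)

print(single_linkage(2.0, 5.0))           # 2.0
print(complete_linkage(2.0, 5.0))         # 5.0
print(average_linkage(2.0, 5.0, 3, 1))    # 2.75
print(ward(2.0, 5.0, 1.0, 3, 1, 2))       # (5*2.0 + 3*5.0 - 2*1.0) / 6 ≈ 3.83
```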

Dendrograms
The cluster merging process arranges the data points in a binary tree.
Draw the data tuples at the bottom or on the left (equally spaced if they are multi-dimensional).
Draw a connection between clusters that are merged, with the distance of the connection to the data points representing the distance between the clusters.

Hierarchical clustering: Example
Clustering of the one-dimensional data set {2, 12, 16, 25, 29, 45}.
All three approaches to measuring the distance between clusters (centroid, single linkage and complete linkage) lead to different dendrograms.

[Figures: dendrograms for this data set obtained with the centroid, single linkage and complete linkage methods.]

Dendrograms
[Further example dendrograms (figures only).]

Choosing the right clusters
Simplest approach: Specify a minimum desired distance between clusters. Stop merging clusters when the closest two clusters are farther apart than this distance.
Visual approach: Merge clusters until all data points are combined into one cluster. Draw the dendrogram and find a good cut level. Advantage: the cut need not be strictly horizontal.
More sophisticated approaches: Analyse the sequence of distances in the merging process. Try to find a step in which the distance between the two merged clusters is considerably larger than the distance of the previous step. Several heuristic criteria exist for this step selection.
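
The "minimum desired distance" approach can be tried out with SciPy's fcluster; a hedged sketch (the threshold of 15 is arbitrary, SciPy assumed available):

```python
# Sketch: stop merging once clusters are farther apart than a chosen distance.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

x = np.array([[2.0], [12.0], [16.0], [25.0], [29.0], [45.0]])
Z = linkage(x, method="complete")

# Cut the dendrogram at distance 15: only merges below this distance are kept.
labels = fcluster(Z, t=15.0, criterion="distance")
print(labels)  # cluster label of each data point
```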

Heatmaps
A heatmap combines a dendrogram resulting from clustering the data objects, a dendrogram resulting from clustering the attributes, and colours indicating the values of the attributes.
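
Such a plot can be produced, for example, with seaborn's clustermap; a minimal sketch, assuming seaborn and matplotlib are installed and using a small random data set rather than the data from the slides:

```python
# Sketch: heatmap with row and column dendrograms.
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
data = rng.normal(size=(10, 4))  # 10 data objects, 4 attributes (made up)

# Rows (data objects) and columns (attributes) are clustered,
# attribute values are shown as colours.
sns.clustermap(data, method="average", metric="euclidean", cmap="viridis")
plt.show()
```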

Example: Heatmap and dendrogram
[Figure: heatmap of a small example data set with row and column dendrograms and a colour key histogram.]

Example: Heatmap and dendrogram
[Figure: heatmap of a larger example data set with row and column dendrograms and a colour key histogram.]

Example: Heatmap and dendrogram
[Figure: heatmap of another example data set with row and column dendrograms and a colour key histogram.]

Iris Data: Heatmap and dendrogram
[Figure: heatmap of the Iris data (attributes sl, sw, pl, pw) with row and column dendrograms and a colour key histogram.]

Divisive hierarchical clustering
The top-down approach of divisive hierarchical clustering is rarely used.
In agglomerative clustering the minimum of the pairwise dissimilarities has to be determined, leading to a quadratic complexity in each step (quadratic in the number of clusters still present in the corresponding step).
In divisive clustering, for each cluster all possible splits would have to be considered. In the first step alone, there are 2^(n−1) − 1 possible splits, where n is the number of data objects.

What is Similarity?

How to cluster these objects? [three figure slides]

Clustering example [figure 1 of 3]

Clustering example [figure 2 of 3]

Clustering example [figure 3 of 3]

Scaling
The previous three slides show the same data set. In the second slide, the unit on the x-axis was changed to centi-units. In the third slide, the unit on the y-axis was changed to centi-units.
Clusters should not depend on the measurement unit!
Therefore, some kind of normalisation (see the chapter on data preparation) should be carried out before clustering.
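
A minimal sketch of one common normalisation, the z-score transformation (values made up):

```python
# Sketch: z-score normalisation so that clusters do not depend on measurement units.
import numpy as np

X = np.array([[1.0, 100.0],
              [2.0, 300.0],
              [3.0, 200.0]])  # two attributes on very different scales (made up)

# Subtract the column mean, divide by the column standard deviation.
X_z = (X - X.mean(axis=0)) / X.std(axis=0)
print(X_z)  # every column now has mean 0 and standard deviation 1
```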

Complex Similarities: An Example
A few Adrenalin-like drug candidates: Adrenalin and the candidates (B), (C), (D), (E). [Figures: chemical structures.]

Similarity: polarity.

Dissimilarity: hydrophobic vs. hydrophilic.

Similar to Adrenalin... but some cross the blood-brain barrier: Adrenalin, Amphetamin (Speed), Ephedrin, Dopamin, MDMA (Ecstasy).

Similarity Measures

Notion of (dis-)similarity: Numerical attributes
Various choices of dissimilarity between two numerical vectors x = (x_1, ..., x_n) and y = (y_1, ..., y_n) (Minkowski, Euclidean, Manhattan, Chebyshev, cosine, Tanimoto, Pearson):
Minkowski (L_p):   d_p(x, y) = ( Σ_{i=1}^n |x_i − y_i|^p )^(1/p)
Euclidean (L_2):   d_E(x, y) = sqrt( (x_1 − y_1)^2 + ... + (x_n − y_n)^2 )
Manhattan (L_1):   d_M(x, y) = |x_1 − y_1| + ... + |x_n − y_n|
Chebyshev (L_∞):   d_∞(x, y) = max{ |x_1 − y_1|, ..., |x_n − y_n| }
Cosine:            d_C(x, y) = 1 − (x · y) / ( ||x|| ||y|| )
Tanimoto:          d_T(x, y) = 1 − (x · y) / ( ||x||^2 + ||y||^2 − x · y )
Pearson:           Euclidean distance of the z-score transformed x and y
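
These measures are straightforward to compute; a short NumPy sketch with two illustrative vectors:

```python
# Sketch: the dissimilarity measures above for two numerical vectors (toy values).
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 0.0, 3.0])

d_manhattan = np.sum(np.abs(x - y))                    # L_1
d_euclidean = np.sqrt(np.sum((x - y) ** 2))            # L_2
d_chebyshev = np.max(np.abs(x - y))                    # L_inf
p = 3
d_minkowski = np.sum(np.abs(x - y) ** p) ** (1.0 / p)  # L_p
d_cosine = 1.0 - np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))

print(d_manhattan, d_euclidean, d_chebyshev, d_minkowski, d_cosine)
```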

Notion of (dis-)similarity: Binary attributes
The two values (e.g. 0 and 1) of a binary attribute can be interpreted as some property being absent (0) or present (1). In this sense, a vector of binary attributes can be interpreted as the set of properties that the corresponding object has.
Example:
The binary vector (0, 1, 1, 0, 1) corresponds to the set of properties {a_2, a_3, a_5}.
The binary vector (0, 0, 0, 0, 0) corresponds to the empty set.
The binary vector (1, 1, 1, 1, 1) corresponds to the set {a_1, a_2, a_3, a_4, a_5}.

Notion of (dis-)similarity: Binary attributes
Dissimilarity measures for two vectors of binary attributes. Each data object is represented by the corresponding set of properties that are present. With
b = number of properties that hold in both records,
n = number of properties that hold in neither record,
x = number of properties that hold in only one of the two records,
and X, Y the property sets of the two records and Ω the set of all properties:
simple match:  d_S = 1 − (b + n) / (b + n + x)
Russel & Rao:  d_R = 1 − b / (b + n + x) = 1 − |X ∩ Y| / |Ω|
Jaccard:       d_J = 1 − b / (b + x) = 1 − |X ∩ Y| / |X ∪ Y|
Dice:          d_D = 1 − 2b / (2b + x) = 1 − 2 |X ∩ Y| / ( |X| + |Y| )
Example: for the binary vectors 101000 and 111000, i.e. X = {a_1, a_3} and Y = {a_1, a_2, a_3}: b = 2, n = 3, x = 1, giving d_S ≈ 0.17, d_R ≈ 0.67, d_J ≈ 0.33, d_D = 0.20.
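
The example row can be reproduced with a few lines of Python (a sketch for the two binary vectors above):

```python
# Sketch: dissimilarities for the binary vectors 101000 and 111000.
x_vec = [1, 0, 1, 0, 0, 0]
y_vec = [1, 1, 1, 0, 0, 0]

b = sum(1 for a, c in zip(x_vec, y_vec) if a == 1 and c == 1)  # property in both
n = sum(1 for a, c in zip(x_vec, y_vec) if a == 0 and c == 0)  # property in neither
x = sum(1 for a, c in zip(x_vec, y_vec) if a != c)             # property in only one

d_simple_match = 1 - (b + n) / (b + n + x)  # 1/6
d_russel_rao   = 1 - b / (b + n + x)        # 2/3
d_jaccard      = 1 - b / (b + x)            # 1/3
d_dice         = 1 - 2 * b / (2 * b + x)    # 0.20
print(b, n, x, d_simple_match, d_russel_rao, d_jaccard, d_dice)
```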

Notion of (dis-)similarity: Nominal attributes
Nominal attributes may be transformed into a set of binary attributes, each of them indicating one particular value of the attribute (1-of-n coding).
Example: attribute Manufacturer with the values BMW, Chrysler, Dacia, Ford, Volkswagen:
Volkswagen -> 00001
Dacia      -> 01000
Ford       -> 00100
Then one of the dissimilarity measures for binary attributes can be applied.
Another way to measure similarity between two vectors of nominal attributes is to compute the proportion of attributes where both vectors have the same value, leading to the Russel & Rao dissimilarity measure.
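
A small sketch of 1-of-n coding and of the "proportion of differing attributes" idea; the bit order below is simply alphabetical and was chosen for this illustration, so the codes differ from the table above.

```python
# Sketch: 1-of-n (one-hot) coding of a nominal attribute.
MANUFACTURERS = ["BMW", "Chrysler", "Dacia", "Ford", "Volkswagen"]

def one_hot(value, values=MANUFACTURERS):
    # one binary attribute per possible value
    return [1 if v == value else 0 for v in values]

print(one_hot("Volkswagen"))  # [0, 0, 0, 0, 1]
print(one_hot("Dacia"))       # [0, 0, 1, 0, 0]

# Proportion of attributes in which two records differ
# (the Russel & Rao idea mentioned above).
def nominal_dissimilarity(record_a, record_b):
    return sum(1 for a, b in zip(record_a, record_b) if a != b) / len(record_a)

print(nominal_dissimilarity(["Dacia", "red"], ["Dacia", "blue"]))  # 0.5
```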

Prototype Based Clustering

Prototype Based Clustering
Given: a data set of size n.
Return: a set of typical examples (prototypes) of size k << n.

k-means clustering
Choose a number k of clusters to be found (user input).
Initialize the cluster centres randomly (for instance, by randomly selecting k data points).
Data point assignment: assign each data point to the cluster centre that is closest to it (i.e. closer than any other cluster centre).
Cluster centre update: compute new cluster centres as the mean vectors of the assigned data points. (Intuitively: centre of gravity if each data point has unit weight.)

k-means clustering
Repeat these two steps (data point assignment and cluster centre update) until the cluster centres do not change anymore.
It can be shown that this scheme must converge, i.e. the update of the cluster centres cannot go on forever.

k-means clustering
Aim: minimize the objective function
f = Σ_{i=1}^{k} Σ_{j=1}^{n} u_ij d_ij
under the constraints u_ij ∈ {0, 1} and Σ_{i=1}^{k} u_ij = 1 for all j = 1, ..., n,
where u_ij indicates whether data object x_j is assigned to cluster i and d_ij is the distance between x_j and the centre of cluster i.

Alternating optimization
Assuming the cluster centres to be fixed, u_ij = 1 should be chosen for the cluster i to which data object x_j has the smallest distance, in order to minimize the objective function.
Assuming the assignments to the clusters to be fixed, each cluster centre should be chosen as the mean vector of the data objects assigned to the cluster, in order to minimize the objective function.
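
As an illustration of this alternating scheme (not part of the slides), a compact NumPy sketch of k-means; the data, the value of k and the helper name kmeans are made up, and empty clusters are not handled:

```python
# Sketch: k-means by alternating data point assignment and centre update.
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    # initialise the centres by picking k random data points
    centres = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # assignment step: nearest centre for every data point
        dists = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # update step: each centre becomes the mean of its assigned points
        # (no handling of empty clusters in this sketch)
        new_centres = np.array([X[labels == i].mean(axis=0) for i in range(k)])
        if np.allclose(new_centres, centres):
            break  # centres no longer change: converged
        centres = new_centres
    return centres, labels

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=m, size=(20, 2)) for m in (0.0, 5.0)])  # two blobs
centres, labels = kmeans(X, k=2)
print(centres)
```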

k-means clustering: Example
[Figure sequence: alternating data point assignment and cluster centre update over several iterations until the centres no longer change.]

k-means clustering: Local minima
Clustering is successful in this example: the clusters found are those that would have been formed intuitively.
Convergence is achieved after only 5 steps. (This is typical: convergence is usually very fast.)
However, the clustering result is fairly sensitive to the initial positions of the cluster centres. With a bad initialisation, clustering may fail (the alternating update process gets stuck in a local minimum).
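
In practice this sensitivity is usually countered by running k-means several times from different random initialisations and keeping the best result; a hedged sketch using scikit-learn's n_init parameter (scikit-learn assumed available, data made up):

```python
# Sketch: several random restarts reduce the risk of a bad local minimum.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(30, 2)) for c in (0.0, 4.0, 8.0)])

# n_init=10: run the alternating scheme from 10 random initialisations and keep
# the solution with the smallest objective function value (inertia).
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(km.inertia_)
print(km.cluster_centers_)
```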

k-means clustering: Local minima
[Figure: a run with a bad initialisation, where the alternating update process gets stuck in a local minimum.]

Gaussian Mixture Models

Gaussian mixture models (EM clustering)
Assumption: the data was generated by sampling from a set of normal distributions. (The probability density is a mixture of normal distributions.)
Aim: find the parameters of the normal distributions and how much each normal distribution contributes to the data.

Gaussian mixture models
[Figure: three panels. Left: two normal distributions. Middle: a mixture model in which both normal distributions contribute 50%. Right: a mixture model in which one normal distribution contributes 10% and the other 90%.]


Gaussian mixture models (EM clustering)
Assumption: the data was generated by sampling from a set of normal distributions. (The probability density is a mixture of normal distributions.)
Aim: find the parameters of the normal distributions and how much each normal distribution contributes to the data.
Algorithm: EM clustering (expectation maximisation). An alternating scheme in which the parameters of the normal distributions and the likelihoods of the data points being generated by the corresponding normal distributions are estimated.
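
A minimal sketch of EM clustering with scikit-learn's GaussianMixture (library assumed available; the data and the number of components are made up):

```python
# Sketch: fit a mixture of normal distributions with EM.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(-1.0, 0.5, size=(100, 1)),
                    rng.normal(2.0, 1.0, size=(50, 1))])

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
print(gmm.weights_)        # how much each normal distribution contributes
print(gmm.means_)          # estimated means
print(gmm.covariances_)    # estimated (co)variances
print(gmm.predict(X[:5]))  # most likely component for the first few points
```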

Density Based Clustering

Density-based clustering
For numerical data, density-based clustering algorithms often yield the best results.
Principle: a connected region with high data density corresponds to one cluster.
DBSCAN is one of the most popular density-based clustering algorithms.

Density-based clustering: DBSCAN
The basic idea of DBSCAN:
1. Find a data point where the data density is high, i.e. in whose ε-neighbourhood there are at least ℓ other points. (ε and ℓ are parameters of the algorithm, to be chosen by the user.)
2. All the points in this ε-neighbourhood are considered to belong to one cluster.
3. Expand this ε-neighbourhood (the cluster) as long as the high-density criterion is satisfied.
4. Remove the cluster (all data points assigned to it) from the data set and continue with step 1 as long as data points with a high data density around them can be found.
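
A hedged sketch using scikit-learn's DBSCAN, where eps corresponds to ε and min_samples plays the role of the ℓ parameter above (parameter values and data are made up):

```python
# Sketch: density-based clustering with DBSCAN.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(50, 2)) for c in (0.0, 3.0)])

# eps: radius of the neighbourhood; min_samples: minimum number of points
# required inside that neighbourhood for a point to be a core point.
db = DBSCAN(eps=0.5, min_samples=5).fit(X)
print(np.unique(db.labels_))  # cluster labels; -1 marks noise points
```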

Density-based clustering: DBSCAN
[Figure: grid cells and neighbourhood cells with at least 3 hits.]