Approaches to Clustering

Clustering

A basic tool in data mining/pattern recognition: divide a set of data into groups. Samples in one cluster are close and clusters are far apart.

Motivations:
- Discover classes of data in an unsupervised way (unsupervised learning).
- Efficient representation of data: fast retrieval, data complexity reduction.
- Various engineering purposes: tightly linked with pattern recognition.

Approaches to Clustering

Represent samples by feature vectors. Define a distance measure to assess the closeness between data. Closeness can be measured in many ways, e.g., by distances based on various norms.
- For stars with measured parallax, the multivariate distance between stars is the spatial Euclidean distance.
- For a galaxy redshift survey, however, the multivariate distance depends on the Hubble constant, which scales velocity to spatial distance.
- For many astronomical datasets, the variables have incompatible units and no known prior relationship. The result of clustering will then depend on the arbitrary choice of variable scaling.

Approaches to Clustering

Clustering: grouping of similar objects (unsupervised learning).

Approaches:
- Prototype methods:
  - K-means (for vectors)
  - K-center (for vectors)
  - D2-clustering (for bags of weighted vectors)
- Statistical modeling:
  - Mixture modeling by the EM algorithm
  - Modal clustering
- Pairwise distance based partition:
  - Spectral graph partitioning
  - Dendrogram clustering (agglomerative): single linkage (friends-of-friends algorithm), complete linkage, etc.

K-means

Assume there are M prototypes denoted by $Z = \{z_1, z_2, \ldots, z_M\}$. Each training sample is assigned to one of the prototypes. Denote the assignment function by $A(\cdot)$. Then $A(x_i) = j$ means the $i$th training sample is assigned to the $j$th prototype.

Goal: minimize the total mean squared error between the training samples and their representative prototypes, that is, the trace of the pooled within-cluster covariance matrix:
$$\arg\min_{Z, A} \sum_{i=1}^{N} \| x_i - z_{A(x_i)} \|^2 .$$
Denote the objective function by
$$L(Z, A) = \sum_{i=1}^{N} \| x_i - z_{A(x_i)} \|^2 .$$

Intuition: training samples are tightly clustered around the prototypes. Hence, the prototypes serve as a compact representation for the training data.

Necessary Conditions

If Z is fixed, the optimal assignment function $A(\cdot)$ should follow the nearest neighbor rule, that is,
$$A(x_i) = \arg\min_{j \in \{1, 2, \ldots, M\}} \| x_i - z_j \| .$$

If $A(\cdot)$ is fixed, the prototype $z_j$ should be the average (centroid) of all the samples assigned to the $j$th prototype:
$$z_j = \frac{\sum_{i: A(x_i) = j} x_i}{N_j},$$
where $N_j$ is the number of samples assigned to prototype $j$.

The Algorithm

Based on the necessary conditions, the k-means algorithm alternates between two steps:
- For a fixed set of centroids (prototypes), optimize $A(\cdot)$ by assigning each sample to its closest centroid using Euclidean distance.
- Update the centroids by computing the average of all the samples assigned to each of them.

The algorithm converges since the objective function is non-increasing after each iteration, and it usually converges fast. Stopping criterion: the ratio between the decrease and the objective function falls below a threshold.
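
A minimal NumPy sketch of these two alternating steps (the function name, the random initialization from training samples, and the stopping rule are illustrative choices, not part of the slides):

```python
import numpy as np

def kmeans(X, M, max_iter=100, seed=0):
    """Minimal k-means sketch: alternate nearest-prototype assignment and centroid update.
    X: (n, d) array of samples; M: number of prototypes."""
    rng = np.random.default_rng(seed)
    # Initialize the prototypes with M distinct training samples.
    Z = X[rng.choice(len(X), size=M, replace=False)].astype(float)
    A = np.full(len(X), -1)
    for _ in range(max_iter):
        # Assignment step: each sample goes to its closest prototype (Euclidean distance).
        dists = np.linalg.norm(X[:, None, :] - Z[None, :, :], axis=2)
        A_new = dists.argmin(axis=1)
        if np.array_equal(A_new, A):      # assignments unchanged: converged
            break
        A = A_new
        # Update step: each prototype becomes the centroid of its assigned samples.
        for j in range(M):
            if np.any(A == j):
                Z[j] = X[A == j].mean(axis=0)
    L = ((X - Z[A]) ** 2).sum()           # objective L(Z, A)
    return Z, A, L
```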

Example

Training set: {1.2, 5.6, 3.7, 2.6, 0.1, 0.6}. Apply the k-means algorithm with 2 centroids, {z_1, z_2}.

Initialization: randomly pick z_1 = 2, z_2 = 5.

  fixed: z_1 = 2, z_2 = 5                            update: clusters {1.2, 2.6, 0.1, 0.6}, {5.6, 3.7}
  fixed: clusters {1.2, 2.6, 0.1, 0.6}, {5.6, 3.7}   update: z_1 = 1.125, z_2 = 4.65
  fixed: z_1 = 1.125, z_2 = 4.65                     update: clusters {1.2, 2.6, 0.1, 0.6}, {5.6, 3.7}

The two prototypes are z_1 = 1.125, z_2 = 4.65. The objective function is L(Z, A) = 5.3125.

Initialization: randomly pick z_1 = 0.8, z_2 = 3.8.

  fixed: z_1 = 0.8, z_2 = 3.8                        update: clusters {1.2, 0.6, 0.1}, {5.6, 3.7, 2.6}
  fixed: clusters {1.2, 0.6, 0.1}, {5.6, 3.7, 2.6}   update: z_1 = 0.633, z_2 = 3.967
  fixed: z_1 = 0.633, z_2 = 3.967                    update: clusters {1.2, 0.6, 0.1}, {5.6, 3.7, 2.6}

The two prototypes are z_1 = 0.633, z_2 = 3.967. The objective function is L(Z, A) = 5.2133.

Starting from different initial values, the k-means algorithm converges to different local optima. It can be shown that {z_1 = 0.633, z_2 = 3.967} is the global optimal solution.
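
A quick numerical check of the two runs above, restarting the iteration from the given centroids (a sketch; `kmeans_1d` is an illustrative helper, not from the slides):

```python
import numpy as np

def kmeans_1d(x, z, iters=10):
    """Run k-means on 1-D data x from the given initial centroids z."""
    z = np.asarray(z, dtype=float)
    for _ in range(iters):
        A = np.abs(x[:, None] - z[None, :]).argmin(axis=1)        # nearest centroid
        z = np.array([x[A == j].mean() for j in range(len(z))])   # recompute centroids
    L = ((x - z[A]) ** 2).sum()
    return z, L

x = np.array([1.2, 5.6, 3.7, 2.6, 0.1, 0.6])
print(kmeans_1d(x, [2.0, 5.0]))   # centroids ~ (1.125, 4.65),  L = 5.3125  (local optimum)
print(kmeans_1d(x, [0.8, 3.8]))   # centroids ~ (0.633, 3.967), L ~ 5.2133  (global optimum)
```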

Initialization

Randomly pick the prototypes to start the k-means iteration. Different initial prototypes may lead to different local optimal solutions given by k-means. Try different sets of initial prototypes and compare the objective functions at the end to choose the best solution. When randomly selecting initial prototypes, it is better to make sure no prototype lies outside the range of the entire data set.

Initialization in the above simulation: generate M random vectors with independent dimensions. For each dimension, the feature is uniformly distributed in [-1, 1]. Linearly transform the $j$th feature, $Z_j$, $j = 1, 2, \ldots, p$, in each prototype (a vector) by $Z_j \cdot s_j + m_j$, where $s_j$ is the sample standard deviation of dimension $j$ and $m_j$ is the sample mean of dimension $j$, both computed using the training data.

Linde-Buzo-Gray (LBG) Algorithm

An algorithm developed in vector quantization for the purpose of data compression. Y. Linde, A. Buzo and R. M. Gray, "An algorithm for vector quantizer design," IEEE Trans. on Communications, Vol. COM-28, pp. 84-95, Jan. 1980.

The algorithm:
1. Find the centroid $z_1^{(1)}$ of the entire data set.
2. Set k = 1, l = 1.
3. If k < M, split the current centroids by adding small offsets. If $M - k \geq k$, split all the centroids; otherwise, split only $M - k$ of them. Denote the number of centroids split by $\tilde{k} = \min(k, M - k)$. For example, to split $z_1^{(1)}$ into two centroids, let $z_1^{(2)} = z_1^{(1)}$ and $z_2^{(2)} = z_1^{(1)} + \epsilon$, where $\epsilon$ has a small norm and a random direction.
4. $k \leftarrow k + \tilde{k}$; $l \leftarrow l + 1$.
5. Use $\{z_1^{(l)}, z_2^{(l)}, \ldots, z_k^{(l)}\}$ as initial prototypes. Apply the k-means iteration to update these prototypes.
6. If k < M, go back to step 3; otherwise, stop.
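
A sketch of the LBG splitting loop under the same notation (the perturbation scale `eps`, the Gaussian offsets, and the helper `kmeans_from` are illustrative assumptions):

```python
import numpy as np

def kmeans_from(X, Z, max_iter=100):
    """Standard k-means iterations started from the given prototypes Z."""
    Z = Z.copy()
    for _ in range(max_iter):
        A = np.linalg.norm(X[:, None, :] - Z[None, :, :], axis=2).argmin(axis=1)
        Z_new = np.array([X[A == j].mean(axis=0) if np.any(A == j) else Z[j]
                          for j in range(len(Z))])
        if np.allclose(Z_new, Z):
            break
        Z = Z_new
    return Z

def lbg(X, M, eps=1e-3, seed=0):
    """LBG sketch: grow the codebook by splitting centroids, then refine with k-means."""
    rng = np.random.default_rng(seed)
    Z = X.mean(axis=0, keepdims=True)               # step 1: centroid of the entire data set
    while len(Z) < M:
        n_split = min(len(Z), M - len(Z))           # split all centroids, or only M - k of them
        offsets = eps * rng.standard_normal((n_split, X.shape[1]))
        Z = np.vstack([Z, Z[:n_split] + offsets])   # each split centroid spawns a perturbed copy
        Z = kmeans_from(X, Z)                       # refine the enlarged codebook
    return Z
```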

Tree-structured Clustering

Studied extensively in vector quantization from the perspective of data compression. Referred to as tree-structured vector quantization (TSVQ).

The algorithm:
1. Apply 2-centroid k-means to the entire data set.
2. The data are assigned to the 2 centroids.
3. For the data assigned to each centroid, apply 2-centroid k-means to them separately.
4. Repeat the above step.

Compare with LBG:
- For LBG, after the initial prototypes are formed by splitting, k-means is applied to the overall data set. The final result is M prototypes.
- For TSVQ, data partitioned into different centroids at the same level never affect each other in the future growth of the tree. The final result is a tree structure.

Fast searching:
- For k-means, to decide which cell a query x goes to, M (the number of prototypes) distances need to be computed.
- For tree-structured clustering, to decide which cell a query x goes to, only on the order of $\log_2(M)$ distances need to be computed.

Comments on tree-structured clustering:
- It is structurally more constrained, but on the other hand it provides more insight into the patterns in the data.
- It is greedy in the sense of optimizing at each step sequentially. An early bad decision will propagate its effect.
- It provides more algorithmic flexibility.

K-center Clustering

Let A be a set of n objects. Partition A into K sets $C_1, C_2, \ldots, C_K$. The cluster size of $C_k$ is the least value D for which all points in $C_k$ are:
1. within distance D of each other, or
2. within distance D/2 of some point called the cluster center.

Let the cluster size of $C_k$ be $D_k$. The cluster size of partition S is
$$D = \max_{k=1,\ldots,K} D_k .$$
Goal: given K, find $\min_S D(S)$.

Comparison with k-means

Assume the distance between vectors is the squared Euclidean distance.

K-means:
$$\min_S \sum_{k=1}^{K} \sum_{i: x_i \in C_k} (x_i - \mu_k)^T (x_i - \mu_k),$$
where $\mu_k$ is the centroid for cluster $C_k$; in particular, $\mu_k = \frac{1}{N_k} \sum_{i: x_i \in C_k} x_i$.

K-center:
$$\min_S \max_{k=1,\ldots,K} \; \max_{i: x_i \in C_k} (x_i - \mu_k)^T (x_i - \mu_k),$$
where $\mu_k$ is called the centroid but may not be the mean vector.

Another formulation of k-center:
$$\min_S \max_{k=1,\ldots,K} \; \max_{i,j: x_i, x_j \in C_k} L(x_i, x_j),$$
where $L(x_i, x_j)$ denotes any distance between a pair of objects.

Figure 1: Comparison of k-means and k-center. (a): Original unclustered data. (b): Clustering by k-means. (c): Clustering by k-center. K-means focuses on average distance; k-center focuses on the worst-case scenario.

Greedy Algorithm

Choose a subset H of S consisting of K points that are farthest apart from each other. Each point $h_k \in H$ represents one cluster $C_k$. Point $x_i$ is assigned to cluster $C_k$ if
$$L(x_i, h_k) = \min_{k'=1,\ldots,K} L(x_i, h_{k'}).$$

Only the pairwise distances $L(x_i, x_j)$ for $x_i, x_j \in S$ are needed. Hence, $x_i$ can be a non-vector representation of the objects.

The greedy algorithm achieves an approximation factor of 2 as long as the distance measure L satisfies the triangle inequality. That is, if
$$D^* = \min_S \max_{k=1,\ldots,K} \; \max_{i,j: x_i, x_j \in C_k} L(x_i, x_j),$$
then the greedy algorithm guarantees that $D \leq 2 D^*$. The relation also holds if the cluster size is defined in the sense of centralized clustering.

Pseudo Code

H denotes the set of cluster representative objects $\{h_1, \ldots, h_K\} \subseteq S$. Let cluster($x_i$) be the identity of the cluster $x_i \in S$ belongs to. Let dist($x_i$) be the distance between $x_i$ and its closest cluster representative object:
$$\mathrm{dist}(x_i) = \min_{h_j \in H} L(x_i, h_j).$$

Pseudo code:
1. Randomly select an object $x_j$ from S; let $h_1 = x_j$, $H = \{h_1\}$.
2. for j = 1 to n:
       dist($x_j$) = $L(x_j, h_1)$
       cluster($x_j$) = 1
3. for i = 2 to K:
       D = $\max_{x_j \in S \setminus H}$ dist($x_j$)
       choose $h_i \in S \setminus H$ s.t. dist($h_i$) = D
       H = H ∪ {$h_i$}
       for j = 1 to n:
           if $L(x_j, h_i) \leq$ dist($x_j$):
               dist($x_j$) = $L(x_j, h_i)$
               cluster($x_j$) = i
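
The pseudo code translates almost line by line into NumPy; this sketch assumes Euclidean distance, but any metric L satisfying the triangle inequality could be substituted (cluster labels are 0-based here):

```python
import numpy as np

def greedy_k_center(X, K, seed=0):
    """Greedy (farthest-point) k-center clustering on X (an (n, d) array); O(Kn) distances."""
    rng = np.random.default_rng(seed)
    n = len(X)
    centers = [int(rng.integers(n))]                      # h_1: a randomly chosen object
    dist = np.linalg.norm(X - X[centers[0]], axis=1)      # dist(x_j) = L(x_j, h_1)
    cluster = np.zeros(n, dtype=int)
    for i in range(1, K):
        h_i = int(dist.argmax())                          # farthest object from the current centers
        centers.append(h_i)
        d_new = np.linalg.norm(X - X[h_i], axis=1)
        closer = d_new <= dist                            # objects now closer to h_i
        dist[closer] = d_new[closer]
        cluster[closer] = i
    return np.array(centers), cluster, dist.max()         # dist.max() is the centralized cluster size
```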

Algorithm Property

The running time of the algorithm is O(Kn). Let the partition obtained by the greedy algorithm be S and the optimal partition be $S^*$. Let the cluster size of S be D and that of $S^*$ be $D^*$, where the cluster size is defined in the pairwise-distance sense. It can be proved that $D \leq 2 D^*$. The same approximation factor of 2 is obtained if the cluster size of a partition is defined in the sense of centralized clustering.

Proof

1. Let $\delta = \max_{x_j \in S \setminus H} \min_{h_k \in H} L(x_j, h_k)$.
2. Let $h_{K+1}$ be the object in $S \setminus H$ such that $\min_{h_k \in H} L(h_{K+1}, h_k) = \delta$.
3. By definition, $L(h_{K+1}, h_k) \geq \delta$ for all $k = 1, \ldots, K$.
4. Let $H_k = \{h_1, \ldots, h_k\}$, $k = 1, 2, \ldots, K$.
5. Consider the distance between any $h_i$ and $h_j$, $i < j \leq K$ without loss of generality. According to the greedy algorithm,
$$\min_{h_k \in H_{j-1}} L(h_j, h_k) \geq \min_{h_k \in H_{j-1}} L(x_l, h_k)$$
for any $x_l \in S \setminus H_j$. Since $h_{K+1} \in S \setminus H$ and $S \setminus H \subseteq S \setminus H_j$,
$$L(h_j, h_i) \geq \min_{h_k \in H_{j-1}} L(h_j, h_k) \geq \min_{h_k \in H_{j-1}} L(h_{K+1}, h_k) \geq \min_{h_k \in H} L(h_{K+1}, h_k) = \delta .$$
6. We have shown that for any $i < j \leq K+1$, $L(h_i, h_j) \geq \delta$.

7. Consider the partition $C_1^*, C_2^*, \ldots, C_K^*$ formed by $S^*$. At least 2 of the K+1 objects $h_1, \ldots, h_{K+1}$ will be covered by one cluster. Without loss of generality, assume $h_i$ and $h_j$ belong to the same cluster in $S^*$. Then $L(h_i, h_j) \leq D^*$.
8. Since $L(h_i, h_j) \geq \delta$, we have $\delta \leq D^*$.
9. Consider any two objects $x_\eta$ and $x_\zeta$ in any cluster represented by $h_k$. By the definition of $\delta$, $L(x_\eta, h_k) \leq \delta$ and $L(x_\zeta, h_k) \leq \delta$. Hence by the triangle inequality,
$$L(x_\eta, x_\zeta) \leq L(x_\eta, h_k) + L(x_\zeta, h_k) \leq 2\delta .$$
Hence $D \leq 2\delta \leq 2 D^*$.

For centralized clustering: let $D = \max_{k=1,\ldots,K} \max_{x_j \in C_k} L(x_j, h_k)$, and define $D^*$ similarly. Step 7 in the proof modifies to $L(h_i, h_j) \leq 2 D^*$ by the triangle inequality. Then $D = \delta \leq L(h_i, h_j) \leq 2 D^*$.

A step-by-step illustration of the k-center clustering is provided next.

Figure 2: K-center clustering step by step. (a)-(c): 2 to 4 clusters.

Applications to Image Segmentation

Figure 3: (a) Original image, (b) Segmentation using K-center, (c) Segmentation using K-means with LBG initialization, (d) Segmentation by K-means using K-center for initialization.

Figure 4: Scatter plots of the LUV color components (L vs. U and L vs. V) for the three clustering methods applied to the dog picture; only a small percentage of the original data are shown. (a)-(b): K-center, (c)-(d): K-means with LBG initialization, (e)-(f): K-means with K-center initialization.

Figure 5: Comparison of segmentation results. Left: original images. Middle: K-means with k-center initialization. Right: K-means with LBG initialization using the same number of clusters as in the k-center case.

Agglomerative Clustering

Generate clusters in a hierarchical way. Let the data set be A = {x_1, ..., x_n}. Start with n clusters, each containing one data point. Merge the two clusters with minimum pairwise distance, update the between-cluster distances, and iterate the merging procedure. The clustering procedure can be visualized by a tree structure called a dendrogram.

How is the between-cluster distance defined? For clusters containing only one data point, the between-cluster distance is the between-object distance. For clusters containing multiple data points, the between-cluster distance is an agglomerative version of the between-object distances, for example the minimum or maximum between-object distance over objects in the two clusters. The agglomerative between-cluster distance can often be computed recursively.

Example Distances

Suppose clusters r and s are merged into a new cluster t, and let k be any other cluster. Denote the between-cluster distance by $D(\cdot, \cdot)$. How do we get $D(t, k)$ from $D(r, k)$ and $D(s, k)$?

Single-link clustering:
$$D(t, k) = \min(D(r, k), D(s, k)).$$
$D(t, k)$ is the minimum distance between two objects in clusters t and k respectively.

Complete-link clustering:
$$D(t, k) = \max(D(r, k), D(s, k)).$$
$D(t, k)$ is the maximum distance between two objects in clusters t and k respectively.

Average linkage clustering:
Unweighted case: $D(t, k) = \frac{n_r}{n_r + n_s} D(r, k) + \frac{n_s}{n_r + n_s} D(s, k)$.
Weighted case: $D(t, k) = \frac{1}{2} D(r, k) + \frac{1}{2} D(s, k)$.
$D(t, k)$ is the average distance between two objects in clusters t and k respectively.

In the unweighted case, the number of elements in each cluster is taken into consideration, while in the weighted case each cluster is weighted equally, so objects in a smaller cluster are weighted more heavily than those in a larger cluster.

Centroid clustering:
Unweighted case: $D(t, k) = \frac{n_r}{n_r + n_s} D(r, k) + \frac{n_s}{n_r + n_s} D(s, k) - \frac{n_r n_s}{(n_r + n_s)^2} D(r, s)$.
Weighted case: $D(t, k) = \frac{1}{2} D(r, k) + \frac{1}{2} D(s, k) - \frac{1}{4} D(r, s)$.
A centroid is computed for each cluster and the distance between clusters is given by the distance between their respective centroids.

Ward's clustering:
$$D(t, k) = \frac{n_r + n_k}{n_r + n_s + n_k} D(r, k) + \frac{n_s + n_k}{n_r + n_s + n_k} D(s, k) - \frac{n_k}{n_r + n_s + n_k} D(r, s).$$
Merge the two clusters for which the change in the variance of the clustering is minimized. The variance of a cluster is defined as the sum of squared errors between each object in the cluster and the centroid of the cluster.

The dendrogram generated by single-link clustering tends to look like a chain. Clusters generated by complete-link may not be well separated. Other methods are intermediate between the two.
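
The recursive updates above can be collected into a single dispatch function; a sketch (for the centroid and Ward rules, D is assumed to hold squared Euclidean distances, as is conventional for those linkages):

```python
def update_distance(method, D_rk, D_sk, D_rs, n_r, n_s, n_k):
    """Recursive between-cluster distance D(t, k) after merging clusters r and s into t."""
    if method == "single":
        return min(D_rk, D_sk)
    if method == "complete":
        return max(D_rk, D_sk)
    if method == "average_unweighted":
        return (n_r * D_rk + n_s * D_sk) / (n_r + n_s)
    if method == "average_weighted":
        return 0.5 * D_rk + 0.5 * D_sk
    if method == "centroid_unweighted":
        return (n_r * D_rk + n_s * D_sk) / (n_r + n_s) - n_r * n_s * D_rs / (n_r + n_s) ** 2
    if method == "centroid_weighted":
        return 0.5 * D_rk + 0.5 * D_sk - 0.25 * D_rs
    if method == "ward":
        n_t = n_r + n_s + n_k
        return ((n_r + n_k) * D_rk + (n_s + n_k) * D_sk - n_k * D_rs) / n_t
    raise ValueError(f"unknown linkage: {method}")
```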

Pseudo Code

1. Begin with n clusters, each containing one object. Number the clusters 1 through n.
2. Compute the between-cluster distance D(r, s) as the between-object distance of the two objects in r and s respectively, for r, s = 1, 2, ..., n. Let the square matrix D = (D(r, s)).
3. Find the most similar pair of clusters r and s, that is, the pair whose D(r, s) is minimum among all the pairwise distances.
4. Merge r and s into a new cluster t. Compute the between-cluster distance D(t, k) for all k ≠ r, s. Delete the rows and columns corresponding to r and s in D. Add a new row and column in D corresponding to cluster t.
5. Repeat Step 3 a total of n - 1 times until there is only one cluster left.
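
In practice the distance-matrix bookkeeping is usually left to a library; a hedged usage example with SciPy's hierarchical clustering routines (the synthetic data and the choice of average linkage are illustrative):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Three well-separated 2-D blobs as toy data.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(40, 2)) for c in (0.0, 3.0, 6.0)])

# 'single', 'complete', 'average', 'centroid', and 'ward' correspond to the distances above.
Z = linkage(X, method="average")                   # (n-1) x 4 merge table encoding the dendrogram
labels = fcluster(Z, t=3, criterion="maxclust")    # cut the dendrogram into 3 clusters
```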


Figure 6: Agglomerative clustering of a data set into 9 clusters. (a): Single-link, (b): Complete-link, (c): Average linkage, (d): Ward's clustering.

Hipparcos Data

Clustering based on log L and B-V.

Figure 7: Clustering of the Hipparcos data. (a) K-center, #clusters = 4; (b) K-means, #clusters = 4; (c) EM, #clusters = 4; (d) EM, #clusters = 3.

Figure 8: Clustering of the Hipparcos data by (a) single linkage, (b) complete linkage, (c) average linkage, and (d) Ward's linkage.

Figure 9: Clustering of the Hipparcos data into 4 clusters by (a) single linkage, (b) complete linkage, (c) average linkage, and (d) Ward's linkage.

Mixture Model-based Clustering

Each cluster is mathematically represented by a parametric distribution, e.g., Gaussian (continuous) or Poisson (discrete). The entire data set is modeled by a mixture of these distributions. An individual distribution used to model a specific cluster is often referred to as a component distribution.

Suppose there are K components (clusters), and each component is a Gaussian distribution parameterized by $\mu_k$, $\Sigma_k$. Denote the data by X, $X \in R^d$. The density of component k is
$$f_k(x) = \phi(x \mid \mu_k, \Sigma_k) = \frac{1}{\sqrt{(2\pi)^d |\Sigma_k|}} \exp\left( -\frac{(x - \mu_k)^t \Sigma_k^{-1} (x - \mu_k)}{2} \right).$$
The prior probability (weight) of component k is $a_k$. The mixture density is
$$f(x) = \sum_{k=1}^{K} a_k f_k(x) = \sum_{k=1}^{K} a_k \phi(x \mid \mu_k, \Sigma_k).$$
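
A sketch of evaluating this mixture density at a set of points (the function and parameter names are illustrative; SciPy's multivariate normal pdf plays the role of φ):

```python
import numpy as np
from scipy.stats import multivariate_normal

def mixture_density(X, weights, means, covs):
    """f(x) = sum_k a_k * phi(x | mu_k, Sigma_k), evaluated at each row of X."""
    f = np.zeros(len(X))
    for a_k, mu_k, Sigma_k in zip(weights, means, covs):
        f += a_k * multivariate_normal.pdf(X, mean=mu_k, cov=Sigma_k)
    return f

# Two-component example in R^2.
weights = [0.6, 0.4]
means = [np.zeros(2), np.array([3.0, 3.0])]
covs = [np.eye(2), 0.5 * np.eye(2)]
print(mixture_density(np.array([[0.0, 0.0], [3.0, 3.0]]), weights, means, covs))
```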

Advantages

A mixture model with high likelihood tends to have the following traits:
- Component distributions have high peaks (data in one cluster are tight).
- The mixture model covers the data well (dominant patterns in the data are captured by the component distributions).

Further advantages:
- Well-studied statistical inference techniques are available.
- Flexibility in choosing the component distributions.
- A density estimate is obtained for each cluster.
- A soft classification is available.

[Plot of the density function of two clusters omitted.]

EM Algorithm

The parameters are estimated by the maximum likelihood (ML) criterion using the EM algorithm. The EM algorithm provides an iterative computation of maximum likelihood estimation when the observed data are incomplete.

Incompleteness can be conceptual: we need to estimate the distribution of X, in sample space $\mathcal{X}$, but we can only observe X indirectly through Y, in sample space $\mathcal{Y}$. In many cases, there is a mapping $x \to y(x)$ from $\mathcal{X}$ to $\mathcal{Y}$, and x is only known to lie in a subset of $\mathcal{X}$, denoted by $\mathcal{X}(y)$, which is determined by the equation y = y(x).

The distribution of X is parameterized by a family of distributions $f(x \mid \theta)$, with parameters $\theta \in \Omega$, on x. The distribution of y, $g(y \mid \theta)$, is
$$g(y \mid \theta) = \int_{\mathcal{X}(y)} f(x \mid \theta) \, dx.$$
The EM algorithm aims at finding a $\theta$ that maximizes $g(y \mid \theta)$ given an observed y.

Introduce the function
$$Q(\theta' \mid \theta) = E\left(\log f(x \mid \theta') \mid y, \theta\right),$$

that is, the expected value of $\log f(x \mid \theta')$ according to the conditional distribution of x given y and parameter $\theta$. The expectation is assumed to exist for all pairs $(\theta', \theta)$. In particular, it is assumed that $f(x \mid \theta) > 0$ for $\theta \in \Omega$.

EM Iteration:
- E-step: Compute $Q(\theta \mid \theta^{(p)})$.
- M-step: Choose $\theta^{(p+1)}$ to be a value of $\theta \in \Omega$ that maximizes $Q(\theta \mid \theta^{(p)})$.

EM for the Mixture of Normals

Observed data (incomplete): $\{x_1, x_2, \ldots, x_n\}$, where n is the sample size. Denote all the samples collectively by x.

Complete data: $\{(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)\}$, where $y_i$ is the cluster (component) identity of sample $x_i$.

The collection of parameters, $\theta$, includes: $a_k$, $\mu_k$, $\Sigma_k$, $k = 1, 2, \ldots, K$.

The likelihood function is
$$L(x \mid \theta) = \sum_{i=1}^{n} \log\left( \sum_{k=1}^{K} a_k \phi(x_i \mid \mu_k, \Sigma_k) \right).$$
$L(x \mid \theta)$ is the objective function of the EM algorithm (to be maximized). The numerical difficulty comes from the sum inside the log.

The Q function is
$$Q(\theta' \mid \theta) = E\left[ \log \prod_{i=1}^{n} a'_{y_i} \phi(x_i \mid \mu'_{y_i}, \Sigma'_{y_i}) \,\Big|\, x, \theta \right]
= E\left[ \sum_{i=1}^{n} \left( \log a'_{y_i} + \log \phi(x_i \mid \mu'_{y_i}, \Sigma'_{y_i}) \right) \Big|\, x, \theta \right]
= \sum_{i=1}^{n} E\left[ \log a'_{y_i} + \log \phi(x_i \mid \mu'_{y_i}, \Sigma'_{y_i}) \mid x_i, \theta \right].$$
The last equality comes from the fact that the samples are independent.

Note that when $x_i$ is given, only $y_i$ is random in the complete data $(x_i, y_i)$. Also, $y_i$ only takes a finite number of values, i.e., cluster identities 1 to K. The distribution of Y given $X = x_i$ is the posterior probability of Y given X.

Denote the posterior probabilities of $Y = k$, $k = 1, \ldots, K$, given $x_i$ by $p_{i,k}$. By the Bayes formula, the posterior probabilities satisfy
$$p_{i,k} \propto a_k \phi(x_i \mid \mu_k, \Sigma_k), \qquad \sum_{k=1}^{K} p_{i,k} = 1.$$

Then each summand in $Q(\theta' \mid \theta)$ is
$$E\left[ \log a'_{y_i} + \log \phi(x_i \mid \mu'_{y_i}, \Sigma'_{y_i}) \mid x_i, \theta \right]
= \sum_{k=1}^{K} p_{i,k} \log a'_k + \sum_{k=1}^{K} p_{i,k} \log \phi(x_i \mid \mu'_k, \Sigma'_k).$$
Note that we cannot see the direct effect of $\theta$ in the above equation, but the $p_{i,k}$ are computed using $\theta$, i.e., the current parameters, while $\theta'$ contains the updated parameters. We then have
$$Q(\theta' \mid \theta) = \sum_{i=1}^{n} \sum_{k=1}^{K} p_{i,k} \log a'_k + \sum_{i=1}^{n} \sum_{k=1}^{K} p_{i,k} \log \phi(x_i \mid \mu'_k, \Sigma'_k).$$
Note that the prior probabilities $a'_k$ and the parameters of the Gaussian components $\mu'_k$, $\Sigma'_k$ can be optimized separately.

The $a'_k$ are subject to $\sum_{k=1}^{K} a'_k = 1$. Basic optimization theory shows that they are optimized by
$$a'_k = \frac{\sum_{i=1}^{n} p_{i,k}}{n}.$$
The optimization of $\mu'_k$ and $\Sigma'_k$ is simply a maximum likelihood estimation of the parameters using samples $x_i$ with weights $p_{i,k}$. Basic optimization techniques also lead to
$$\mu'_k = \frac{\sum_{i=1}^{n} p_{i,k} x_i}{\sum_{i=1}^{n} p_{i,k}}, \qquad
\Sigma'_k = \frac{\sum_{i=1}^{n} p_{i,k} (x_i - \mu'_k)(x_i - \mu'_k)^t}{\sum_{i=1}^{n} p_{i,k}}.$$
After every iteration, the likelihood function L is guaranteed to increase (possibly not strictly). We have thus derived the EM algorithm for a mixture of Gaussians.

EM Algorithm for the Mixture of Gaussians

Parameters estimated at the pth iteration are marked by a superscript (p).

1. Initialize the parameters.
2. E-step: Compute the posterior probabilities for all $i = 1, \ldots, n$, $k = 1, \ldots, K$:
$$p_{i,k} = \frac{a_k^{(p)} \phi(x_i \mid \mu_k^{(p)}, \Sigma_k^{(p)})}{\sum_{k'=1}^{K} a_{k'}^{(p)} \phi(x_i \mid \mu_{k'}^{(p)}, \Sigma_{k'}^{(p)})}.$$
3. M-step:
$$a_k^{(p+1)} = \frac{\sum_{i=1}^{n} p_{i,k}}{n}, \qquad
\mu_k^{(p+1)} = \frac{\sum_{i=1}^{n} p_{i,k} x_i}{\sum_{i=1}^{n} p_{i,k}}, \qquad
\Sigma_k^{(p+1)} = \frac{\sum_{i=1}^{n} p_{i,k} (x_i - \mu_k^{(p+1)})(x_i - \mu_k^{(p+1)})^t}{\sum_{i=1}^{n} p_{i,k}}.$$
4. Repeat steps 2 and 3 until convergence.

Comment: for mixtures of other distributions, the EM algorithm is very similar. The E-step involves computing the posterior probabilities; only the particular distribution $\phi$ needs to be changed. The M-step always involves parameter optimization, and the formulas differ according to the distributions.
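
A compact NumPy sketch of these E- and M-steps (illustrative: random initialization, a fixed number of iterations, a small ridge on the covariances, and no safeguards against degenerate components):

```python
import numpy as np
from scipy.stats import multivariate_normal

def em_gmm(X, K, n_iter=100, seed=0):
    """EM for a Gaussian mixture; returns weights a, means mu, covariances Sigma, posteriors p."""
    n, d = X.shape
    rng = np.random.default_rng(seed)
    a = np.full(K, 1.0 / K)
    mu = X[rng.choice(n, K, replace=False)].astype(float)
    Sigma = np.array([np.cov(X.T) + 1e-6 * np.eye(d) for _ in range(K)])
    for _ in range(n_iter):
        # E-step: posterior probabilities p[i, k] proportional to a_k * phi(x_i | mu_k, Sigma_k).
        p = np.column_stack([a[k] * multivariate_normal.pdf(X, mu[k], Sigma[k])
                             for k in range(K)])
        p /= p.sum(axis=1, keepdims=True)
        # M-step: weighted ML estimates of the priors, means, and covariances.
        Nk = p.sum(axis=0)
        a = Nk / n
        mu = (p.T @ X) / Nk[:, None]
        for k in range(K):
            Xc = X - mu[k]
            Sigma[k] = (p[:, k, None] * Xc).T @ Xc / Nk[k] + 1e-6 * np.eye(d)
    return a, mu, Sigma, p
```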

Computation Issues

If a different $\Sigma_k$ is allowed for each component, the likelihood function is not bounded, so the global optimum is meaningless. (Don't overdo it!)

How to initialize? Example: apply k-means first; initialize $\mu_k$ and $\Sigma_k$ using all the samples classified to cluster k, and initialize $a_k$ by the proportion of data assigned to cluster k by k-means.

In practice, we may want to reduce model complexity by putting constraints on the parameters, for instance by assuming equal priors and identical covariance matrices for all the components.
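
A sketch of this k-means-based initialization, reusing the labels from the earlier `kmeans` sketch (one would then start the EM iterations from these values instead of the random initialization used in `em_gmm` above):

```python
import numpy as np

def init_from_kmeans(X, labels, K):
    """Initialize mixture parameters from a k-means partition (labels in 0..K-1)."""
    n, d = X.shape
    a = np.array([(labels == k).mean() for k in range(K)])          # cluster proportions
    mu = np.array([X[labels == k].mean(axis=0) for k in range(K)])  # cluster means
    Sigma = np.array([np.cov(X[labels == k].T) + 1e-6 * np.eye(d)   # cluster covariances
                      for k in range(K)])
    return a, mu, Sigma
```

For example, `_, labels, _ = kmeans(X, K)` from the earlier sketch produces suitable labels.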

Examples

The heart disease data set is taken from the UCI machine learning database repository. There are 297 cases (samples) in the data set, of which 137 have heart disease. Each sample contains 13 quantitative variables, including cholesterol, max heart rate, etc. We remove the mean of each variable and normalize it to yield unit variance. The data are projected onto the plane spanned by the two most dominant principal component directions. A two-component Gaussian mixture is fit.

Figure 10: The heart disease data set and the estimated cluster densities. Top: the scatter plot of the data. Bottom: the contour plot of the pdf estimated using a single-layer mixture of two normals. The thick lines are the boundaries between the two clusters based on the estimated pdfs of the individual clusters.

Classification Likelihood

The likelihood maximized by the EM algorithm,
$$L(x \mid \theta) = \sum_{i=1}^{n} \log\left( \sum_{k=1}^{K} a_k \phi(x_i \mid \mu_k, \Sigma_k) \right),$$
is sometimes called the mixture likelihood. Maximization can also be applied to the classification likelihood. Denote the collection of cluster identities of all the samples by $y = \{y_1, \ldots, y_n\}$:
$$L(x \mid \theta, y) = \sum_{i=1}^{n} \log\left( a_{y_i} \phi(x_i \mid \mu_{y_i}, \Sigma_{y_i}) \right).$$
The cluster identities $y_i$, $i = 1, \ldots, n$, are treated as parameters together with $\theta$ and are part of the estimation. To maximize L, the EM algorithm can be modified to yield an ascending algorithm. This modified version is called Classification EM (CEM).

Classification EM

A classification step is inserted between the E-step and the M-step.

1. Initialize the parameters.
2. E-step: Compute the posterior probabilities for all $i = 1, \ldots, n$, $k = 1, \ldots, K$:
$$p_{i,k} = \frac{a_k^{(p)} \phi(x_i \mid \mu_k^{(p)}, \Sigma_k^{(p)})}{\sum_{k'=1}^{K} a_{k'}^{(p)} \phi(x_i \mid \mu_{k'}^{(p)}, \Sigma_{k'}^{(p)})}.$$
3. Classification:
$$y_i^{(p+1)} = \arg\max_k p_{i,k}.$$
Or equivalently, let $\hat{p}_{i,k} = 1$ if $k = \arg\max_{k'} p_{i,k'}$ and 0 otherwise.
4. M-step:
$$a_k^{(p+1)} = \frac{\sum_{i=1}^{n} \hat{p}_{i,k}}{n} = \frac{\sum_{i=1}^{n} I(y_i^{(p+1)} = k)}{n}, \qquad
\mu_k^{(p+1)} = \frac{\sum_{i=1}^{n} \hat{p}_{i,k} x_i}{\sum_{i=1}^{n} \hat{p}_{i,k}} = \frac{\sum_{i=1}^{n} I(y_i^{(p+1)} = k) x_i}{\sum_{i=1}^{n} I(y_i^{(p+1)} = k)},$$
$$\Sigma_k^{(p+1)} = \frac{\sum_{i=1}^{n} \hat{p}_{i,k} (x_i - \mu_k^{(p+1)})(x_i - \mu_k^{(p+1)})^t}{\sum_{i=1}^{n} \hat{p}_{i,k}} = \frac{\sum_{i=1}^{n} I(y_i^{(p+1)} = k)(x_i - \mu_k^{(p+1)})(x_i - \mu_k^{(p+1)})^t}{\sum_{i=1}^{n} I(y_i^{(p+1)} = k)}.$$
5. Repeat steps 2, 3, and 4 until convergence.

Comments: CEM tends to underestimate the variances. It usually converges much faster than EM. For the purpose of clustering, it is generally believed to perform similarly to EM. If we assume equal priors $a_k$ and assume the covariance matrices $\Sigma_k$ are identical and are a scalar matrix, CEM is exactly k-means. (Exercise)
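
The only change relative to the EM sketch above is a hard assignment inserted between the E-step and the M-step; a minimal illustration (`harden` is an assumed helper name):

```python
import numpy as np

def harden(p):
    """Classification step of CEM: replace posteriors p[i, k] with 0/1 indicators."""
    y = p.argmax(axis=1)                       # y_i = argmax_k p_{i,k}
    p_hat = np.zeros_like(p)
    p_hat[np.arange(len(p)), y] = 1.0          # hat{p}_{i,k} = 1 iff k = y_i
    return p_hat, y
```

Inserting `p, y = harden(p)` right after the E-step of the `em_gmm` sketch turns it into CEM; with equal priors and identical scalar covariance matrices, the resulting iteration reduces to k-means, as noted above.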