Cluster Analysis
Jia Li, Department of Statistics, Penn State University
Summer School in Statistics for Astronomers IV, June 9-14, 2008


1 Clustering
A basic tool in data mining/pattern recognition: divide a set of data into groups such that samples in one cluster are close to each other and different clusters are far apart.
Motivations:
- Discover classes of data in an unsupervised way (unsupervised learning).
- Efficient representation of data: fast retrieval, data complexity reduction.
- Various engineering purposes: tightly linked with pattern recognition.

2 Approaches to Clustering
Represent samples by feature vectors and define a distance measure to assess the closeness between data. Closeness can be measured in many ways, e.g., by distances based on various norms.
For stars with measured parallax, the multivariate distance between stars is the spatial Euclidean distance. For a galaxy redshift survey, however, the multivariate distance depends on the Hubble constant, which scales velocity to spatial distance. For many astronomical datasets, the variables have incompatible units and no prior known relationship. The result of clustering will then depend on the arbitrary choice of variable scaling.

3 Approaches to Clustering
Clustering: grouping of similar objects (unsupervised learning).
Approaches:
- Prototype methods: K-means (for vectors), K-center (for vectors), D2-clustering (for bags of weighted vectors)
- Statistical modeling: mixture modeling by the EM algorithm, modal clustering
- Pairwise-distance-based partition: spectral graph partitioning; dendrogram clustering (agglomerative), e.g., single linkage (friends-of-friends algorithm), complete linkage, etc.

4 K-means
Assume there are M prototypes denoted by $Z = \{z_1, z_2, \dots, z_M\}$. Each training sample is assigned to one of the prototypes. Denote the assignment function by $A(\cdot)$. Then $A(x_i) = j$ means the ith training sample is assigned to the jth prototype.
Goal: minimize the total mean squared error between the training samples and their representative prototypes, that is, the trace of the pooled within-cluster covariance matrix:
$$\arg\min_{Z,A} \sum_{i=1}^N \| x_i - z_{A(x_i)} \|^2 .$$
Denote the objective function by
$$L(Z, A) = \sum_{i=1}^N \| x_i - z_{A(x_i)} \|^2 .$$
Intuition: training samples are tightly clustered around the prototypes. Hence, the prototypes serve as a compact representation for the training data.

5 Necessary Conditions
If Z is fixed, the optimal assignment function $A(\cdot)$ should follow the nearest neighbor rule, that is,
$$A(x_i) = \arg\min_{j \in \{1, 2, \dots, M\}} \| x_i - z_j \| .$$
If $A(\cdot)$ is fixed, the prototype $z_j$ should be the average (centroid) of all the samples assigned to the jth prototype:
$$z_j = \frac{\sum_{i: A(x_i) = j} x_i}{N_j},$$
where $N_j$ is the number of samples assigned to prototype j.

6 The Algorithm
Based on the necessary conditions, the k-means algorithm alternates between two steps:
1. For a fixed set of centroids (prototypes), optimize $A(\cdot)$ by assigning each sample to its closest centroid using Euclidean distance.
2. Update the centroids by computing the average of all the samples assigned to each of them.
The algorithm converges since the objective function is non-increasing after each iteration. It usually converges fast.
Stopping criterion: the ratio between the decrease and the objective function is below a threshold.
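
A minimal sketch of this alternation in NumPy (not the lecture's code; it assumes the samples are the rows of a float array X and uses plain Euclidean distance):

```python
import numpy as np

def kmeans(X, Z, max_iter=100, tol=1e-6):
    """Alternate the two necessary conditions: assign, then re-center.
    X: (n, d) data matrix; Z: (M, d) initial prototypes."""
    Z = np.asarray(Z, dtype=float).copy()
    obj_old = np.inf
    for _ in range(max_iter):
        # Assignment step: nearest prototype under Euclidean distance.
        d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=2)   # (n, M)
        A = d2.argmin(axis=1)
        obj = d2[np.arange(len(X)), A].sum()
        # Update step: each prototype becomes the mean of its samples.
        for j in range(len(Z)):
            if np.any(A == j):
                Z[j] = X[A == j].mean(axis=0)
        # Stop when the relative decrease of the objective is small.
        if obj_old - obj < tol * max(obj, 1e-12):
            break
        obj_old = obj
    return Z, A, obj
```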

7 Example
Training set: {1.2, 5.6, 3.7, 2.6, 0.1, 0.6}. Apply the k-means algorithm with 2 centroids, $\{z_1, z_2\}$.
Initialization: randomly pick $z_1 = 2$, $z_2 = 5$.
With the centroids fixed, the assignments are: cluster 1 = {1.2, 2.6, 0.1, 0.6}, cluster 2 = {5.6, 3.7}. Updating the centroids gives $z_1 = 1.125$ and $z_2 = 4.65$; the assignments then no longer change.
The two prototypes are $z_1 = 1.125$, $z_2 = 4.65$. The objective function is $L(Z, A) = 5.3125$.

8 Initialization: randomly pick $z_1 = 0.8$, $z_2 = 3.8$.
With the centroids fixed, the assignments are: cluster 1 = {1.2, 0.6, 0.1}, cluster 2 = {5.6, 3.7, 2.6}. Updating the centroids gives $z_1 = 0.633$ and $z_2 = 3.967$; the assignments then no longer change.
The two prototypes are $z_1 = 0.633$, $z_2 = 3.967$. The objective function is $L(Z, A) \approx 5.2133$.
Starting from different initial values, the k-means algorithm converges to different local optima. It can be shown that $\{z_1 = 0.633, z_2 = 3.967\}$ is the global optimal solution.
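
A short self-contained script (assuming the training set and initializations reconstructed above) reproduces the two local optima:

```python
import numpy as np

X = np.array([1.2, 5.6, 3.7, 2.6, 0.1, 0.6])

def kmeans_1d(X, z, iters=20):
    z = np.array(z, dtype=float)
    for _ in range(iters):
        A = np.abs(X[:, None] - z[None, :]).argmin(axis=1)   # nearest centroid
        z = np.array([X[A == j].mean() for j in range(len(z))])
    return z, ((X - z[A]) ** 2).sum()

for init in ([2.0, 5.0], [0.8, 3.8]):
    z, obj = kmeans_1d(X, init)
    print(np.round(z, 3), round(obj, 4))
# Expected output (approximately):
#   [1.125 4.65 ] 5.3125
#   [0.633 3.967] 5.2133
```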

9 Initialization
Randomly pick the prototypes to start the k-means iteration. Different initial prototypes may lead to different local optimal solutions given by k-means.
Try different sets of initial prototypes and compare the objective function at the end to choose the best solution. When randomly selecting initial prototypes, it is better to make sure no prototype is out of the range of the entire data set.
Initialization in the above simulation: generate M random vectors with independent dimensions. For each dimension, the feature is uniformly distributed in [-1, 1]. Linearly transform the jth feature, $Z_j$, $j = 1, 2, \dots, p$, in each prototype (a vector) by $Z_j \cdot s_j + m_j$, where $s_j$ is the sample standard deviation of dimension j and $m_j$ is the sample mean of dimension j, both computed using the training data.
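
A sketch of this initialization, assuming the training data are the rows of a NumPy array X (the function name is ours):

```python
import numpy as np

def random_prototypes(X, M, seed=0):
    """Draw M random prototypes, rescaled so each feature matches the
    sample mean and standard deviation of the training data X (n x p)."""
    rng = np.random.default_rng(seed)
    U = rng.uniform(-1.0, 1.0, size=(M, X.shape[1]))   # features in [-1, 1]
    return U * X.std(axis=0) + X.mean(axis=0)
```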

10 Linde-Buzo-Gray (LBG) Algorithm
An algorithm developed in vector quantization for the purpose of data compression. Y. Linde, A. Buzo and R. M. Gray, "An algorithm for vector quantizer design," IEEE Trans. on Communications, vol. COM-28, pp. 84-95, Jan. 1980.
The algorithm:
1. Find the centroid $z_1^{(1)}$ of the entire data set.
2. Set $k = 1$, $l = 1$.
3. If $k < M$, split the current centroids by adding small offsets. If $M - k \ge k$, split all the centroids; otherwise, split only $M - k$ of them. Denote the number of centroids split by $\tilde{k} = \min(k, M - k)$. For example, to split $z_1^{(1)}$ into two centroids, let $z_1^{(2)} = z_1^{(1)}$, $z_2^{(2)} = z_1^{(1)} + \epsilon$, where $\epsilon$ has a small norm and a random direction.
4. $k \leftarrow k + \tilde{k}$; $l \leftarrow l + 1$.
5. Use $\{z_1^{(l)}, z_2^{(l)}, \dots, z_k^{(l)}\}$ as initial prototypes. Apply the k-means iteration to update these prototypes.
6. If $k < M$, go back to step 3; otherwise, stop.
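
A sketch of the splitting schedule (our own illustration; SciPy's kmeans2 stands in for the k-means refinement, and the Gaussian perturbation is one possible choice of small random offset):

```python
import numpy as np
from scipy.cluster.vq import kmeans2   # any k-means routine would do here

def lbg(X, M, eps=1e-3, seed=0):
    """Grow a codebook from 1 to M centroids by repeated splitting."""
    rng = np.random.default_rng(seed)
    Z = X.mean(axis=0, keepdims=True)              # step 1: global centroid
    while len(Z) < M:
        n_split = min(len(Z), M - len(Z))          # split all, or only M - k
        offsets = eps * rng.standard_normal((n_split, X.shape[1]))
        Z = np.vstack([Z, Z[:n_split] + offsets])  # step 3: perturbed copies
        Z, _ = kmeans2(X, Z, minit='matrix')       # step 5: refine by k-means
    return Z
```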

11 Tree-structured Clustering
Studied extensively in vector quantization from the perspective of data compression. Referred to as tree-structured vector quantization (TSVQ).
The algorithm:
1. Apply 2-centroid k-means to the entire data set.
2. The data are assigned to the 2 centroids.
3. For the data assigned to each centroid, apply 2-centroid k-means to them separately.
4. Repeat the above step.
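
A recursive sketch of the tree growth using 2-centroid k-means at each node (our own illustration; SciPy's kmeans2 stands in for the k-means step, and the stopping rule here is simply a depth limit):

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def tsvq(X, depth, min_size=2):
    """Grow a binary tree of centroids: each node stores its centroid and,
    if it was split, two subtrees built from that node's own data only."""
    node = {'centroid': X.mean(axis=0), 'children': None}
    if depth == 0 or len(X) < min_size:
        return node
    centroids, labels = kmeans2(X, 2, minit='++')      # 2-centroid k-means
    if len(np.unique(labels)) < 2:                     # degenerate split
        return node
    node['children'] = [tsvq(X[labels == j], depth - 1, min_size)
                        for j in (0, 1)]
    return node
```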

12 Compare with LBG:
- For LBG, after the initial prototypes are formed by splitting, k-means is applied to the overall data set. The final result is M prototypes.
- For TSVQ, data partitioned into different centroids at the same level will never affect each other in the future growth of the tree. The final result is a tree structure.
Fast searching:
- For k-means, to decide which cell a query x goes to, M (the number of prototypes) distances need to be computed.
- For tree-structured clustering, to decide which cell a query x goes to, only $\log_2(M)$ distances need to be computed.
Comments on tree-structured clustering:
- It is structurally more constrained, but on the other hand it provides more insight into the patterns in the data.
- It is greedy in the sense of optimizing at each step sequentially. An early bad decision will propagate its effect.
- It provides more algorithmic flexibility.

13 K-center Clustering
Let A be a set of n objects. Partition A into K sets $C_1, C_2, \dots, C_K$.
Cluster size of $C_k$: the least value D for which all points in $C_k$ are:
1. within distance D of each other, or
2. within distance D/2 of some point called the cluster center.
Let the cluster size of $C_k$ be $D_k$. The cluster size of partition S is $D(S) = \max_{k=1,\dots,K} D_k$.
Goal: given K, $\min_S D(S)$.

14 Comparison with k-means
Assume the distance between vectors is the squared Euclidean distance.
K-means:
$$\min_S \sum_{k=1}^K \sum_{i: x_i \in C_k} (x_i - \mu_k)^T (x_i - \mu_k),$$
where $\mu_k$ is the centroid of cluster $C_k$. In particular, $\mu_k = \frac{1}{N_k} \sum_{i: x_i \in C_k} x_i$.
K-center:
$$\min_S \max_{k=1,\dots,K} \max_{i: x_i \in C_k} (x_i - \mu_k)^T (x_i - \mu_k),$$
where $\mu_k$ is called the cluster center, but may not be the mean vector.
Another formulation of k-center:
$$\min_S \max_{k=1,\dots,K} \max_{i,j: x_i, x_j \in C_k} L(x_i, x_j),$$
where $L(x_i, x_j)$ denotes any distance between a pair of objects.

15 Figure 1: Comparison of k-means and k-center. (a): Original unclustered data. (b): Clustering by k-means. (c): Clustering by k-center. K-means focuses on the average distance; k-center focuses on the worst-case distance.

16 Greedy Algorithm
Choose a subset H from S consisting of K points that are farthest apart from each other. Each point $h_k \in H$ represents one cluster $C_k$. Point $x_i$ is partitioned into cluster $C_k$ if
$$L(x_i, h_k) = \min_{k'=1,\dots,K} L(x_i, h_{k'}).$$
Only the pairwise distances $L(x_i, x_j)$ for $x_i, x_j \in S$ are needed. Hence, $x_i$ can be a non-vector representation of the objects.
The greedy algorithm achieves an approximation factor of 2 as long as the distance measure L satisfies the triangle inequality. That is, if
$$D^* = \min_S \max_{k=1,\dots,K} \max_{i,j: x_i, x_j \in C_k} L(x_i, x_j),$$
then the greedy algorithm guarantees that $D \le 2 D^*$. The relation also holds if the cluster size is defined in the sense of centralized clustering.

17 Pseudo Code
H denotes the set of cluster representative objects $\{h_1, \dots, h_K\} \subset S$. Let cluster($x_i$) be the identity of the cluster that $x_i \in S$ belongs to. Let dist($x_i$) be the distance between $x_i$ and its closest cluster representative object:
$$\mathrm{dist}(x_i) = \min_{h_j \in H} L(x_i, h_j).$$
Pseudo code:
1. Randomly select an object $x_j$ from S; let $h_1 = x_j$, $H = \{h_1\}$.
2. for $j = 1$ to n:
   dist($x_j$) = $L(x_j, h_1)$
   cluster($x_j$) = 1
3. for $i = 2$ to K:
   $D = \max_{x_j \in S \setminus H} \mathrm{dist}(x_j)$
   choose $h_i \in S \setminus H$ s.t. dist($h_i$) = D
   $H = H \cup \{h_i\}$
   for $j = 1$ to n:
     if $L(x_j, h_i) \le \mathrm{dist}(x_j)$:
       dist($x_j$) = $L(x_j, h_i)$
       cluster($x_j$) = i
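
The pseudo code translates almost line for line into Python; this sketch (our own, with 0-based cluster labels) assumes a precomputed pairwise distance matrix L, so the objects need not be vectors:

```python
import numpy as np

def greedy_kcenter(L, K, seed=0):
    """Greedy K-center on a pairwise distance matrix L (n x n).
    Returns the indices of the K representatives and 0-based cluster labels."""
    n = len(L)
    rng = np.random.default_rng(seed)
    h = [int(rng.integers(n))]           # step 1: random first representative
    dist = L[h[0]].copy()                # step 2: distance to nearest rep
    cluster = np.zeros(n, dtype=int)
    for i in range(1, K):                # step 3
        new = int(np.argmax(dist))       # farthest object from current reps
        h.append(new)
        closer = L[new] < dist           # objects closer to the new rep
        dist[closer] = L[new][closer]
        cluster[closer] = i
    return h, cluster
```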

18 Algorithm Property
The running time of the algorithm is O(Kn).
Let the partition obtained by the greedy algorithm be S and the optimal partition be $S^*$. Let the cluster size of S be D and that of $S^*$ be $D^*$, where the cluster size is defined in the pairwise-distance sense. It can be proved that $D \le 2 D^*$.
We obtain the same approximation factor of 2 if the cluster size of a partition S is defined in the sense of centralized clustering.

19 Proof
1. Let $\delta = \max_{x_j \in S \setminus H} \min_{h_k \in H} L(x_j, h_k)$.
2. Let $h_{K+1}$ be the object in $S \setminus H$ s.t. $\min_{h_k \in H} L(h_{K+1}, h_k) = \delta$.
3. By definition, $L(h_{K+1}, h_k) \ge \delta$ for all $k = 1, \dots, K$.
4. Let $H_k = \{h_1, \dots, h_k\}$, $k = 1, 2, \dots, K$.
5. Consider the distance between any $h_i$ and $h_j$, $i < j \le K$ without loss of generality. According to the greedy algorithm,
$$\min_{h_k \in H_{j-1}} L(h_j, h_k) \ge \min_{h_k \in H_{j-1}} L(x_l, h_k)$$
for any $x_l \in S \setminus H_j$. Since $h_{K+1} \in S \setminus H$ and $S \setminus H \subset S \setminus H_j$,
$$L(h_j, h_i) \ge \min_{h_k \in H_{j-1}} L(h_j, h_k) \ge \min_{h_k \in H_{j-1}} L(h_{K+1}, h_k) \ge \min_{h_k \in H} L(h_{K+1}, h_k) = \delta.$$
6. We have shown that for any $i < j \le K+1$, $L(h_i, h_j) \ge \delta$.

20 Proof (continued)
7. Consider the partition $C_1^*, C_2^*, \dots, C_K^*$ formed by $S^*$. At least 2 of the K+1 objects $h_1, \dots, h_{K+1}$ will be covered by one cluster. Without loss of generality, assume $h_i$ and $h_j$ belong to the same cluster in $S^*$. Then $L(h_i, h_j) \le D^*$.
8. Since $L(h_i, h_j) \ge \delta$, we have $\delta \le D^*$.
9. Consider any two objects $x_\eta$ and $x_\zeta$ in any cluster represented by $h_k$. By the definition of $\delta$, $L(x_\eta, h_k) \le \delta$ and $L(x_\zeta, h_k) \le \delta$. Hence, by the triangle inequality,
$$L(x_\eta, x_\zeta) \le L(x_\eta, h_k) + L(x_\zeta, h_k) \le 2\delta.$$
Hence $D \le 2\delta \le 2 D^*$.

21 For centralized clustering:
Let $D = \max_{k=1,\dots,K} \max_{x_j \in C_k} L(x_j, h_k)$. Define $D^*$ similarly.
Step 7 in the proof modifies to $L(h_i, h_j) \le 2 D^*$ by the triangle inequality. Then
$$D = \delta \le L(h_i, h_j) \le 2 D^*.$$
A step-by-step illustration of the k-center clustering is provided next.

22 Figure 2: K-center clustering step by step. (a)-(c): 2-4 clusters.

23 Applications to Image Segmentation
Figure 3: (a) Original image, (b) Segmentation using K-center, (c) Segmentation using K-means with LBG initialization, (d) Segmentation by K-means using K-center for initialization.

24 Figure 4: Scatter plots of the LUV color components for the three clustering methods applied to the dog picture (axes: L, U, and V components). Only 2% of the original data are shown. (a)-(b): K-center, (c)-(d): K-means with LBG initialization, (e)-(f): K-means with K-center initialization.

25 Figure 5: Comparison of segmentation results. Left: original images. Middle: K-means with k-center initialization. Right: K-means with LBG initialization using the same number of clusters as in the k-center case.

26 Agglomerative Clustering
Generate clusters in a hierarchical way. Let the data set be $A = \{x_1, \dots, x_n\}$.
- Start with n clusters, each containing one data point.
- Merge the two clusters with minimum pairwise distance.
- Update the between-cluster distances.
- Iterate the merging procedure.
The clustering procedure can be visualized by a tree structure called a dendrogram.
How is the between-cluster distance defined? For clusters containing only one data point, the between-cluster distance is the between-object distance. For clusters containing multiple data points, the between-cluster distance is an agglomerative version of the between-object distances, e.g., the minimum or maximum between-object distance for objects in the two clusters. The agglomerative between-cluster distance can often be computed recursively.

27 Example Distances
Suppose clusters r and s are merged into a new cluster t, and let k be any other cluster. Denote the between-cluster distance by $D(\cdot, \cdot)$. How do we get $D(t, k)$ from $D(r, k)$ and $D(s, k)$?
- Single-link clustering: $D(t, k) = \min(D(r, k), D(s, k))$. $D(t, k)$ is the minimum distance between two objects in clusters t and k respectively.
- Complete-link clustering: $D(t, k) = \max(D(r, k), D(s, k))$. $D(t, k)$ is the maximum distance between two objects in clusters t and k respectively.
- Average linkage clustering:
  Unweighted case: $D(t, k) = \frac{n_r}{n_r + n_s} D(r, k) + \frac{n_s}{n_r + n_s} D(s, k)$.
  Weighted case: $D(t, k) = \frac{1}{2} D(r, k) + \frac{1}{2} D(s, k)$.
  $D(t, k)$ is the average distance between two objects in clusters t and k respectively.
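
These recursions can be written as a single update function; a minimal sketch (our own naming; n_r and n_s are the sizes of the merged clusters r and s):

```python
def merged_distance(D_rk, D_sk, n_r, n_s, method='single'):
    """Distance from the merged cluster t = r U s to another cluster k,
    computed only from D(r,k), D(s,k) and the cluster sizes."""
    if method == 'single':
        return min(D_rk, D_sk)
    if method == 'complete':
        return max(D_rk, D_sk)
    if method == 'average':              # unweighted average linkage
        return (n_r * D_rk + n_s * D_sk) / (n_r + n_s)
    if method == 'average_weighted':     # weighted average linkage
        return 0.5 * D_rk + 0.5 * D_sk
    raise ValueError(method)
```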

28 For the unweighted case, the number of elements in each cluster is taken into consideration, while in the weighted case each cluster is weighted equally; so objects in smaller clusters are weighted more heavily than those in larger clusters.
- Centroid clustering:
  Unweighted case: $D(t, k) = \frac{n_r}{n_r + n_s} D(r, k) + \frac{n_s}{n_r + n_s} D(s, k) - \frac{n_r n_s}{(n_r + n_s)^2} D(r, s)$.
  Weighted case: $D(t, k) = \frac{1}{2} D(r, k) + \frac{1}{2} D(s, k) - \frac{1}{4} D(r, s)$.
  A centroid is computed for each cluster and the distance between clusters is given by the distance between their respective centroids.
- Ward's clustering:
  $D(t, k) = \frac{n_r + n_k}{n_r + n_s + n_k} D(r, k) + \frac{n_s + n_k}{n_r + n_s + n_k} D(s, k) - \frac{n_k}{n_r + n_s + n_k} D(r, s)$.
  Merge the two clusters for which the change in the variance of the clustering is minimized. The variance of a cluster is defined as the sum of squared errors between each object in the cluster and the centroid of the cluster.

29 The dendrogram generated by single-link clustering tends to look like a chain. Clusters generated by complete-link may not be well separated. Other methods are intermediates between the two.

30 Pseudo Code
1. Begin with n clusters, each containing one object. Number the clusters 1 through n.
2. Compute the between-cluster distance D(r, s) as the between-object distance of the two objects in r and s respectively, for r, s = 1, 2, ..., n. Let the square matrix D = (D(r, s)).
3. Find the most similar pair of clusters r, s, that is, the pair for which D(r, s) is minimum among all the pairwise distances.
4. Merge r and s into a new cluster t. Compute the between-cluster distance D(t, k) for all k ≠ r, s. Delete the rows and columns corresponding to r and s in D. Add a new row and column in D corresponding to cluster t.
5. Repeat Step 3 a total of n - 1 times until there is only one cluster left.
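
In practice the whole procedure is available in SciPy; a usage sketch on made-up toy data (array shapes are assumptions of ours):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

X = np.random.default_rng(0).normal(size=(100, 2))         # toy data
for method in ('single', 'complete', 'average', 'ward'):
    Z = linkage(X, method=method)                           # (n-1) x 4 merge history
    labels = fcluster(Z, t=9, criterion='maxclust')         # cut into 9 clusters
    print(method, np.bincount(labels)[1:])                  # cluster sizes
```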


32 Figure 6: Agglomerative clustering of a data set into 9 clusters. (a): Single-link, (b): Complete-link, (c): Average linkage, (d): Ward's clustering.

33 Hipparcos Data
Clustering based on log L and B-V.
Figure 7: Clustering of the Hipparcos data (axes: B-V versus log L). (a): K-center, #clusters=4. (b): K-means, #clusters=4. (c): EM, #clusters=2. (d): EM, #clusters=3.

34 Figure 8: Clustering of the Hipparcos data by agglomerative methods (axes: B-V versus log L). (a): Single linkage, (b): Complete linkage, (c): Average linkage, (d): Ward's linkage.

35 Figure 9: Clustering of the Hipparcos data by agglomerative methods, each with 4 clusters (axes: B-V versus log L). (a): Single linkage, (b): Complete linkage, (c): Average linkage, (d): Ward's linkage.

36 Mixture Model-based Clustering
Each cluster is mathematically represented by a parametric distribution. Examples: Gaussian (continuous), Poisson (discrete). The entire data set is modeled by a mixture of these distributions. An individual distribution used to model a specific cluster is often referred to as a component distribution.
Suppose there are K components (clusters). Each component is a Gaussian distribution parameterized by $\mu_k$, $\Sigma_k$. Denote the data by X, $X \in R^d$. The density of component k is
$$f_k(x) = \phi(x \mid \mu_k, \Sigma_k) = \frac{1}{\sqrt{(2\pi)^d |\Sigma_k|}} \exp\left( -\frac{(x - \mu_k)^t \Sigma_k^{-1} (x - \mu_k)}{2} \right).$$
The prior probability (weight) of component k is $a_k$. The mixture density is:
$$f(x) = \sum_{k=1}^K a_k f_k(x) = \sum_{k=1}^K a_k \phi(x \mid \mu_k, \Sigma_k).$$
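
A sketch of evaluating this mixture density at a set of points, assuming SciPy's multivariate normal (the function name is ours):

```python
import numpy as np
from scipy.stats import multivariate_normal

def mixture_density(X, a, mus, Sigmas):
    """f(x) = sum_k a_k * phi(x | mu_k, Sigma_k), evaluated at each row of X."""
    f = np.zeros(len(X))
    for a_k, mu_k, Sigma_k in zip(a, mus, Sigmas):
        f += a_k * multivariate_normal.pdf(X, mean=mu_k, cov=Sigma_k)
    return f
```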

37 Advantages
A mixture model with high likelihood tends to have the following traits:
- Component distributions have high peaks (data in one cluster are tight).
- The mixture model covers the data well (dominant patterns in the data are captured by component distributions).
Advantages:
- Well-studied statistical inference techniques available.
- Flexibility in choosing the component distributions.
- Obtain a density estimate for each cluster.
- A soft classification is available.
(Figure: density function of two clusters.)

38 EM Algorithm
The parameters are estimated by the maximum likelihood (ML) criterion using the EM algorithm. The EM algorithm provides an iterative computation of maximum likelihood estimation when the observed data are incomplete.
Incompleteness can be conceptual. We need to estimate the distribution of X, in sample space $\mathcal{X}$, but we can only observe X indirectly through Y, in sample space $\mathcal{Y}$. In many cases, there is a mapping $x \to y(x)$ from $\mathcal{X}$ to $\mathcal{Y}$, and x is only known to lie in a subset of $\mathcal{X}$, denoted by $\mathcal{X}(y)$, which is determined by the equation $y = y(x)$.
The distribution of X is parameterized by a family of distributions $f(x \mid \theta)$, with parameters $\theta \in \Omega$, on x. The distribution of Y, $g(y \mid \theta)$, is
$$g(y \mid \theta) = \int_{\mathcal{X}(y)} f(x \mid \theta) \, dx.$$
The EM algorithm aims at finding a $\theta$ that maximizes $g(y \mid \theta)$ given an observed y.
Introduce the function
$$Q(\theta' \mid \theta) = E\left( \log f(x \mid \theta') \mid y, \theta \right),$$

39 that is, the expected value of $\log f(x \mid \theta')$ according to the conditional distribution of x given y and parameter $\theta$. The expectation is assumed to exist for all pairs $(\theta', \theta)$. In particular, it is assumed that $f(x \mid \theta) > 0$ for $\theta \in \Omega$.
EM Iteration:
- E-step: Compute $Q(\theta \mid \theta^{(p)})$.
- M-step: Choose $\theta^{(p+1)}$ to be a value of $\theta \in \Omega$ that maximizes $Q(\theta \mid \theta^{(p)})$.

40 EM for the Mixture of Normals
Observed data (incomplete): $\{x_1, x_2, \dots, x_n\}$, where n is the sample size. Denote all the samples collectively by x.
Complete data: $\{(x_1, y_1), (x_2, y_2), \dots, (x_n, y_n)\}$, where $y_i$ is the cluster (component) identity of sample $x_i$.
The collection of parameters, $\theta$, includes: $a_k$, $\mu_k$, $\Sigma_k$, $k = 1, 2, \dots, K$.
The likelihood function is:
$$L(x \mid \theta) = \sum_{i=1}^n \log\left( \sum_{k=1}^K a_k \phi(x_i \mid \mu_k, \Sigma_k) \right).$$
$L(x \mid \theta)$ is the objective function of the EM algorithm (to maximize). The numerical difficulty comes from the sum inside the log.

41 The Q function is:
$$Q(\theta' \mid \theta) = E\left[ \log \prod_{i=1}^n a'_{y_i} \phi(x_i \mid \mu'_{y_i}, \Sigma'_{y_i}) \,\Big|\, x, \theta \right]
= E\left[ \sum_{i=1}^n \left( \log(a'_{y_i}) + \log \phi(x_i \mid \mu'_{y_i}, \Sigma'_{y_i}) \right) \Big|\, x, \theta \right]
= \sum_{i=1}^n E\left[ \log(a'_{y_i}) + \log \phi(x_i \mid \mu'_{y_i}, \Sigma'_{y_i}) \mid x_i, \theta \right].$$
The last equality comes from the fact that the samples are independent. Note that when $x_i$ is given, only $y_i$ is random in the complete data $(x_i, y_i)$. Also, $y_i$ only takes a finite number of values, i.e., cluster identities 1 to K. The distribution of Y given $X = x_i$ is the posterior probability of Y given X.
Denote the posterior probabilities of $Y = k$, $k = 1, \dots, K$, given $x_i$ by $p_{i,k}$. By the Bayes formula, the posterior probabilities satisfy
$$p_{i,k} \propto a_k \phi(x_i \mid \mu_k, \Sigma_k), \qquad \sum_{k=1}^K p_{i,k} = 1.$$

42 Then each summand in $Q(\theta' \mid \theta)$ is
$$E\left[ \log(a'_{y_i}) + \log \phi(x_i \mid \mu'_{y_i}, \Sigma'_{y_i}) \mid x_i, \theta \right]
= \sum_{k=1}^K p_{i,k} \log a'_k + \sum_{k=1}^K p_{i,k} \log \phi(x_i \mid \mu'_k, \Sigma'_k).$$
Note that we cannot see the direct effect of $\theta$ in the above equation, but the $p_{i,k}$ are computed using $\theta$, i.e., the current parameters; $\theta'$ includes the updated parameters. We then have:
$$Q(\theta' \mid \theta) = \sum_{i=1}^n \sum_{k=1}^K p_{i,k} \log a'_k + \sum_{i=1}^n \sum_{k=1}^K p_{i,k} \log \phi(x_i \mid \mu'_k, \Sigma'_k).$$
Note that the prior probabilities $a'_k$ and the parameters of the Gaussian components $\mu'_k$, $\Sigma'_k$ can be optimized separately.

43 The $a'_k$ are subject to $\sum_{k=1}^K a'_k = 1$. Basic optimization theory shows that they are optimized by
$$a'_k = \frac{\sum_{i=1}^n p_{i,k}}{n}.$$
The optimization of $\mu'_k$ and $\Sigma'_k$ is simply a maximum likelihood estimation of the parameters using samples $x_i$ with weights $p_{i,k}$. Basic optimization techniques also lead to
$$\mu'_k = \frac{\sum_{i=1}^n p_{i,k} x_i}{\sum_{i=1}^n p_{i,k}}, \qquad
\Sigma'_k = \frac{\sum_{i=1}^n p_{i,k} (x_i - \mu'_k)(x_i - \mu'_k)^t}{\sum_{i=1}^n p_{i,k}}.$$
After every iteration, the likelihood function L is guaranteed to increase (may not strictly). We have derived the EM algorithm for a mixture of Gaussians.

44 EM Algorithm for the Mixture of Gaussians
Parameters estimated at the pth iteration are marked by a superscript (p).
1. Initialize parameters.
2. E-step: Compute the posterior probabilities for all $i = 1, \dots, n$, $k = 1, \dots, K$:
$$p_{i,k} = \frac{a_k^{(p)} \phi(x_i \mid \mu_k^{(p)}, \Sigma_k^{(p)})}{\sum_{k'=1}^K a_{k'}^{(p)} \phi(x_i \mid \mu_{k'}^{(p)}, \Sigma_{k'}^{(p)})}.$$
3. M-step:
$$a_k^{(p+1)} = \frac{\sum_{i=1}^n p_{i,k}}{n}, \qquad
\mu_k^{(p+1)} = \frac{\sum_{i=1}^n p_{i,k} x_i}{\sum_{i=1}^n p_{i,k}}, \qquad
\Sigma_k^{(p+1)} = \frac{\sum_{i=1}^n p_{i,k} (x_i - \mu_k^{(p+1)})(x_i - \mu_k^{(p+1)})^t}{\sum_{i=1}^n p_{i,k}}.$$
4. Repeat steps 2 and 3 until convergence.
Comment: for mixtures of other distributions, the EM algorithm is very similar. The E-step involves computing the posterior probabilities; only the particular distribution $\phi$ needs to be changed. The M-step always involves parameter optimization; the formulas differ according to the distributions.
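
A compact NumPy sketch of these E- and M-steps (our own illustration; the random initialization and the small ridge added to the covariances for numerical stability are not part of the slides):

```python
import numpy as np
from scipy.stats import multivariate_normal

def em_gmm(X, K, n_iter=100, seed=0, ridge=1e-6):
    n, d = X.shape
    rng = np.random.default_rng(seed)
    a = np.full(K, 1.0 / K)                        # mixing weights a_k
    mu = X[rng.choice(n, K, replace=False)]        # initial means
    Sigma = np.array([np.cov(X.T) + ridge * np.eye(d) for _ in range(K)])
    for _ in range(n_iter):
        # E-step: posterior probabilities p_{i,k}
        dens = np.column_stack(
            [a[k] * multivariate_normal.pdf(X, mu[k], Sigma[k]) for k in range(K)])
        p = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted maximum likelihood updates
        Nk = p.sum(axis=0)
        a = Nk / n
        mu = (p.T @ X) / Nk[:, None]
        for k in range(K):
            Xc = X - mu[k]
            Sigma[k] = (p[:, k, None] * Xc).T @ Xc / Nk[k] + ridge * np.eye(d)
    return a, mu, Sigma, p
```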

45 Computation Issues
If a different $\Sigma_k$ is allowed for each component, the likelihood function is not bounded, so the global optimum is meaningless. (Don't overdo it!)
How to initialize? Example: apply k-means first. Initialize $\mu_k$ and $\Sigma_k$ using all the samples classified to cluster k. Initialize $a_k$ by the proportion of data assigned to cluster k by k-means.
In practice, we may want to reduce model complexity by putting constraints on the parameters, for instance, assuming equal priors and identical covariance matrices for all the components.
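
A sketch of the k-means-based initialization mentioned above (our own illustration; any k-means routine would do, here SciPy's kmeans2, with a small ridge and a fallback for tiny clusters added by us):

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def init_from_kmeans(X, K, ridge=1e-6):
    """Initialize a_k, mu_k, Sigma_k from a k-means partition of X."""
    n, d = X.shape
    mu, labels = kmeans2(X, K, minit='++')
    a = np.bincount(labels, minlength=K) / n
    Sigma = np.array([np.cov(X[labels == k].T) + ridge * np.eye(d)
                      if np.sum(labels == k) > 1 else np.eye(d)
                      for k in range(K)])
    return a, mu, Sigma
```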

46 Examples
The heart disease data set is taken from the UCI machine learning database repository. There are 297 cases (samples) in the data set, of which 137 have heart disease. Each sample contains 13 quantitative variables, including cholesterol, max heart rate, etc.
We remove the mean of each variable and normalize it to yield unit variance. The data are then projected onto the plane spanned by the two most dominant principal component directions, and a two-component Gaussian mixture is fit.

47 Figure 10: The heart disease data set and the estimated cluster densities. Top: the scatter plot of the data. Bottom: the contour plot of the pdf estimated using a single-layer mixture of two normals. The thick lines are the boundaries between the two clusters based on the estimated pdfs of the individual clusters.

48 Classification Likelihood
The likelihood
$$L(x \mid \theta) = \sum_{i=1}^n \log\left( \sum_{k=1}^K a_k \phi(x_i \mid \mu_k, \Sigma_k) \right)$$
maximized by the EM algorithm is sometimes called the mixture likelihood. Maximization can also be applied to the classification likelihood. Denote the collection of cluster identities of all the samples by $y = \{y_1, \dots, y_n\}$:
$$L(x \mid \theta, y) = \sum_{i=1}^n \log\left( a_{y_i} \phi(x_i \mid \mu_{y_i}, \Sigma_{y_i}) \right).$$
The cluster identities $y_i$, $i = 1, \dots, n$, are treated as parameters together with $\theta$ and are part of the estimation. To maximize L, the EM algorithm can be modified to yield an ascending algorithm. This modified version is called Classification EM (CEM).

49 Classification EM
A classification step is inserted between the E-step and the M-step.
1. Initialize parameters.
2. E-step: Compute the posterior probabilities for all $i = 1, \dots, n$, $k = 1, \dots, K$:
$$p_{i,k} = \frac{a_k^{(p)} \phi(x_i \mid \mu_k^{(p)}, \Sigma_k^{(p)})}{\sum_{k'=1}^K a_{k'}^{(p)} \phi(x_i \mid \mu_{k'}^{(p)}, \Sigma_{k'}^{(p)})}.$$
3. Classification:
$$y_i^{(p+1)} = \arg\max_k p_{i,k}.$$
Or equivalently, let $\hat{p}_{i,k} = 1$ if $k = \arg\max_{k'} p_{i,k'}$ and 0 otherwise.
4. M-step:
$$a_k^{(p+1)} = \frac{\sum_{i=1}^n \hat{p}_{i,k}}{n} = \frac{\sum_{i=1}^n I(y_i^{(p+1)} = k)}{n}, \qquad
\mu_k^{(p+1)} = \frac{\sum_{i=1}^n \hat{p}_{i,k} x_i}{\sum_{i=1}^n \hat{p}_{i,k}} = \frac{\sum_{i=1}^n I(y_i^{(p+1)} = k) x_i}{\sum_{i=1}^n I(y_i^{(p+1)} = k)},$$

50 $$\Sigma_k^{(p+1)} = \frac{\sum_{i=1}^n \hat{p}_{i,k} (x_i - \mu_k^{(p+1)})(x_i - \mu_k^{(p+1)})^t}{\sum_{i=1}^n \hat{p}_{i,k}} = \frac{\sum_{i=1}^n I(y_i^{(p+1)} = k)(x_i - \mu_k^{(p+1)})(x_i - \mu_k^{(p+1)})^t}{\sum_{i=1}^n I(y_i^{(p+1)} = k)}.$$
5. Repeat steps 2, 3, 4 until convergence.
Comment: CEM tends to underestimate the variances. It usually converges much faster than EM. For the purpose of clustering, it is generally believed that it performs similarly to EM. If we assume equal priors $a_k$ and the covariance matrices $\Sigma_k$ are identical and are a scalar matrix, CEM is exactly k-means. (Exercise)
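
Relative to the EM sketch given earlier, only one extra step is needed: the posteriors are replaced by hard 0/1 indicators before the M-step. A minimal sketch of that classification step (our own naming):

```python
import numpy as np

def harden(p):
    """Turn posteriors p (n x K) into 0/1 indicators, assigning each sample
    to its most probable component; this is the CEM classification step."""
    p_hat = np.zeros_like(p)
    p_hat[np.arange(len(p)), p.argmax(axis=1)] = 1.0
    return p_hat
```

Inserting `p = harden(p)` right after the E-step of the EM sketch turns it into CEM; with equal priors and identical scalar covariance matrices this reduces to k-means, as stated on the slide.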
