Unsupervised Learning

Outline
Unsupervised learning: basic concepts
K-means algorithm
Representation of clusters
Hierarchical clustering
Distance functions
Which clustering algorithm to use?

Supervised learning vs. unsupervised learning
Supervised learning: discover patterns in the data that relate data attributes to a target (class) attribute. These patterns are then used to predict the values of the target attribute in future data instances.
Unsupervised learning: the data have no target attribute. We want to explore the data to find some intrinsic structure in them.

Clustering
Clustering is a technique for finding similarity groups in data, called clusters. That is, it groups data instances that are similar to (near) each other into one cluster, and data instances that are very different from (far away from) each other into different clusters.
Clustering is called an unsupervised learning task because no class values denoting an a priori grouping of the data instances are given, as is the case in supervised learning.
For historical reasons, clustering is often considered synonymous with unsupervised learning. In fact, association rule mining is also unsupervised.

An illustration
The example data set has three natural groups of data points, i.e., 3 natural clusters.

What is clustering for? Some real-life examples:
Example 1: Group people of similar sizes together to make small, medium and large T-shirts. Tailor-made for each person is too expensive; one-size-fits-all does not fit all.
Example 2: In marketing, segment customers according to their similarities, to do targeted marketing.

What is clustering for? (cont.)
Example 3: Given a collection of text documents, we want to organize them according to their content similarities, to produce a topic hierarchy.
In fact, clustering is one of the most utilized data mining techniques. It has a long history and is used in almost every field, e.g., medicine, sociology, biology, archeology, marketing, insurance, libraries, etc. In recent years, due to the rapid increase of online documents, text clustering has become important.

Aspects of clustering
A clustering algorithm: partitional clustering or hierarchical clustering.
A distance (similarity, or dissimilarity) function.
Clustering quality: inter-cluster distance maximized, intra-cluster distance minimized.
The quality of a clustering result depends on the algorithm, the distance function, and the application.

K-means clustering
K-means is a partitional clustering algorithm. Let the set of data points (or instances) D be {x_1, x_2, ..., x_n}, where x_i = (x_{i1}, x_{i2}, ..., x_{ir}) is a vector in a real-valued space X ⊆ R^r, and r is the number of attributes (dimensions) in the data. The k-means algorithm partitions the given data into k clusters. Each cluster has a cluster center, called the centroid. k is specified by the user.

K-means algorithm
Given k, the k-means algorithm works as follows:
1) Randomly choose k data points (seeds) to be the initial centroids (cluster centers).
2) Assign each data point to the closest centroid.
3) Re-compute the centroids using the current cluster memberships.
4) If a convergence criterion is not met, go to 2).

K-means algorithm (cont.)
Stopping/convergence criterion:
1. no (or minimum) re-assignment of data points to different clusters,
2. no (or minimum) change of centroids, or
3. minimum decrease in the sum of squared error (SSE),

SSE = \sum_{j=1}^{k} \sum_{x \in C_j} dist(x, m_j)^2    (1)

where C_j is the jth cluster, m_j is the centroid of cluster C_j (the mean vector of all the data points in C_j), and dist(x, m_j) is the distance between data point x and centroid m_j.
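A minimal sketch of the k-means loop described above, written with NumPy. The function name kmeans and its parameters are illustrative assumptions, not part of the original slides; the convergence test used here is criterion 1 (no re-assignments).

```python
import numpy as np

def kmeans(X, k, max_iter=100, seed=0):
    """Minimal k-means sketch: X is an (n, r) array of data points."""
    rng = np.random.default_rng(seed)
    # 1) Randomly choose k data points (seeds) as the initial centroids.
    centroids = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    labels = None
    for _ in range(max_iter):
        # 2) Assign each data point to the closest centroid (Euclidean distance).
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        new_labels = dists.argmin(axis=1)
        # 4) Convergence criterion 1: stop when no point changes cluster.
        if labels is not None and np.array_equal(new_labels, labels):
            break
        labels = new_labels
        # 3) Re-compute each centroid as the mean of its current members.
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    sse = ((X - centroids[labels]) ** 2).sum()   # SSE as in equation (1)
    return centroids, labels, sse
```

For example, kmeans(X, k=3) on a data set with three natural groups should return three centroids close to the group means, together with the cluster labels and the final SSE.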

An example: illustration of the k-means algorithm
The slides "An example" and "An example (cont.)" illustrate the k-means algorithm on a small set of 2-d points (data set datak1.txt), showing the data points for clustering, the initial cluster centers, the new cluster centers after each iteration, the final results, and an example distance function. (The scatter plots and coordinate listings are not reproduced here.)

A disk version of k-means
K-means can be implemented with data on disk. In each iteration, it scans the data once, since the centroids can be computed incrementally. It can therefore be used to cluster large data sets that do not fit in main memory. We need to control the number of iterations; in practice, a limit is set (< 50). It is not the best method; there are other scale-up algorithms, e.g., BIRCH.
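To illustrate why one scan per iteration is enough, here is a hedged sketch (not from the slides) of a single k-means pass over data that arrives in chunks, e.g., read block by block from disk. Only per-cluster sums and counts are kept in memory; the function and parameter names are assumptions for illustration.

```python
import numpy as np

def kmeans_disk_pass(chunks, centroids):
    """One k-means iteration over data streamed in chunks (e.g., from disk).

    'chunks' is any iterable of (m, r) arrays; 'centroids' is a (k, r) array.
    Returns the re-computed centroids without holding the full data set in
    memory, because centroids are accumulated incrementally as sums/counts.
    """
    k, r = centroids.shape
    sums = np.zeros((k, r))
    counts = np.zeros(k, dtype=int)
    for chunk in chunks:
        d = np.linalg.norm(chunk[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            members = chunk[labels == j]
            sums[j] += members.sum(axis=0)
            counts[j] += len(members)
    new_centroids = centroids.copy()
    nonempty = counts > 0
    # Keep the old centroid if a cluster received no points in this pass.
    new_centroids[nonempty] = sums[nonempty] / counts[nonempty, None]
    return new_centroids
```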

Strengths of k-means
Simple: easy to understand and to implement.
Efficient: the time complexity is O(tkn), where n is the number of data points, k is the number of clusters, and t is the number of iterations. Since both k and t are small, k-means is considered a linear algorithm.
K-means is the most popular clustering algorithm.
Note that it terminates at a local optimum if SSE is used; the global optimum is hard to find due to complexity.

Weaknesses of k-means
The algorithm is only applicable if the mean is defined. For categorical data, k-modes is used, where the centroid is represented by the most frequent values.
The user needs to specify k.
The algorithm is sensitive to outliers. Outliers are data points that are very far away from other data points. Outliers could be errors in the data recording or special data points with very different values.

Weaknesses of k-means: problems with outliers (illustrated with a figure on the slide).

Weaknesses of k-means: dealing with outliers
One method is to remove, during the clustering process, those data points that are much further away from the centroids than other data points. To be safe, we may want to monitor these possible outliers over a few iterations before deciding to remove them.
Another method is to perform random sampling. Since sampling chooses only a small subset of the data points, the chance of selecting an outlier is very small. The remaining data points are then assigned to the clusters by distance or similarity comparison, or by classification.

Weaknesses of k-means (cont.)
The algorithm is sensitive to initial seeds.
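The first outlier-handling method above can be made concrete with a small sketch. This is an illustration only: the function name flag_outliers and the threshold factor are assumptions, not something specified on the slides.

```python
import numpy as np

def flag_outliers(X, centroids, labels, factor=3.0):
    """Flag points much farther from their centroid than is typical for their cluster.

    A point is flagged when its distance to its centroid exceeds
    factor * (mean distance within its cluster); 'factor' is an illustrative choice.
    Flagged points would be monitored over a few iterations before removal.
    """
    d = np.linalg.norm(X - centroids[labels], axis=1)
    flags = np.zeros(len(X), dtype=bool)
    for j in np.unique(labels):
        in_j = labels == j
        flags[in_j] = d[in_j] > factor * d[in_j].mean()
    return flags
```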

Weaknesses of k-means (cont.)
If we use different seeds, we can get good results. There are methods to help choose good seeds (a simple one is sketched after these slides).

Weaknesses of k-means (cont.)
The k-means algorithm is not suitable for discovering clusters that are not hyper-ellipsoids (or hyper-spheres).

K-means summary
Despite its weaknesses, k-means is still the most popular algorithm due to its simplicity and efficiency, and because other clustering algorithms have their own lists of weaknesses. There is no clear evidence that any other clustering algorithm performs better in general, although some may be more suitable for specific types of data or applications. Comparing different clustering algorithms is a difficult task. No one knows the correct clusters!

K-means assignment
Cluster (1) the data set of Slide 15 and (2) the Iris data by using k-means.

Common ways to represent clusters
Use the centroid of each cluster to represent the cluster; also compute the radius and standard deviation of the cluster to determine its spread in each dimension. The centroid representation alone works well if the clusters are of hyper-spherical shape. If clusters are elongated or of other shapes, centroids are not sufficient.
Using a classification model: all the data points in a cluster are regarded as having the same class label, e.g., the cluster ID; then run a supervised learning algorithm on the data to find a classification model.
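One common way to reduce the sensitivity to initial seeds, referenced above, is simply to restart k-means with several random seeds and keep the run with the lowest SSE. This is a hedged sketch building on the kmeans() function sketched earlier; the slides themselves do not name a specific seed-selection method.

```python
def kmeans_best_of(X, k, n_restarts=10):
    """Run the kmeans() sketch several times with different seeds and keep
    the clustering with the lowest SSE."""
    best = None
    for seed in range(n_restarts):
        centroids, labels, sse = kmeans(X, k, seed=seed)
        if best is None or sse < best[2]:
            best = (centroids, labels, sse)
    return best  # (centroids, labels, sse) of the best run
```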

Use frequent values to represent a cluster
This method is mainly for clustering of categorical data (e.g., k-modes clustering). It is the main method used in text clustering, where a small set of frequent words in each cluster is selected to represent the cluster.

Clusters of arbitrary shapes
Hyper-elliptical and hyper-spherical clusters are usually easy to represent, using their centroids together with spreads. Irregularly shaped clusters are hard to represent; they may not be useful in some applications. Using centroids is not suitable in general (upper figure); K-means clusters may be more useful (lower figure), e.g., for making T-shirt sizes.

Hierarchical clustering
Produces a nested sequence of clusters, a tree, also called a dendrogram.
Types of hierarchical clustering:
Agglomerative (bottom-up) clustering builds the dendrogram (tree) from the bottom level, merges the most similar (or nearest) pair of clusters at each step, and stops when all the data points are merged into a single cluster (i.e., the root cluster).
Divisive (top-down) clustering starts with all data points in one cluster, the root. It splits the root into a set of child clusters, and each child cluster is recursively divided further. It stops when only singleton clusters of individual data points remain, i.e., each cluster contains only a single point.

An example: working of the algorithm
Agglomerative clustering is more popular than divisive methods. At the beginning, each data point forms a cluster (also called a node). Merge the nodes/clusters that have the least distance, and go on merging; eventually all nodes belong to one cluster. (A small code sketch of this bottom-up process follows.)
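A naive sketch of the agglomerative process just described, using the single-link distance between clusters (other linkage choices are discussed on later slides). The function name and the O(n^3) implementation are illustrative assumptions; practical implementations use more efficient data structures.

```python
import numpy as np

def agglomerative(X, num_clusters=1):
    """Bottom-up clustering: start with one cluster per point and repeatedly
    merge the two closest clusters until 'num_clusters' remain (1 = the root)."""
    clusters = [[i] for i in range(len(X))]
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # pairwise point distances
    merges = []
    while len(clusters) > num_clusters:
        best = (None, None, np.inf)
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = D[np.ix_(clusters[a], clusters[b])].min()  # single-link distance
                if d < best[2]:
                    best = (a, b, d)
        a, b, d = best
        merges.append((list(clusters[a]), list(clusters[b]), d))  # one dendrogram step
        clusters[a] += clusters[b]
        del clusters[b]
    return clusters, merges  # clusters: lists of point indices; merges: the nested sequence
```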

Nearest Neighbor Algorithm
The Nearest Neighbor Algorithm is an agglomerative approach (bottom-up). It starts with n nodes (n is the size of our sample), merges the most similar nodes at each step, and stops when the desired number of clusters is reached.
(A sequence of slides, adapted from http://www.stat.unc.edu/postscript/papers/marron/stat31fda/rimaizempresentation.ppt, illustrates nearest-neighbor merging level by level, from k = 7 clusters down to k = 1 cluster; the figures are not reproduced here.)

Hierarchical clustering: agglomerative clustering algorithm
The key steps are similarity computation and clustering (merging): calculate the similarity between all possible combinations of two profiles; the two most similar clusters are grouped together to form a new cluster; then calculate the similarity between the new cluster and all remaining clusters, and repeat.

Measuring the distance of two clusters
There are a few ways to measure the distance between two clusters, resulting in different variations of the algorithm: single link, complete link, average link, and centroids. (A code sketch of these cluster distances follows after the linkage slides below.)

Single link method
The distance between two clusters is the distance between the two closest data points in the two clusters, one data point from each cluster. It can find arbitrarily shaped clusters, but it may cause an undesirable chain effect through noisy points (in the slide's figure, two natural clusters end up split in two).

Single linkage (illustration with clusters C1 and C2)
Dissimilarity between two clusters = minimum dissimilarity between the members of the two clusters. Tends to generate long chains.

Complete link method
The distance between two clusters is the distance between the two furthest data points in the two clusters. It is sensitive to outliers because they are far away.

Complete linkage (illustration with clusters C1 and C2)
Dissimilarity between two clusters = maximum dissimilarity between the members of the two clusters. Tends to generate clumps.

Average link and centroid methods
Average link: a compromise between the sensitivity of complete-link clustering to outliers and the tendency of single-link clustering to form long chains that do not correspond to the intuitive notion of clusters as compact, spherical objects. In this method, the distance between two clusters is the average of all pair-wise distances between the data points in the two clusters.
Centroid method: in this method, the distance between two clusters is the distance between their centroids.

Average (group) linkage (illustration with clusters C1 and C2): dissimilarity between two clusters = averaged distance over all pairs of objects (one from each cluster).
Centroid linkage (illustration with clusters C1 and C2): dissimilarity between two clusters = distance between the two cluster means.
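The four cluster-to-cluster distances above can be written down directly. This is a hedged sketch with illustrative function names; each function takes two clusters given as arrays of points.

```python
import numpy as np

def pairwise(A, B):
    """All pairwise Euclidean distances between points of clusters A and B."""
    return np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)

def single_link(A, B):   return pairwise(A, B).min()    # closest pair of points
def complete_link(A, B): return pairwise(A, B).max()    # furthest pair of points
def average_link(A, B):  return pairwise(A, B).mean()   # mean over all pairs
def centroid_link(A, B): return np.linalg.norm(A.mean(axis=0) - B.mean(axis=0))
```

In practice one would typically use a library implementation; for example, with SciPy, scipy.cluster.hierarchy.linkage(X, method='single') (or 'complete', 'average', 'centroid') builds the dendrogram, and fcluster(Z, t=3, criterion='maxclust') cuts it into 3 clusters.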

The complexity
All the hierarchical algorithms are at least O(n^2), where n is the number of data points. Single link can be done in O(n^2); complete and average links can be done in O(n^2 log n). Due to this complexity, they are hard to use for large data sets. Possible remedies: sampling, and scale-up methods (e.g., BIRCH).

Distance functions
Distance functions are key to clustering; "similarity" and "dissimilarity" are also commonly used terms. There are numerous distance functions for different types of data (numeric data, nominal data) and for different specific applications.

Distance functions for numeric attributes
The most commonly used functions are the Euclidean distance and the Manhattan (city block) distance. We denote the distance between two data points (vectors) x_i and x_j by dist(x_i, x_j). Both are special cases of the Minkowski distance, where h is a positive integer:

dist(x_i, x_j) = \left( |x_{i1} - x_{j1}|^h + |x_{i2} - x_{j2}|^h + \cdots + |x_{ir} - x_{jr}|^h \right)^{1/h}

Here h = 2 gives the Euclidean distance and h = 1 the Manhattan distance. (A small code sketch of these distance functions is given at the end of this section.)

How to choose a clustering algorithm
Clustering research has a long history, and a vast collection of algorithms is available; we introduced only several main algorithms. Choosing the best algorithm is a challenge. Every algorithm has limitations and works well with certain data distributions, and it is very hard, if not impossible, to know what distribution the application data follow. The data may not fully follow any ideal structure or distribution required by the algorithms. One also needs to decide how to standardize the data, how to choose a suitable distance function, and how to select other parameter values.

Choosing a clustering algorithm (cont.)
Due to these complexities, the common practice is to run several algorithms using different distance functions and parameter settings, and then carefully analyze and compare the results. The interpretation of the results must be based on insight into the meaning of the original data, together with knowledge of the algorithms used. Clustering is highly application dependent and to a certain extent subjective (personal preferences).

Summary
Clustering has a long history and is still active. There are a huge number of clustering algorithms, and more are still coming every year. We introduced only several main algorithms; there are many others, e.g., density-based algorithms, sub-space clustering, scale-up methods, neural-network-based methods, fuzzy clustering, co-clustering, etc. Clustering is hard to evaluate, but very useful in practice. This partially explains why a large number of clustering algorithms are still being devised every year. Clustering is highly application dependent and to some extent subjective.
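As a companion to the "Distance functions for numeric attributes" slide above, here is a small sketch of the Minkowski distance and its Euclidean and Manhattan special cases. The function names are illustrative assumptions, not part of the original slides.

```python
import numpy as np

def minkowski(x_i, x_j, h=2):
    """Minkowski distance of order h between two numeric vectors."""
    x_i, x_j = np.asarray(x_i, dtype=float), np.asarray(x_j, dtype=float)
    return float((np.abs(x_i - x_j) ** h).sum() ** (1.0 / h))

def euclidean(x_i, x_j):
    return minkowski(x_i, x_j, h=2)   # straight-line distance

def manhattan(x_i, x_j):
    return minkowski(x_i, x_j, h=1)   # city-block distance

# Example: for x_i = (0, 0) and x_j = (3, 4),
# euclidean -> 5.0 and manhattan -> 7.0.
```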