Introduction to Data Mining

Introduction to Data Mining, Lecture #14: Clustering. Seoul National University

In This Lecture
- Learn the motivation, applications, and goal of clustering
- Understand the basic methods of clustering (bottom-up and top-down): representing clusters, nearness of clusters, etc.
- Learn the k-means algorithm, and how to set the parameter k

Outline
- Overview
- K-Means Clustering

High Dimensional Data
Given a cloud of data points, we want to understand its structure.

The Problem of Clustering
Given a set of points, with a notion of distance between points, group the points into some number of clusters, so that:
- Members of a cluster are close/similar to each other
- Members of different clusters are dissimilar
Usually:
- Points are in a high-dimensional space
- Similarity is defined using a distance measure: Euclidean, Cosine, Jaccard, edit distance, etc.

Example: Clusters & Outliers
[Figure: a scatter plot of points, with the clusters and an outlier labeled.]

Clustering is a hard problem!

Why is it hard?
Clustering in two dimensions looks easy. Clustering small amounts of data looks easy. And in most cases, looks are not deceiving.
But many applications involve not 2, but 10 or 10,000 dimensions. High-dimensional spaces look different: almost all pairs of points are at about the same distance.
Distance between (x_1, ..., x_d) and (y_1, ..., y_d) = sqrt( sum_{i=1}^{d} (x_i - y_i)^2 )
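
To see this concentration effect concretely, here is a minimal sketch (assuming NumPy is available; not part of the original slides) that samples random points and measures how the spread of pairwise distances shrinks as the dimension d grows:

```python
import numpy as np

rng = np.random.default_rng(0)

for d in (2, 10, 10_000):
    pts = rng.random((1000, d))                       # 1000 random points in [0,1]^d
    dists = np.linalg.norm(pts[1:] - pts[0], axis=1)  # distances to the first point
    spread = (dists.max() - dists.min()) / dists.mean()
    print(f"d={d:>6}: relative spread of distances = {spread:.3f}")
```

The relative spread drops sharply as d grows, which is why "nearest cluster" decisions carry less information in high dimensions.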

Clustering Problem: Galaxies
A catalog of 2 billion sky objects represents objects by their radiation in 7 dimensions (frequency bands).
Problem: cluster them into similar objects, e.g., galaxies, nearby stars, quasars, etc.
[Image: Sloan Digital Sky Survey]

Clustering Problem: Music CDs
Intuitively: music divides into categories, and customers prefer a few categories. But what are categories, really?
Represent a CD by the set of customers who bought it: similar CDs have similar sets of customers, and vice versa.

Clustering Problem: Music CDs
Space of all CDs: think of a space with one dimension for each customer. Values in a dimension may be 0 or 1 only. A CD is a point in this space (x_1, x_2, ..., x_k), where x_i = 1 iff the i-th customer bought the CD.
For Amazon, the dimension is tens of millions.
Task: find clusters of similar CDs.

Clustering Problem: Documents
Finding topics: represent a document by a vector (x_1, x_2, ..., x_k), where x_i = 1 iff the i-th word (in, e.g., dictionary order) appears in the document.
Documents with similar sets of words may be about the same topic.

Cosine, Jaccard, and Euclidean
As with CDs, we have a choice when we think of documents as sets of words or shingles:
- Sets as vectors: measure similarity by the cosine of the angle θ between them, cos(θ) = (x · y) / (|x| |y|)
- Sets as sets: measure similarity by the Jaccard distance
- Sets as points: measure similarity by the Euclidean distance
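
As a concrete illustration, here is a minimal sketch of the three measures on two toy documents (the helper names are mine, not from the lecture), treating each document either as a set of words or as a binary vector over the vocabulary:

```python
import math

def cosine_sim(x, y):
    dot = sum(a * b for a, b in zip(x, y))
    norm = math.sqrt(sum(a * a for a in x)) * math.sqrt(sum(b * b for b in y))
    return dot / norm

def jaccard_dist(s, t):
    return 1 - len(s & t) / len(s | t)

def euclidean_dist(x, y):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

doc1 = {"big", "data", "clustering"}
doc2 = {"big", "data", "mining"}
vocab = sorted(doc1 | doc2)
v1 = [1 if w in doc1 else 0 for w in vocab]   # sets as binary vectors
v2 = [1 if w in doc2 else 0 for w in vocab]

print(cosine_sim(v1, v2))        # 2/3: two shared words, norms sqrt(3) each
print(jaccard_dist(doc1, doc2))  # 1 - 2/4 = 0.5
print(euclidean_dist(v1, v2))    # sqrt(2): the vectors differ in two positions
```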

Overview: Methods of Clustering
Hierarchical:
- Agglomerative (bottom up): initially, each point is a cluster; repeatedly combine the two nearest clusters into one
- Divisive (top down): start with one cluster and recursively split it
Point assignment:
- Maintain a set of clusters
- Points belong to the nearest cluster

Hierarchical Clustering
Key operation: repeatedly combine the two nearest clusters.
Three important questions:
1) How do you represent a cluster of more than one point?
2) How do you determine the nearness of clusters?
3) When do you stop combining clusters?

Hierarchical Clustering
Key operation: repeatedly combine the two nearest clusters.
(1) How to represent a cluster of many points? Key problem: as you merge clusters, how do you represent the location of each cluster, to tell which pair of clusters is closest? Euclidean case: each cluster has a centroid (= the average of its (data)points).
(2) How to determine the nearness of clusters? Measure cluster distances by the distances of their centroids.
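
A minimal sketch of this centroid-based agglomerative scheme (plain Python, Euclidean points as tuples; the data are the seven points from the example that follows):

```python
import math

def centroid(cluster):
    dims = len(cluster[0])
    return tuple(sum(p[i] for p in cluster) / len(cluster) for i in range(dims))

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def agglomerate(points, target_k):
    clusters = [[p] for p in points]           # every point starts as its own cluster
    while len(clusters) > target_k:
        # find the pair of clusters whose centroids are closest
        i, j = min(((i, j) for i in range(len(clusters))
                           for j in range(i + 1, len(clusters))),
                   key=lambda ij: dist(centroid(clusters[ij[0]]),
                                       centroid(clusters[ij[1]])))
        clusters[i] += clusters.pop(j)          # merge the pair
    return clusters

pts = [(0, 0), (1, 2), (1, 1), (2, 1), (5, 3), (4, 1), (5, 0)]
print(agglomerate(pts, 2))
```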

Example: Hierarchical Clustering
[Figure: seven data points (0,0), (1,2), (1,1), (2,1), (5,3), (4,1), (5,0); intermediate centroids (1.5,1.5), (4.7,1.3), (4.5,0.5) marked; and the resulting dendrogram.]

When to Stop
(3) When do you stop combining clusters?
- When we reach a predetermined number of clusters
- When the quality of the clusters (e.g., average distance to centroids) becomes very bad

And in the Non-Euclidean Case?
What about the non-Euclidean case? The only locations we can talk about are the points themselves; e.g., there is no average of two sets.
Approach 1:
(1) How to represent a cluster of many points? Use the clustroid (= the (data)point closest to the other points).
(2) How do you determine the nearness of clusters? Treat the clustroid as if it were the centroid when computing inter-cluster distances.

Closest Point?
(1) How to represent a cluster of many points? Clustroid = the point closest to the other points.
Possible meanings of "closest":
- Smallest maximum distance to the other points
- Smallest average distance to the other points
- Smallest sum of squares of distances to the other points
For a distance metric d, the clustroid c of cluster C is: argmin_{c in C} sum_{x in C} d(x, c)^2
[Figure: a cluster of three data points, with its centroid and clustroid marked.]
The centroid is the average of all (data)points in the cluster, so it is generally an artificial point. The clustroid is an existing (data)point that is closest to all other points in the cluster.
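
A minimal sketch of picking a clustroid under the sum-of-squares criterion (the distance function is a parameter, so any metric works; Jaccard distance on sets is used below just as an example):

```python
def clustroid(cluster, d):
    # the member point minimizing the sum of squared distances to the others
    return min(cluster, key=lambda c: sum(d(x, c) ** 2 for x in cluster))

def jaccard_dist(s, t):
    return 1 - len(s & t) / len(s | t)

print(clustroid([{"a", "b"}, {"a", "b", "c"}, {"b", "c", "d"}], jaccard_dist))
# -> {'a', 'b', 'c'}, the set with the smallest sum of squared Jaccard distances
```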

Defining Nearness of Clusters
(2) How do you determine the nearness of clusters?
- Approach 1: distance between clustroids
- Approach 2: inter-cluster distance = the minimum of the distances between any two points, one from each cluster
- Approach 3: pick a notion of cohesion of clusters, e.g., the maximum distance from the clustroid of the new merged cluster; merge the clusters whose union is most cohesive

Cohesion
- Approach 3.1: use the diameter of the merged cluster = the maximum distance between points in the cluster
- Approach 3.2: use the average distance between points in the cluster
- Approach 3.3: use a density-based approach: take the diameter or average distance, e.g., and divide by the number of points in the cluster
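
Minimal sketches of the three cohesion notions (function names are mine, not from the slides); each takes a merged cluster and a pairwise distance function d:

```python
from itertools import combinations

def diameter(cluster, d):                      # Approach 3.1
    return max(d(x, y) for x, y in combinations(cluster, 2))

def avg_distance(cluster, d):                  # Approach 3.2
    pairs = list(combinations(cluster, 2))
    return sum(d(x, y) for x, y in pairs) / len(pairs)

def normalized_diameter(cluster, d):           # Approach 3.3: density-based
    return diameter(cluster, d) / len(cluster)
```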

Implementation
Naïve implementation of hierarchical clustering: at each step, compute the pairwise distances between all pairs of clusters, then merge: N^2 + (N-1)^2 + (N-2)^2 + ... = O(N^3).
A careful implementation using a priority queue (e.g., a heap) can reduce the time to O(N^2 log N).
Still too expensive for really big datasets that do not fit in memory.
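
One way to realize the O(N^2 log N) bound is a heap of candidate pairs with lazy deletion: push all pairwise distances once, and when popping, skip entries that mention clusters that no longer exist. A sketch under those assumptions (single-link cluster distance is used here purely for illustration; the slides measure cluster distance via centroids or clustroids):

```python
import heapq

def cluster_dist(a, b, dist):                  # single-link, for illustration only
    return min(dist(x, y) for x in a for y in b)

def hierarchical_pq(points, dist, target_k):
    clusters = {i: [p] for i, p in enumerate(points)}
    heap = [(dist(p, q), i, j)
            for i, p in enumerate(points)
            for j, q in enumerate(points) if i < j]
    heapq.heapify(heap)                        # all pairwise distances, built once
    next_id = len(points)
    while len(clusters) > target_k:
        d, i, j = heapq.heappop(heap)
        if i not in clusters or j not in clusters:
            continue                           # stale pair: one side was already merged
        merged = clusters.pop(i) + clusters.pop(j)
        for k, c in clusters.items():          # distances from the new cluster
            heapq.heappush(heap, (cluster_dist(merged, c, dist), next_id, k))
        clusters[next_id] = merged
        next_id += 1
    return list(clusters.values())
```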

Outline
- Overview
- K-Means Clustering

k-Means Algorithm(s)
- Assumes Euclidean space/distance
- Start by picking k, the number of clusters (we will see how to select the right k later)
- Initialize clusters by picking one point per cluster. Example: pick one point at random, then k-1 other points, each as far away as possible from the previous points (see the sketch below).
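
A minimal sketch of that farthest-first initialization (plain Python; the function names are mine, not from the slides):

```python
import random

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def init_centroids(points, k, seed=0):
    random.seed(seed)
    picks = [random.choice(points)]            # one point at random
    while len(picks) < k:
        # next pick: the point whose nearest already-picked point is farthest away
        picks.append(max(points, key=lambda p: min(dist(p, c) for c in picks)))
    return picks
```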

Populating Clusters
Step 1) For each point, place it in the cluster whose current centroid is nearest.
Step 2) After all points are assigned, update the locations of the centroids of the k clusters.
Step 3) Reassign all points to their closest centroid; this sometimes moves points between clusters.
Repeat steps 2 and 3 until convergence. Convergence: points no longer move between clusters and the centroids stabilize.
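
A minimal sketch of this assign/update loop (plain Python, Euclidean points as tuples; `init_centroids` from the previous sketch can supply the starting centroids):

```python
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def mean(cluster):
    dims = len(cluster[0])
    return tuple(sum(p[i] for p in cluster) / len(cluster) for i in range(dims))

def k_means(points, centroids, max_rounds=100):
    clusters = []
    for _ in range(max_rounds):
        # Steps 1 and 3: (re)assign each point to its nearest centroid
        clusters = [[] for _ in centroids]
        for p in points:
            clusters[min(range(len(centroids)),
                         key=lambda i: dist(p, centroids[i]))].append(p)
        # Step 2: recompute centroids; converged when none of them move
        new = [mean(c) if c else centroids[i] for i, c in enumerate(clusters)]
        if new == centroids:
            break
        centroids = new
    return clusters, centroids
```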

Example: Assigning Clusters
[Figure: data points and centroids; clusters after round 1.]

Example: Assigning Clusters
[Figure: data points and centroids; clusters after round 2.]

Example: Assigning Clusters
[Figure: data points and centroids; clusters at the end.]

Getting the k Right
How to select k? The knee method: try different values of k, looking at the change in the average distance to the centroid as k increases. The average falls rapidly until the right k is reached, then changes little.
[Figure: plot of average distance to centroid vs. k; the knee of the curve marks the best value of k.]
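
A minimal sketch of this heuristic, reusing `dist`, `init_centroids`, and `k_means` from the sketches above on the seven-point toy data:

```python
pts = [(0, 0), (1, 2), (1, 1), (2, 1), (5, 3), (4, 1), (5, 0)]

def avg_dist_to_centroid(points, k):
    clusters, centroids = k_means(points, init_centroids(points, k))
    return sum(dist(p, centroids[i])
               for i, c in enumerate(clusters) for p in c) / len(points)

for k in range(1, 6):
    # look for the k where the average stops falling rapidly (the knee)
    print(k, round(avg_dist_to_centroid(pts, k), 3))
```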

Example: Picking k
[Figure: k too small; many long distances to the centroid.]

Example: Picking k
[Figure: k just right; distances rather short.]

Example: Picking k
[Figure: k too large; little improvement in average distance.]

What You Need to Know
- Motivation, applications, and goal of clustering
- Basic methods of clustering (bottom-up and top-down): how to represent clusters, how to determine the nearness of clusters, etc.
- The k-means algorithm, and how to set the parameter k

Questions?