Data Mining: Concepts and Techniques, Chapter 7 (Sections 7.1-7.4): Cluster Analysis

Chapter 7. Cluster Analysis
1. What is Cluster Analysis?
2. Types of Data in Cluster Analysis
3. A Categorization of Major Clustering Methods
4. Partitioning Methods
5. Hierarchical Methods
6. Density-Based Methods
7. Grid-Based Methods
8. Model-Based Methods
9. Clustering High-Dimensional Data
10. Constraint-Based Clustering
11. Outlier Analysis
12. Summary

What is Cluster Analysis? A cluster is a collection of data objects that are similar to one another within the same cluster and dissimilar to the objects in other clusters. Cluster analysis finds similarities between data according to the characteristics found in the data and groups similar data objects into clusters. It is unsupervised learning: there are no predefined classes. Typical applications: as a stand-alone tool to get insight into the data distribution, or as a preprocessing step for other algorithms.

Clustering: Rich Applications and Multidisciplinary Efforts. Pattern recognition. Spatial data analysis: creating thematic maps in GIS by clustering feature spaces, and detecting spatial clusters for other spatial mining tasks. Image processing. Economic science (especially market research). WWW: document classification, and clustering Weblog data to discover groups of similar access patterns.

Examples of Clustering Applications. Marketing: help marketers discover distinct groups in their customer bases, and then use this knowledge to develop targeted marketing programs. Land use: identification of areas of similar land use in an earth observation database. Insurance: identifying groups of motor insurance policy holders with a high average claim cost. City planning: identifying groups of houses according to their house type, value, and geographical location. Earthquake studies: observed earthquake epicenters should be clustered along continental faults.

Quality: What Is Good Clustering? A good clustering method will produce high-quality clusters with high intra-class similarity and low inter-class similarity. The quality of a clustering result depends on both the similarity measure used by the method and its implementation. The quality of a clustering method is also measured by its ability to discover some or all of the hidden patterns.

Measuring the Quality of Clustering. Dissimilarity/similarity metric: similarity is expressed in terms of a distance function, typically a metric d(i, j). A separate quality function measures the "goodness" of a cluster. The definitions of distance functions are usually very different for interval-scaled, boolean, categorical, ordinal, ratio, and vector variables. Weights should be associated with different variables based on the application and the data semantics. It is hard to define "similar enough" or "good enough"; the answer is typically highly subjective.

Requirements of Clustering Algorithms in Data Mining:
1. Scalability
2. Ability to deal with different types of attributes
3. Ability to handle dynamic data
4. Discovery of clusters with arbitrary shape
5. Minimal requirements for domain knowledge to determine input parameters
6. Ability to deal with noise and outliers
7. Insensitivity to the order of input records
8. Ability to deal with high dimensionality
9. Incorporation of user-specified constraints
10. Interpretability and usability

Data Structures. A data matrix (two-mode) stores n objects by p variables:

$$X = \begin{pmatrix} x_{11} & \cdots & x_{1f} & \cdots & x_{1p} \\ \vdots & & \vdots & & \vdots \\ x_{i1} & \cdots & x_{if} & \cdots & x_{ip} \\ \vdots & & \vdots & & \vdots \\ x_{n1} & \cdots & x_{nf} & \cdots & x_{np} \end{pmatrix}$$

A dissimilarity matrix (one-mode) stores object-to-object distances:

$$\begin{pmatrix} 0 & & & & \\ d(2,1) & 0 & & & \\ d(3,1) & d(3,2) & 0 & & \\ \vdots & \vdots & \vdots & \ddots & \\ d(n,1) & d(n,2) & \cdots & \cdots & 0 \end{pmatrix}$$

Types of Data in Cluster Analysis. Types of variables: interval-scaled; binary (symmetric or asymmetric); nominal; ordinal; ratio. A further question: how do we combine variables of mixed types?

Interval-valued variables. Standardize the data. Calculate the mean absolute deviation
$$s_f = \frac{1}{n}\left(|x_{1f} - m_f| + |x_{2f} - m_f| + \cdots + |x_{nf} - m_f|\right),$$
where $m_f = \frac{1}{n}(x_{1f} + x_{2f} + \cdots + x_{nf})$ is the mean of variable f. Then calculate the standardized measurement (z-score):
$$z_{if} = \frac{x_{if} - m_f}{s_f}.$$
Using the mean absolute deviation is often more robust than using the standard deviation, since it is less susceptible to outliers.
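A minimal sketch of this standardization in Python (the function name and sample values are illustrative, not from the text):

```python
def standardize(values):
    """Standardize one interval-scaled variable using the mean
    absolute deviation s_f instead of the standard deviation."""
    n = len(values)
    mean = sum(values) / n                        # m_f
    mad = sum(abs(x - mean) for x in values) / n  # s_f
    return [(x - mean) / mad for x in values]     # z-scores

print(standardize([2.0, 4.0, 4.0, 6.0]))  # [-2.0, 0.0, 0.0, 2.0]
```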

Similarity and Dissimilarity Between Objects. Distances are normally used to measure the similarity or dissimilarity between two data objects. The Minkowski distance is a popular class of measures:
$$d(i,j) = \left(|x_{i1} - x_{j1}|^q + |x_{i2} - x_{j2}|^q + \cdots + |x_{ip} - x_{jp}|^q\right)^{1/q},$$
where $i = (x_{i1}, x_{i2}, \ldots, x_{ip})$ and $j = (x_{j1}, x_{j2}, \ldots, x_{jp})$ are two p-dimensional data objects and q is a positive integer. If q = 1, d is the Manhattan distance:
$$d(i,j) = |x_{i1} - x_{j1}| + |x_{i2} - x_{j2}| + \cdots + |x_{ip} - x_{jp}|.$$

Similarity and Dissimilarity Between Objects (cont.). If q = 2, d is the Euclidean distance:
$$d(i,j) = \sqrt{|x_{i1} - x_{j1}|^2 + |x_{i2} - x_{j2}|^2 + \cdots + |x_{ip} - x_{jp}|^2}.$$
Properties of a metric:
d(i, j) ≥ 0 [no negative distances]
d(i, i) = 0 [distance to self is zero]
d(i, j) = d(j, i) [symmetric]
d(i, j) ≤ d(i, k) + d(k, j) [triangle inequality]
One can also use a weighted distance, the parametric Pearson product-moment correlation, or a number of other dissimilarity measures.
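A one-function Python sketch covering both special cases (names are ours):

```python
def minkowski(i, j, q=2):
    """Minkowski distance between two p-dimensional objects;
    q = 1 gives the Manhattan distance, q = 2 the Euclidean."""
    return sum(abs(a - b) ** q for a, b in zip(i, j)) ** (1.0 / q)

x, y = (1, 2, 3), (4, 6, 3)
print(minkowski(x, y, q=1))  # Manhattan: 7.0
print(minkowski(x, y, q=2))  # Euclidean: 5.0
```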

Binary Variables. A contingency table for binary data counts attribute agreements between object i (rows) and object j (columns):

           j = 1   j = 0   sum
  i = 1      a       b     a+b
  i = 0      c       d     c+d
  sum       a+c     b+d     p

Distance measure for symmetric binary variables:
$$d(i,j) = \frac{b + c}{a + b + c + d}.$$
Distance measure for asymmetric binary variables (negative matches d are ignored):
$$d(i,j) = \frac{b + c}{a + b + c}.$$
Jaccard coefficient (a similarity measure for asymmetric binary variables):
$$sim_{Jaccard}(i,j) = \frac{a}{a + b + c}.$$

Dissimilarity between Binary Variables. Example:

  Name  Gender  Fever  Cough  Test-1  Test-2  Test-3  Test-4
  Jack    M       Y      N      P       N       N       N
  Mary    F       Y      N      P       N       P       N
  Jim     M       Y      P      N       N       N       N

Gender is a symmetric attribute; the remaining attributes are asymmetric binary. Let the values Y and P be set to 1 and the value N be set to 0. Then:
d(Jack, Mary) = (0 + 1) / (2 + 0 + 1) = 0.33
d(Jack, Jim) = (1 + 1) / (1 + 1 + 1) = 0.67
d(Jim, Mary) = (1 + 2) / (1 + 1 + 2) = 0.75
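The same computation in Python reproduces the three values (the encoding follows the slide: Y and P map to 1, N to 0; helper names are ours):

```python
def asym_binary_dissim(x, y):
    """d(i, j) = (b + c) / (a + b + c); negative matches (d) are ignored."""
    a = sum(1 for u, v in zip(x, y) if (u, v) == (1, 1))
    b = sum(1 for u, v in zip(x, y) if (u, v) == (1, 0))
    c = sum(1 for u, v in zip(x, y) if (u, v) == (0, 1))
    return (b + c) / (a + b + c)

# Fever, Cough, Test-1 .. Test-4
jack = [1, 0, 1, 0, 0, 0]
mary = [1, 0, 1, 0, 1, 0]
jim  = [1, 1, 0, 0, 0, 0]
print(round(asym_binary_dissim(jack, mary), 2))  # 0.33
print(round(asym_binary_dissim(jack, jim), 2))   # 0.67
print(round(asym_binary_dissim(jim, mary), 2))   # 0.75
```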

Nominal Variables. A generalization of the binary variable in that it can take more than two states; e.g., red, yellow, blue, green. Method 1: simple matching,
$$d(i,j) = \frac{p - m}{p},$$
where m is the number of matches and p is the total number of variables. Method 2: use a large number of binary variables, creating a new binary variable for each of the M nominal states.

Ordinal Variables. An ordinal variable can be discrete or continuous; order is important (i.e., rank). Such variables can be treated like interval-scaled ones: replace each $x_{if}$ by its rank $r_{if} \in \{1, \ldots, M_f\}$; map the range of each variable onto [0, 1] by replacing the i-th object in the f-th variable by
$$z_{if} = \frac{r_{if} - 1}{M_f - 1};$$
then compute the dissimilarity using methods for interval-scaled variables.
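A tiny Python sketch of the rank-to-[0, 1] mapping (the state names are illustrative):

```python
def ordinal_to_interval(value, ordered_states):
    """Replace a value by its rank r in 1..M, then map to [0, 1]
    via z = (r - 1) / (M - 1)."""
    r = ordered_states.index(value) + 1  # rank r_if
    m = len(ordered_states)              # M_f states
    return (r - 1) / (m - 1)

states = ["fair", "good", "excellent"]
print([ordinal_to_interval(s, states) for s in states])  # [0.0, 0.5, 1.0]
```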

Ratio-Scaled Variables. A ratio-scaled variable is a positive measurement on a nonlinear scale, approximately exponential, such as $Ae^{Bt}$ or $Ae^{-Bt}$. Methods: treat them like interval-scaled variables (not a good choice, since the scale is distorted); apply a logarithmic transformation, $y_{if} = \log(x_{if})$; or treat them as continuous ordinal data, treating their rank values as interval-scaled.

Variables of Mixed Types. A database may contain all six types of variables: symmetric binary, asymmetric binary, nominal, ordinal, interval, and ratio. One may use a weighted formula to combine their effects:
$$d(i,j) = \frac{\sum_{f=1}^{p} \delta_{ij}^{(f)} d_{ij}^{(f)}}{\sum_{f=1}^{p} \delta_{ij}^{(f)}}.$$
If f is binary or nominal: $d_{ij}^{(f)} = 0$ if $x_{if} = x_{jf}$, and $d_{ij}^{(f)} = 1$ otherwise. If f is interval-based: use the normalized distance. If f is ordinal or ratio-scaled: compute the ranks $r_{if}$, set $z_{if} = \frac{r_{if} - 1}{M_f - 1}$, and treat $z_{if}$ as interval-scaled.
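A simplified Python sketch of the weighted formula (illustrative only; it assumes interval and ordinal/ratio values are already normalized to [0, 1] and uses `None` to flag a missing value):

```python
def mixed_dissim(obj_i, obj_j, types):
    """d(i, j) = sum_f delta^(f) * d^(f) / sum_f delta^(f)."""
    num = den = 0.0
    for xi, xj, t in zip(obj_i, obj_j, types):
        if xi is None or xj is None:                 # missing value: delta = 0
            continue
        if t == "asym_binary" and xi == 0 and xj == 0:
            continue                                 # negative match: delta = 0
        if t in ("binary", "asym_binary", "nominal"):
            d = 0.0 if xi == xj else 1.0
        else:                                        # interval, or ordinal/ratio
            d = abs(xi - xj)                         # already mapped to [0, 1]
        num += d
        den += 1.0
    return num / den

print(mixed_dissim(["red", 0.2, 1], ["blue", 0.5, 1],
                   ["nominal", "interval", "asym_binary"]))  # ~0.433
```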

Vector Objects. Vector objects arise as keywords in documents, gene features in micro-arrays, etc., with broad applications in information retrieval, biological taxonomy, and so on. Cosine measure:
$$s(x, y) = \frac{x^{t} \cdot y}{\|x\| \, \|y\|}.$$
A variant, the Tanimoto coefficient:
$$s(x, y) = \frac{x^{t} \cdot y}{x^{t} \cdot x + y^{t} \cdot y - x^{t} \cdot y}.$$
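Both measures are easy to state in Python (a sketch; the example vectors stand in for keyword counts and are not from the text):

```python
import math

def cosine(x, y):
    """Cosine measure: s(x, y) = x.y / (||x|| * ||y||)."""
    dot = sum(a * b for a, b in zip(x, y))
    return dot / (math.sqrt(sum(a * a for a in x)) *
                  math.sqrt(sum(b * b for b in y)))

def tanimoto(x, y):
    """Tanimoto coefficient: s(x, y) = x.y / (x.x + y.y - x.y)."""
    dot = sum(a * b for a, b in zip(x, y))
    return dot / (sum(a * a for a in x) + sum(b * b for b in y) - dot)

d1, d2 = (5, 0, 3, 0, 2), (3, 0, 2, 0, 1)
print(round(cosine(d1, d2), 3))
print(round(tanimoto(d1, d2), 3))
```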

Major Clustering Approaches (I). Partitioning approach: construct various partitions and then evaluate them by some criterion, e.g., minimizing the sum of squared errors. Typical methods: k-means, k-medoids, CLARANS. Hierarchical approach: create a hierarchical decomposition of the set of data (or objects) using some criterion. Typical methods: DIANA, AGNES, BIRCH, ROCK, CHAMELEON. Density-based approach: based on connectivity and density functions. Typical methods: DBSCAN, OPTICS, DenClue.

Major Clustering Approaches (II). Grid-based approach: based on a multiple-level granularity structure. Typical methods: STING, WaveCluster, CLIQUE. Model-based approach: a model is hypothesized for each of the clusters, and the method tries to find the best fit of the data to the given model. Typical methods: EM, SOM, COBWEB. Frequent-pattern-based approach: based on the analysis of frequent patterns. Typical methods: p-Cluster. User-guided or constraint-based approach: clustering that considers user-specified or application-specific constraints. Typical methods: COD (obstacles), constrained clustering.

Typical Methods to Calculate the Distance between Clusters.
Single link: smallest distance between an element in one cluster and an element in the other, i.e., $dis(K_i, K_j) = \min_{t_{ip} \in K_i,\, t_{jq} \in K_j} d(t_{ip}, t_{jq})$.
Complete link: largest distance between an element in one cluster and an element in the other, i.e., $dis(K_i, K_j) = \max_{t_{ip} \in K_i,\, t_{jq} \in K_j} d(t_{ip}, t_{jq})$.
Average: average distance between an element in one cluster and an element in the other, i.e., $dis(K_i, K_j) = \operatorname{avg}_{t_{ip} \in K_i,\, t_{jq} \in K_j} d(t_{ip}, t_{jq})$.
Centroid: distance between the centroids of two clusters, i.e., $dis(K_i, K_j) = d(C_i, C_j)$.
Medoid: distance between the medoids of two clusters, i.e., $dis(K_i, K_j) = d(M_i, M_j)$, where a medoid is one chosen, centrally located object in the cluster.
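For concreteness, here is how the first three measures might look in Python (a hedged sketch; `d` can be any pairwise distance function, e.g. `math.dist`):

```python
import math

def single_link(ki, kj, d):
    """Smallest distance between an element of Ki and an element of Kj."""
    return min(d(p, q) for p in ki for q in kj)

def complete_link(ki, kj, d):
    """Largest distance between an element of Ki and an element of Kj."""
    return max(d(p, q) for p in ki for q in kj)

def average_link(ki, kj, d):
    """Average distance over all cross-cluster pairs."""
    return sum(d(p, q) for p in ki for q in kj) / (len(ki) * len(kj))

ki, kj = [(0, 0), (1, 0)], [(4, 0), (6, 0)]
print(single_link(ki, kj, math.dist))    # 3.0
print(complete_link(ki, kj, math.dist))  # 6.0
print(average_link(ki, kj, math.dist))   # 4.5
```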

Centroid, Radius, and Diameter of a Cluster (for numerical data sets).
Centroid, the "middle" of a cluster:
$$C_m = \frac{\sum_{i=1}^{N} t_{ip}}{N}.$$
Radius, the square root of the average squared distance from any point of the cluster to its centroid:
$$R_m = \sqrt{\frac{\sum_{i=1}^{N} (t_{ip} - C_m)^2}{N}}.$$
Diameter, the square root of the average squared distance between all pairs of points in the cluster:
$$D_m = \sqrt{\frac{\sum_{i=1}^{N} \sum_{j=1}^{N} (t_{ip} - t_{jq})^2}{N(N-1)}}.$$
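These three statistics are straightforward to compute; a small Python sketch (assuming points are numeric tuples):

```python
import math

def centroid(points):
    """C_m: coordinate-wise mean of the cluster's points."""
    n = len(points)
    return tuple(sum(coords) / n for coords in zip(*points))

def radius(points):
    """R_m: root of the average squared distance to the centroid."""
    c = centroid(points)
    return math.sqrt(sum(math.dist(p, c) ** 2 for p in points) / len(points))

def diameter(points):
    """D_m: root of the average squared distance over all ordered pairs."""
    n = len(points)
    total = sum(math.dist(points[i], points[j]) ** 2
                for i in range(n) for j in range(n) if i != j)
    return math.sqrt(total / (n * (n - 1)))

pts = [(0, 0), (2, 0), (0, 2), (2, 2)]
print(centroid(pts), round(radius(pts), 3), round(diameter(pts), 3))
```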

Partitioning Algorithms: The Basic Concept. Partitioning: construct a partition of a database D of n objects into a set of k clusters that optimizes (minimizes) the chosen partitioning criterion, e.g., the sum of squared distances. Globally optimal methods exhaustively enumerate all partitions; heuristic methods include the k-means and k-medoids algorithms. k-means (MacQueen '67): each cluster is represented by the center of the cluster. k-medoids or PAM (Partitioning Around Medoids) (Kaufman & Rousseeuw '87): each cluster is represented by one of the objects in the cluster.

The K-Means Clustering Method. Given k, the k-means algorithm proceeds in four steps:
1. Partition the objects into k nonempty subsets.
2. Compute seed points as the centroids of the clusters of the current partition (the centroid is the center, i.e., mean point, of the cluster).
3. Assign each object to the cluster with the nearest seed point.
4. Go back to Step 2; stop when no changes occur.
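The four steps translate almost directly into code. Below is a minimal k-means sketch in Python (function and variable names are ours, not the book's implementation):

```python
import math
import random

def kmeans(points, k, max_iter=100):
    """Minimal k-means sketch: alternate the assignment and update steps."""
    centers = random.sample(points, k)        # arbitrary initial centers
    clusters = []
    for _ in range(max_iter):
        # Step 3: assign each object to the cluster with the nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: math.dist(p, centers[c]))
            clusters[nearest].append(p)
        # Step 2: recompute each centroid as the mean of its cluster.
        new_centers = [tuple(sum(xs) / len(cl) for xs in zip(*cl)) if cl
                       else centers[i] for i, cl in enumerate(clusters)]
        # Step 4: stop when no center changes.
        if new_centers == centers:
            break
        centers = new_centers
    return centers, clusters

pts = [(1, 1), (1.5, 2), (8, 8), (9, 8.5)]
centers, clusters = kmeans(pts, k=2)
```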

The K-Means Clustering Method: Example (K = 2). [Figure: a sequence of five scatter plots tracing one run of k-means: arbitrarily choose K objects as initial cluster centers; assign each object to the most similar center; update the cluster means; reassign objects; update the means again, repeating until assignments stabilize.]

Comments on the K-Means Method. Advantages: relatively efficient, O(tkn), where n is the number of objects, k the number of clusters, and t the number of iterations; normally k, t ≪ n. (Compare PAM: O(k(n-k)^2) per iteration; CLARA: O(ks^2 + k(n-k)).) It often terminates at a local optimum; a global optimum may be found using techniques such as deterministic annealing and genetic algorithms. Disadvantages: applicable only when a mean is defined (what about categorical data?); the number of clusters k must be specified in advance; sensitive to noisy data and outliers; not suitable for discovering clusters with non-convex shapes.

Variations of the K-Means Method. Variants of k-means differ in the selection of the initial k means, the dissimilarity calculations, and the strategies for calculating cluster means. Handling categorical data: k-modes (Huang '98) replaces the means of clusters with modes, uses new dissimilarity measures to deal with categorical objects, and uses a frequency-based method to update the modes of clusters. For a mixture of categorical and numerical data, there is the k-prototype method.

Problems with the K-Means Method. The k-means algorithm is sensitive to outliers: an object with an extremely large value can substantially distort the distribution of the data. K-medoids: instead of taking the mean value of the objects in a cluster as a reference point, a medoid can be used, which is the most centrally located object in the cluster. [Figure: two scatter plots contrasting the two choices of reference point on the same data.]

The K-Medoids Clustering Method. Find a representative object, called a medoid, for each cluster. PAM (Partitioning Around Medoids, 1987) starts from an initial set of medoids and iteratively replaces one of the medoids by one of the non-medoids if doing so improves the total distance of the resulting clustering. PAM works effectively for small data sets but does not scale well to large data sets (why?). Scalable follow-ups: CLARA (Kaufmann & Rousseeuw, 1990); CLARANS (Ng & Han, 1994), based on randomized sampling; and focusing plus spatial data structures (Ester et al., 1995).

A Typical K-Medoids Algorithm (PAM). [Figure: a sequence of scatter plots tracing one run of PAM with K = 2. Arbitrarily choose k objects as initial medoids; assign each remaining object to the nearest medoid (total cost = 26); randomly select a non-medoid object O_random; compute the total cost of swapping; if the quality is improved, swap O and O_random (total cost = 20); loop until no change.]

PAM (Partitioning Around Medoids) (1987). PAM (Kaufman and Rousseeuw, 1987), built into S-PLUS, uses actual objects to represent the clusters:
1. Select k representative objects arbitrarily.
2. For each pair of a non-selected object h and a selected object i, calculate the total swapping cost TC_ih.
3. For each pair of i and h: if TC_ih < 0, i is replaced by h; then assign each non-selected object to the most similar representative object.
4. Repeat steps 2-3 until there is no change.
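A compact sketch of this swap loop in Python (illustrative only; names are ours, and a real PAM implementation computes TC_ih incrementally per object rather than re-evaluating the full cost):

```python
import math
from itertools import product

def total_cost(medoids, points):
    """Sum of distances from every object to its nearest medoid."""
    return sum(min(math.dist(p, m) for m in medoids) for p in points)

def pam(points, k):
    """Minimal PAM sketch: keep swapping a medoid i for a non-medoid h
    while some swap lowers the total cost (i.e., TC_ih < 0)."""
    medoids = points[:k]                      # arbitrary initial medoids
    improved = True
    while improved:
        improved = False
        for i, h in product(list(medoids), points):
            if h in medoids:
                continue
            candidate = [h if m == i else m for m in medoids]
            if total_cost(candidate, points) < total_cost(medoids, points):
                medoids, improved = candidate, True   # accept the swap
    return medoids

pts = [(1, 1), (1.5, 2), (8, 8), (9, 8.5), (25, 25)]  # (25, 25): an outlier
print(pam(pts, k=2))
```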

PAM Clustering. The total swapping cost is $TC_{ih} = \sum_j C_{jih}$, where the contribution $C_{jih}$ of each non-selected object j depends on which representative j is closest to before and after i is swapped with h. [Figure: four scatter plots showing objects i, h, t, and j, one per case.]
Case 1: j moves from i to h: $C_{jih} = d(j, h) - d(j, i)$.
Case 2: j stays with its current representative t: $C_{jih} = 0$.
Case 3: j moves from i to another representative t: $C_{jih} = d(j, t) - d(j, i)$.
Case 4: j moves from t to h: $C_{jih} = d(j, h) - d(j, t)$.

Problems with PAM? PAM is more robust than k-means in the presence of noise and outliers, because a medoid is less influenced by outliers or other extreme values than a mean. However, PAM works efficiently only for small data sets and does not scale well to large ones: each iteration costs O(k(n-k)^2), where n is the number of data objects and k the number of clusters. This motivates the sampling-based method CLARA (Clustering LARge Applications).

CLARA (Clustering LARge Applications) (1990). CLARA (Kaufmann and Rousseeuw, 1990) is built into statistical analysis packages such as S-PLUS. It draws multiple samples of the data set, applies PAM on each sample, and returns the best clustering as the output. Advantage: it can deal with larger data sets than PAM. Disadvantages: efficiency depends on the sample size, and a good clustering based on samples will not necessarily represent a good clustering of the whole data set if the sample is biased.

CLARANS ("Randomized" CLARA) (1994). CLARANS (A Clustering Algorithm based on RANdomized Search) (Ng and Han, 1994) draws a sample of neighbors dynamically. The clustering process can be viewed as searching a graph in which every node is a potential solution, that is, a set of k medoids. When a local optimum is found, CLARANS starts from a new randomly selected node in search of a new local optimum. It is more efficient and scalable than both PAM and CLARA. Focusing techniques and spatial access structures may further improve its performance (Ester et al., 1995).