
Data Mining Topic: Clustering
CSEE Department, UMBC
Some of the slides used in this presentation were prepared by Jiawei Han and Micheline Kamber.

Cluster Analysis: Outline
What is Cluster Analysis?
Types of Data in Cluster Analysis
A Categorization of Major Clustering Methods
Partitioning Methods
Hierarchical Methods
Model-Based Clustering Methods
Density-Based Methods
Grid-Based Methods
Outlier Analysis
Summary

What is Cluster Analysis?
Cluster: a collection of data objects that are similar to one another within the same cluster and dissimilar to the objects in other clusters.
Cluster analysis: grouping a set of data objects into clusters.
Clustering is unsupervised classification: no predefined classes.
Typical applications: as a stand-alone tool to get insight into the data distribution, and as a preprocessing step for other algorithms.

General Applications of Clustering
Pattern recognition
Spatial data analysis: create thematic maps in GIS by clustering feature spaces; detect spatial clusters and explain them in spatial data mining
Image processing
Economics
Biology
WWW: document classification; clustering Weblog data to discover groups of similar access patterns

Examples of Clustering Applications
Marketing: help marketers discover distinct groups in their customer bases, and then use this knowledge to develop targeted marketing programs
Land use: identification of areas of similar land use in an earth observation database
Insurance: identifying groups of motor insurance policy holders with a high average claim cost
City planning: identifying groups of houses according to their house type, value, and geographical location
Earthquake studies: observed earthquake epicenters should cluster along continental faults

What Is Good Clustering?
A good clustering method will produce high-quality clusters with high intra-class similarity and low inter-class similarity.
The quality of a clustering result depends on both the similarity measure used by the method and its implementation.
The quality of a clustering method is also measured by its ability to discover some or all of the hidden patterns.

Requirements of Clustering in Data Mining
Scalability
Ability to deal with different types of attributes
Discovery of clusters with arbitrary shape
Minimal requirements for domain knowledge to determine input parameters
Ability to deal with noise and outliers
Insensitivity to the order of input records
High dimensionality
Incorporation of user-specified constraints
Interpretability and usability

Data Structures
Data matrix (n objects by p variables):
$$\begin{bmatrix} x_{11} & \cdots & x_{1f} & \cdots & x_{1p} \\ \vdots & & \vdots & & \vdots \\ x_{i1} & \cdots & x_{if} & \cdots & x_{ip} \\ \vdots & & \vdots & & \vdots \\ x_{n1} & \cdots & x_{nf} & \cdots & x_{np} \end{bmatrix}$$
Dissimilarity matrix (n by n):
$$\begin{bmatrix} 0 & & & & \\ d(2,1) & 0 & & & \\ d(3,1) & d(3,2) & 0 & & \\ \vdots & \vdots & \vdots & \ddots & \\ d(n,1) & d(n,2) & \cdots & \cdots & 0 \end{bmatrix}$$

Measure the Quality of Clustering
Dissimilarity/similarity metric: similarity is expressed in terms of a distance function, typically a metric $d(i, j)$.
There is a separate quality function that measures the goodness of a cluster.
The definitions of distance functions are usually very different for interval-scaled, Boolean, categorical, ordinal, and ratio variables.
Weights should be associated with different variables based on the application and data semantics.
It is hard to define "similar enough" or "good enough"; the answer is typically highly subjective.
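As a concrete illustration of the two structures above, here is a minimal sketch that builds a dissimilarity matrix from a data matrix, assuming numeric (interval-scaled) attributes and Euclidean distance; the array values are made up for the example.

import numpy as np

# Data matrix: n objects x p attributes (values are illustrative).
X = np.array([[1.0, 2.0],
              [1.5, 1.8],
              [5.0, 8.0]])

# Dissimilarity matrix: n x n, symmetric, zero diagonal.
n = X.shape[0]
D = np.zeros((n, n))
for i in range(n):
    for j in range(i):
        D[i, j] = D[j, i] = np.linalg.norm(X[i] - X[j])

print(D)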

Different Types of Clustering
Hierarchical vs. partitional
Exclusive vs. overlapping vs. fuzzy
Complete vs. partial

Different Types of Clusters
Well-separated
Prototype-based
Graph-based
Density-based
Conceptual clusters

Major Clustering Approaches
Partitioning algorithms: construct various partitions and then evaluate them by some criterion
Hierarchical algorithms: create a hierarchical decomposition of the set of data (or objects) using some criterion
Density-based: based on connectivity and density functions
Grid-based: based on a multiple-level granularity structure
Model-based: a model is hypothesized for each of the clusters, and the idea is to find the best fit of the data to that model

Partitioning Algorithms: Basic Concept
Partitioning method: construct a partition of a database D of n objects into a set of k clusters.
Given k, find a partition of k clusters that optimizes the chosen partitioning criterion.
Global optimum: exhaustively enumerate all partitions.
Heuristic methods: the k-means and k-medoids algorithms.
k-means (MacQueen '67): each cluster is represented by the center of the cluster.
k-medoids: each cluster is represented by one of the objects in the cluster.

Square Error Criterion
Sum of squared error for a clustering:
$$SSE = \sum_{i=1}^{K} \sum_{x \in C_i} dist(x, c_i)^2$$
where $c_i$ is the centroid of cluster $C_i$.
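A minimal sketch of the SSE criterion above, assuming Euclidean distance and NumPy arrays for the points, labels, and centroids; the function name is illustrative.

import numpy as np

def sse(X, labels, centroids):
    # Sum of squared distances of each point to the centroid of its cluster.
    return sum(np.sum((X[labels == i] - c) ** 2) for i, c in enumerate(centroids))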

The K-Means Clustering Method
Given k, the k-means algorithm is implemented in four steps:
1. Partition the objects into k nonempty subsets.
2. Compute seed points as the centroids of the clusters of the current partition (the centroid is the mean point of the cluster).
3. Assign each object to the cluster with the nearest seed point.
4. If the stopping criterion is not satisfied, go back to Step 2.

K-means: Example, k = 3
Step 1: make random assignments and compute centroids (shown as big dots in the figure).
Step 2: assign points to the nearest centroids.
Step 3: re-compute centroids (in this example, the solution is now stable). A sketch of the full loop follows below.
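A minimal k-means sketch following the four steps above, assuming Euclidean distance; in practice a library implementation such as scikit-learn's KMeans would normally be used.

import numpy as np

def kmeans(X, k, max_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, k, size=len(X))                 # Step 1: random partition
    for _ in range(max_iter):
        # Step 2: centroids of the current partition (re-seed any empty cluster).
        centroids = np.array([X[labels == i].mean(axis=0) if np.any(labels == i)
                              else X[rng.integers(len(X))] for i in range(k)])
        # Step 3: assign each object to the nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        new_labels = dists.argmin(axis=1)
        # Step 4: stop when the assignment no longer changes.
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels, centroids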

Comments on the K-Means Method
Strength: O(tkn), where n is the number of objects, k the number of clusters, and t the number of iterations; normally k, t << n.
Often terminates at a local optimum; the global optimum may be found using techniques such as deterministic annealing and genetic algorithms.
Weaknesses:
Applicable only when a mean is defined; what about categorical data?
Need to specify k, the number of clusters, in advance.
Unable to handle noisy data and outliers.
Not suitable for discovering clusters with non-convex shapes.
Sensitive to the initial choice of centroids.

Variations of the K-Means Method
A few variants of k-means differ in the selection of the initial k means, the dissimilarity calculations, and the strategies for calculating cluster means.
Handling categorical data: k-modes (Huang '98) replaces cluster means with modes, uses new dissimilarity measures to deal with categorical objects, and uses a frequency-based method to update the modes of clusters.
For a mixture of categorical and numerical data: the k-prototype method.

The K-Medoids Clustering Method
Find representative objects, called medoids, in clusters.
PAM (Partitioning Around Medoids, 1987) starts from an initial set of medoids and iteratively replaces one of the medoids by one of the non-medoids if doing so improves the total distance of the resulting clustering.
PAM works effectively for small data sets but does not scale well to large data sets.
CLARA (Kaufmann & Rousseeuw, 1990)
CLARANS (Ng & Han, 1994): randomized sampling
Focusing + spatial data structure (Ester et al., 1995)

PAM (Partitioning Around Medoids)
PAM (Kaufman and Rousseeuw, 1987), built into S-Plus. It uses real objects to represent the clusters:
1. Select k representative objects arbitrarily.
2. For each pair of a non-selected object h and a selected object i, calculate the total improvement in cluster quality $Q_{ih}$.
3. For each pair of i and h, if $Q_{ih} > 0$, replace i by h; then assign each non-selected object to the most similar representative object.
4. Repeat steps 2-3 until there is no change. (A simplified sketch follows below.)
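A simplified PAM-style sketch, assuming Euclidean distance and a greedy medoid/non-medoid swap whenever it improves the total distance; it illustrates the swap idea rather than reproducing the exact original procedure.

import numpy as np

def pam(X, k, seed=0):
    rng = np.random.default_rng(seed)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)    # pairwise distances
    medoids = list(rng.choice(len(X), size=k, replace=False))    # step 1: arbitrary medoids

    def total_cost(meds):
        return D[:, meds].min(axis=1).sum()                      # assign to nearest medoid

    improved = True
    while improved:                                              # repeat until no change
        improved = False
        for i in range(k):
            for h in range(len(X)):
                if h in medoids:
                    continue
                candidate = medoids[:i] + [h] + medoids[i + 1:]
                if total_cost(candidate) < total_cost(medoids):  # swap improves quality
                    medoids = candidate
                    improved = True
    labels = D[:, medoids].argmin(axis=1)
    return medoids, labels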

Time and Space Complexity of K-Means
Space complexity: O((m + K)n).
Time complexity: O(I · K · m · n), where I is the number of iterations, K the number of centroids, m the number of tuples, and n the number of attributes.

K-Means and Post-processing
Ways to reduce SSE: split a cluster; introduce a new cluster centroid.
Ways to decrease the number of clusters: disperse a cluster; merge two clusters.

Bisecting K-Means
Start with two clusters, then repeatedly split one of the current clusters, continuing until K clusters are found (a sketch follows below).

K-Means as an Optimization Problem
K-means and SSE; k-means and the sum of absolute errors (SAE).
Minimize the error function by choosing the appropriate centroid (the mean minimizes SSE, the median minimizes SAE).
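A minimal bisecting k-means sketch, assuming scikit-learn is available and that the cluster with the largest SSE is the one split at each step; both assumptions are illustrative choices, not prescribed by the slides.

import numpy as np
from sklearn.cluster import KMeans

def bisecting_kmeans(X, K, seed=0):
    clusters = [np.arange(len(X))]                        # start with one cluster of all points
    while len(clusters) < K:
        sses = [((X[idx] - X[idx].mean(axis=0)) ** 2).sum() for idx in clusters]
        idx = clusters.pop(int(np.argmax(sses)))          # pick the cluster with the largest SSE
        km = KMeans(n_clusters=2, n_init=10, random_state=seed).fit(X[idx])
        clusters.append(idx[km.labels_ == 0])             # split it into two
        clusters.append(idx[km.labels_ == 1])
    labels = np.empty(len(X), dtype=int)
    for c, idx in enumerate(clusters):
        labels[idx] = c
    return labels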

CLARA (Clustering Large Applications) (1990)
CLARA (Kaufmann and Rousseeuw, 1990), built into statistical analysis packages such as S-Plus.
It draws multiple samples of the data set, applies PAM to each sample, and gives the best clustering as the output.
Strength: deals with larger data sets than PAM.
Weaknesses: efficiency depends on the sample size; a good clustering based on samples will not necessarily represent a good clustering of the whole data set if the sample is biased.

CLARANS (Randomized CLARA) (1994)
CLARANS (A Clustering Algorithm based on RANdomized Search) (Ng and Han '94) draws a sample of neighbors dynamically.
The clustering process can be presented as searching a graph where every node is a potential solution, that is, a set of k medoids.
If a local optimum is found, CLARANS starts from a new randomly selected node in search of a new local optimum.
It is more efficient and scalable than both PAM and CLARA.
Focusing techniques and spatial access structures may further improve its performance (Ester et al. '95).

Hierarchical Clustering
Uses the distance matrix as the clustering criterion. This method does not require the number of clusters k as an input, but it needs a termination condition.
Decomposes the data objects into several levels of nested partitionings (a tree of clusters), called a dendrogram.
A clustering of the data objects is obtained by cutting the dendrogram at the desired level; each connected component then forms a cluster.

Hierarchical Clustering: Example
(Figure: six points p1-p6 in the plane and the corresponding dendrogram with clusters labeled A-F.)

Agglomerative and Divisive
(Figure: objects a-e merged step by step from Step 0 to Step 4 in the agglomerative direction, and split in the reverse order, Step 4 back to Step 0, in the divisive direction.)

Agglomerative Techniques
Compute the cluster proximity matrix, if necessary.
Repeat until only one cluster remains: merge the closest two clusters, then update the proximity matrix.

How to Cluster the Clusters?
Single link: minimum distance, $d_{min}(C_i, C_j) = \min_{p \in C_i,\, p' \in C_j} \|p - p'\|$
Complete link: maximum distance
Average distance (averaged over every pair of cluster members)
Cluster mean distance
Ward's method: clusters are represented using centroids, but proximity is measured in terms of the increase in SSE resulting from merging two clusters.
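A minimal sketch of agglomerative clustering under the different linkage schemes above, assuming SciPy is available; the random data is only for illustration.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

X = np.random.default_rng(0).normal(size=(20, 2))

Z_single = linkage(X, method='single')      # single link: minimum distance
Z_complete = linkage(X, method='complete')  # complete link: maximum distance
Z_average = linkage(X, method='average')    # average over all pairs of members
Z_ward = linkage(X, method='ward')          # Ward's method: increase in SSE

labels = fcluster(Z_ward, t=3, criterion='maxclust')  # cut the dendrogram into 3 clusters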

Lance-Williams Formula for Cluster Proximity
General formula for computing cluster proximity under different schemes. The proximity between cluster Q and cluster R, where R is generated by merging clusters A and B, is
$$p(R, Q) = \alpha_A\, p(A, Q) + \alpha_B\, p(B, Q) + \beta\, p(A, B) + \gamma\, |p(A, Q) - p(B, Q)|$$
where $m_A$, $m_B$, and $m_Q$ are the numbers of points in A, B, and Q respectively, and $p(\cdot,\cdot)$ is the proximity function.
Example, single link: $\alpha_A = 1/2$, $\alpha_B = 1/2$, $\beta = 0$, $\gamma = -1/2$.

Time and Space Complexity
$O(m^2)$ space; $O(m^3)$ running time, which can be improved to $O(m^2 \log m)$ when the cluster proximities are kept sorted.
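A quick numeric check (illustrative only) that the Lance-Williams update with the single-link parameters above reproduces the minimum of the two cluster proximities.

def lance_williams(p_AQ, p_BQ, p_AB, alpha_A, alpha_B, beta, gamma):
    return alpha_A * p_AQ + alpha_B * p_BQ + beta * p_AB + gamma * abs(p_AQ - p_BQ)

p_AQ, p_BQ, p_AB = 2.0, 5.0, 3.0
single = lance_williams(p_AQ, p_BQ, p_AB, 0.5, 0.5, 0.0, -0.5)
assert single == min(p_AQ, p_BQ)   # single link reduces to the minimum distance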

A Few Issues
Lack of a global objective function
Handling different cluster sizes
Weighted vs. unweighted schemes
Local merging decisions

AGNES (Agglomerative Nesting)
Introduced by Kaufmann and Rousseeuw (1990); implemented in statistical analysis packages, e.g., S-Plus.
Uses the single-link method and the dissimilarity matrix: merge the nodes that have the least dissimilarity; eventually all nodes belong to the same cluster.
(Figure: three scatter plots on a 0-10 grid showing points being merged into progressively larger clusters.)

DIANA (Divisive Analysis)
Introduced by Kaufmann and Rousseeuw (1990); implemented in statistical analysis packages, e.g., S-Plus.
Works in the inverse order of AGNES; eventually each node forms a cluster on its own.
(Figure: three scatter plots on a 0-10 grid showing one cluster being split into progressively smaller clusters.)

More on Hierarchical Clustering Methods
Major weaknesses of agglomerative clustering methods: they do not scale well (space complexity of at least $O(m^2)$, where m is the total number of objects), and they can never undo what was done previously.
Integration of hierarchical with distance-based clustering:
BIRCH (1996): uses a CF-tree and incrementally adjusts the quality of sub-clusters.
CURE (1998): selects well-scattered points from the cluster and then shrinks them towards the center of the cluster by a specified fraction.
CHAMELEON (1999): hierarchical clustering using dynamic modeling.

BIRCH
Balanced Iterative Reducing and Clustering using Hierarchies (BIRCH): clustering data in Euclidean space.
Clusters can be represented using a Clustering Feature, CF = (N, LS, SS), where N is the number of points in a cluster, LS is their linear sum, and SS is their square sum.

BIRCH CF-Tree
Used for summarizing clusters; a height-balanced tree.
Non-leaf nodes store entries [CF_i, child_i]: the sums of the CFs of their children and pointers to the child nodes.
Leaf nodes store a sequence of clustering features representing the points that have been previously scanned.
The height of the CF-tree is controlled by a threshold parameter T.
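A minimal sketch of the Clustering Feature idea, assuming NumPy arrays; it illustrates why CFs can be maintained incrementally, since the CF of a merged cluster is just the componentwise sum of the two CFs. The helper names are illustrative.

import numpy as np

def make_cf(points):
    points = np.asarray(points, dtype=float)
    N = len(points)                  # number of points in the cluster
    LS = points.sum(axis=0)          # linear sum of the points
    SS = (points ** 2).sum()         # square sum of the points
    return N, LS, SS

def merge_cf(cf1, cf2):
    # CFs are additive, so merging two sub-clusters needs no rescan of the points.
    return cf1[0] + cf2[0], cf1[1] + cf2[1], cf1[2] + cf2[2]

def centroid(cf):
    N, LS, _ = cf
    return LS / N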

BIRCH (continued)
Scan the database and build the CF-tree in memory, then cluster the leaf nodes of the CF-tree.

Probabilistic Clustering
Finite mixtures of k probability distributions: the structure of the distributions is known; the task is to find their parameters.
Example: two clusters A and B, each with a normal distribution, with means and standard deviations $\mu_A, \sigma_A$ and $\mu_B, \sigma_B$. Samples are taken from A and B with probabilities $p_A$ and $p_B$ respectively.
Estimation of the parameters is easy if we know the decomposition of the data set.

Example (Continued)
If we know the parameters, we can identify the decomposition. Given x, the probability that it belongs to A is
$$\Pr[A \mid x] = \frac{\Pr[x \mid A]\,\Pr[A]}{\Pr[x]} = \frac{f(x; \mu_A, \sigma_A)\, p_A}{\Pr[x]}, \qquad f(x; \mu, \sigma) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$
Normalize $\Pr[A \mid x]$ and $\Pr[B \mid x]$ so that they sum to one.

EM Algorithm
Expectation Maximization (EM): probabilistically assign every member to every cluster based on the weight distribution:
$$\mu_A = \frac{w_1 x_1 + w_2 x_2 + \dots + w_n x_n}{w_1 + w_2 + \dots + w_n}, \qquad \sigma_A^2 = \frac{w_1 (x_1 - \mu_A)^2 + \dots + w_n (x_n - \mu_A)^2}{w_1 + w_2 + \dots + w_n}$$
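A minimal EM sketch for a one-dimensional mixture of two Gaussians (clusters A and B), following the membership and update formulas above; the initialization and the fixed iteration count are illustrative choices.

import numpy as np

def normal_pdf(x, mu, sigma):
    return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)

def em_two_gaussians(x, n_iter=50):
    mu_A, mu_B = x.min(), x.max()              # crude initialization
    sigma_A = sigma_B = x.std()
    p_A = 0.5
    for _ in range(n_iter):
        # E-step: membership weights Pr[A|x_i], normalized against Pr[B|x_i].
        a = p_A * normal_pdf(x, mu_A, sigma_A)
        b = (1 - p_A) * normal_pdf(x, mu_B, sigma_B)
        w = a / (a + b)
        # M-step: weighted means, variances, and mixing probability.
        mu_A, mu_B = np.average(x, weights=w), np.average(x, weights=1 - w)
        sigma_A = np.sqrt(np.average((x - mu_A) ** 2, weights=w))
        sigma_B = np.sqrt(np.average((x - mu_B) ** 2, weights=1 - w))
        p_A = w.mean()
    return (mu_A, sigma_A, p_A), (mu_B, sigma_B, 1 - p_A)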

When to Terminate
The overall likelihood that the data comes from the distribution defined by the model is
$$\prod_i \left( p_A \Pr[x_i \mid A] + p_B \Pr[x_i \mid B] \right)$$
The overall likelihood is a measure of the goodness of the clustering.

Self-Organizing Feature Maps (SOMs)
Can be used for clustering. Originally developed by Kohonen; based on topological maps in the brain.

Architecture of SOM
A graph with N nodes (sometimes called neurons); each node is associated with a weight vector.

SOM Algorithm
1. Randomly initialize the weight vectors of every node.
2. Draw a data point x from the given data set.
3. Find the node i whose weight vector w matches x best.
4. Update w.
5. Update the neighbors of i.
6. Go back to step 2 until the weight vectors converge.

SOFM Details
Initialization: choose small random values for the initial weight vectors $w_j(0)$.
Similarity matching: find the best-matching (winning) node $i(x)$:
$$i(x) = \arg\min_{j} \| x(t) - w_j \|, \quad j = 1, 2, \ldots, N$$

Updating Process
Updating: adjust the weight vectors of all neurons in the neighborhood of $i(x)$:
$$w_j(t+1) = \begin{cases} w_j(t) + \eta(t)\,\big(x(t) - w_j(t)\big) & j \in \Lambda_{i(x)}(t) \\ w_j(t) & \text{otherwise} \end{cases}$$
$\Lambda_{i(x)}(t)$ defines the neighborhood of $i(x)$.
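A minimal one-dimensional SOM sketch following the matching and update rules above, assuming a chain of N nodes, a linearly decaying learning rate, and a shrinking square neighborhood; all of these specific choices are illustrative.

import numpy as np

def som_1d(X, N=10, n_iter=1000, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(N, X.shape[1]))           # small random initial weights
    for t in range(n_iter):
        x = X[rng.integers(len(X))]                           # draw a data point
        winner = np.argmin(np.linalg.norm(x - W, axis=1))     # best-matching node i(x)
        eta = max(0.01, 1.0 - t / n_iter)                     # decaying learning rate eta(t)
        radius = max(1, int(N / 2 * (1 - t / n_iter)))        # shrinking neighborhood
        for j in range(N):
            if abs(j - winner) <= radius:                     # j in the neighborhood of i(x)
                W[j] += eta * (x - W[j])
    return W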

Learning Rate Parameter
Time-varying learning rate parameter $\eta(t)$:
Ordering phase: gradually decrease (linearly, exponentially, or inversely with t) from close to 1 down to about 0.1 during the initial 1000 iterations.
Convergence phase: a small value around 0.01 for the rest of the process (thousands of iterations).
Neighborhood function $\Lambda_{i(x)}(t)$: square, hexagonal, continuous Gaussian, or Mexican hat. It starts by including all neurons and gradually shrinks with the iterations.

Demonstration
Uniformly distributed 2-D input (figure).

Possible Kinks
(Figure.)

Density-Based Clustering Methods
Clustering based on density (a local cluster criterion), such as density-connected points.
Major features: discovers clusters of arbitrary shape; handles noise; needs one scan; needs density parameters as a termination condition.
Several interesting studies: DBSCAN (Ester et al., KDD '96), OPTICS (Ankerst et al., SIGMOD '99), DENCLUE (Hinneburg & Keim, KDD '98), CLIQUE (Agrawal et al., SIGMOD '98).

Density-Based Clustering: Background
Two parameters:
Eps: maximum radius of the neighborhood.
MinPts: minimum number of points in an Eps-neighborhood of a point.
$N_{Eps}(q) = \{\, p \in D \mid dist(p, q) \le Eps \,\}$
Directly density-reachable: a point p is directly density-reachable from a point q w.r.t. Eps, MinPts if (1) $p \in N_{Eps}(q)$ and (2) the core point condition $|N_{Eps}(q)| \ge MinPts$ holds.
(Figure: p lies in the Eps-neighborhood of the core point q, with MinPts = 5 and Eps = 1 cm.)

Density-Based Clustering: Background (II)
Density-reachable: a point p is density-reachable from a point q w.r.t. Eps, MinPts if there is a chain of points $p_1, \ldots, p_n$ with $p_1 = q$ and $p_n = p$ such that $p_{i+1}$ is directly density-reachable from $p_i$.
Density-connected: a point p is density-connected to a point q w.r.t. Eps, MinPts if there is a point o such that both p and q are density-reachable from o w.r.t. Eps and MinPts.
(Figure: a chain q, p1, p illustrating density-reachability, and a point o from which both p and q are density-reachable, illustrating density-connectivity.)

Types of Data Points
Core point: as defined earlier.
Border point: not a core point, but falls within the neighborhood of a core point.
Outlier/noise point: neither a core nor a border point.

DBSCAN: Density-Based Spatial Clustering of Applications with Noise
Relies on a density-based notion of cluster: a cluster is defined as a maximal set of density-connected points.
Discovers clusters of arbitrary shape in spatial databases with noise.
(Figure: core, border, and outlier points, with Eps = 1 cm and MinPts = 5.)

DBSCAN: The Algorithm
Arbitrarily select a point p.
Retrieve all points density-reachable from p with respect to Eps and MinPts.
If p is a core point, a cluster is formed.
If p is a border point, no points are density-reachable from p, and DBSCAN visits the next point of the database.
Continue the process until all of the points have been processed. (A sketch follows below.)
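A compact DBSCAN sketch following the algorithm above, assuming Euclidean distance and a naive O(m^2) neighborhood search; in practice a library implementation such as scikit-learn's DBSCAN would normally be used.

import numpy as np

NOISE, UNVISITED = -1, 0

def dbscan(X, eps, min_pts):
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)   # naive pairwise distances
    labels = np.full(len(X), UNVISITED)
    cluster_id = 0
    for p in range(len(X)):
        if labels[p] != UNVISITED:
            continue
        neighbors = list(np.where(D[p] <= eps)[0])
        if len(neighbors) < min_pts:             # not a core point: mark as noise for now
            labels[p] = NOISE
            continue
        cluster_id += 1                           # p is a core point: start a new cluster
        labels[p] = cluster_id
        seeds = [q for q in neighbors if q != p]
        while seeds:                              # expand the cluster through core points
            q = seeds.pop()
            if labels[q] == NOISE:                # a border point reached from a core point
                labels[q] = cluster_id
            if labels[q] != UNVISITED:
                continue
            labels[q] = cluster_id
            q_neighbors = list(np.where(D[q] <= eps)[0])
            if len(q_neighbors) >= min_pts:       # q is itself a core point: keep expanding
                seeds.extend(q_neighbors)
    return labels                                 # points still labeled NOISE are outliers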

Time and Space Complexity
Time: $O(m^2)$ using the naive scheme, where m is the number of tuples in the data set. Advanced data structures for finding the points in an Eps-neighborhood can reduce the time to $O(m \log m)$.
Space: O(m).

Grid-Based Clustering Methods
Use a multi-resolution grid data structure.
Several interesting methods:
STING (a STatistical INformation Grid approach) by Wang, Yang, and Muntz (1997).
WaveCluster by Sheikholeslami, Chatterjee, and Zhang (VLDB '98): a multi-resolution clustering approach using the wavelet method.

STING: A Statistical Information Grid Approach
Wang, Yang, and Muntz (VLDB '97).
The spatial area is divided into rectangular cells.
There are several levels of cells corresponding to different levels of resolution.

STING: A Statistical Information Grid Approach (2)
Each cell at a high level is partitioned into a number of smaller cells at the next lower level.
Statistical information for each cell is calculated and stored beforehand and is used to answer queries.
Parameters of higher-level cells can easily be calculated from the parameters of lower-level cells: count, mean, standard deviation, min, max, and the type of distribution (normal, uniform, etc.).
Use a top-down approach to answer spatial data queries: start from a pre-selected layer, typically with a small number of cells, and for each cell in the current level compute the confidence interval.

STING: A Statistical Information Grid Approach (3)
Remove the irrelevant cells from further consideration.
When finished examining the current layer, proceed to the next lower level.
Repeat this process until the bottom layer is reached.
Complexity: O(K), where K is the number of grid cells at the lowest level.
All cluster boundaries are either horizontal or vertical; no diagonal boundary is detected.

CLIQUE (Clustering In QUEst)
Agrawal, Gehrke, Gunopulos, Raghavan (SIGMOD '98).
Automatically identifies subspaces of a high-dimensional data space that allow better clustering than the original space.
CLIQUE can be considered both density-based and grid-based:
It partitions each dimension into the same number of equal-length intervals, and thereby partitions an m-dimensional data space into non-overlapping rectangular units.
A unit is dense if the fraction of the total data points contained in the unit exceeds an input model parameter.
A cluster is a maximal set of connected dense units within a subspace.

CLIQUE: The Major Steps
1. Partition the data space and find the number of points that lie inside each cell of the partition (a sketch of this step follows below).
2. Identify the subspaces that contain clusters using the Apriori principle.
3. Identify clusters: determine dense units in all subspaces of interest; determine connected dense units in all subspaces of interest.
4. Generate a minimal description for the clusters: determine the maximal regions that cover each cluster of connected dense units; determine a minimal cover for each cluster.
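A minimal sketch of the first CLIQUE step above, assuming each dimension is cut into an equal number of intervals and that a unit is dense when it holds more than a fraction tau of the points; the subspace search with the Apriori principle is not shown, and the parameter names are illustrative.

import numpy as np
from collections import Counter

def dense_units_1d(X, n_intervals=10, tau=0.1):
    # Return the dense one-dimensional units as (dimension, interval index) pairs.
    dense = []
    for dim in range(X.shape[1]):
        col = X[:, dim]
        edges = np.linspace(col.min(), col.max(), n_intervals + 1)
        bins = np.clip(np.digitize(col, edges) - 1, 0, n_intervals - 1)
        counts = Counter(bins)
        dense += [(dim, b) for b, c in counts.items() if c / len(X) > tau]
    return dense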

(Figure: CLIQUE example with grids of Salary (x10,000) vs. age and Vacation (weeks) vs. age, density threshold τ = 3, and the resulting dense region over age and Vacation.)

Strengths and Weaknesses of CLIQUE
Strengths:
It automatically finds the subspaces of highest dimensionality such that high-density clusters exist in those subspaces.
It is insensitive to the order of the input records and does not presume any canonical data distribution.
It scales linearly with the size of the input and has good scalability as the number of dimensions in the data increases.
Weakness:
The accuracy of the clustering result may be degraded at the expense of the simplicity of the method.

Outlier Detection
Statistical techniques
Distance-based techniques
Density-based techniques
Clustering-based techniques
Classification-based techniques

References
P.-N. Tan, M. Steinbach, and V. Kumar. Introduction to Data Mining.
A. K. Jain, M. N. Murty, and P. J. Flynn. Data Clustering: A Review.
J. Han and M. Kamber. Data Mining: Concepts and Techniques.
A. K. Jain and R. C. Dubes. Algorithms for Clustering Data. Prentice Hall, 1988.