Uniformity and Homogeneity Based Hierarchical Clustering

Peter Bajcsy and Narendra Ahuja
Beckman Institute, University of Illinois at Urbana-Champaign
405 N. Mathews Ave., Urbana, IL 61801
E-mail: peter@stereo.ai.uiuc.edu and ahuja@vision.ai.uiuc.edu

(This research was supported in part by the Advanced Research Projects Agency under grant N00014-93-1-1167, administered by the Office of Naval Research.)

Abstract

This paper presents a clustering algorithm for dot patterns in $n$-dimensional space. The $n$-dimensional space often represents a multivariate ($n_f$-dimensional) function in an $n_s$-dimensional space ($n_s + n_f = n$). The proposed algorithm decomposes the clustering problem into two lower-dimensional problems. Clustering in the $n_f$-dimensional space detects the sets of dots in $n$-dimensional space having similar $n_f$-variate function values (location based clustering using a homogeneity model). Clustering in the $n_s$-dimensional space detects the sets of dots in $n$-dimensional space having similar interneighbor distances (density based clustering with a uniformity model). Clusters in the $n$-dimensional space are obtained by combining the results in the two subspaces.

1. Introduction

Clustering explores the inherent tendency of a dot pattern to form sets of dots (clusters) in multidimensional space. The multidimensional space represents parameters of some phenomenon; for example, an image may contain multiple overlapping textures with inherent densities (one subspace) and different colors or shapes of texels within each texture (another subspace). This paper presents a new clustering method that separates the $n_s$-dimensional spatial properties (e.g., location and density) and the $n_f$-dimensional intrinsic properties represented by the dot distribution ($n_s + n_f = n$). In this sense it differs from many of the existing methods (single link, complete link, minimum spanning tree, Zahn's clustering, nearest neighbors, Voronoi neighbors, K-means and mode seeking [6, 3, 2, 11, 1, 10, 8, 9, 5, 4]).

The clustering problem is decomposed into two lower-dimensional problems. The dot pattern in $n$-dimensional space is projected onto the two subspaces; the specific choice of subspaces is determined by the application at hand. Clustering is performed in each subspace and the results are then combined. Thus, clustering is viewed as an extension of the problem of segmenting a noisy multivariate multidimensional function. A location uniformity model is used in the $n_s$-dimensional subspace (modeling uniform sampling) to detect clusters with similar interior distances between dots (density based clustering), and a homogeneity model is used in the $n_f$-dimensional subspace (modeling constant multivariate function values) to detect clusters with similar locations of dots (location based clustering). Similarity is defined as the Euclidean distance, e.g., between two interior distances or two locations. The two models are used in the corresponding subspaces, and the links and dot locations are clustered using a new method. Overall clustering is carried out by clustering the two dot patterns independently in the $n_s$- and $n_f$-dimensional subspaces and then combining the results. A hierarchical organization of clusters is obtained by (1) varying the degrees of uniformity $\varepsilon$ and homogeneity $\delta$ to create several clusterings and (2) capturing the relationships among the detected clusters as a function of the uniformity $\varepsilon$ and homogeneity $\delta$. The proposed clustering method can be related to graph theoretic algorithms.
2. Uniformity and homogeneity based clustering

First, a mathematical framework is established in section 2.1. $n$-dimensional ($n$D) points are projected onto the two lower-dimensional subspaces, giving rise to the $n_s$-dimensional ($n_s$D) sample points and the $n_f$-dimensional ($n_f$D) attribute points. Clustering of sample points is proposed with the uniformity model in section 2.2 (uniformity of sample point locations, or homogeneity of interior link distances). Clustering of attribute points is proposed with the homogeneity model in section 2.3 (homogeneity of point locations). A procedure for hierarchical clustering is outlined in section 2.4. The result is exclusive (nonoverlapping clusters), intrinsic (no a priori knowledge), agglomerative (grouping points), and a graph based hierarchical classification of a dot pattern.
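As a concrete picture of this decomposition, here is a minimal Python sketch, not the authors' code: the helper names are mine, and the intersection-of-labels rule in `combine_labels` is one plausible reading of how the two subspace clusterings are combined.

```python
import numpy as np

def split_subspaces(dots, n_s):
    # Split nD dots p_i = (x_i, f(x_i)) into n_s-D sample points
    # and n_f-D attribute points.
    dots = np.asarray(dots, dtype=float)
    return dots[:, :n_s], dots[:, n_s:]

def combine_labels(sample_labels, attr_labels):
    # A dot belongs to an nD cluster (j, k) iff it lies in sample
    # cluster j and attribute cluster k; relabel the surviving pairs.
    pairs = list(zip(sample_labels, attr_labels))
    ids = {p: t for t, p in enumerate(sorted(set(pairs)))}
    return [ids[p] for p in pairs]
```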

2.1. Mathematical formulation

An $n$D dot pattern is defined as a set of points $p_i$ with coordinates $(x_1, x_2, \ldots, x_{n_s}, f_1, f_2, \ldots, f_{n_f})$, which represent a discrete sample point $x_i = (x_1, x_2, \ldots, x_{n_s})$ and a discrete attribute point $f(x_i) = (f_1, f_2, \ldots, f_{n_f})$ in the two $n_s$- and $n_f$-dimensional subspaces. $f$ is defined as a mapping $f : \mathbb{R}^{n_s} \rightarrow \mathbb{R}^{n_f}$ at the sample points $x_i$. The dissimilarity measure $d$ of two dots $p_1$ and $p_2$ is defined by the Euclidean distance of the minimum path between $p_1$ and $p_2$ (denoted as link $l_{p_1,p_2}$), i.e., $d(l_{p_1,p_2}) = \lVert p_1 - p_2 \rVert$. Given sample points $x_i$, a link is assigned to every possible pair of sample points. All links over the given sample points $x_i$ create a complete graph $H = \{ l_{x_{i1},x_{i2}} = l \}$.

Let us suppose that all links from a complete graph $H$ are partitioned into nonoverlapping clusters of links $CL_m$, where $m$ is the index of a cluster. The uniformity $\varepsilon$ of one cluster of links $CL_m$ (denoted as $CL_m^{\varepsilon}$) is valid if for all links in the cluster $CL_m^{\varepsilon}$ the following holds: (1) a connected graph $G(CL_m^{\varepsilon}) \subseteq H$ is created, i.e., if every sample point is a vertex in the graph then a path exists between any two vertices in the connected graph; (2) the distances $d(l \in CL_m^{\varepsilon})$ associated with the links vary by no more than $\varepsilon$, i.e., $\lvert d(l_1 \in CL_m^{\varepsilon}) - d(l_2 \in CL_m^{\varepsilon}) \rvert \leq \varepsilon$. Equivalently, all links $l \in CL_m^{\varepsilon}$ must have link distances within an $\varepsilon$-wide distance interval, $d(l) \in [d_{midp}(CL_m^{\varepsilon}) - \frac{\varepsilon}{2},\ d_{midp}(CL_m^{\varepsilon}) + \frac{\varepsilon}{2}]$, where $d_{midp}(CL_m^{\varepsilon})$ is the average of the maximum and minimum link distances in the connected graph $G(CL_m^{\varepsilon})$: $d_{midp}(CL_m^{\varepsilon}) = \frac{1}{2}(\max\{d(l)\} + \min\{d(l)\})$ (see Figure 1). Having the final partition of all links into clusters of links $CL_m^{\varepsilon}$ with $\varepsilon$-uniformity, we can obtain the final partition of the sample points $x_i$ into clusters of sample points $CS_j^{\varepsilon}$ with $\varepsilon$-uniformity based on the priority of minimum average link distances within clusters of links (the minimum spanning tree of the clusters of links $CL_m^{\varepsilon}$).

Let us suppose that all attribute points $f(x_i)$ are partitioned into clusters of attribute points $CF_j$, where $j$ is the index of a cluster. The homogeneity $\delta$ of one cluster $CF_j$ (denoted as $CF_j^{\delta}$) is defined as the maximum distance between any pair of attribute points from the cluster, i.e., $\lVert f(x_1 \in CF_j^{\delta}) - f(x_2 \in CF_j^{\delta}) \rVert \leq \delta$. Equivalently, any attribute point $f(x_i) \in CF_j^{\delta}$ must lie within an $n_f$D sphere, $f(x_i) \in Sph(\mathrm{center} = f_{midp}, \mathrm{radius} = \frac{\delta}{2})$, where $f_{midp}$ has the coordinates of the midpoint of the two most distant attribute points $f(x_q)$ and $f(x_t)$: $\lVert f(x_q) - f(x_t) \rVert = \max\{\lVert f(x_{i1}) - f(x_{i2}) \rVert\}$ and $f_{midp} = \frac{1}{2}(f(x_q) + f(x_t))$.

Figure 1. Uniformity and separation of clusters of links. Uniformity and separation of clusters of links are illustrated on the axis of link distances $d(l)$. Clusters of links with $\varepsilon$-uniformity contain links whose link distances occupy an $\varepsilon$-wide interval on the axis of link distances ($CL_1^{\varepsilon}, CL_2^{\varepsilon}, CL_3^{\varepsilon}$). The separation of any pair of clusters of links which share at least one common sample point $x_i$ by their links ($CL_1^{\varepsilon}$, $CL_2^{\varepsilon}$) is defined as $\alpha_s = \min\{\lvert d(l \in CL_1^{\varepsilon}) - d(l \in CL_2^{\varepsilon}) \rvert\}$. There is no separation defined between clusters of links which do not share at least one common sample point ($CL_1^{\varepsilon}, CL_3^{\varepsilon}$).

The $\delta$-homogeneity of a cluster is illustrated in Figure 2.
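To make the definitions concrete, the following Python sketch (my reconstruction, not the authors' implementation) computes the link distances of the complete graph $H$ and tests condition (2) of $\varepsilon$-uniformity for a candidate cluster of links:

```python
import itertools
import numpy as np

def link_distances(sample_points):
    # All link distances d(l) of the complete graph H over x_i;
    # links are keyed by their pair of sample-point indices.
    X = np.asarray(sample_points, dtype=float)
    return {(i, j): float(np.linalg.norm(X[i] - X[j]))
            for i, j in itertools.combinations(range(len(X)), 2)}

def is_epsilon_uniform(cluster_links, dists, eps):
    # Condition (2): all link distances fit in an eps-wide interval
    # centered at d_midp = (max + min) / 2. Condition (1), the
    # connectivity of G(CL), must be checked separately.
    d = [dists[l] for l in cluster_links]
    return max(d) - min(d) <= eps
```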
Definition 1 ($\varepsilon$-uniformity and $\delta$-homogeneity based dot pattern clustering). Given the uniformity parameter $\varepsilon$, the homogeneity parameter $\delta$, and $n$D dots $p_i = (x_i, f(x_i))$, an $(\varepsilon, \delta)$ based dot pattern clustering partitions the $n$D dots $p_i$ into a set of clusters $C_t^{\varepsilon,\delta}$ ($t$ is the index of a cluster) such that the clusters $C_t^{\varepsilon,\delta}$ satisfy the following properties:

1. $\varepsilon$-uniformity of the sample points $x_i$.
2. $\delta$-homogeneity of the attribute points $f(x_i)$.
3. Cluster intersection: $C_{t_1}^{\varepsilon,\delta} \cap C_{t_2}^{\varepsilon,\delta} = \emptyset$ for all $t_1 \neq t_2$.
4. Cluster union: $\bigcup_t C_t^{\varepsilon,\delta} = \bigcup_i p_i$.
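Properties 3 and 4 are the usual partition axioms; a small hedged check, assuming clusters are represented as sets of dot indices (a representation of my choosing), might look like:

```python
def is_valid_partition(clusters, n_dots):
    # Definition 1, properties 3 and 4: clusters must be pairwise
    # disjoint (empty intersection) and jointly cover all dots (union).
    seen = set()
    for c in clusters:
        if seen & c:
            return False          # property 3 violated
        seen |= c
    return seen == set(range(n_dots))  # property 4
```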

Figure 2. Homogeneity and separation of clusters of 2D attribute points. All attribute points from one $\delta$-homogeneous cluster lie within a sphere having center $f_{midp}$ and radius $\frac{\delta}{2}$ (see $CF_2^{\delta}$). The separation $\alpha_f$ of a pair of clusters is defined as the minimum distance between two attribute points, one from each cluster: $\alpha_f = \min\{\lVert (f(x_{i1}) \in CF_1^{\delta}) - (f(x_{i2}) \in CF_2^{\delta}) \rVert\}$.

2.2. Clustering of sample points $x_i$

The clustering method for unknown clusters of links having a large separation of link distances with respect to their interior uniformity is proposed first, in 2.2.1. Statistical descriptors of clusters are introduced in 2.2.2 to cope with unknown clusters of links having a small separation of link distances with respect to their interior uniformity. The descriptors and a sequential decrease of the number of processed links reduce the complexity of the proposed clustering method. The clustering algorithm itself is given in 2.2.3.

2.2.1 The clustering method for modelled clusters

Let us suppose that $M$ nonoverlapping $\varepsilon$-uniform clusters of links $\{CL_m^{\varepsilon},\ m = 1, \ldots, M\}$ are created from a complete graph $H = \{l\}$ over the sample points $x_i$. Let us assume that for all pairs of clusters of links $CL_{m1}^{\varepsilon}, CL_{m2}^{\varepsilon}$ sharing at least one sample point $x_i$ by their links, the separation $\alpha_s$ of link distances is larger than their interior uniformity $\varepsilon$: $\alpha_s > \varepsilon$ (see Figure 3). This scenario represents a modelled dot pattern.

An unknown cluster of links $CL_m^{\varepsilon}$ can be created from any link $l_1 \in CL_m^{\varepsilon}$ by grouping together all links satisfying the inequality $\lvert d(l_1) - d(l) \rvert \leq \varepsilon$. The $\varepsilon$-uniform cluster $CL_m^{\varepsilon}$ is identical with the $2\varepsilon$-uniform cluster $CL_{l_1}^{2\varepsilon}$ created from the link $l_1$ such that $d_{midp} = d(l_1)$ and $\lvert d_{midp} - d(l) \rvert \leq \varepsilon$, where $d_{midp}$ was defined in section 2.1. Starting from individual links, clusters of links $CL_m^{\varepsilon}$ can be created by grouping together those links $l_1$ and $l_2$ which (1) are connected ($l_1$ and $l_2$ share one common point $x_i$) and (2) lead to identical $2\varepsilon$-uniform clusters $CL_{l_1}^{2\varepsilon} = CL_{l_2}^{2\varepsilon}$.

Let us order the clusters of links $CL_m^{\varepsilon}$ by their average link distances (first moments $d_{1stm}(l \in CL_m^{\varepsilon}) = \frac{1}{M_m} \sum_{l \in CL_m^{\varepsilon}} d(l)$), from the shortest average link distance to the longest: $d_{1stm}(l \in CL_1^{\varepsilon}) \leq d_{1stm}(l \in CL_2^{\varepsilon}) \leq \ldots \leq d_{1stm}(l \in CL_M^{\varepsilon})$. Then the clusters of sample points $CS_j^{\varepsilon}$ are uniquely derived from the clusters of links $CL_m^{\varepsilon}$ by using the minimum spanning tree of the $CL_m^{\varepsilon}$ with link distances equal to $d_{1stm}(l \in CL_m^{\varepsilon})$. Thus links $l \in CL_1^{\varepsilon}$ from the ordered set of $CL_m^{\varepsilon}$ assign labels to sample points $x_i$ first, then links $l \in CL_2^{\varepsilon}$ to the remaining unlabelled sample points, and so on.

Figure 3. Link distances for $CL_m^{\varepsilon}$ and $CL_l^{2\varepsilon}$. An $\varepsilon$-uniform unknown cluster of links $CL_m^{\varepsilon}$ with a large separation of link distances with respect to its interior uniformity ($\alpha_s > \varepsilon$) contains the same links as any $2\varepsilon$-uniform cluster $CL_l^{2\varepsilon}$ created from a link $l \in CL_m^{\varepsilon}$ (see $CL_l^{2\varepsilon} = CL_1^{\varepsilon}$).

Proposed clustering of sample points $x_i$: (1) Create the $2\varepsilon$-uniform cluster of connected links $CL_l^{2\varepsilon}$ for each link $l$ (see the sketch below). (2) Compare all pairs of $2\varepsilon$-uniform clusters $CL_{l_1}^{2\varepsilon}$ and $CL_{l_2}^{2\varepsilon}$ which started from links $l_1, l_2$ sharing one sample point $x_i$. (3) Assign links to clusters of links $CL_m^{\varepsilon}$ based on the comparisons. (4) Map the created clusters of links $CL_m^{\varepsilon}$ uniquely into clusters of sample points $CS_j^{\varepsilon}$.

These four steps of the proposed clustering method for a modelled dot pattern ($\alpha_s > \varepsilon$) are applied to any dot pattern having unknown clusters of links with large or small separation with respect to their interior uniformity ($\alpha_s > \varepsilon$ or $\alpha_s \leq \varepsilon$). Performance is improved by using cluster descriptors for the comparison of $2\varepsilon$-uniform clusters in step 2: the exhaustive comparison of two clusters is replaced by a comparison of two values (descriptors), which reduces the computational complexity. The complexity is reduced further by sequentially decreasing the number of processed links.
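Step (1) can be sketched as follows. This is a naive Python rendering (mine, not the authors') of growing the $2\varepsilon$-uniform cluster of connected links around a seed link, using the `dists` map from the earlier sketch; it favors clarity over efficiency.

```python
def grow_2eps_cluster(seed, dists, eps):
    # Grow the 2*eps-uniform cluster around `seed` (so d_midp = d(seed)):
    # repeatedly absorb links that share a sample point with a cluster
    # member and satisfy |d(seed) - d(l)| <= eps.
    cluster, frontier = {seed}, [seed]
    while frontier:
        a, b = frontier.pop()
        for l, d in dists.items():
            if l not in cluster and (a in l or b in l) \
                    and abs(dists[seed] - d) <= eps:
                cluster.add(l)
                frontier.append(l)
    return cluster
```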
2.2.2 Complexity reduction and performance improvement

A. Descriptors of clusters: A simplified comparison of two $2\varepsilon$-uniform clusters can be performed using descriptors $D(CL_l^{2\varepsilon})$ derived from the link distances $d(l_i \in CL_l^{2\varepsilon})$. The descriptor is chosen as the mean estimator (first moment) of the correct link distance $\mu_m$ within a cluster $CL_m^{\varepsilon}$: $D(CL_l^{2\varepsilon}) = d_{1stm}(l_i \in CL_l^{2\varepsilon}) = \frac{1}{M_l} \sum_{i=1}^{M_l} d(l_i \in CL_l^{2\varepsilon}) \approx \mu_m$. Analysis of clustering accuracy using descriptors for $\alpha_s > \varepsilon$ and $\alpha_s \leq \varepsilon$ led to simplified comparisons of pairs of clusters $CL_{l_1}^{2\varepsilon}$ and $CL_{l_2}^{2\varepsilon}$ in the form of inequalities, $\lvert D(CL_{l_1}^{2\varepsilon}) - D(CL_{l_2}^{2\varepsilon}) \rvert \leq \varepsilon$, rather than equalities $D(CL_{l_1}^{2\varepsilon}) = D(CL_{l_2}^{2\varepsilon})$, when deciding whether two links $l_1, l_2$ are to be assigned to the same cluster (cluster detection approach). The inequalities improve the noise robustness of the method (they decrease the probability of misclassification).
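In code, the descriptor and the inequality test of step 2 might look like the following sketch (the function names are mine; `dists` is the link-distance map from the earlier sketches):

```python
import numpy as np

def descriptor(cluster_links, dists):
    # First-moment descriptor D: the mean link distance of a cluster.
    return float(np.mean([dists[l] for l in cluster_links]))

def may_merge(c1, c2, dists, eps):
    # Inequality test |D(c1) - D(c2)| <= eps, replacing the exhaustive
    # comparison of the two 2*eps-uniform clusters.
    return abs(descriptor(c1, dists) - descriptor(c2, dists)) <= eps
```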

s > " and s " led to simplified comparisons of pairs of clusters and in the form of inequalities j D( )? D( ) j " rather than equalities D( l ) = 1 D( l ) 2 if two lins ; are going to be assigned to the same cluster (cluster detection approach). Inequalities improve the noise robustness of the method (decrease probability of misclassification). B. Number of lins: The number of processed lins is decreased by merging lins in the order of the lin distances d( ) (from the shortest lins to the longest lins) into CL " m and deriving clusters of sample points CS j " immediately. No other lins, which contain already merged sample points x i 2 CS j ", will be processed afterwards. When the union of all clusters of sample points includes all given sample points ([CS j " = [x i) then no more lins are processed. 2.2.3 Clustering procedure (1) Calculate lin distances d( ) for the complete graph H over sample points x i. (2) Order d( ) from the shortest to the longest. (3) Create 2"-uniform clusters of lins for each individual lin such that d( ) = d midp d( ) + " of the cluster. (4) Calculate descriptors d 1stm ( ). (5) Group together connected pairs of lins 1 and 2 into a common cluster of lins CL " m if j d 1stm( 2 l ) 1? d 1stm ( 2 l ) 2 j ". () Assign those unassigned sample points to clusters CS " j, which belong to lins creating clusters CL " m. (7) Remove all lins from the ordered set, which contain already assigned sample points. (8) Perform calculations from step (3) for d( ) = d( ) + " until there are unassigned sample points. 2.3. Clustering of attribute points f(x i ) Clustering of attribute points is analogous to the clustering of sample points with replacing lins by attribute point locations. The clustering method for unnown clusters having a large separation with respect to their homogeneity f > is derived first. Proposed clustering for attribute points f(x i ) (1) Create 2-homogeneous clusters CFf(x 2 i) for every attribute point f(x i ). (2) Compare pairs of clusters CFf(x 2 i). (3) Assign attribute points f(x i ) to clusters CFj based on comparisons. Unnown clusters CFj having a small separation with respect to their interior homogeneity f are tacled by using descriptors of 2-homogeneous clusters in the step 2, which estimate the correct centroid value j of attribute points within a cluster CFj ; D(CF f(x 2 i)2cfj ) = f 1stm (f(x l ) 2 CF 2 f(x i) ) = 1 M f(x i ) P Mf(x i ) l=1 (f(x l ) 2 CFf(x 2 i) ) / j. The clustering algorithm is provided next. Clustering procedure (1) Create 2-homogeneous clusters CFf(x 2 i) for each attribute point f(x i ). (2) Calculate descriptors f 1stm (f 2 CFf(x 2 i) ). (3) Group together attribute points f(x 1 ) and f(x 2 ) into a common cluster CFj if f 1stm (f 2 CFf(x 2 1) )?f 1stm(f 2 CFf(x 2 2) ). 2.4. Hierarchical clustering A hierarchy of clusters of dots C t "; is defined as a combination of the hierarchy of clusters of lins CL " m and the hierarchy of clusters of attribute points CFj for varying uniformity and homogeneity parameters (", ). Clusters of lins or attribute points are organized hierarchically by allowing the clusters only to grow for increasing parameter. The hierarchy of clusters of lins CL " m and clusters of attribute points CFj is guaranteed by modifying lin distances and attribute points within created clusters at each parameter value ("; ) to the first moments of lin distances d 1stm ( 2 CL " m ) and attribute points f 1stm(f(x i ) 2 CFj ) for performed agglomerative clustering. 3. 
3. Performance evaluation

Theoretical and experimental evaluations focus on (1) clustering accuracy and (2) performance in real applications. Clustering accuracy is tested on (1) synthetic (modelled) dot patterns and (2) standard test dot patterns (8OX, IRIS), which have been used by several other researchers to illustrate properties of clusters (8OX is used in [6] and IRIS in [7, 6]). Experimental results are compared with four other clustering methods (single link, complete link, FORGY and CLUSTER). In addition, the experimental performance of both proposed methods is tested using dot patterns from [11] to compare clustering results with the two related methods: Zahn's clustering [11] ($\varepsilon$-uniformity method) and the centroid method [6] ($\delta$-homogeneity method). Experiments for real applications are conducted on dot patterns obtained from botanical analysis of plants and from image texture analysis and synthesis.

From all the aforementioned experiments we show only one result, an image texture detection, to demonstrate a property of the proposed clustering method that is exceptional with respect to the other known clustering techniques. A synthetic image with three overlapping textures having different densities of circles (texels) contains subsets of circles having different colors. The created gray scale image is shown in Figure 4 (top). The image was segmented, and 2D dots were obtained as the centroids of the detected regions.

Figure 4. Image with overlapping transparent textures. Top: original synthetic image. Bottom: $\varepsilon$-uniformity clustering of a dot pattern obtained by taking the centroid locations of the detected regions in the segmented synthetic image; clusters are denoted by numerical labels for the large, middle size, and small circles, and are detected at the uniformity value $\varepsilon = 0.1$.

$\varepsilon$-uniformity clustering of the dot pattern separated the three different textures, as shown in Figure 4 (bottom). The texture separation was successful despite the partial occlusion of circles, and therefore the irregularity of the dot locations obtained from the segmented regions. Within each detected texture, $\delta$-homogeneity clustering grouped together circles with similar color.

4. Conclusions

We have presented a new hierarchical clustering method that decomposes the $n$-dimensional clustering problem into two lower-dimensional problems. The decomposition allows us to apply two different models to the $n$-dimensional dots: the $\varepsilon$-uniformity model in the $n_s$-dimensional subspace and the $\delta$-homogeneity model in the $n_f$-dimensional subspace ($n_s + n_f = n$). A new $\varepsilon$-uniformity method for density based clustering is proposed for $n_s$D spatial points. The use of density allows us to detect multiple interleaved noisy clusters that represent projections of different clusters on transparent surfaces into a single image. $\delta$-homogeneity clustering is proposed for $n_f$D attribute points to detect the intrinsic properties represented by the dots.

References

[1] N. Ahuja. Dot pattern processing using Voronoi neighborhoods. IEEE Transactions on Pattern Analysis and Machine Intelligence, 4(3):336-343, May 1982.
[2] N. Ahuja and B. J. Schachter. Pattern Models. John Wiley and Sons, USA, 1983.
[3] K. S. Fu, editor. Digital Pattern Recognition. Springer-Verlag, New York, 1976.
[4] B. S. Everitt. Cluster Analysis. Edward Arnold, a division of Hodder and Stoughton, London, 1993.
[5] H. Hanaizumi, S. Chino, and S. Fujimura. A binary division algorithm for clustering remotely sensed multispectral images. IEEE Transactions on Instrumentation and Measurement, 44(3):759-763, June 1995.
[6] A. K. Jain and R. C. Dubes. Algorithms for Clustering Data. Prentice Hall, Englewood Cliffs, New Jersey, 1988.
[7] P. H. Sneath and R. R. Sokal. Numerical Taxonomy. W. H. Freeman and Company, San Francisco, 1973.
[8] M. Tuceryan and A. K. Jain. Texture segmentation using Voronoi polygons. IEEE Transactions on Pattern Analysis and Machine Intelligence, 12(2):211-216, 1990.
[9] Y. Wong and E. C. Posner. A new clustering algorithm applicable to multiscale and polarimetric SAR images. IEEE Transactions on Geoscience and Remote Sensing, 31(3):634-644, May 1993.
[10] B. Yu and B. Yuan. A global optimum clustering algorithm. Engineering Applications of Artificial Intelligence, 8(2):223-227, April 1995.
[11] C. T. Zahn. Graph-theoretical methods for detecting and describing gestalt clusters. IEEE Transactions on Computers, C-20(1):68-86, January 1971.