Web Mining: Clustering Web Documents - A Preliminary Review

Khaled M. Hammouda
Department of Systems Design Engineering
University of Waterloo
Waterloo, Ontario, Canada N2L 3G1
hammouda@pami.uwaterloo.ca

February 26, 2001

Evidently there is a tremendous proliferation in the amount of information found today on the largest shared information source, the World Wide Web (or simply the Web). The process of finding relevant information on the Web can be overwhelming. Even with today's search engines that index the Web, it is hard to wade through the large number of documents returned in response to a user query. This fact has led to the need to organize large sets of documents (whether the result of a user query or simply a collection of documents) into categories through clustering. It is believed that grouping similar documents together into clusters will help users find relevant information more quickly, and will allow them to focus their search in the appropriate direction.

The purpose of this review is to explore the clustering techniques in the data mining literature and to report on their appropriateness for clustering large sets of web documents. The review is by no means complete, but it covers the most representative approaches to clustering.

1. Background

The motivation behind clustering any set of data is to find inherent structure in the data and to expose this structure as a set of groups, where the data objects within each group should exhibit a large degree of similarity (known as intra-cluster similarity) while the similarity among different clusters should be minimized [9]. There is a multitude of clustering techniques in the literature, each adopting a certain strategy for detecting the grouping in the data. However, most of the reported methods share some common features [4]:

- There is no explicit supervision effect.
- Patterns are organized with respect to an optimization criterion.
- They all adopt the notion of similarity or distance.

It should be noted, however, that some algorithms make use of labelled data to evaluate their clustering results, but not in the process of clustering itself (e.g. [12] and [13]).

Many of the clustering algorithms were motivated by a certain problem domain. Accordingly, the requirements of each algorithm vary, including the data representation, cluster model, similarity measures, and running time. Each of these requirements has a more or less significant effect on the usability of any algorithm. Moreover, this makes it difficult to compare algorithms designed for different problem domains. The following section addresses some of these requirements.

2. Properties of Clustering Algorithms

Before we can analyze and compare different algorithms, we have to define some of the properties of such algorithms, and find out which problem domains impose which kinds of properties. An analysis of different document clustering methods will be presented in section 3.

2.1. Data Model

Most clustering algorithms expect the data set to be clustered in the form of a set of vectors X = {x_1, x_2, ..., x_n}, where the vector x_i, i = 1, ..., n, corresponds to a single object in the data set and is called the feature vector. Extracting the proper features to represent through the feature vector is highly dependent on the problem domain. The dimensionality of the feature vector is a crucial factor in the running time of the algorithm and hence its scalability. However, some problem domains by default impose a high dimension. There exist methods to reduce the problem dimension, such as principal component analysis. Krishnapuram et al. [5] were able to reduce a 500-dimensional feature vector to 10 dimensions using this method; however, its validity was not justified.

We now turn our focus to document data representation, and how to extract the proper features.

Document Data Model

Most document clustering methods use the Vector Space model to represent document objects. Each document is represented by a vector d in the term space, such that d = {tf_1, tf_2, ..., tf_n}, where tf_i, i = 1, ..., n, is the term frequency of the term t_i in the document. To represent every document with the same set of terms, we have to extract all the terms found in the documents and use them as our feature vector. (Obviously the dimensionality of the feature vector is always very high, in the range of hundreds and sometimes thousands.)

Sometimes another method is used which combines the term frequency with the inverse document frequency (TF-IDF). The document frequency df_i is the number of documents in a collection in which the term t_i occurs. A typical inverse document frequency (idf) factor of this type is given by log(N / df_i), where N is the number of documents. The weight of a term t_i in a document is then given by w_i = tf_i * log(N / df_i) [13]. To keep the feature vector dimension reasonable, only the n terms with the highest weights in all the documents are chosen as the n features.
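
To make the TF-IDF weighting concrete, here is a minimal Python sketch (the toy corpus, the whitespace tokenization, and the natural-logarithm base are illustrative assumptions, not prescribed by the paper):

    import math

    # Toy corpus; in practice the documents would first be cleaned,
    # stop-word filtered, and stemmed (see below).
    docs = [
        "web document clustering for web search",
        "clustering algorithms for data mining",
        "search engines index the web",
    ]

    # Term frequency tf_i of each term t_i per document.
    tfs = [{t: d.split().count(t) for t in set(d.split())} for d in docs]

    # Document frequency df_i: the number of documents containing term t_i.
    N = len(docs)
    df = {}
    for tf in tfs:
        for t in tf:
            df[t] = df.get(t, 0) + 1

    # Weight of term t_i in a document: w_i = tf_i * log(N / df_i).
    weights = [{t: tf[t] * math.log(N / df[t]) for t in tf} for tf in tfs]

    # A term occurring in every document (df_i = N) would get weight 0.
    print(weights[0])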

Wong and Fu [13] showed that they could reduce the number of representative terms by choosing only the terms that have sufficient coverage over the document set, where the coverage of a feature is defined as the percentage of documents containing that feature.

Some algorithms [6][13] refrain from using term frequencies (or term weights) altogether by using a binary feature vector, where each term weight is either 1 or 0, depending on whether the term is present in the document or not, respectively. Wong and Fu [13] argued that the average term frequency in web documents is below 2 (based on statistical experiments), which does not indicate the actual importance of the term; thus a binary weighting scheme is more suitable to this problem domain.

Before any feature extraction takes place, the document set is first cleaned by removing stop-words (very common words that carry no significance for capturing relevant information about a document, such as "the", "and", "a") and then applying a stemming algorithm that converts different word forms into a common canonical form.

Another model for document representation is called the N-gram model. The N-gram model treats the document as a sequence of characters, and using a sliding window of size n the character sequence is scanned, extracting all n-character sequences in the document. The N-gram approach is tolerant of minor spelling errors because of the redundancy introduced in the resulting n-grams. The model also achieves some language independence when used with a stemming algorithm. Similarity in this approach is based on the number of shared n-grams between two documents, as the sketch below illustrates.
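
A minimal Python sketch of the N-gram model (the window size n = 3 and the use of distinct n-gram sets are illustrative choices):

    def ngrams(text, n=3):
        """Scan the character sequence with a sliding window of size n."""
        text = text.lower()
        return {text[i:i + n] for i in range(len(text) - n + 1)}

    def shared_ngrams(a, b, n=3):
        """Similarity based on the number of shared n-grams."""
        return len(ngrams(a, n) & ngrams(b, n))

    # The redundancy among n-grams makes the score tolerant of
    # minor spelling errors ('clustering' vs. 'clustring').
    print(shared_ngrams("document clustering", "document clustring"))
    print(shared_ngrams("document clustering", "spherical mirrors"))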

Finally, a newer model proposed by Zamir and Etzioni [2] is a phrase-based approach. The model finds common phrase suffixes between documents and builds a suffix tree where each node represents part of a phrase (a suffix node) and has associated with it the documents containing this phrase suffix. The approach clearly captures the information of word proximity, which is thought to be valuable for finding similar documents.

Numerical Data Model

A more straightforward model of data is the numerical model. Based on the problem context, a number of features are extracted, where each feature is represented as an interval of numbers. The feature vector is usually of reasonable dimensionality, yet it depends on the problem being analyzed. The feature intervals are usually normalized so that each feature has the same effect when calculating distance measures. Similarity in this case is straightforward, since the distance calculation between two vectors is usually trivial [15].

Categorical Data Model

This model is usually found in problems related to database clustering. Database table attributes are usually of a categorical nature, with some attributes being numerical. Usually statistically based clustering approaches are used to deal with this kind of data. The ITERATE algorithm is one example that deals with categorical data on a statistical basis [16]. The k-modes algorithm is also a good example [17].

Mixed Data Model

Depending on the problem domain, sometimes the features representing the data objects are not all of the same type. A combination of numerical, categorical, spatial, or text data might be the case. In these domains it is important to devise an approach that captures all the information efficiently. A conversion process might be applied to convert one data type to another (e.g. discretization of continuous numerical values). Sometimes the data is kept intact, but the algorithm is modified to work on more than one data type [16].

2.2. Similarity Measure

A key factor in the success of any clustering algorithm is the similarity measure it adopts. In order to group similar data objects, a proximity metric has to be used to find which objects (or clusters) are similar. There is a large number of similarity metrics reported in the literature; we only review the most common ones here.

The calculation of the (dis)similarity between two objects is achieved through some distance function, sometimes also referred to as a dissimilarity function. Given two feature vectors x and y representing two objects, it is required to find the degree of similarity (or dissimilarity) between them. A very common class of distance functions is the family of Minkowski distances [4], described as:

    ||x - y||_p = ( sum_{i=1}^{n} |x_i - y_i|^p )^{1/p}

where x, y ∈ R^n. This distance function actually describes an infinite family of distances indexed by the parameter p, which assumes values greater than or equal to 1. Some of the common values of p and their respective distance functions are:

p = 1: Hamming distance

    ||x - y||_1 = sum_{i=1}^{n} |x_i - y_i|

p = 2: Euclidean distance

    ||x - y||_2 = ( sum_{i=1}^{n} |x_i - y_i|^2 )^{1/2}

p = ∞: Tschebyshev distance

    ||x - y||_∞ = max_{i=1,...,n} |x_i - y_i|

A more common similarity measure, used specifically in document clustering, is the cosine correlation measure (used by [1], [12] and [13]), defined as:

    cos(x, y) = (x . y) / (||x|| ||y||)

where (.) indicates the vector dot product and ||x|| indicates the length of the vector. Another commonly used similarity measure is the Jaccard measure (used by [5], [6], and [7]), defined as:

    sim(x, y) = ( sum_{i=1}^{n} min(x_i, y_i) ) / ( sum_{i=1}^{n} max(x_i, y_i) )

which in the case of binary feature vectors simplifies to:

    sim(x, y) = |x ∩ y| / |x ∪ y|

It has to be noted that the term distance is not to be confused with the term similarity; the two are opposites in terms of how alike the two objects are: similarity decreases as distance increases. Another remark is that many algorithms employ the distance function (or similarity function) to calculate the similarity between two clusters, between a cluster and an object, or between two objects. Calculating the distance between clusters (or between clusters and objects) requires a representative feature vector of the cluster (sometimes referred to as a medoid).

Often clustering algorithms make use of a similarity matrix, a matrix recording the distance (or degree of similarity) between each pair of objects. Since the similarity matrix is symmetric, only its upper-right (or lower-left) triangular portion needs to be stored.
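
The following minimal Python sketch implements the measures defined in this section as plain functions (the toy vectors are illustrative):

    import math

    def minkowski(x, y, p):
        """Minkowski distance ||x - y||_p: p = 1 gives Hamming, p = 2 Euclidean."""
        return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1 / p)

    def tschebyshev(x, y):
        """The limit p -> infinity: the largest per-coordinate difference."""
        return max(abs(a - b) for a, b in zip(x, y))

    def cosine(x, y):
        """Cosine correlation: dot product over the product of vector lengths."""
        dot = sum(a * b for a, b in zip(x, y))
        nx = math.sqrt(sum(a * a for a in x))
        ny = math.sqrt(sum(b * b for b in y))
        return dot / (nx * ny)

    def jaccard(x, y):
        """Jaccard measure: sum of minima over sum of maxima."""
        return (sum(min(a, b) for a, b in zip(x, y))
                / sum(max(a, b) for a, b in zip(x, y)))

    x, y = [1, 0, 2, 3], [1, 1, 2, 0]
    print(minkowski(x, y, 1), minkowski(x, y, 2), tschebyshev(x, y))
    print(cosine(x, y), jaccard(x, y))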

2.3. Cluster Model

Any clustering algorithm assumes a certain cluster structure. Sometimes the cluster structure is not assumed explicitly but is inherent in the nature of the clustering algorithm itself. For example, the k-means clustering algorithm assumes spherically shaped (or generally convex) clusters. This is due to the way k-means finds cluster centres and updates object memberships. Also, if care is not taken, we could end up with elongated clusters, where the resulting partition contains a few large clusters and some very small clusters. Wong and Fu [13] proposed a strategy to keep the cluster sizes in a certain range, but it could be argued that forcing a limit on cluster size is not always desirable. A dynamic model for finding clusters irrespective of their structure is CHAMELEON, proposed by Karypis et al. [10].

Depending on the problem, we might wish to have disjoint clusters or overlapping clusters. In the context of document clustering it is usually desirable to have overlapping clusters, because documents tend to belong to more than one topic (for example, a document might contain information about car racing and car companies as well). A good example of overlapping document cluster generation is the suffix-tree-based STC system proposed by Zamir and Etzioni [2]. Another way of generating overlapping clusters is through fuzzy clustering, where objects can belong to different clusters with different degrees of membership [5].

3. Document Clustering

The majority of clustering techniques fall into two major categories: hierarchical clustering and partitional clustering.

3.1. Hierarchical Clustering

Hierarchical techniques produce a nested sequence of partitions, with a single all-inclusive cluster at the top and singleton clusters of individual objects at the bottom. Clusters at an intermediate level encompass all the clusters below them in the hierarchy. The result of a hierarchical clustering algorithm can be viewed as a tree, called a dendrogram (Figure 1):

    {a, b, c, d, e}
    {a}, {b, c, d, e}
    {a}, {b, c}, {d, e}
    {a}, {b, c}, {d}, {e}
    {a}, {b}, {c}, {d}, {e}

    Figure 1. A sample dendrogram of clustered data using hierarchical clustering.

Depending on the direction in which the hierarchy is built, we can identify two methods of hierarchical clustering: agglomerative and divisive. The agglomerative approach is the most commonly used in hierarchical clustering.
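
As a sketch of how such a hierarchy looks in practice, the following uses SciPy's agglomerative routines; the five one-dimensional points standing in for the objects a to e are an assumption chosen so that cutting the tree at each level reproduces the nested partitions of Figure 1:

    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage

    labels = ["a", "b", "c", "d", "e"]
    X = np.array([[0.0], [4.0], [4.5], [8.0], [9.0]])

    # Build the hierarchy bottom-up (single linkage; see the next subsection).
    Z = linkage(X, method="single")

    # Cutting the dendrogram at each level yields the nested partitions.
    for k in range(1, len(labels) + 1):
        assignment = fcluster(Z, t=k, criterion="maxclust")
        groups = {}
        for label, c in zip(labels, assignment):
            groups.setdefault(c, []).append(label)
        print(sorted(groups.values()))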

Agglomerative Hierarchical Clustering (AHC)

This method starts with the set of objects as individual clusters; then, at each step, it merges the two most similar clusters. This process is repeated until a minimal number of clusters has been reached or, if a complete hierarchy is required, until only one cluster is left. Thus, agglomerative clustering works in a greedy manner, in that the pair of document groups chosen for agglomeration is the pair considered best or most similar under some criterion. The method is very simple, but one needs to specify how to compute the distance between two clusters. Three commonly used methods for computing this distance are listed below; a sketch of the three criteria follows at the end of this subsection.

Single Linkage Method. The similarity between two clusters S and T is calculated based on the minimal distance between the elements belonging to the corresponding clusters. This method is also called the nearest neighbour clustering method.

    d(S, T) = min_{x ∈ S, y ∈ T} ||x - y||

Complete Linkage Method. The similarity between two clusters S and T is calculated based on the maximal distance between the elements belonging to the corresponding clusters. This method is also called the furthest neighbour clustering method.

    d(S, T) = max_{x ∈ S, y ∈ T} ||x - y||

Average Linkage Method. The similarity between two clusters S and T is calculated based on the average distance between the elements belonging to the corresponding clusters. This method takes into account all possible pairs of distances between the objects in the clusters, and is considered more reliable and robust to outliers. It is also known as UPGMA (Unweighted Pair-Group Method using Arithmetic averages).

    d(S, T) = ( sum_{x ∈ S} sum_{y ∈ T} ||x - y|| ) / ( |S| |T| )

It was argued by Karypis et al. [10] that the above methods assume a static model of the interconnectivity and closeness of the data, and they proposed a new dynamics-based model that avoids these pitfalls. Their system, CHAMELEON, combines two clusters only if the inter-connectivity and closeness of the clusters are high relative to the internal inter-connectivity and closeness within the clusters.
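
A minimal Python sketch of the three linkage criteria (the Euclidean base distance and the toy clusters are illustrative assumptions):

    import math

    def dist(x, y):
        """Euclidean distance between two feature vectors."""
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

    def single_link(S, T):
        """Minimal distance between elements of the two clusters."""
        return min(dist(x, y) for x in S for y in T)

    def complete_link(S, T):
        """Maximal distance between elements of the two clusters."""
        return max(dist(x, y) for x in S for y in T)

    def average_link(S, T):
        """Average over all |S|*|T| pairs of distances (UPGMA)."""
        return sum(dist(x, y) for x in S for y in T) / (len(S) * len(T))

    S = [(0.0, 0.0), (1.0, 0.0)]
    T = [(4.0, 0.0), (9.0, 0.0)]
    print(single_link(S, T), complete_link(S, T), average_link(S, T))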

Agglomerative techniques are usually Ω(n^2) due to their global nature, since all pairs of inter-group similarities are considered in the course of selecting an agglomeration. The Scatter/Gather system, proposed by Cutting et al. [12], makes use of a group-average agglomerative subroutine for finding seed clusters to be used by their partitional clustering algorithm. However, to avoid the quadratic running time of that subroutine, they only apply it to a small sample of the documents to be clustered. The group average method was also recommended by Steinbach et al. [1] over the other similarity methods due to its robustness.

Divisive Hierarchical Clustering

These methods work from top to bottom, starting with the whole data set as one cluster, and at each step splitting a cluster until only singleton clusters of individual objects remain. They basically differ in two things: (1) which cluster to split next, and (2) how to perform the split. Usually an exhaustive search is done to find the cluster whose split results in the minimal reduction of some performance criterion. A simpler way is to choose the largest cluster to split, the one with the least overall similarity, or to use a criterion based on both size and overall similarity. Steinbach et al. [1] studied these strategies and found that the difference between them is small, so they resorted to splitting the largest remaining cluster.

Splitting a cluster requires deciding which objects go to which sub-cluster. One method is to find the two sub-clusters using k-means, resulting in a hybrid technique called bisecting k-means [1]. Another, statistically based, method is used by the ITERATE algorithm [16]; however, it does not necessarily split the cluster into only two sub-clusters: the cluster may be split into many sub-clusters according to a cohesion measure of the resulting sub-partition.

3.2. Partitional Clustering

This class of clustering algorithms works by identifying potential clusters simultaneously, while updating the clusters iteratively, guided by the minimization of some objective function. The best-known partitional clustering algorithms are the k-means algorithm and its variants. K-means starts by randomly selecting k seed cluster means; it then assigns each object to its nearest cluster mean. The algorithm then iteratively recalculates the cluster means and the new object memberships. The process continues for a certain number of iterations, or until no changes are detected in the cluster means [15]. K-means algorithms are O(nkT), where T is the number of iterations, which is considered a more or less good bound. However, a major disadvantage of k-means is that it assumes a spherical cluster structure, and it cannot be applied in domains where cluster structures are non-spherical.
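
A minimal Python sketch of the k-means loop just described (random seed selection, assignment to the nearest mean, recalculation of means); the toy data, the iteration cap T, and the fixed random seed are illustrative assumptions:

    import random

    def kmeans(X, k, T=100, seed=0):
        """Plain k-means: random seeds, nearest-mean assignment, mean updates."""
        rng = random.Random(seed)
        means = [list(x) for x in rng.sample(X, k)]
        for _ in range(T):
            # Assign each object to its nearest cluster mean (squared Euclidean).
            clusters = [[] for _ in range(k)]
            for x in X:
                j = min(range(k),
                        key=lambda j: sum((a - b) ** 2 for a, b in zip(x, means[j])))
                clusters[j].append(x)
            # Recalculate the cluster means; stop when nothing changes.
            new_means = [[sum(col) / len(c) for col in zip(*c)] if c else means[j]
                         for j, c in enumerate(clusters)]
            if new_means == means:
                break
            means = new_means
        return means, clusters

    X = [(0, 0), (0, 1), (1, 0), (8, 8), (8, 9), (9, 8)]
    means, clusters = kmeans(X, k=2)
    print(means)      # the two cluster centres
    print(clusters)   # the objects assigned to each centre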

A variant of k-means that allows overlapping of clusters is known as Fuzzy C-Means (FCM). Instead of having binary membership of objects to their respective clusters, FCM allows varying degrees of object membership [15]. Krishnapuram et al. [5] proposed a modified version of FCM called Fuzzy C-Medoids (FCMdd) in which the means are replaced with medoids. They claim that their algorithm converges very quickly, has a worst case of O(n^2), and is an order of magnitude faster than FCM.

Because of the random choice of cluster seeds that these algorithms exhibit, they are considered non-deterministic, as opposed to hierarchical clustering approaches. Thus, several runs of the algorithm might be required to achieve reliable results. Some methods have been employed to find good initial cluster seeds that are then used by such algorithms; a good example is the Scatter/Gather system [12].

One approach that combines partitional clustering with hierarchical clustering is the bisecting k-means algorithm mentioned earlier. This algorithm is a divisive algorithm where cluster splitting involves using the k-means algorithm to find the two sub-clusters. Steinbach et al. reported that bisecting k-means performance was superior to k-means alone, and superior to UPGMA [1]. A sketch of the procedure follows at the end of this section.

It has to be noted that an important feature of hierarchical algorithms is that most of them allow incremental updates, where new objects can be assigned to the relevant cluster easily by following a tree path to the appropriate location. STC [2] and the DC-tree [13] are two examples of such algorithms. On the other hand, partitional algorithms often require a global update of cluster means and possibly of object memberships. Incremental updates are essential for on-line applications where, for example, query results are processed incrementally as they arrive.
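
To make the bisecting k-means procedure concrete, here is a minimal sketch on top of scikit-learn's KMeans (the split-selection rule shown, the synthetic data, and the fixed random seeds are assumptions; Steinbach et al. [1] discuss several splitting strategies):

    import numpy as np
    from sklearn.cluster import KMeans

    def bisecting_kmeans(X, k):
        """Divisive clustering: repeatedly split one cluster in two with k-means."""
        clusters = [X]
        while len(clusters) < k:
            # Split-selection rule: take the largest remaining cluster,
            # the strategy Steinbach et al. [1] settled on.
            i = max(range(len(clusters)), key=lambda i: len(clusters[i]))
            target = clusters.pop(i)
            labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(target)
            clusters += [target[labels == 0], target[labels == 1]]
        return clusters

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(c, 0.5, size=(20, 2)) for c in ((0, 0), (8, 0), (0, 8))])
    for c in bisecting_kmeans(X, k=3):
        print(len(c), c.mean(axis=0).round(2))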

4. Cluster Evaluation Criteria

The results of any clustering algorithm should be evaluated using an informative quality measure that reflects the goodness of the resulting clusters. The evaluation depends on whether we have prior knowledge about the classification of the data objects, i.e. whether or not we have labelled data. If the data is not previously classified, we have to use an internal quality measure, which allows us to compare different sets of clusters without reference to external knowledge. On the other hand, if the data is labelled, we make use of this classification by comparing the resulting clusters with the original classification; such a measure is known as an external quality measure. We review two external quality measures and one internal quality measure here.

Entropy

One external measure is entropy, which provides a measure of goodness for un-nested clusters or for the clusters at one level of a hierarchical clustering. Entropy tells us how homogeneous a cluster is: the higher the homogeneity of a cluster, the lower its entropy, and vice versa. The entropy of a cluster containing only one object (perfect homogeneity) is zero.

Let P be the partition result of a clustering algorithm, consisting of m clusters. For every cluster j in P we compute p_ij, the probability that a member of cluster j belongs to class i. The entropy of each cluster j is calculated using the standard formula

    E_j = - sum_i p_ij log(p_ij)

where the sum is taken over all classes. The total entropy for a set of clusters is calculated as the sum of the entropies of the clusters, weighted by the size of each cluster:

    E_P = sum_{j=1}^{m} (n_j / n) E_j

where n_j is the size of cluster j and n is the total number of data objects. As mentioned earlier, we would like to generate clusters of lower entropy, which is an indication of the homogeneity (or similarity) of the objects in the clusters. The weighted overall entropy formula avoids favouring smaller clusters over larger clusters.

F-measure

The second external quality measure is the F-measure, which combines the precision and recall ideas from the information retrieval literature. The precision and recall of a cluster j with respect to a class i are defined as:

    P = Precision(i, j) = n_ij / n_j
    R = Recall(i, j) = n_ij / n_i

where n_ij is the number of members of class i in cluster j, n_j is the number of members of cluster j, and n_i is the number of members of class i. The F-measure of a class i is defined as:

    F(i) = 2PR / (P + R)

With respect to class i, we consider the cluster with the highest F-measure to be the cluster for i, and that F-measure becomes the score for class i. The overall F-measure for the clustering result P is the weighted average of the F-measure of each class i:

    F_P = ( sum_i n_i F(i) ) / ( sum_i n_i )

where n_i is the number of objects in class i. The higher the overall F-measure, the better the clustering, due to the higher accuracy of the clusters mapping to the original classes.

Overall Similarity

A common internal quality measure is the overall similarity, used in the absence of any external information such as class labels. Overall similarity measures cluster cohesiveness using the weighted average of the internal cluster similarity:

    (1 / |S|^2) sum_{x ∈ S} sum_{y ∈ S} sim(x, y)

where S is the cluster under consideration and sim(x, y) is the similarity between the two objects x and y.
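
The following minimal Python sketch computes the three measures for a labelled toy example (representing clusters as lists of object indices, and the toy similarity function, are illustrative choices):

    import math
    from collections import Counter

    def total_entropy(P, classes):
        """E_P: the entropy of each cluster, weighted by cluster size."""
        n = sum(len(cluster) for cluster in P)
        E_P = 0.0
        for cluster in P:
            counts = Counter(classes[i] for i in cluster)
            E_j = -sum((c / len(cluster)) * math.log(c / len(cluster))
                       for c in counts.values())
            E_P += (len(cluster) / n) * E_j
        return E_P

    def overall_f_measure(P, classes):
        """F_P: the best F(i) over clusters per class, weighted by class size."""
        n = len(classes)
        score = 0.0
        for i, n_i in Counter(classes).items():
            best = 0.0
            for cluster in P:
                n_ij = sum(1 for idx in cluster if classes[idx] == i)
                if n_ij:
                    prec, rec = n_ij / len(cluster), n_ij / n_i
                    best = max(best, 2 * prec * rec / (prec + rec))
            score += (n_i / n) * best
        return score

    def overall_similarity(S, sim):
        """Internal measure: (1/|S|^2) times the sum of pairwise similarities."""
        return sum(sim(x, y) for x in S for y in S) / len(S) ** 2

    # Two clusters over five objects with known class labels.
    classes = ["x", "x", "x", "y", "y"]
    P = [[0, 1], [2, 3, 4]]
    print(total_entropy(P, classes))      # lower is better
    print(overall_f_measure(P, classes))  # higher is better
    print(overall_similarity(P[1], lambda a, b: float(classes[a] == classes[b])))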

5. Requirements for Document Clustering Algorithms

In the context of the previous discussion of clustering algorithms, it is essential to identify the requirements for document clustering algorithms in particular, which will enable us to design more efficient and robust document clustering solutions geared toward that end. Following is a list of those requirements.

5.1. Extraction of Informative Features

The root of any clustering problem lies in the choice of the most representative set of features describing the underlying data model. The set of extracted features has to be informative enough to represent the actual data being analyzed; otherwise, no matter how good the clustering algorithm is, it will be misled by non-informative features. Moreover, it is important to reduce the number of features, because a high-dimensional feature space always has a severe impact on the algorithm's scalability. A comparative study by Yang and Pedersen [18] on the effectiveness of a number of feature selection methods for text categorization showed that Document Frequency (DF) thresholding gives good results compared to other methods and has the lowest computational cost. Also, as mentioned in section 2.1, Wong and Fu [13] showed that they could reduce the number of representative terms by choosing only the terms that have sufficient coverage over the document set.

The document model is also of great importance. The most common model is based on individual terms extracted from the set of all documents, with term frequencies and document frequencies calculated as explained before. The other model is a phrase-based model, such as that proposed by Zamir and Etzioni [2], where they find shared suffix phrases in documents using a suffix tree data structure.
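
A minimal Python sketch of DF thresholding (the threshold value and the toy documents are illustrative; the same mechanism expresses Wong and Fu's coverage criterion when the threshold is stated as a percentage of the document set):

    from collections import Counter

    def df_threshold(docs, min_df=2):
        """Keep only terms whose document frequency reaches the threshold."""
        df = Counter()
        for doc in docs:
            df.update(set(doc.split()))
        return {t for t, count in df.items() if count >= min_df}

    docs = [
        "web document clustering",
        "clustering web search results",
        "support vector machines",
    ]
    print(df_threshold(docs))  # {'clustering', 'web'}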

5.2. Overlapping Cluster Model

Any document collection, especially in the web domain, will tend to have documents covering one or more topics. When clustering documents, it is necessary to put those documents in all their relevant clusters, which means that some documents may belong to more than one cluster. An overlapping cluster model allows this kind of multi-topic document clustering. A few clustering algorithms allow overlapping clusters, including fuzzy clustering [5] and suffix tree clustering (STC) [2]. In some cases disjoint clusters will be desirable, when each document must belong to only one cluster; in these cases one of the non-overlapping clustering algorithms can be used, or a set of disjoint clusters can be generated from fuzzy clustering after defuzzifying the cluster memberships.

5.3. Scalability

In the web domain, a simple search query might return hundreds, and sometimes thousands, of pages. It is necessary to be able to cluster those results in a reasonable time. It has to be noted that some proposed systems only cluster the snippets returned by most search engines, not the whole pages (e.g. [2]). While this is an acceptable strategy for clustering search results on the fly, it is not acceptable for clustering documents, where snippets do not provide enough information about the actual contents of the documents. An online clustering algorithm should be able to perform the clustering in linear time if possible. An offline clustering algorithm can exceed that limit, but with the merit of being able to produce higher-quality clusters.

5.4. Noise Tolerance

A potential problem faced by many clustering algorithms is the presence of noise and outliers in the data. A good clustering algorithm should be robust enough to handle these and produce high-quality clusters that are not affected by noise. In hierarchical clustering, for example, the nearest neighbour and furthest neighbour distance calculation methods are very sensitive to outliers, and thus should be avoided if possible. The average linkage method is the most appropriate for noisy data.

5.5. Incrementality

A very desirable feature in a dynamic domain such as the web is the ability to update the clusters incrementally. New documents should be added to their respective clusters as they arrive, without the need to re-cluster the whole document set. Modified documents should be re-processed and moved to their respective clusters, if applicable. It is noteworthy that if incrementality is achieved efficiently, scalability is enhanced as well.

5.6. Result Presentation

A clustering algorithm is only as good as its ability to present a concise and accurate description of the clusters it produces to the user. The cluster summaries should be representative enough of their respective contents that users can determine at a glance which cluster they are interested in.

6. References

[1] M. Steinbach, G. Karypis, and V. Kumar, "A Comparison of Document Clustering Techniques," KDD Workshop on Text Mining, 2000.
[2] O. Zamir and O. Etzioni, "Web Document Clustering: A Feasibility Demonstration," Proc. of the 21st ACM SIGIR Conference, pp. 46-54.
[3] O. Zamir, O. Etzioni, O. Madani, and R. M. Karp, "Fast and Intuitive Clustering of Web Documents," Proc. of the 3rd International Conference on Knowledge Discovery and Data Mining, pp. 287-290, 1997.
[4] K. Cios, W. Pedrycz, and R. Swiniarski, Data Mining Methods for Knowledge Discovery, Kluwer Academic Publishers, 1998.
[5] R. Krishnapuram, A. Joshi, and L. Yi, "A Fuzzy Relative of the k-Medoids Algorithm with Application to Web Document and Snippet Clustering," Proc. IEEE Intl. Conf. on Fuzzy Systems, Korea, August 1999.
[6] Z. Jiang, A. Joshi, R. Krishnapuram, and L. Yi, "Retriever: Improving Web Search Engine Results Using Clustering," Technical Report, CSEE Department, UMBC, 2000.
[7] T. H. Haveliwala, A. Gionis, and P. Indyk, "Scalable Techniques for Clustering the Web," extended abstract, WebDB 2000 (Third International Workshop on the Web and Databases, in conjunction with ACM SIGMOD 2000), Dallas, TX, 2000.
[8] A. Bouguettaya, "On-Line Clustering," IEEE Trans. on Knowledge and Data Engineering, Vol. 8, No. 2, 1996.
[9] A. K. Jain and R. C. Dubes, Algorithms for Clustering Data, John Wiley & Sons, 1988.
[10] G. Karypis, E. Han, and V. Kumar, "CHAMELEON: A Hierarchical Clustering Algorithm Using Dynamic Modeling," IEEE Computer, Vol. 32, pp. 68-75, August 1999.
[11] O. Zamir and O. Etzioni, "Grouper: A Dynamic Clustering Interface to Web Search Results," Proc. of the 8th International World Wide Web Conference, Toronto, Canada, May 1999.
[12] D. R. Cutting, D. R. Karger, J. O. Pedersen, and J. W. Tukey, "Scatter/Gather: A Cluster-based Approach to Browsing Large Document Collections," Proc. of the 16th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 126-135, 1993.
[13] W. Wong and A. Fu, "Incremental Document Clustering for Web Page Classification," Int. Conf. on Information Society in the 21st Century: Emerging Technologies and New Challenges (IS2000), Nov. 5-8, 2000, Japan.
[14] R. Michalski, I. Bratko, and M. Kubat, Machine Learning and Data Mining: Methods and Applications, John Wiley & Sons Ltd., 1998.
[15] J. Jang, C. Sun, and E. Mizutani, Neuro-Fuzzy and Soft Computing: A Computational Approach to Learning and Machine Intelligence, Prentice Hall, 1997.
[16] G. Biswas, J. B. Weinberg, and D. Fisher, "ITERATE: A Conceptual Clustering Algorithm for Data Mining," IEEE Transactions on Systems, Man and Cybernetics, Vol. 28, pp. 100-111, 1998.
[17] Z. Huang, "A Fast Clustering Algorithm to Cluster Very Large Categorical Data Sets in Data Mining," Workshop on Research Issues on Data Mining and Knowledge Discovery, 1997.
[18] Y. Yang and J. Pedersen, "A Comparative Study on Feature Selection in Text Categorization," Proc. of the 14th International Conference on Machine Learning, pp. 412-420, Nashville, TN, 1997.