Introduction to Information Retrieval: Clustering. Chris Manning, Pandu Nayak, and Prabhakar Raghavan

Today's Topic: Clustering. Document clustering: motivations, document representations, success criteria. Clustering algorithms: partitional, hierarchical.

Ch. 16 What is clustering? Clustering: the process of grouping a set of objects into classes of similar objects. Documents within a cluster should be similar; documents from different clusters should be dissimilar. It is the commonest form of unsupervised learning. Unsupervised learning = learning from raw data, as opposed to supervised data where a classification of examples is given. A common and important task that finds many applications in IR and other places.

Ch. 16 A data set with clear cluster structure. [Figure: a 2-D scatter plot with three visually obvious clusters.] How would you design an algorithm for finding the three clusters in this case?

Sec. 16.1 Applications of clustering in IR. Whole corpus analysis/navigation: better user interface, search without typing. For improving recall in search applications: better search results (like pseudo RF). For better navigation of search results: effective user recall will be higher. For speeding up vector space retrieval: cluster-based retrieval gives faster search.

The Yahoo! hierarchy isn't clustering, but it is the kind of output you want from clustering. [Figure: a slice of the www.yahoo.com/science directory: top-level categories such as agriculture, biology, physics, CS, and space, with subcategories like dairy, crops, botany, cell, evolution, magnetism, relativity, AI, HCI, craft, and missions.]

Google News: automatic clustering gives an effective news presentation metaphor.

Sec. 16.1 Scatter/Gather: Cutting, Karger, and Pedersen.

Sec. 16.1 For improving search recall. Cluster hypothesis: documents in the same cluster behave similarly with respect to relevance to information needs. Therefore, to improve search recall: cluster docs in the corpus a priori; when a query matches a doc D, also return other docs in the cluster containing D. The hope: the query car will also return docs containing automobile, because clustering grouped together docs containing car with those containing automobile. Why might this happen?

yippy.com: grouping search results.

Sec. 16.2 Issues for clustering. Representation for clustering: document representation (vector space? normalization?); we need a notion of similarity/distance. How many clusters? Fixed a priori? Completely data driven? Avoid trivial clusters, too large or too small: if a cluster's too large, then for navigation purposes you've wasted an extra user click without whittling down the set of documents much.

Notion of similarity/distance. Ideal: semantic similarity. Practical: term-statistical similarity (docs as vectors), i.e., cosine similarity. For many algorithms it is easier to think in terms of a distance (rather than a similarity) between docs. We will mostly speak of Euclidean distance, but real implementations use cosine similarity.
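
As a small illustration of the point above, here is a minimal sketch (mine, not the lecture's code) of cosine similarity over document vectors, plus the identity that links it to Euclidean distance for unit-length vectors:

```python
import numpy as np

def cosine_similarity(x: np.ndarray, y: np.ndarray) -> float:
    """Cosine of the angle between two document vectors (e.g., tf-idf)."""
    denom = np.linalg.norm(x) * np.linalg.norm(y)
    return float(x @ y / denom) if denom else 0.0

# For unit-length vectors, Euclidean distance and cosine similarity are
# monotonically related: ||x - y||^2 = 2 - 2 * cos(x, y), so ranking by
# smallest distance is the same as ranking by largest cosine.
```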

Hard vs. soft clustering. Hard clustering: each document belongs to exactly one cluster; more common and easier to do. Soft clustering: a document can belong to more than one cluster. This makes more sense for applications like creating browsable hierarchies: you may want to put a pair of sneakers in two clusters, (i) sports apparel and (ii) shoes, and you can only do that with a soft clustering approach. We won't do soft clustering today; see IIR 16.5, 18.

Clustering Algorithms. Flat algorithms: usually start with a random (partial) partitioning and refine it iteratively; K-means clustering; (model-based clustering). Hierarchical algorithms: bottom-up, agglomerative; (top-down, divisive).

Partitioning Algorithms. Partitioning method: construct a partition of n documents into a set of K clusters. Given: a set of documents and the number K. Find: a partition into K clusters that optimizes the chosen partitioning criterion. A globally optimal partition is intractable for many objective functions, since it would require exhaustively enumerating all partitions. Effective heuristic methods: the K-means and K-medoids algorithms. See also Kleinberg's NIPS 2002 impossibility result for natural clustering.

Sec. 16.4 K-Means. Assumes documents are real-valued vectors. Clusters are based on centroids (aka the center of gravity or mean) of the points in a cluster c: μ(c) = (1/|c|) Σ_{x ∈ c} x. Reassignment of instances to clusters is based on distance to the current cluster centroids. (Or one can equivalently phrase it in terms of similarities.)

Sec. 16.4 K-Means Algorithm. Select K random docs {s_1, s_2, ..., s_K} as seeds. Until clustering converges (or some other stopping criterion is met): for each doc d_i, assign d_i to the cluster c_j such that dist(x_i, s_j) is minimal; then update the seeds to the centroid of each cluster, i.e., for each cluster c_j, set s_j = μ(c_j).
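
A minimal NumPy sketch of the algorithm above (illustrative only; names like kmeans and n_iters are my own, not from the lecture):

```python
import numpy as np

def kmeans(docs, k, n_iters=100, seed=0):
    """Plain K-means over a (n_docs, n_dims) matrix of document vectors."""
    docs = np.asarray(docs, dtype=float)
    rng = np.random.default_rng(seed)
    # Select K random docs as seeds.
    centroids = docs[rng.choice(len(docs), size=k, replace=False)].copy()
    assignment = np.full(len(docs), -1)
    for _ in range(n_iters):
        # Assign each doc to the nearest centroid (Euclidean distance).
        dists = np.linalg.norm(docs[:, None, :] - centroids[None, :, :], axis=2)
        new_assignment = dists.argmin(axis=1)
        if np.array_equal(new_assignment, assignment):
            break  # partition unchanged: converged
        assignment = new_assignment
        # Recompute each centroid as the mean of its cluster's members.
        for j in range(k):
            members = docs[assignment == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return assignment, centroids
```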

Sec. 16.4 K-Means example (K = 2). [Figure: an animation of the algorithm on a 2-D point set: pick seeds, reassign clusters, compute centroids, reassign clusters, compute centroids, reassign clusters, converged!]

Sec. 16.4 Termination conditions. Several possibilities, e.g.: a fixed number of iterations; the doc partition is unchanged; centroid positions don't change. Does this mean that the docs in a cluster are unchanged?

Sec. 16.4 Convergence. Why should the K-means algorithm ever reach a fixed point, a state in which clusters don't change? K-means is a special case of a general procedure known as the Expectation Maximization (EM) algorithm, and EM is known to converge. The number of iterations could be large, but in practice it usually isn't.

Sec. 16.4 Convergence of K-Means. Residual Sum of Squares (RSS), a goodness measure of a cluster, is the sum of squared distances from the cluster centroid: RSS_j = Σ_i ||d_i − c_j||² (the sum is over all d_i in cluster j, and c_j is its centroid); RSS = Σ_j RSS_j. Reassignment monotonically decreases RSS, since each vector is assigned to the closest centroid. Recomputation also monotonically decreases each RSS_j, because the centroid is the point that minimizes the sum of squared distances (shown on the next slide).

Sec. 16.4 Cluster recomputation in K-means. RSS_j = Σ_i ||d_i − c_j||² = Σ_i Σ_k (d_ik − c_jk)², where i ranges over the documents in cluster j and k over the vector dimensions. Setting the partial derivative with respect to each coordinate c_jk to zero, RSS_j reaches its minimum when: Σ_i −2(d_ik − c_jk) = 0, i.e., Σ_i c_jk = Σ_i d_ik, i.e., m_j c_jk = Σ_i d_ik (where m_j is the number of docs in cluster j), hence c_jk = (1/m_j) Σ_i d_ik: the centroid. K-means typically converges quickly.

Sec. 16.4 Time complexity. Computing the distance between two docs is O(M), where M is the dimensionality of the vectors. Reassigning clusters: O(KN) distance computations, i.e., O(KNM). Computing centroids: each doc gets added once to some centroid, O(NM). Assume these two steps are each done once in each of I iterations: O(IKNM).

Sec. 16.4 Seed choice. Results can vary based on random seed selection; some seeds can result in a poor convergence rate, or in convergence to sub-optimal clusterings. Remedies: select good seeds using a heuristic (e.g., a doc least similar to any existing mean); try out multiple starting points; initialize with the results of another method. [Figure: an example showing sensitivity to seeds, with six points A-F.] In that example, if you start with B and E as centroids you converge to {A,B,C} and {D,E,F}; if you start with D and F you converge to {A,B,D,E} and {C,F}.
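
A sketch of the "try multiple starting points" remedy, reusing the kmeans() sketch above and keeping the restart with the lowest RSS:

```python
import numpy as np

def kmeans_restarts(docs, k, n_restarts=10):
    """Run K-means from several random seeds; keep the lowest-RSS result."""
    docs = np.asarray(docs, dtype=float)
    best_rss, best = np.inf, None
    for seed in range(n_restarts):
        assignment, centroids = kmeans(docs, k, seed=seed)
        rss = ((docs - centroids[assignment]) ** 2).sum()  # RSS = Σ_j RSS_j
        if rss < best_rss:
            best_rss, best = rss, (assignment, centroids)
    return best
```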

Sec. 16.4 K-means issues, variations, etc. Recomputing the centroid after every assignment (rather than after all points are re-assigned) can improve the speed of convergence of K-means. K-means assumes clusters are spherical in vector space; it is sensitive to coordinate changes, weighting, etc.; its clusters are disjoint and exhaustive. It doesn't have a notion of outliers by default, but one can add outlier filtering. See Dhillon et al. (ICDM 2002) for a variation that fixes some issues with small document clusters.

How many clusters? Either the number of clusters K is given (partition n docs into a predetermined number of clusters), or finding the right number of clusters is part of the problem (given docs, partition them into an appropriate number of subsets). E.g., for query results, the ideal value of K is not known up front, though the UI may impose limits.

K not specified in advance. Say the docs are the results of a query. Solve an optimization problem: penalize having lots of clusters. This is application dependent, e.g., a compressed summary of a search results list. There is a tradeoff between having more clusters (better focus within each cluster) and having too many clusters.

K not specified in advance. Given a clustering, define the Benefit of a doc to be its cosine similarity to its centroid, and the Total Benefit to be the sum of the individual doc Benefits. Why is there always a clustering of Total Benefit n?

Penalize lots of clusters. For each cluster we have a cost C; thus for a clustering with K clusters, the Total Cost is KC. Define the Value of a clustering to be Total Benefit − Total Cost, and find the clustering of highest Value over all choices of K. Total Benefit increases with increasing K, but we can stop when it doesn't increase by much; the Cost term enforces this.
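
A small sketch of this model-selection scheme (my own illustration, built on the kmeans() sketch above; the per-cluster cost C is an application-dependent assumption):

```python
import numpy as np

def clustering_value(docs, assignment, centroids, C):
    """Value = Total Benefit - K*C, where a doc's Benefit is its cosine
    similarity to its cluster centroid and C is the per-cluster cost."""
    docs = np.asarray(docs, dtype=float)
    cents = centroids[assignment]
    benefit = ((docs * cents).sum(axis=1) /
               (np.linalg.norm(docs, axis=1) * np.linalg.norm(cents, axis=1))).sum()
    return benefit - len(centroids) * C

# Sweep K and keep the clustering of highest Value:
# for k in range(1, 11):
#     assignment, centroids = kmeans(docs, k)
#     print(k, clustering_value(docs, assignment, centroids, C=0.5))
```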

Ch. 17 Hierarchical clustering. Build a tree-based hierarchical taxonomy (dendrogram) from a set of documents. [Figure: a taxonomy tree: animal splits into vertebrate (fish, reptile, amphibian, mammal) and invertebrate (worm, insect, crustacean).] One approach: recursive application of a partitional clustering algorithm.

Dendrogram: hierarchical clustering. A clustering is obtained by cutting the dendrogram at a desired level: each connected component forms a cluster.

Sec. 17.1 Hierarchical Agglomerative Clustering (HAC). Start with each doc in a separate cluster, then repeatedly join the closest pair of clusters until there is only one cluster. The history of merging forms a binary tree or hierarchy. Note: the resulting clusters are still hard and induce a partition.

Sec. 17.2 Closest pair of clusters. There are many variants of defining the closest pair of clusters. Single-link: similarity of the most cosine-similar pair. Complete-link: similarity of the furthest points, the least cosine-similar pair. Centroid: clusters whose centroids (centers of gravity) are the most cosine-similar. Average-link: average cosine between all pairs of elements.

Sec. 17.2 Single-link agglomerative clustering. Use the maximum similarity of pairs: sim(c_i, c_j) = max_{x ∈ c_i, y ∈ c_j} sim(x, y). Can result in straggly (long and thin) clusters due to the chaining effect. After merging c_i and c_j, the similarity of the resulting cluster to another cluster c_k is: sim((c_i ∪ c_j), c_k) = max(sim(c_i, c_k), sim(c_j, c_k)).

Sec. 17.2 Single Link Example

Sec. 17.2 Complete-link. Use the minimum similarity of pairs: sim(c_i, c_j) = min_{x ∈ c_i, y ∈ c_j} sim(x, y). Makes tighter, spherical clusters that are typically preferable. After merging c_i and c_j, the similarity of the resulting cluster to another cluster c_k is: sim((c_i ∪ c_j), c_k) = min(sim(c_i, c_k), sim(c_j, c_k)).

Sec. 17.2 Complete Link Example

General HAC algorithm and complexity.
1. Compute the similarity between all pairs of documents: O(N²).
2. Do N−1 times:
   a. Find the closest pair of documents/clusters to merge. Naive: O(N²). Priority queue: O(N). Single link: O(N), since the best merge is persistent.
   b. Update the similarity of all documents/clusters to the new cluster. Naive: O(N). Priority queue: O(N log N). Single link: O(N).
Overall: O(N³) naive; O(N² log N) with priority queues; O(N²) for single link.
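
In practice one rarely hand-rolls HAC. A minimal sketch using SciPy's hierarchical clustering (my illustration; it assumes unit-length document vectors, for which Euclidean distance orders pairs the same way as cosine similarity under single or complete linkage):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
docs = rng.random((20, 50))                          # stand-in for tf-idf vectors
docs /= np.linalg.norm(docs, axis=1, keepdims=True)  # unit-normalize rows

# Z records the merge history: each row merges two clusters and gives the
# merge distance and new cluster size, i.e., the binary tree / dendrogram.
Z = linkage(docs, method="complete")                 # or "single", "average"

# Cut the dendrogram so that at most 4 clusters remain.
labels = fcluster(Z, t=4, criterion="maxclust")
```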

Sec. 17.3 Group average. The similarity of two clusters is the average similarity of all pairs within the merged cluster: sim(c_i, c_j) = (1 / (|c_i ∪ c_j| (|c_i ∪ c_j| − 1))) Σ_{x ∈ c_i ∪ c_j} Σ_{y ∈ c_i ∪ c_j, y ≠ x} sim(x, y). A compromise between single- and complete-link. Two options: average across all ordered pairs in the merged cluster, or average over all pairs between the two original clusters; there is no clear difference in efficacy.

Sec. 17.3 Computing group-average similarity. Always maintain the sum of vectors in each cluster: s(c_j) = Σ_{x ∈ c_j} x. Then the similarity of two clusters can be computed in constant time (assuming unit-length vectors, so each self-similarity x·x = 1): sim(c_i, c_j) = ((s(c_i) + s(c_j)) · (s(c_i) + s(c_j)) − (|c_i| + |c_j|)) / ((|c_i| + |c_j|) (|c_i| + |c_j| − 1)).
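
A direct transcription of that formula into code (a sketch; it assumes unit-length document vectors and that each cluster's vector sum and size are maintained across merges):

```python
import numpy as np

def group_average_sim(sum_i, sum_j, n_i, n_j):
    """Group-average similarity from the maintained sums s(c_i), s(c_j)
    and cluster sizes; cost is O(dims), independent of cluster sizes."""
    s = sum_i + sum_j          # s(c_i) + s(c_j)
    n = n_i + n_j              # |c_i| + |c_j|
    return (s @ s - n) / (n * (n - 1))
```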

Sec. 16.3 What is a good clustering? Internal criterion: a good clustering will produce high-quality clusters in which the intra-class (that is, intra-cluster) similarity is high and the inter-class similarity is low. The measured quality of a clustering depends on both the document representation and the similarity measure used.

Sec. 16.3 External criteria for clustering quality. Quality is measured by the clustering's ability to discover some or all of the hidden patterns or latent classes in gold-standard data; assessing a clustering with respect to ground truth requires labeled data. Assume documents with C gold-standard classes, while our clustering algorithm produces K clusters ω_1, ω_2, ..., ω_K, with n_i members in cluster ω_i.

Sec. 16.3 External evaluation of cluster quality. A simple measure: purity, the ratio between the size of the dominant class in cluster ω_i and the size of cluster ω_i: Purity(ω_i) = (1/n_i) max_{j ∈ C} n_ij, where n_ij is the number of docs of class j in ω_i. Purity is biased because having n clusters (one per document) maximizes it. Other measures are the entropy of classes in clusters, or the mutual information between classes and clusters.

Sec. 16.3 Purity example. [Figure: three clusters of points drawn from three classes.] Cluster I: purity = (1/6) · max(5, 1, 0) = 5/6. Cluster II: purity = (1/6) · max(1, 4, 1) = 4/6. Cluster III: purity = (1/5) · max(2, 0, 3) = 3/5.
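
A small sketch of the measure, fed with the example's class counts (the labels "x", "o", "d" are my stand-ins for the three classes):

```python
from collections import Counter

def purity(clusters):
    """Weighted purity of a clustering; each cluster is given as the
    list of gold-standard class labels of its members."""
    n = sum(len(c) for c in clusters)
    return sum(max(Counter(c).values()) for c in clusters) / n

clusters = [["x"] * 5 + ["o"],              # Cluster I:   5 x, 1 o
            ["x"] + ["o"] * 4 + ["d"],      # Cluster II:  1 x, 4 o, 1 d
            ["x"] * 2 + ["d"] * 3]          # Cluster III: 2 x, 3 d
print(purity(clusters))                     # (5 + 4 + 3) / 17 ≈ 0.706
```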

Sec. 16.3 The Rand Index measures agreement between pair decisions. Here RI = (A + D) / (A + B + C + D) = (20 + 72) / 136 ≈ 0.68.

Number of point pairs               Same cluster    Different clusters
Same class in ground truth          A = 20          C = 24
Different classes in ground truth   B = 20          D = 72

Sec. 16.3 Rand Index and cluster F-measure. RI = (A + D) / (A + B + C + D). Compare with standard precision and recall: P = A / (A + B), R = A / (A + C). People also define and use a cluster F-measure, which is probably a better measure.
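
These definitions translate directly into code. A sketch over two flat label lists (the tiny gold/predicted lists are made-up examples, not from the lecture):

```python
from itertools import combinations

def pair_counts(gold, predicted):
    """Count the pair decisions A, B, C, D between a gold labeling
    and a clustering (both flat label lists over the same docs)."""
    A = B = C = D = 0
    for (g1, p1), (g2, p2) in combinations(zip(gold, predicted), 2):
        same_class, same_cluster = g1 == g2, p1 == p2
        if same_class and same_cluster:
            A += 1
        elif same_cluster:
            B += 1
        elif same_class:
            C += 1
        else:
            D += 1
    return A, B, C, D

A, B, C, D = pair_counts(["x", "x", "o", "o", "d"], [1, 1, 1, 2, 2])
ri = (A + D) / (A + B + C + D)      # Rand Index
p, r = A / (A + B), A / (A + C)     # pairwise precision and recall
f1 = 2 * p * r / (p + r)            # cluster F-measure
```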

Final word and resources. In clustering, clusters are inferred from the data without human input (unsupervised learning). However, in practice it's a bit less clear: there are many ways of influencing the outcome of clustering: the number of clusters, the similarity measure, the representation of documents, ... Resources: IIR 16 (except 16.5); IIR 17.1-17.3.