Introduction to Information Retrieval http://informationretrieval.org IIR 16: Flat Clustering Hinrich Schütze Institute for Natural Language Processing, Universität Stuttgart 2009.06.16 1/ 64

Overview 1 Recap 2 Clustering: Introduction 3 Clustering in IR 4 K-means 5 Evaluation 6 How many clusters? 2/ 64

Outline 1 Recap 2 Clustering: Introduction 3 Clustering in IR 4 K-means 5 Evaluation 6 How many clusters? 3/ 64

MI example for poultry/export in Reuters

                      e_c = e_poultry = 1    e_c = e_poultry = 0
e_t = e_export = 1    N_11 = 49              N_10 = 27,652
e_t = e_export = 0    N_01 = 141             N_00 = 774,106

Plug these values into the formula:

I(U;C) = \frac{49}{801,948} \log_2 \frac{801,948 \cdot 49}{(49+27,652)(49+141)}
       + \frac{141}{801,948} \log_2 \frac{801,948 \cdot 141}{(141+774,106)(49+141)}
       + \frac{27,652}{801,948} \log_2 \frac{801,948 \cdot 27,652}{(49+27,652)(27,652+774,106)}
       + \frac{774,106}{801,948} \log_2 \frac{801,948 \cdot 774,106}{(141+774,106)(27,652+774,106)}
       \approx 0.0001105

4/ 64
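
This arithmetic is easy to get wrong by hand. Below is a minimal Python check of the formula, using only the four cell counts from the contingency table above (variable and function names are illustrative, not from the slides):

    import math

    N11, N10, N01, N00 = 49, 27652, 141, 774106   # cell counts from the table above
    N = N11 + N10 + N01 + N00                     # 801,948

    def term(n_cell, n_term_marginal, n_class_marginal):
        # One summand of the MI formula: (n/N) * log2(N*n / (term marginal * class marginal))
        return n_cell / N * math.log2(N * n_cell / (n_term_marginal * n_class_marginal))

    mi = (term(N11, N11 + N10, N11 + N01)    # e_t = 1, e_c = 1
          + term(N01, N01 + N00, N11 + N01)  # e_t = 0, e_c = 1
          + term(N10, N11 + N10, N10 + N00)  # e_t = 1, e_c = 0
          + term(N00, N01 + N00, N10 + N00)) # e_t = 0, e_c = 0
    print(mi)                                # approximately 0.0001105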

Linear classifiers

Linear classifiers compute a linear combination or weighted sum $\sum_i w_i x_i$ of the feature values.
Classification decision: $\sum_i w_i x_i > \theta$?
Geometrically, the equation $\sum_i w_i x_i = \theta$ defines a line (2D), a plane (3D) or a hyperplane (higher dimensionalities).
Assumption: The classes are linearly separable.
Methods for finding a linear separator: Perceptron, Rocchio, Naive Bayes, many others

5/ 64
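
As a concrete illustration of the decision rule, here is a minimal sketch; the weights, feature values and threshold are made-up numbers, not taken from the slides:

    def classify(w, x, theta):
        # Linear classification decision: is the weighted sum of feature values above the threshold?
        score = sum(w_i * x_i for w_i, x_i in zip(w, x))
        return score > theta

    w = [0.6, -0.2, 1.1]               # hypothetical learned weights
    x = [1.0, 3.0, 0.5]                # hypothetical feature values of a document
    print(classify(w, x, theta=0.4))   # score = 0.55 > 0.4, so True (assign to class c)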

A linear classifier in 1D

A linear classifier in 1D is a point described by the equation $w_1 d_1 = \theta$.
The point is at $\theta / w_1$.
Points $(d_1)$ with $w_1 d_1 \geq \theta$ are in the class $c$.
Points $(d_1)$ with $w_1 d_1 < \theta$ are in the complement class $\bar{c}$.

6/ 64

A linear classifier in 2D

A linear classifier in 2D is a line described by the equation $w_1 d_1 + w_2 d_2 = \theta$.
Example for a 2D linear classifier [figure]
Points $(d_1, d_2)$ with $w_1 d_1 + w_2 d_2 \geq \theta$ are in the class $c$.
Points $(d_1, d_2)$ with $w_1 d_1 + w_2 d_2 < \theta$ are in the complement class $\bar{c}$.

7/ 64

A linear classifier in 3D

A linear classifier in 3D is a plane described by the equation $w_1 d_1 + w_2 d_2 + w_3 d_3 = \theta$.
Example for a 3D linear classifier [figure]
Points $(d_1, d_2, d_3)$ with $w_1 d_1 + w_2 d_2 + w_3 d_3 \geq \theta$ are in the class $c$.
Points $(d_1, d_2, d_3)$ with $w_1 d_1 + w_2 d_2 + w_3 d_3 < \theta$ are in the complement class $\bar{c}$.

8/ 64

Rocchio as a linear classifier

Rocchio is a linear classifier defined by:
$\sum_{i=1}^{M} w_i d_i = \vec{w} \cdot \vec{d} = \theta$
where the normal vector $\vec{w} = \vec{\mu}(c_1) - \vec{\mu}(c_2)$ and $\theta = 0.5 \cdot (|\vec{\mu}(c_1)|^2 - |\vec{\mu}(c_2)|^2)$.

9/ 64
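
A small sketch of how the separator follows from the two centroids; the centroid and document coordinates below are invented for illustration:

    import numpy as np

    mu1 = np.array([2.0, 1.0])   # centroid of class c1 (illustrative)
    mu2 = np.array([0.0, 1.0])   # centroid of class c2 (illustrative)

    w = mu1 - mu2                                        # normal vector of the separator
    theta = 0.5 * (np.dot(mu1, mu1) - np.dot(mu2, mu2))  # threshold

    d = np.array([1.5, 3.0])                             # a document vector
    print("c1" if np.dot(w, d) >= theta else "c2")       # prints "c1": d is closer to mu1

Assigning to $c_1$ whenever $\vec{w} \cdot \vec{d} \geq \theta$ is the same as assigning $\vec{d}$ to whichever of the two centroids is closer, which is exactly the Rocchio decision.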

Naive Bayes as a linear classifier

Naive Bayes is a linear classifier defined by:
$\sum_{i=1}^{M} w_i d_i = \theta$
where $w_i = \log[\hat{P}(t_i|c)/\hat{P}(t_i|\bar{c})]$, $d_i$ = number of occurrences of $t_i$ in $d$, and $\theta = -\log[\hat{P}(c)/\hat{P}(\bar{c})]$.
Here, the index $i$, $1 \leq i \leq M$, refers to terms of the vocabulary (not to positions in $d$ as $k$ did in our original definition of Naive Bayes).

10/ 64

kNN is not a linear classifier

[Figure: 2D example of kNN decision boundaries between classes]
The decision boundaries between classes are piecewise linear...
...but they are not linear separators that can be described as $\sum_{i=1}^{M} w_i d_i = \theta$.

11/ 64

Outline 1 Recap 2 Clustering: Introduction 3 Clustering in IR 4 K-means 5 Evaluation 6 How many clusters? 12/ 64

What is clustering? (Document) clustering is the process of grouping a set of documents into clusters of similar documents. Documents within a cluster should be similar. Documents from different clusters should be dissimilar. Clustering is the most common form of unsupervised learning. Unsupervised = there are no labeled or annotated data. 13/ 64

Data set with clear cluster structure

[Figure: 2D scatter plot of a data set with three well-separated clusters]

14/ 64

Classification vs. Clustering Classification: supervised learning Clustering: unsupervised learning Classification: Classes are human-defined and part of the input to the learning algorithm. Clustering: Clusters are inferred from the data without human input. However, there are many ways of influencing the outcome of clustering: number of clusters, similarity measure, representation of documents,... 15/ 64

Outline 1 Recap 2 Clustering: Introduction 3 Clustering in IR 4 K-means 5 Evaluation 6 How many clusters? 16/ 64

The cluster hypothesis Cluster hypothesis. Documents in the same cluster behave similarly with respect to relevance to information needs. All applications in IR are based (directly or indirectly) on the cluster hypothesis. 17/ 64

Applications of clustering in IR

Application                What is clustered?       Benefit                                                        Example
Search result clustering   search results           more effective information presentation to user
Scatter-Gather             (subsets of) collection  alternative user interface: search without typing
Collection clustering      collection               effective information presentation for exploratory browsing   McKeown et al. 2002, news.google.com
Cluster-based retrieval    collection               higher efficiency: faster search                               Salton 1971

18/ 64

Search result clustering for better navigation 19/ 64

Scatter-Gather 20/ 64

Global navigation: Yahoo 21/ 64

Global navigation: MESH (upper level) 22/ 64

Global navigation: MESH (lower level) 23/ 64

Note: Yahoo/MESH are not examples of clustering. But they are well-known examples of using a global hierarchy for navigation. Some examples of global navigation/exploration based on clustering: Cartia Themescapes, Google News 24/ 64

Global navigation combined with visualization (1) 25/ 64

Global navigation combined with visualization (2) 26/ 64

Global clustering for navigation: Google News http://news.google.com 27/ 64

Clustering for improving recall

To improve search recall:
Cluster docs in the collection a priori.
When a query matches a doc d, also return other docs in the cluster containing d.
Hope: if we do this, the query "car" will also return docs containing "automobile", because clustering groups together docs containing "car" with those containing "automobile". Both types of documents contain words like "parts", "dealer", "mercedes", "road trip".

28/ 64

Data set with clear cluster structure

[Figure: the same 2D scatter plot with three well-separated clusters]
Exercise: Come up with an algorithm for finding the three clusters in this case.

29/ 64

Document representations in clustering

Vector space model
As in vector space classification, we measure relatedness between vectors by Euclidean distance...
...which is almost equivalent to cosine similarity.
Almost: centroids are not length-normalized.
For centroids, distance and cosine give different results.

30/ 64
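
A tiny numeric illustration of the "almost": for a length-normalized document and two unnormalized centroids (made-up coordinates), the closest centroid by Euclidean distance need not be the most similar one by cosine:

    import numpy as np

    def cosine(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

    d  = np.array([1.0, 0.0])    # a unit-length document vector
    c1 = np.array([0.9, 0.9])    # a short centroid, 45 degrees away from d
    c2 = np.array([3.0, 0.0])    # a long centroid, pointing exactly along d

    print(np.linalg.norm(d - c1), np.linalg.norm(d - c2))  # ~0.91 vs 2.0: c1 is closer
    print(cosine(d, c1), cosine(d, c2))                    # ~0.71 vs 1.0: c2 is more similar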

Issues in clustering General goal: put related docs in the same cluster, put unrelated docs in different clusters. But how do we formalize this? How many clusters? Initially, we will assume the number of clusters K is given. Often: secondary goals in clustering Example: avoid very small and very large clusters Flat vs. hierarchical clustering Hard vs. soft clustering 31/ 64

Flat vs. Hierarchical clustering Flat algorithms Usually start with a random (partial) partitioning of docs into groups Refine iteratively Main algorithm: K-means Hierarchical algorithms Create a hierarchy Bottom-up, agglomerative Top-down, divisive 32/ 64

Hard vs. Soft clustering

Hard clustering: Each document belongs to exactly one cluster. More common and easier to do.
Soft clustering: A document can belong to more than one cluster. Makes more sense for applications like creating browsable hierarchies.
You may want to put a pair of sneakers in two clusters: (i) sports apparel and (ii) shoes. You can only do that with a soft clustering approach.
We won't have time for soft clustering. See IIR 16.5, IIR 18.

33/ 64

Our plan This lecture: Flat, hard clustering Next lecture: Hierarchical, hard clustering 34/ 64

Flat algorithms Flat algorithms compute a partition of N documents into a set of K clusters. Given: a set of documents and the number K Find: a partition in K clusters that optimizes the chosen partitioning criterion Global optimization: exhaustively enumerate partitions, pick optimal one Not tractable Effective heuristic method: K-means algorithm 35/ 64

Outline 1 Recap 2 Clustering: Introduction 3 Clustering in IR 4 K-means 5 Evaluation 6 How many clusters? 36/ 64

K-means Perhaps the best known clustering algorithm Simple, works well in many cases Use as default / baseline for clustering documents 37/ 64

K-means

Each cluster in K-means is defined by a centroid.
Objective/partitioning criterion: minimize the average squared difference from the centroid.
Recall the definition of the centroid:
$\vec{\mu}(\omega) = \frac{1}{|\omega|} \sum_{\vec{x} \in \omega} \vec{x}$
where we use $\omega$ to denote a cluster.
We try to find the minimum average squared difference by iterating two steps:
reassignment: assign each vector to its closest centroid
recomputation: recompute each centroid as the average of the vectors that were assigned to it in reassignment

38/ 64

K-means algorithm

K-MEANS({x_1, ..., x_N}, K)
 1  (s_1, s_2, ..., s_K) ← SELECTRANDOMSEEDS({x_1, ..., x_N}, K)
 2  for k ← 1 to K
 3  do µ_k ← s_k
 4  while stopping criterion has not been met
 5  do for k ← 1 to K
 6     do ω_k ← {}
 7     for n ← 1 to N
 8     do j ← arg min_{j'} |µ_{j'} − x_n|
 9        ω_j ← ω_j ∪ {x_n}                       (reassignment of vectors)
10     for k ← 1 to K
11     do µ_k ← (1/|ω_k|) Σ_{x ∈ ω_k} x           (recomputation of centroids)
12  return {µ_1, ..., µ_K}

39/ 64
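
A compact Python/NumPy rendering of this pseudocode (a sketch, not a reference implementation; the stopping criterion used here is "assignments no longer change"):

    import numpy as np

    def kmeans(X, K, max_iter=100, seed=0):
        """K-means sketch: X is an (N, M) array of document vectors.
        Returns the K centroids and the cluster index of each vector."""
        X = np.asarray(X, dtype=float)
        rng = np.random.default_rng(seed)
        mu = X[rng.choice(len(X), size=K, replace=False)].copy()   # random seeds
        assign = None
        for _ in range(max_iter):
            # Reassignment: move each vector to its closest centroid.
            dists = np.linalg.norm(X[:, None, :] - mu[None, :, :], axis=2)
            new_assign = dists.argmin(axis=1)
            if assign is not None and np.array_equal(new_assign, assign):
                break                                              # nothing changed: converged
            assign = new_assign
            # Recomputation: each centroid becomes the mean of its assigned vectors.
            for k in range(K):
                if np.any(assign == k):                            # keep old centroid if cluster is empty
                    mu[k] = X[assign == k].mean(axis=0)
        return mu, assign

Note that, as discussed a few slides below, the result depends on the random seeds; running the sketch several times with different seed values and keeping the lowest-RSS clustering is the usual fix.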

K-means example 40/ 64

K-means is guaranteed to converge: Proof

RSS = sum of all squared distances between each document vector and its closest centroid.
RSS decreases during reassignment (because each vector is moved to a closer centroid).
RSS decreases during recomputation. (We will show this on the next slide.)
There is only a finite number of clusterings.
Thus: We must reach a fixed point. (Assume that ties are broken consistently.)

41/ 64

Recomputation decreases average distance

$RSS = \sum_{k=1}^{K} RSS_k$ — the residual sum of squares (the "goodness" measure)

$RSS_k(\vec{v}) = \sum_{\vec{x} \in \omega_k} |\vec{v} - \vec{x}|^2 = \sum_{\vec{x} \in \omega_k} \sum_{m=1}^{M} (v_m - x_m)^2$

$\frac{\partial RSS_k(\vec{v})}{\partial v_m} = \sum_{\vec{x} \in \omega_k} 2(v_m - x_m) = 0$

$v_m = \frac{1}{|\omega_k|} \sum_{\vec{x} \in \omega_k} x_m$

The last line is the componentwise definition of the centroid! We minimize RSS_k when the old centroid is replaced with the new centroid. RSS, the sum of the RSS_k, must then also decrease during recomputation.

42/ 64
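
A quick numeric sanity check of this claim on three made-up points: the within-cluster RSS at the mean is lower than at the other candidate centroids tried here.

    import numpy as np

    cluster = np.array([[1.0, 2.0], [2.0, 0.0], [3.0, 4.0]])   # illustrative cluster members

    def rss_k(v, points):
        # Sum of squared distances of the cluster's points to a candidate centroid v.
        return float(np.sum((points - v) ** 2))

    mean = cluster.mean(axis=0)                      # [2.0, 2.0]
    print(rss_k(mean, cluster))                      # 10.0
    print(rss_k(np.array([1.5, 2.5]), cluster))      # 11.5, worse than the mean
    print(rss_k(np.array([2.0, 1.0]), cluster))      # 13.0, also worse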

K-means is guaranteed to converge But we don t know how long convergence will take! If we don t care about a few docs switching back and forth, then convergence is usually fast (< 10-20 iterations). However, complete convergence can take many more iterations. 43/ 64

Optimality of K-means Convergence does not mean that we converge to the optimal clustering! This is the great weakness of K-means. If we start with a bad set of seeds, the resulting clustering can be horrible. 44/ 64

Exercise: Suboptimal clustering

[Figure: six points d_1, d_2, d_3 (upper row) and d_4, d_5, d_6 (lower row) in the plane]
What is the optimal clustering for K = 2?
Do we converge on this clustering for arbitrary seeds d_{i1}, d_{i2}?

45/ 64

Initialization of K-means Random seed selection is just one of many ways K-means can be initialized. Random seed selection is not very robust: It s easy to get a suboptimal clustering. Better heuristics: Select seeds not randomly, but using some heuristic (e.g., filter out outliers or find a set of seeds that has good coverage of the document space) Use hierarchical clustering to find good seeds (next class) Select i (e.g., i = 10) different sets of seeds, do a K-means clustering for each, select the clustering with lowest RSS 46/ 64
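
The last heuristic is easy to sketch in code, reusing the kmeans function sketched after the pseudocode slide (i = 10 restarts, keep the clustering with the lowest RSS):

    import numpy as np

    def total_rss(X, mu, assign):
        # Sum over clusters of the squared distances of members to their centroid.
        X = np.asarray(X, dtype=float)
        return sum(float(np.sum((X[assign == k] - mu[k]) ** 2)) for k in range(len(mu)))

    def kmeans_best_of(X, K, restarts=10):
        """Run the kmeans sketch from the earlier slide with several seed sets
        and return the (rss, centroids, assignments) triple with the lowest RSS."""
        best = None
        for seed in range(restarts):
            mu, assign = kmeans(X, K, seed=seed)   # assumes the earlier kmeans sketch is in scope
            rss = total_rss(X, mu, assign)
            if best is None or rss < best[0]:
                best = (rss, mu, assign)
        return best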

Time complexity of K-means Computing one distance of two vectors is O(M). Reassignment step: O(KNM) (we need to compute KN document-centroid distances) Recomputation step: O(NM) (we need to add each of the document s < M values to one of the centroids) Assume number of iterations bounded by I Overall complexity: O(IKNM) linear in all important dimensions However: This is not a real worst-case analysis. In pathological cases, the number of iterations can be much higher than linear in the number of documents. 47/ 64

Outline 1 Recap 2 Clustering: Introduction 3 Clustering in IR 4 K-means 5 Evaluation 6 How many clusters? 48/ 64

What is a good clustering? Internal criteria Example of an internal criterion: RSS in K-means But an internal criterion often does not evaluate the actual utility of a clustering in the application. Alternative: External criteria Evaluate with respect to a human-defined classification 49/ 64

External criteria for clustering quality Based on a gold standard data set, e.g., the Reuters collection we also used for the evaluation of classification Goal: Clustering should reproduce the classes in the gold standard (But we only want to reproduce how documents are divided into groups, not the class labels.) First measure for how well we were able to reproduce the classes: purity 50/ 64

External criterion: Purity

$\text{purity}(\Omega, C) = \frac{1}{N} \sum_k \max_j |\omega_k \cap c_j|$

$\Omega = \{\omega_1, \omega_2, \ldots, \omega_K\}$ is the set of clusters and $C = \{c_1, c_2, \ldots, c_J\}$ is the set of classes.
For each cluster $\omega_k$: find the class $c_j$ with the most members $n_{kj}$ in $\omega_k$.
Sum all $n_{kj}$ and divide by the total number of points.

51/ 64

Example for computing purity

[Figure: three clusters; cluster 1 contains five x's and one o, cluster 2 contains four o's, one x and one ⋄, cluster 3 contains three ⋄'s and two x's]
To compute purity: $5 = \max_j |\omega_1 \cap c_j|$ (class x, cluster 1); $4 = \max_j |\omega_2 \cap c_j|$ (class o, cluster 2); and $3 = \max_j |\omega_3 \cap c_j|$ (class ⋄, cluster 3). Purity is $(1/17) \times (5 + 4 + 3) \approx 0.71$.

52/ 64
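
For completeness, a small purity implementation, checked against the o/⋄/x example above; the class labels 'x', 'o', 'd' are just strings standing in for the three symbols:

    from collections import Counter

    def purity(clusters, classes):
        """clusters and classes are parallel lists: the cluster id and gold class of each point."""
        members = {}
        for k, c in zip(clusters, classes):
            members.setdefault(k, []).append(c)
        return sum(max(Counter(m).values()) for m in members.values()) / len(classes)

    clusters = [1] * 6 + [2] * 6 + [3] * 5
    classes = (['x'] * 5 + ['o']                 # cluster 1
               + ['x'] + ['o'] * 4 + ['d']       # cluster 2
               + ['x'] * 2 + ['d'] * 3)          # cluster 3
    print(round(purity(clusters, classes), 2))   # 0.71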

Rand index

Definition: $RI = \frac{TP + TN}{TP + FP + FN + TN}$

Based on a 2x2 contingency table of all pairs of documents:

                     same cluster            different clusters
same class           true positives (TP)     false negatives (FN)
different classes    false positives (FP)    true negatives (TN)

TP + FN + FP + TN is the total number of pairs. There are $\binom{N}{2}$ pairs for N documents. Example: $\binom{17}{2} = 136$ in the o/⋄/x example.
Each pair is either positive or negative (the clustering puts the two documents in the same or in different clusters)...
...and either true (correct) or false (incorrect): the clustering decision is correct or incorrect.

53/ 64

As an example, we compute RI for the o/⋄/x example. We first compute TP + FP. The three clusters contain 6, 6, and 5 points, respectively, so the total number of positives, i.e., pairs of documents that are in the same cluster, is:

$TP + FP = \binom{6}{2} + \binom{6}{2} + \binom{5}{2} = 40$

Of these, the x pairs in cluster 1, the o pairs in cluster 2, the ⋄ pairs in cluster 3, and the x pair in cluster 3 are true positives:

$TP = \binom{5}{2} + \binom{4}{2} + \binom{3}{2} + \binom{2}{2} = 20$

Thus, FP = 40 − 20 = 20. FN and TN are computed similarly.

54/ 64

Rand measure for the o/⋄/x example

                     same cluster    different clusters
same class           TP = 20         FN = 24
different classes    FP = 20         TN = 72

RI is then $(20 + 72)/(20 + 20 + 24 + 72) \approx 0.68$.

55/ 64
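
The same pair counting in code, again on the o/⋄/x example and with the same stand-in labels as in the purity sketch:

    from math import comb
    from collections import Counter

    def rand_index(clusters, classes):
        # Pair-counting Rand index over parallel cluster/class label lists.
        n = len(clusters)
        pairs = comb(n, 2)                                                      # TP + FP + FN + TN
        same_cluster = sum(comb(c, 2) for c in Counter(clusters).values())      # TP + FP
        same_class = sum(comb(c, 2) for c in Counter(classes).values())         # TP + FN
        tp = sum(comb(c, 2) for c in Counter(zip(clusters, classes)).values())  # TP
        tn = pairs - same_cluster - same_class + tp
        return (tp + tn) / pairs

    clusters = [1] * 6 + [2] * 6 + [3] * 5
    classes = ['x'] * 5 + ['o'] + ['x'] + ['o'] * 4 + ['d'] + ['x'] * 2 + ['d'] * 3
    print(round(rand_index(clusters, classes), 2))   # 0.68, matching the table above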

Two other external evaluation measures Two other measures Normalized mutual information (NMI) How much information does the clustering contain about the classification? Singleton clusters (number of clusters = number of docs) have maximum MI Therefore: normalize by entropy of clusters and classes F measure Like Rand, but precision and recall can be weighted 56/ 64

Evaluation results for the o/⋄/x example

                     purity    NMI     RI      F_5
lower bound          0.0       0.0     0.0     0.0
maximum              1.0       1.0     1.0     1.0
value for example    0.71      0.36    0.68    0.46

All four measures range from 0 (really bad clustering) to 1 (perfect clustering).

57/ 64

Outline 1 Recap 2 Clustering: Introduction 3 Clustering in IR 4 K-means 5 Evaluation 6 How many clusters? 58/ 64

How many clusters?

Either: The number of clusters K is given. Then partition into K clusters.
K might be given because there is some external constraint. Example: In the case of Scatter-Gather, it was hard to show more than 10–20 clusters on a monitor in the 90s.
Or: Finding the right number of clusters is part of the problem.
Given docs, find K for which an optimum is reached.
How to define "optimum"?
We can't use RSS or average squared distance from centroid as criterion: that always chooses K = N clusters.

59/ 64

Exercise Suppose we want to analyze the set of all articles published by a major newspaper (e.g., New York Times or Süddeutsche Zeitung) in 2008. Goal: write a two-page report about what the major news stories in 2008 were. We want to use K-means clustering to find the major news stories. How would you determine K? 60/ 64

Simple objective function for K (1) Basic idea: Start with 1 cluster (K = 1) Keep adding clusters (= keep increasing K) Add a penalty for each new cluster Trade off cluster penalties against average squared distance from centroid Choose K with best tradeoff 61/ 64

Simple objective function for K (2) Given a clustering, define the cost for a document as (squared) distance to centroid Define total distortion RSS(K) as sum of all individual document costs (corresponds to average distance) Then: penalize each cluster with a cost λ Thus for a clustering with K clusters, total cluster penalty is Kλ Define the total cost of a clustering as distortion plus total cluster penalty: RSS(K) + Kλ Select K that minimizes (RSS(K) + Kλ) Still need to determine good value for λ... 62/ 64
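
A sketch of this selection procedure, assuming the kmeans and total_rss helpers from the earlier sketches are in scope; lam is the per-cluster penalty λ that still has to be chosen:

    def choose_k(X, lam, k_max=10):
        """Return the K in 1..k_max that minimizes RSS(K) + K * lam."""
        costs = {}
        for K in range(1, k_max + 1):
            mu, assign = kmeans(X, K)                      # kmeans sketch from earlier
            costs[K] = total_rss(X, mu, assign) + K * lam  # distortion + total cluster penalty
        return min(costs, key=costs.get)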

Finding the knee in the curve

[Figure: residual sum of squares (y-axis) plotted against the number of clusters (x-axis, 2 to 10)]
Pick the number of clusters where the curve flattens. Here: 4 or 9.

63/ 64

Resources Chapter 16 of IIR Resources at http://ifnlp.org/ir K-means example Keith van Rijsbergen on the cluster hypothesis (he was one of the originators) Bing/Carrot2/Clusty: search result clustering 64/ 64