Chapter 8. Evaluating Search Engines


Evaluation
- Evaluation is key to building effective and efficient search engines.
- Measurement is usually carried out in controlled laboratory experiments; online testing can also be done.
- Effectiveness, efficiency, and cost are related: e.g., if we want a particular level of effectiveness and efficiency, this determines the cost of the system configuration.
- Efficiency and cost targets may impact effectiveness, and vice versa.

Evaluation Corpus
- Test collections consisting of documents, queries, and relevance judgments, e.g.:

Test Collections
- Example collections include bibliographic records, full-text documents, and policy descriptions, with queries based on TREC topics.

TREC Topic Example
- A TREC topic contains a title (a short query), a description (a long query), and a narrative (the criteria for relevance).

Relevance Judgments
- TREC judgments depend on the task being evaluated, e.g., topical relevance.
- Generally binary, i.e., relevant vs. non-relevant.
- Agreement is good because of the narrative.
- Emphasis on high recall.
- Obtaining relevance judgments is an expensive, time-consuming process that requires manual effort: Who does it? What are the instructions? What is the level of agreement?

Effectiveness Measures
A is the set of relevant documents; B is the set of retrieved documents.

                  Relevant      Non-Relevant
  Retrieved       A ∩ B         B − A
  Not Retrieved   A − B         Collection − (A ∪ B)

Recall = |A ∩ B| / |A|  (true positive rate)
Precision = |A ∩ B| / |B|  (positive predictive value)
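As a small illustration (not from the slides; the helper name and the toy document IDs are my own), precision and recall can be computed directly from these set definitions:

```python
def precision_recall(relevant: set, retrieved: set) -> tuple[float, float]:
    """Return (precision, recall) for a relevant set A and a retrieved set B."""
    hits = relevant & retrieved               # A ∩ B: retrieved relevant documents
    precision = len(hits) / len(retrieved)    # |A ∩ B| / |B|
    recall = len(hits) / len(relevant)        # |A ∩ B| / |A|
    return precision, recall

# Toy example: 6 relevant documents, 10 retrieved, 5 of the retrieved are relevant.
A = {1, 2, 3, 4, 5, 6}
B = {1, 2, 3, 4, 5, 7, 8, 9, 10, 11}
print(precision_recall(A, B))   # (0.5, 0.8333...)
```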

Classification Errors
- Precision is used when the probability that a positive result is correct is important.
- False Positive (Type I Error): non-relevant documents retrieved, B − A. The false positive (false alarm) rate is the fraction of non-relevant documents that are retrieved.
- False Negative (Type II Error): relevant documents not retrieved, A − B. The false negative ratio is 1 − Recall.

F Measure
- The harmonic mean of recall and precision, a single measure: F = 2 · R · P / (R + P).
- The harmonic mean emphasizes the importance of small values, whereas the arithmetic mean is affected more by outliers that are unusually large.
- More general form: F_β = (β² + 1) · R · P / (R + β² · P), where β is a parameter that determines the relative importance of recall and precision.
- What if β = 1? The general form reduces to the harmonic mean.
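A minimal sketch of the general F_β formula above (the function name is my own), reusing the precision and recall from the previous example:

```python
def f_measure(precision: float, recall: float, beta: float = 1.0) -> float:
    """F_beta = (beta^2 + 1) * P * R / (R + beta^2 * P); beta = 1 gives the harmonic mean."""
    if precision == 0.0 and recall == 0.0:
        return 0.0
    b2 = beta ** 2
    return (b2 + 1) * precision * recall / (recall + b2 * precision)

print(f_measure(0.5, 5 / 6))             # beta = 1: F1 = 0.625
print(f_measure(0.5, 5 / 6, beta=2.0))   # beta > 1 weights recall more heavily
```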

Ranking Effectiveness
Two rankings of ten documents for a query with six relevant documents.
Ranking #1 (relevant documents at ranks 1, 3, 4, 5, 6, 10):
  recall:    1/6 1/6 2/6 3/6 4/6 5/6 5/6 5/6 5/6 6/6
  precision: 1/1 1/2 2/3 3/4 4/5 5/6 5/7 5/8 5/9 6/10
Ranking #2 (relevant documents at ranks 2, 5, 6, 7, 9, 10):
  recall:    0/6 1/6 1/6 1/6 2/6 3/6 4/6 4/6 5/6 6/6
  precision: 0/1 1/2 1/3 1/4 2/5 3/6 4/7 4/8 5/9 6/10
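These per-rank numbers can be reproduced from binary relevance judgments; here is a short sketch (function and variable names are my own):

```python
def recall_precision_at_ranks(rel, num_relevant):
    """rel: 0/1 relevance judgments in rank order; num_relevant: total relevant documents."""
    hits, rows = 0, []
    for k, r in enumerate(rel, start=1):
        hits += r
        rows.append((k, hits / num_relevant, hits / k))   # (rank, recall@k, precision@k)
    return rows

ranking1 = [1, 0, 1, 1, 1, 1, 0, 0, 0, 1]   # relevant at ranks 1, 3, 4, 5, 6, 10
ranking2 = [0, 1, 0, 0, 1, 1, 1, 0, 1, 1]   # relevant at ranks 2, 5, 6, 7, 9, 10
for rank, rec, prec in recall_precision_at_ranks(ranking1, 6):
    print(rank, f"{rec:.2f}", f"{prec:.2f}")
```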

Summarizing a Ranking
- Calculate recall and precision at fixed rank positions.
- Calculate precision at standard recall levels, from 0.0 to 1.0 (requires interpolation).
- Average the precision values from the rank positions where a relevant document was retrieved.

Average Precision
- Ranking #1: average precision = (1.0 + 0.67 + 0.75 + 0.8 + 0.83 + 0.6) / 6 ≈ 0.78.
- Ranking #2: average precision = (0.5 + 0.4 + 0.5 + 0.57 + 0.56 + 0.6) / 6 ≈ 0.52.
- Mean Average Precision: (0.78 + 0.52) / 2 = 0.65.
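A sketch of average precision and MAP for the two rankings above (names are my own; average precision is normalized by the total number of relevant documents, which here equals the number retrieved):

```python
def average_precision(rel, num_relevant):
    """Average of the precision values at the ranks where relevant documents occur."""
    hits, total = 0, 0.0
    for k, r in enumerate(rel, start=1):
        if r:
            hits += 1
            total += hits / k        # precision at this rank
    return total / num_relevant

ap1 = average_precision([1, 0, 1, 1, 1, 1, 0, 0, 0, 1], 6)   # ≈ 0.78
ap2 = average_precision([0, 1, 0, 0, 1, 1, 1, 0, 1, 1], 6)   # ≈ 0.52
print(round((ap1 + ap2) / 2, 2))                              # MAP = 0.65
```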

Averaging
- Mean Average Precision (MAP): summarize rankings from multiple queries by averaging average precision.
- The most commonly used measure in research papers.
- Assumes the user is interested in finding many relevant documents for each query.
- Requires many relevance judgments in the text collection.
- Recall-precision graphs are also useful summaries.


Recall-Precision Graph
- Too much information can be lost when using MAP alone.
- Recall-precision graphs provide more detail on the effectiveness of the ranking at different recall levels.
- Example (figure): the recall-precision graphs for the two previous queries (Ranking of Query 1, Ranking of Query 2).

Precision and Recall
Precision versus recall curve: a visual display of the evaluation of a retrieval strategy over its ranked documents.
Example. Let q be a query in the text reference collection, and let R_q = {d3, d5, d9, d25, d39, d44, d56, d71, d89, d123} be the set of 10 relevant documents for q.
Assume the following ranking of retrieved documents in the answer set of q:
1. d123, 2. d84, 3. d56, 4. d6, 5. d8, 6. d9, 7. d511, 8. d129, 9. d187, 10. d25, 11. d38, 12. d48, 13. d250, 14. d113, 15. d3

Precision
Example. Precision versus recall curve with |R_q| = 10, the total number of relevant documents for q, given the ranking of retrieved documents above.
The relevant documents appear at ranks 1 (d123), 3 (d56), 6 (d9), 10 (d25), and 15 (d3), giving precision 100% at 10% recall, 66% at 20%, 50% at 30%, 40% at 40%, and 33% at 50% recall (the figure also marks an interpolated value where no precision was measured directly).

Precision and Recall
To evaluate the performance of a retrieval strategy over all test queries, compute the average precision at each recall level r:

    P(r) = (1 / N_q) · Σ_{i=1..N_q} P_i(r)

where P(r) is the average precision at the r-th recall level, P_i(r) is the precision at the r-th recall level for the i-th query, and N_q is the number of queries used.

Precision
Example. Precision versus recall curve for R_q = {d3, d56, d129}, i.e., |R_q| = 3 relevant documents for q, given the same ranking of retrieved documents as above.
The relevant documents appear at ranks 3 (d56), 8 (d129), and 15 (d3), giving precision 33% (at 33% recall), 25% (at 67% recall), and 20% (at 100% recall); the figure shows the interpolated precision at the 11 standard recall levels.

Precision
Interpolation of precision at the various recall levels (figure: interpolated precision at the 11 standard recall levels).
Let r_i (0 ≤ i < 10) denote the i-th standard recall level. Then

    P(r_i) = max_{r_i ≤ r ≤ r_{i+1}} P(r)

i.e., earlier levels are interpolated using the next known level.
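A sketch of 11-point interpolated precision (the helper name is my own, and I use the common "maximum precision at any recall at or above the level" convention, which reproduces the 33%/25%/20% pattern from the three-relevant-document example above):

```python
def eleven_point_interpolated(rel, num_relevant):
    """Interpolated precision at recall levels 0.0, 0.1, ..., 1.0."""
    hits, points = 0, []
    for k, r in enumerate(rel, start=1):
        if r:
            hits += 1
            points.append((hits / num_relevant, hits / k))   # (recall, precision)
    levels = [i / 10 for i in range(11)]
    # Interpolated precision at a level = max precision at any recall >= that level.
    return [max((p for rec, p in points if rec >= lvl), default=0.0) for lvl in levels]

rel = [0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1]   # d56 at rank 3, d129 at rank 8, d3 at rank 15
print([round(p, 2) for p in eleven_point_interpolated(rel, 3)])
# [0.33, 0.33, 0.33, 0.33, 0.25, 0.25, 0.25, 0.2, 0.2, 0.2, 0.2]
```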

Precision
Precision versus recall curve (figure: average precision at each recall level for two distinct retrieval strategies).
- Compare the retrieval performance of distinct retrieval strategies using precision versus recall figures.
- Average precision versus recall figures have become a standard evaluation strategy for IR systems: they are simple and intuitive, and they qualify the overall answer set.

Focusing on Top Documents
- Users tend to look at only the top part of the ranked result list to find relevant documents.
- Some search tasks have only one relevant document in mind; the top-ranked document is expected to be relevant (P@1), e.g., in question answering, navigational search, etc.
- Precision at rank R (R = 5, 10, or 20): the proportion of the top R documents that are relevant. Easy to compute, average, and understand, but not sensitive to the ordering of positions within the top R or below it.
- Recall is not an appropriate measure in this case; instead we need to measure how well the search engine does at retrieving relevant documents at very high ranks.

Focusing on Top Documents
- Reciprocal Rank: the reciprocal of the rank at which the first relevant document is retrieved. Very sensitive to rank position, e.g., (d_n, d_r, d_n, d_n, d_n) gives RR = 1/2, whereas (d_n, d_n, d_n, d_n, d_r) gives RR = 1/5.
- Mean Reciprocal Rank (MRR) is the average of the reciprocal ranks over a set of queries Q:

    MRR = (1 / |Q|) · Σ_{i=1..|Q|} 1 / rank_i

where |Q| (the number of queries) is the normalization factor and rank_i is the position of the first relevant document for the i-th query.
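A sketch of MRR over a set of queries (names are my own), using the two example result lists above:

```python
def mean_reciprocal_rank(rankings):
    """rankings: one 0/1 relevance list per query, in rank order."""
    total = 0.0
    for rel in rankings:
        for k, r in enumerate(rel, start=1):
            if r:
                total += 1.0 / k    # reciprocal rank of the first relevant document
                break
    return total / len(rankings)

# First relevant document at rank 2 (RR = 1/2) and at rank 5 (RR = 1/5).
print(mean_reciprocal_rank([[0, 1, 0, 0, 0], [0, 0, 0, 0, 1]]))   # 0.35
```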

Discounted Cumulative Gain
- A popular measure for evaluating web search and related tasks.
- Two assumptions: (1) highly relevant documents are more useful than marginally relevant documents; (2) the lower the ranked position of a relevant document, the less useful it is for the user, since it is less likely to be examined.

Discounted Cumulative Gain
- Uses graded relevance (i.e., a value) as a measure of the usefulness, or gain, from examining a document.
- Gain is accumulated starting at the top of the ranking and may be reduced, or discounted, at lower ranks.
- The typical discount is 1 / log2(rank). With base 2, the discount at rank 4 is 1/2, and at rank 8 it is 1/3.

Discounted Cumulative Gain
DCG is the total gain accumulated at a particular rank p:

    DCG_p = rel_1 + Σ_{i=2..p} rel_i / log2(i)

where rel_i is the graded relevance of the document at rank i (e.g., ranging from 0 for "Bad" to 5 for "Perfect", or binary 0/1), and log2(i) is the discount/reduction factor applied to the gain.
An alternative formulation, used by some web search companies, puts more emphasis on retrieving highly relevant documents:

    DCG_p = Σ_{i=1..p} (2^rel_i − 1) / log2(1 + i)

DCG Example
Assume 10 ranked documents judged on a 0-3 relevance scale ("not relevant" to "highly relevant"):
    3, 2, 3, 0, 0, 1, 2, 2, 3, 0
which represent the gain at each rank. The discounted gain is
    3, 2/1, 3/1.59, 0, 0, 1/2.59, 2/2.81, 2/3, 3/3.17, 0
    = 3, 2, 1.89, 0, 0, 0.39, 0.71, 0.67, 0.95, 0
DCG at each rank is formed by accumulating these numbers:
    3, 5, 6.89, 6.89, 6.89, 7.28, 7.99, 8.66, 9.61, 9.61
(e.g., DCG at rank 5 is 6.89 and at rank 10 is 9.61).
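A short sketch (the function name is my own) that reproduces the cumulative DCG values above from the graded judgments:

```python
import math

def dcg_at_ranks(gains):
    """DCG_p = rel_1 + sum_{i=2..p} rel_i / log2(i), reported at every rank p."""
    dcg, out = 0.0, []
    for i, rel in enumerate(gains, start=1):
        dcg += rel if i == 1 else rel / math.log2(i)
        out.append(round(dcg, 2))
    return out

print(dcg_at_ranks([3, 2, 3, 0, 0, 1, 2, 2, 3, 0]))
# [3.0, 5.0, 6.89, 6.89, 6.89, 7.28, 7.99, 8.66, 9.61, 9.61]
```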

Normalized DCG
- DCG numbers are averaged across a set of queries at specific rank values, similar to precision@p, e.g., DCG at rank 5 is 6.89 and at rank 10 is 9.61.
- DCG values are often normalized by comparing the DCG at each rank with the DCG value for the perfect ranking: NDCG_p = DCG_p / IDCG_p.
- This makes averaging easier for queries with different numbers of relevant documents.

NDCG Example
The perfect ranking of the same judgments (relevance scale 0-3) is:
    3, 3, 3, 2, 2, 2, 1, 0, 0, 0
giving ideal DCG values:
    3, 6, 7.89, 8.89, 9.75, 10.52, 10.88, 10.88, 10.88, 10.88
Given the actual DCG values 3, 5, 6.89, 6.89, 6.89, 7.28, 7.99, 8.66, 9.61, 9.61, the NDCG values are:
    1, 0.83, 0.87, 0.77, 0.71, 0.69, 0.73, 0.8, 0.88, 0.88
NDCG ≤ 1 at any rank position.
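Continuing the earlier sketch, NDCG divides the DCG at each rank by the ideal DCG (the gains sorted into the perfect order); this reproduces the NDCG values above up to rounding:

```python
def ndcg_at_ranks(gains):
    """NDCG_p = DCG_p / IDCG_p, where IDCG_p uses the gains sorted in ideal order."""
    dcg = dcg_at_ranks(gains)                          # from the DCG sketch above
    idcg = dcg_at_ranks(sorted(gains, reverse=True))   # ideal order: 3, 3, 3, 2, 2, 2, 1, 0, 0, 0
    return [round(d / i, 2) for d, i in zip(dcg, idcg)]

print(ndcg_at_ranks([3, 2, 3, 0, 0, 1, 2, 2, 3, 0]))
# ≈ [1.0, 0.83, 0.87, 0.77, 0.71, 0.69, 0.73, 0.8, 0.88, 0.88]  (rank 4 may round to 0.78)
```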

Significance Tests
- Given the results from a number of queries, how can we conclude that a new ranking algorithm B is better than the baseline algorithm A?
- A significance test allows us to quantify whether we can reject the null hypothesis (there is no difference) in favor of the alternative hypothesis (there is a difference).
- The power of a test is the probability that the test will correctly reject the null hypothesis. Increasing the number of queries in the experiment also increases the power (confidence in any judgment) of the test.
- Type 1 Error: the null hypothesis is rejected when it is true.
- Type 2 Error: the null hypothesis is accepted when it is false.

Significance Tests
Standard procedure for comparing two retrieval systems:
1. Compute the effectiveness measure for every query for both rankings.
2. Compute a test statistic based on the per-query differences between the two systems.
3. Compute the P-value: the probability of observing a test statistic at least as extreme, assuming the null hypothesis is true.
4. Reject the null hypothesis if the P-value is at or below the chosen significance level, e.g., 0.05 or 0.01.

Significance Test
(Figure) The probability distribution of test statistic values assuming the null hypothesis; the shaded area (P = 0.05) is the region of rejection for a one-sided test.

t-test
- Assumption: the differences between the effectiveness data values are a sample from a normal distribution.
- Null hypothesis: the mean of the distribution of differences is zero.
- The test statistic for the paired t-test is

    t = D̄ / (σ_D / √N)

where D̄ is the mean of the differences, σ_D is the standard deviation of the differences, and N is the number of queries.
Example. Given N = 10 per-query scores
    A = <25, 43, 39, 75, 43, 15, 20, 52, 49, 50>
    B = <35, 84, 15, 75, 68, 85, 80, 50, 58, 75>
the one-tailed P value can be obtained, e.g., with QuickCalcs (http://www.graphpad.com/quickcalcs/ttest2.cfm).
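A sketch of the same paired t-test using SciPy (assuming scipy is available; ttest_rel returns a two-sided p-value, which is halved here for the one-tailed test):

```python
from scipy import stats

A = [25, 43, 39, 75, 43, 15, 20, 52, 49, 50]   # baseline system, per-query effectiveness
B = [35, 84, 15, 75, 68, 85, 80, 50, 58, 75]   # new system, per-query effectiveness

t, p_two_sided = stats.ttest_rel(B, A)          # paired t-test on the differences B - A
print(round(t, 2), round(p_two_sided / 2, 3))   # t ≈ 2.33, one-tailed p ≈ 0.02
```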

Example: t-test (figure: calculator output for the data above).

Wilcoxon Signed-Ranks Test
- A nonparametric test based on the differences between effectiveness scores.
- Test statistic: w = Σ R_i, the sum of the signed ranks.
- To compute the signed ranks, the differences are ordered by their absolute values (increasing) and then assigned rank values; the rank values are given the sign of the original difference.
- Sample website: http://scistatcalc.blogspot.com/2013/10/wilcoxon-signedrank-test-calculator.html

Wilcoxon Test Example
The 9 non-zero differences, in order of absolute value, are: 2, 9, 10, 24, 25, 25, 41, 60, 70.
Signed ranks: −1, +2, +3, −4, +5.5, +5.5, +7, +8, +9.
w (the sum of the signed ranks) = 35; two-tailed p-value = 0.038.
Under the null hypothesis, the sum of the + ranks is expected to equal the sum of the − ranks.
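A sketch of the same test with SciPy (assuming scipy is available). Note that scipy.stats.wilcoxon reports min(W+, W−), here 5, rather than the sum of the signed ranks w = 35 used on the slide; both lead to the same two-sided p-value:

```python
from scipy import stats

A = [25, 43, 39, 75, 43, 15, 20, 52, 49, 50]
B = [35, 84, 15, 75, 68, 85, 80, 50, 58, 75]

stat, p = stats.wilcoxon(B, A)    # zero differences are dropped; ties get average ranks
print(stat, round(p, 3))          # statistic = min(W+, W-) = 5, two-sided p ≈ 0.038
```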