Introduction to Computational Advertising. MS&E 239 Stanford University Autumn 2010 Instructors: Andrei Broder and Vanja Josifovski


1 Introduction to Computational Advertising MS&E 239 Stanford University Autumn 2010 Instructors: Andrei Broder and Vanja Josifovski 1

2 Lecture 4: Sponsored Search (part 2) 2

3 Disclaimers This talk presents the opinions of the authors. It does not necessarily reflect the views of Yahoo! Inc. or any other entity. Algorithms, techniques, features, etc. mentioned here might or might not be in use by Yahoo! or any other company. The first part of the lecture is based on the slides of Prabhakar Raghavan and Chris Manning. 3

4 Lecture overview 1. Checkpoint on Sponsored Search 2. Sponsored search query rewriting (continued) 3. Introduction to Information Retrieval 4

5 Checkpoint

6 Checkpoint Sponsored Search
1. Sponsored search is the main channel for textual advertising on the web.
2. Web queries are a (very) succinct representation of the user's intent.
3. Query volumes follow a power law with a long tail. There are billions of unique queries.
4. Ads are selected in sponsored search using an exact match to the bid phrase or an advanced match to the whole ad.
5. The main ad selection approaches are the database approach (lookup for exact match) and the IR approach, where we look up using multiple features.
6. Advanced match approaches use click and relevance judgments to learn how to match ads to queries.
7. Query rewriting is a common advanced match technique: the query is rewritten into another query that is used to retrieve the ads by exact match.
8. Users often rewrite queries in a single session: a good source for learning rewrites for advanced match.
9. Random walks in query-ad click graphs are another mechanism to establish query similarity. 6

7 Typical query rewriting flow Typical of the DB approach to advanced match (AM): rewrite the user query q into Q = (q1, q2, ...), then use exact match (EM) to select ads for Q. Fits well into current system architectures. 7

8 Query Rewriting: Generating rewrites from matched ads 8

9 Data source: clicks? [Diagram: queries in a user session are issued to the search engine, which returns web pages and ads; clicks on both are recorded in the logs.] 9

10 Data source [Diagram: users issue sessions containing queries; queries are linked to web pages by search-result clicks, to ads by ad clicks and bid-phrase similarity, and to each other by co-occurrence.] 10

11 Query Rewrites Based on other Ad Retrieval Mechanisms Can also be implemented as a memory of a working system: a reactive mechanism. Use any ad selection mechanism to discover good rewrites from queries to bid phrases. Higher latency, but offline computation can use more information than an online-only system. For repeating queries, save the repeating bid phrases of the selected ads. Apply some selection criteria to select the q → q' rewrites, using click information or other scoring. Covers head and torso queries (~10M). 11

12 Online query rewriting

13 Sponsored Search: Decent Performance in the Head 13

14 Sponsored Search in the Tail Many tail queries display no sponsored search results. Advertising in the tail is challenging: longer / rarer queries are more difficult to interpret; exact match and phrase match are much less likely; click-based relevance predictors are poor, due to data sparseness. There is considerable monetization potential in the tail, if done correctly. This is what advanced match is mostly about! 14

15 Sponsored Search in the Tail 15

16 Query fragment rewriting

17 Query fragment rewriting Particularly of interest for tail queries: too rare to process offline and store rewrites in a table. Tail queries often contain repeating sub-phrases = query segments: Cheap car insurance vs. Inexpensive car insurance; Cheap ski trips vs. Inexpensive ski trips. Individually, the queries do not repeat enough to learn a substitution. However, the substitution cheap → inexpensive can be learned from the many queries that contain it. 17

18 Using fragment rewriting online A query comes into the system:
1. Fragment the query.
2. Look up the fragments in the fragment rewrite table.
3. Produce all possible variants: with k fragments, each having s substitutions, the total number of rewrites is of order s^k.
4. Reduce the space: consider only rewrites that result in an existing bid phrase. How would you do this algorithmically?
5. Retrieve the ads. 18
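A minimal sketch of this loop, assuming a small in-memory rewrite table and bid-phrase set (both hypothetical; a production system would use large-scale stores). Keeping the bid phrases in a hash set makes the step-4 filter a constant-time membership test per candidate:

```python
from itertools import product

def fragment_rewrites(segments, rewrite_table, bid_phrases):
    """Generate rewrites by substituting query fragments, keeping only
    variants that correspond to an existing bid phrase."""
    # Each segment can stay as-is or be replaced by any table entry, so
    # k fragments with s substitutions each give on the order of s^k variants.
    options = [[seg] + rewrite_table.get(seg, []) for seg in segments]
    for combo in product(*options):
        candidate = " ".join(combo)
        if candidate in bid_phrases:   # O(1) set lookup prunes the space
            yield candidate

# Hypothetical data mirroring the example on the next slide.
table = {"fishing": ["surfcasting"], "in": ["near"], "santa cruz": ["capitola"]}
bids = {"surfcasting in santa cruz", "fishing near capitola"}
print(list(fragment_rewrites(["fishing", "in", "santa cruz"], table, bids)))
# ['fishing near capitola', 'surfcasting in santa cruz']
```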

19 Example: Fishing in Santa Cruz Segment the query: Fishing | in | Santa Cruz. Substitutions: 1. Fishing → Surfcasting 2. in → near 3. Santa Cruz → Capitola. Rewrites: 1. Fishing in Capitola 2. Fishing near Santa Cruz 3. Surfcasting in Santa Cruz 4. Fishing near Capitola 5. ... 19

20 Off-line sub-query rewriting Some of the rewriting methods we presented suffer from data sparsity: query logs need significant occurrences of the rewrite-query pairs; click graphs need many clicks/impressions. Solution: modify these methods to work with segments instead of queries. Query logs: count segment rewrites. Click graphs: nodes can be segments (a query induces multiple edges). 20

21 Problem: Correct query fragmentation Studied in the NLP literature but still an open research problem. How do we determine that new york should be treated as a single entity, as opposed to new countertop? Simple strategy: pair-wise mutual information: p(a,b) > k*p(a)p(b) (the compound a.b is much more frequent than expected if a and b were independent), with a maximum likelihood estimate from a corpus of queries; a larger corpus, such as web pages, can also be used. More sophisticated approaches: use dictionaries of entities (people names, places, companies); Conditional Random Fields or Markov models to capture the sequential structure of the query. 21
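A toy sketch of the simple strategy: unigram and bigram probabilities are maximum likelihood estimates from a query corpus, and a pair is declared a compound when p(a,b) > k*p(a)p(b). The corpus and the threshold k below are illustrative only; a real system would estimate from billions of queries or from web pages:

```python
from collections import Counter

def compound_detector(queries, k=2.0):
    """Return a predicate deciding whether (a, b) should be one segment."""
    unigrams, bigrams = Counter(), Counter()
    n_uni = n_bi = 0
    for q in queries:
        words = q.lower().split()
        unigrams.update(words)
        bigrams.update(zip(words, words[1:]))
        n_uni += len(words)
        n_bi += max(len(words) - 1, 0)
    def is_compound(a, b):
        if not bigrams[(a, b)]:
            return False
        # MLE: p(a,b) from bigram counts, p(a) and p(b) from unigram counts.
        p_ab = bigrams[(a, b)] / n_bi
        return p_ab > k * (unigrams[a] / n_uni) * (unigrams[b] / n_uni)
    return is_compound

queries = ["new york hotels", "new york map", "new car", "new phone",
           "new countertop", "cheap flights to new york",
           "granite countertop", "countertop prices"]
is_compound = compound_detector(queries)
print(is_compound("new", "york"), is_compound("new", "countertop"))
# True False (on this toy corpus)
```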

22 A simple rewriting method: Deleting words from queries 22

23 Data source: queries? [Diagram: queries in a user session are issued to the search engine, which returns web pages and ads; clicks on both are recorded in the logs.] 23

24 Deletion Can we learn how to rewrite queries into bid phrases by dropping words? Many long queries are over-specified: cabbage soup from scratch → cabbage soup. In sponsored search (as opposed to web search) it is often OK to generalize from the original query, as long as the commercial interest is preserved: Buy used Audi CSQ → buy used audi. 24

25 Which words to delete? Solution: use click log data to learn and evaluate several alternative deletion strategies:
1. leftmost prefix deletion (optional adjectives?)
2. rightmost suffix deletion (less important terms last?)
3. joint probability deletion, based on the deletion probability of the word in the query logs, e.g. free accounts for 1.64% of all single-word deletions
4. conditional probability p(delete(w) | contains(q,w)) over all queries q that contain w
5. history based: the conditional probability of deletion, ranging over a single query. 25 Jones et al., SIGIR 2003
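A minimal sketch of strategy 4, estimating p(delete(w) | contains(q,w)) from click-log data; the pairs and the log below are toy stand-ins for the real data:

```python
from collections import Counter

def deletion_probabilities(deletion_pairs, query_log):
    """Estimate p(delete(w) | contains(q, w)): how often w is the deleted
    word, relative to how many queries contain w at all."""
    deleted = Counter()
    for query, rewrite in deletion_pairs:
        missing = set(query.split()) - set(rewrite.split())
        if len(missing) == 1:          # keep only single-word deletions
            deleted[missing.pop()] += 1
    contains = Counter()
    for q in query_log:
        contains.update(set(q.split()))
    return {w: deleted[w] / contains[w] for w in deleted if contains[w]}

pairs = [("free cabbage soup recipe", "cabbage soup recipe")]
log = ["free cabbage soup recipe", "free mp3", "cabbage soup diet"]
print(deletion_probabilities(pairs, log))   # {'free': 0.5}
```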

26 Data Click data from April-December 2002: pairs of queries where a single word has been deleted. Average query length 3.07 words, so predicting the right deletion with a random algorithm would have ~33% accuracy. 2K test set (January 2003). 26

27 Examples

28 Results

29 Evaluation of query rewrites

30 Unified scoring of rewrites The methods for generating query rewrites use different sources of data. There are scores from each method; however, these are incomparable across methods. Eventually, we have to put together a single table with all the rewrites and comparable scores. A unified scoring framework for the rewrites using machine-learned methods corresponds to the re-ranking of results in search. 30

31 Key decisions This is a learning task: the standard questions!
1. What learning method: supervised (binary classification, regression), semi-supervised, or unsupervised (clustering, distance between query and rewrite)?
2. Where do we get the labels: click or relevance judgments?
3. Pseudo judgments: external sources of information for queries and rewrites help us decide whether the rewrite was good. E.g., web search results should be similar.
4. Metrics: how good is our final ranking? (Might be different from training, e.g. revenue.) 31

32 An example: [Jones et al. WWW2006] Sample 1000 queries (q1). Select a single substitution for each (q2). Manually label the <q1,q2> pairs from 1 to 4. Learn to score <q1,q2> pairs based on the manually labeled set. Order by score. Assess precision/recall. Precise task: {1,2} → 1 vs. {3,4} → 0. Broad task: {1,2,3} → 1 vs. {4} → 0. 32

33 Example evaluation: methods and metrics Decision trees; linear regression of editorial scores; 2-class classification with SVM (an editorial score threshold distinguishes the classes). Metrics: average precision / recall with 10-fold cross-validation. 33

34 Features Used in Scoring Rewrites Total of 37 features from 3 general groups:
Lexical features: character edit distance; prefix overlap; Porter-stemmed Jaccard score on words.
Statistical features: probability of the rewrite; frequency of the rewrite.
Other: number of substitutions (numsubst): whole query = 0, replace one phrase = 1, replace two phrases = 2; query length; bid phrase of an ad. 34
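Illustrative implementations of three of the lexical features, under the simplifying assumption that Porter stemming is skipped, so the Jaccard score is computed over raw words:

```python
def edit_distance(a, b):
    """Character-level Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def prefix_overlap(a, b):
    """Length of the shared character prefix of the two strings."""
    n = 0
    for ca, cb in zip(a, b):
        if ca != cb:
            break
        n += 1
    return n

def word_jaccard(a, b):
    """Jaccard score on word sets (stemming omitted in this sketch)."""
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / len(wa | wb)

q, r = "cheap car insurance", "inexpensive car insurance"
print(edit_distance(q, r), prefix_overlap(q, r), word_jaccard(q, r))
```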

35 Simple Decision Tree [Decision tree: if wordsincommon > 0 and prefixoverlap > 0, predict class {1,2}; otherwise predict class {3,4}.] Interpretation of the decision tree: the substitution must have at least 1 word in common with the initial query, and the beginning of the query should stay unchanged. 35

36 Linear Regression using Editorial Scores Outputs a continuous score in [1..4]. Like the decision tree, it prefers few edits, few word changes, and whole-query or few phrase changes. Normalize the output to a probability of correctness using a sigmoid fit: p(f) = 1/(1 + e^(-f)). [Result: figure.] 36

37 Performance

38 Second example: use relevance data Lexical features: sharewords(q,r): do the query and rewrite share words?; worddistance(q,r): #word changes; editdistance(q,r): #character changes; cosine(q,r): cosine similarity (original words only, no search results); trigramcosine(q,r): remove whitespace, consider all 3-letter sequences. Semantic similarity features: maxmatchscore(q,r): maximum similarity (unigrams, classes) between q and any ad bidding on r; abstractcosine(q,r): cosine similarity between 40 search snippets for q and r; taxonomysimilarity(q,r): analyzing lowest common ancestors w.r.t. a classification taxonomy. Radlinski et al., SIGIR 2008

39 Ad schema features queryfrequency(r): frequency of r as a Web search query; maxbid(r): max bid; secondbid(r): second highest bid; numads(r): number of ads bidding on r; numclients(r): number of clients bidding on r.

40 What are the important features: SVM weights of the features [Chart: SVM feature weights for abstractcosine(q,r), trigramcosine(q,r), numads(r), queryfrequency(r), secondbid(r), taxonomysimilarity(q,r), maxmatchscore(q,r), editdistance(q,r), worddistance(q,r), maxbid(r), sharewords(q,r).]

41 Relevance vs. Revenue as objective Query rewrites can impact revenue: rewriting into higher-bid phrases can increase revenue. Should we factor the bid into the ranking of the candidate rewrites? What is the impact on relevance? What is the revenue potential? Case study: query rewriting based on ad selection (previous class), with rewrites ranked by giving the bid a different weight in the score. 41

42 Precision-Recall: Relevance

43 Revenue Estimates 43

44 Summary: Query rewriting for sponsored search An efficient and simple method that fits well into database-based selection. Can be done for both common and rare queries; works better for common queries in the head/torso of the query volume curve. Use every available data connection; several methods have been reported. Limitation: ad selection is done based on one feature only. How can we use all available information in the ad and the query? 44

45 Online Query Rewriting Reading List (technique and data source):
1. Learning Query Substitutions for Online Advertising: Broder et al., in Proc. of ACM SIGIR (query-to-ad similarity)
2. Online Expansion of Rare Queries for Sponsored Search: Broder et al., in Proc. of WWW (query-to-query similarity)
3. Query Word Deletion Prediction: Jones et al., in Proc. of ACM SIGIR 2003 (query logs) 45

46 Information Retrieval background Based on Prabhakar Raghavan's slideware and CS 276 / LING 286, Information Retrieval and Web Mining. See also Christopher D. Manning, Prabhakar Raghavan, and Hinrich Schütze, Introduction to Information Retrieval, Cambridge University Press.

47 Finding the best ad as an Information Retrieval (IR) problem Representation: treat the ads as documents in IR [Ribeiro-Neto et al. SIGIR 2005] [Broder et al. SIGIR 2007] [Broder et al. CIKM 2008]. Optimization/solution: retrieve the ads by evaluating the query over the ad corpus. Details: analyze the query and extract query-features (query = full context: content, user profile, environment, etc.); analyze the documents (= ads) and extract doc-features; devise a scoring function = predicates on q-features and d-features + weights; build a search engine that quickly produces the ads that maximize the scoring function. In the following, documents = ads. 47

48 IR from 100,000 feet Collection: fixed set of documents. Query: description of the user's information need. Goal: retrieve documents with information that is relevant to the user's information need and helps the user complete a task. 48

49 Searching through documents Which plays of Shakespeare contain the words Brutus AND Caesar but NOT Calpurnia? One could grep all of Shakespeare's plays for Brutus and Caesar, then strip out lines containing Calpurnia. But this is slow (for large corpora); NOT Calpurnia is non-trivial; other operations (e.g., find the word Romans near countrymen) are not feasible; and there is no ranked retrieval (best documents to return). 49

50 Conceptual view of the document corpus: A Term-Document Incidence Matrix [Matrix: rows = terms, columns = plays; an entry is 1 if the play contains the word, 0 otherwise.] Example query: Brutus AND Caesar but NOT Calpurnia. 50

51 Incidence vectors 0/1 vector for each term. To answer a Boolean query, take the bitwise AND of the vectors for Brutus, Caesar, and Calpurnia (complemented): 110100 AND 110111 AND 101111 = 100100. Each bit denotes one document. 51
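The same computation using Python integers as bit vectors, with the incidence values of the classic Shakespeare example (the leftmost bit is the first play):

```python
# Incidence vectors: bit = 1 iff the play contains the term.
brutus    = 0b110100
caesar    = 0b110111
calpurnia = 0b010000
mask      = 0b111111          # one bit per document (six plays here)

# Brutus AND Caesar AND NOT Calpurnia
result = brutus & caesar & (~calpurnia & mask)
print(format(result, "06b"))  # -> 100100
```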

52 Answers to query Antony and Cleopatra, Act III, Scene ii Agrippa [Aside to DOMITIUS ENOBARBUS]: Why, Enobarbus, When Antony found Julius Caesar dead, He cried almost to roaring; and he wept When at Philippi he found Brutus slain. Hamlet, Act III, Scene ii Lord Polonius: I did enact Julius Caesar I was killed i' the Capitol; Brutus killed me. 52

53 Document Retrieval System Architecture [Diagram: indexing side: docs → Doc Analysis → per-document term lists → Inversion → per-term document lists → Index Write → inverted index (dictionary + posting lists); query side: query → Query Analysis → Query Evaluation against the inverted index.] 53

54 Metrics: Precision-Recall Precision: fraction of retrieved documents that are relevant, P = |RA| / |A|. Recall: fraction of the relevant documents (ads) that are retrieved, R = |RA| / |R|. [Diagram: corpus, with retrieved set A and relevant set R; RA = A ∩ R.] 54

55 Document Analysis To retrieve a document, we need to understand what it is about: the IR system reads it. Start with a raw document and produce a sequence of atomic document units: terms. The sequence of steps is grouped into two phases. Lexical analysis: word separation, eliminating basic word variations; result: a sequence of tokens. Semantic analysis: entity extraction, grammatical and conceptual structure of the documents; result: a sequence of terms. 55

56 Query Analysis Break the query into atomic units: terms. Parsing, tokenization, stemming, etc. must match the document analysis. Key issue: query interpretation: segment the query into suitable units; determine the intent of the user. Key issue: queries are short: query expansion with external sources of knowledge. See the lecture note on the (optional) reading on query expansion. 56

57 IR Models How do we compare the documents and queries and assess how good the match is? Model = algebraic framework with well-defined operators. We will cover two IR models. Boolean: term weights are Boolean values (0/1); operators are Boolean algebra operators; results are Boolean values (0/1). Vector space: term weights are based on the term's importance globally and in the given document; operators are angles and distances in Euclidean space; the result is a score for each document. 57

58 Boolean Model: Exact match with Boolean expressions In the Boolean retrieval model we can ask any query that is a Boolean expression: Boolean queries use AND, OR and NOT to join query terms. Views each document as a set of words. Is precise: a document matches the condition or not. It was the primary commercial retrieval tool for 3 decades. Professional searchers (e.g., lawyers) still like Boolean queries: you know exactly what you're getting. 58

59 Problem with Boolean search: feast or famine Boolean queries often result in either too few (=0) or too many (1000s) results. Query 1: standard user dlink 650 → 200,000 hits. Query 2: standard user dlink 650 no card found → 0 hits. It takes skill to come up with a query that produces a manageable number of hits. With a ranked list of documents it does not matter how large the retrieved set is. 59

60 Scoring as the basis of ranked retrieval We wish to return, in order, the documents most likely to be useful to the searcher. How can we rank-order the documents in the collection with respect to a query? Assign a score, say in [0, 1], to each document. This score measures how well the document and query match. 60

61 Query-document matching scores We need a way of assigning a score to a query/document pair. Let's start with a one-term query. If the query term does not occur in the document, the score should be 0. The more frequent the query term in the document, the higher the score (should be). We will look at a number of alternatives for this. 61

62 Take 1: Jaccard coefficient jaccard(A,B) = |A ∩ B| / |A ∪ B|. jaccard(A,A) = 1; jaccard(A,B) = 0 if A ∩ B = ∅. A and B don't have to be the same size. Always assigns a number between 0 and 1. Example: Query: ides of march. Document 1: caesar died in march. Document 2: the long march. Issues: all terms are given equal weight; long documents are penalized. 62
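A quick sketch over word sets that reproduces the example. Note that the shorter, less relevant Document 2 actually scores higher, which illustrates the issues listed above:

```python
def jaccard(a, b):
    """Jaccard coefficient of the word sets of two strings."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb)

query = "ides of march"
print(jaccard(query, "caesar died in march"))  # 1/6 ~ 0.167
print(jaccard(query, "the long march"))        # 1/5 = 0.2
```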

63 Term weighting Importance of a term. Global: how much information is there in a term ("IBM" vs. "maybe"), independent of how/where it appears in the document? Local: how important is a term for the given document (footer vs. title)? What information can be used to assess these two? 63

64 Simplification: Bag of words model The vector representation doesn't consider the ordering of words in a document: John is quicker than Mary and Mary is quicker than John have the same vectors. This is called the bag of words model. 64

65 Local importance: Term Frequency Consider the number of occurrences of a term in a document: each document is a count vector in N^|V|. The term frequency tf(t,d) of term t in document d is defined as the number of times that t occurs in d. 65

66 Term frequency tf How to use tf to weight the terms? Raw term frequency is not what we want: a document with 10 occurrences of the term is more relevant than a document with one occurrence, but not 10 times more relevant. (Also beware adversarial behavior: spam.) Relevance does not increase proportionally with term frequency. 66

67 Log-frequency weighting The log-frequency weight of term t in d is w(t,d) = 1 + log10(tf(t,d)) if tf(t,d) > 0, and 0 otherwise. So: 0 → 0, 1 → 1, 2 → 1.3, 10 → 2, 1000 → 4, etc. Score for a document-query pair: sum over terms t in both q and d: score(q,d) = Σ_{t ∈ q∩d} (1 + log10 tf(t,d)). The score is 0 if none of the query terms is present in the document. 67
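A direct transcription of this scoring rule, assuming plain whitespace tokenization:

```python
import math

def log_tf_score(query, doc):
    """score(q, d) = sum over t in both q and d of (1 + log10 tf(t, d))."""
    doc_terms = doc.lower().split()
    score = 0.0
    for t in set(query.lower().split()):
        tf = doc_terms.count(t)
        if tf > 0:                      # absent terms contribute 0
            score += 1 + math.log10(tf)
    return score

print(log_tf_score("brutus caesar", "caesar died and caesar was mourned"))
# caesar: tf = 2 -> 1 + log10(2) ~ 1.30; brutus absent -> 0
```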

68 Document frequency Rare terms are more informative than frequent terms (recall stop words). Consider a term in the query that is rare in the collection (e.g., EXL257): a document containing this term is very likely to be relevant to the query EXL257. We want a high weight for rare terms like EXL257. 68

69 Document frequency, continued Consider a query term that is frequent in the collection (e.g., high, increase, line). A document containing such a term is more likely to be relevant than a document that doesn't, but it's not a sure indicator of relevance. For frequent terms, we want positive weights for words like high, increase, and line, but lower weights than for rare terms. We will use document frequency (df) to capture this in the score. df (≤ N) is the number of documents that contain the term.

70 idf weight df_t is the document frequency of t: the number of documents that contain t. df is a measure of the informativeness of t. We define the idf (inverse document frequency) of t by idf_t = log10(N / df_t). We use log N/df_t instead of N/df_t to dampen the effect of idf. 70

71 idf example, suppose N = 1 million
term: df_t → idf_t
calpurnia: 1 → 6
animal: 100 → 4
sunday: 1,000 → 3
fly: 10,000 → 2
under: 100,000 → 1
the: 1,000,000 → 0
There is one idf value for each term t in a collection. 71

72 Collection vs. Document frequency The collection frequency of t is the number of occurrences of t in the collection, counting multiple occurrences. Example: insurance has collection frequency 10440 and document frequency 3997; try has collection frequency 10422 and document frequency 8760. Which word is a better search term (and should get a higher weight)? 72

73 Complete tf-idf weighting The tf-idf weight of a term is the product of its tf weight and its idf weight: w(t,d) = (1 + log10 tf(t,d)) × log10(N / df_t). Best known weighting scheme in information retrieval. Note: the - in tf-idf is a hyphen, not a minus sign! Alternative names: tf.idf, tf x idf. Increases with the number of occurrences within a document; increases with the rarity of the term in the collection. 73
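A sketch computing this weighting over a small corpus; this is just one common variant (see the slide on variants below):

```python
import math
from collections import Counter

def tf_idf_vectors(docs):
    """w(t, d) = (1 + log10 tf(t,d)) * log10(N / df(t)) for every doc."""
    N = len(docs)
    tokenized = [d.lower().split() for d in docs]
    df = Counter(t for doc in tokenized for t in set(doc))
    vectors = []
    for doc in tokenized:
        tf = Counter(doc)
        vectors.append({t: (1 + math.log10(tf[t])) * math.log10(N / df[t])
                        for t in tf})
    return vectors

docs = ["cabbage soup recipe", "cabbage soup diet", "used audi"]
vecs = tf_idf_vectors(docs)
print(round(vecs[0]["recipe"], 2), round(vecs[0]["cabbage"], 2))  # 0.48 0.18
```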

74 Binary → count → weight matrix Each document is now represented by a real-valued vector of tf-idf weights in R^|V|. 74

75 Vector Space Model: Documents and Queries as Vectors So we have a |V|-dimensional vector space. Terms are axes of the space; documents are points or vectors in this space. Very high-dimensional: hundreds of millions of dimensions when you apply this to a web search engine. This is a very sparse vector: most entries are zero. 75

76 Queries as vectors Key idea 1: do the same for queries: represent them as vectors in the space. Key idea 2: rank documents according to their proximity to the query in this space. Proximity = similarity of vectors ≈ inverse of distance. 76

77 Formalizing vector space proximity First cut: distance between two points (= distance between the end points of the two vectors). Euclidean distance? Euclidean distance of the raw vectors does not work for vectors of different lengths. 77

78 Use angle instead of distance Thought experiment: take a document d and append it to itself; call this document d'. Semantically, d and d' have the same content. The Euclidean distance between the two documents can be quite large, but the angle between them is 0, corresponding to maximal similarity. So: rank documents according to their angle with the query. 78

79 From angles to cosines The following two notions are equivalent: rank documents in increasing order of the angle between query and document; rank documents in decreasing order of cosine(query, document). Cosine is a monotonically decreasing function of the angle over the interval [0°, 180°]. 79

80 Length normalization A vector can be (length-)normalized by dividing each of its components by its length; for this we use the L2 norm: ||x||_2 = sqrt(Σ_i x_i²). Dividing a vector by its L2 norm makes it a unit (length) vector. Effect on the two documents d and d' (d appended to itself) from the earlier slide: they have identical vectors after length-normalization. 80

81 cosine(query,document) cos(q,d) = (q · d) / (|q| |d|) = Σ_i q_i d_i / (sqrt(Σ_i q_i²) · sqrt(Σ_i d_i²)), where q_i is the tf-idf weight of term i in the query and d_i is the tf-idf weight of term i in the document. cos(q,d) is the cosine similarity of q and d, or, equivalently, the cosine of the angle between q and d. For unit (length-normalized) vectors, the cosine is simply the dot product. 81
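A sketch of this formula over sparse term-weight dictionaries; the sample weights are the log-tf values from the three-novels example on the following slides:

```python
import math

def cosine(u, v):
    """cos(u, v) = u.v / (|u| |v|) for sparse term -> weight dicts."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    norm_u = math.sqrt(sum(w * w for w in u.values()))
    norm_v = math.sqrt(sum(w * w for w in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

sas = {"affection": 3.06, "jealous": 2.00, "gossip": 1.30}
wh = {"affection": 2.30, "jealous": 2.04, "gossip": 1.78, "wuthering": 2.58}
print(round(cosine(sas, wh), 2))   # 0.79, as in the example below
```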

82 Cosine similarity amongst 3 documents How similar are the novels SaS (Sense and Sensibility), PaP (Pride and Prejudice), and WH (Wuthering Heights)? Term frequencies (counts): affection: SaS 115, PaP 58, WH 20; jealous: SaS 10, PaP 7, WH 11; gossip: SaS 2, PaP 0, WH 6; wuthering: SaS 0, PaP 0, WH 38. 82

83 3 documents example contd. Log frequency weighting: affection: SaS 3.06, PaP 2.76, WH 2.30; jealous: 2.00, 1.85, 2.04; gossip: 1.30, 0, 1.78; wuthering: 0, 0, 2.58. After length normalization: affection: 0.789, 0.832, 0.524; jealous: 0.515, 0.555, 0.465; gossip: 0.335, 0, 0.405; wuthering: 0, 0, 0.588. cos(SaS,PaP) ≈ 0.94, cos(SaS,WH) ≈ 0.79, cos(PaP,WH) ≈ 0.69. Why do we have cos(SaS,PaP) > cos(SaS,WH)?

84 tf-idf weighting has many variants

85 Summary vector space ranking Represent the query as a weighted tf-idf vector Represent each document as a weighted tf-idf vector Compute the cosine similarity score for the query vector and each document vector Rank documents with respect to the query by score Return the top K (e.g., K = 10) to the user 85

86 Document Indexing and Query Evaluation 86

87 Document representation Document × Term matrix: entries for each document-term combination, per-term entries, per-document entries. Consider N = 1M documents, each with about 1K terms. At an average of 6 bytes/term including spaces/punctuation, that is 6GB of data in the documents. Say there are m = 500K distinct terms among these. 87

88 Can't build the matrix A 500K x 1M matrix has half-a-trillion 0's and 1's. But it has no more than one billion 1's: the matrix is extremely sparse. What's a better representation? We only record the 1 positions. 88

89 Inverted index For each term T, we must store a list of all documents that contain T. Do we use an array or a list for this? [Diagram: postings lists for Brutus, Calpurnia, Caesar.] What happens if the word Caesar is added to document 14? 89

90 Inverted index Two main data structures. Term posting list: the list of all documents where the term appears. Dictionary: used to find the per-term data (including the start of the posting list). [Diagram: dictionary entries for Brutus, Calpurnia, Caesar pointing to their postings lists.] Postings lists are sorted by docID (more later on why).

91 Step One: Document Analysis Produce a sequence of (term, docID) pairs, sorted by docID since we process one document at a time. Easy to parallelize: divide the documents among the available nodes. Doc 1: I did enact Julius Caesar I was killed i' the Capitol; Brutus killed me. Doc 2: So let it be with Caesar. The noble Brutus hath told you Caesar was ambitious. 91

92 Step Two: Inversion Sort by <term, docID>: group all occurrences of the same term across all documents. This can be the computationally most intense part of index building. How to parallelize? Merge-sort: sort individual runs locally, then merge runs from different nodes; e.g., on a Hadoop Map-Reduce infrastructure. 92

93 Step Three: Index Write Write a Dictionary file and a Postings file Sequential process 93
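Putting the three steps together in a toy in-memory sketch (a real indexer writes separate dictionary and postings files, and sorts runs that do not fit in memory):

```python
import re
from collections import defaultdict

def build_inverted_index(docs):
    """Emit (term, docID) pairs, sort by term then docID, and group them
    into postings lists; docIDs stay sorted within each list."""
    pairs = {(term, doc_id)
             for doc_id, text in enumerate(docs, 1)
             for term in re.findall(r"[a-z']+", text.lower())}
    index = defaultdict(list)
    for term, doc_id in sorted(pairs):
        index[term].append(doc_id)
    return dict(index)

docs = ["I did enact Julius Caesar: I was killed i' the Capitol; "
        "Brutus killed me.",
        "So let it be with Caesar. The noble Brutus hath told you "
        "Caesar was ambitious."]
index = build_inverted_index(docs)
print(index["brutus"], index["caesar"])   # [1, 2] [1, 2]
```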

94 Query Evaluation How to use the index data structures? 94

95 Boolean query processing: AND Consider processing the query Brutus AND Caesar. Locate Brutus in the dictionary and retrieve its postings; locate Caesar in the dictionary and retrieve its postings; then merge the two postings lists. 95

96 Posting list merging Two general approaches.
Term at a Time (TAAT): candidate_set = first posting list; for each of the remaining terms t: candidate_set = intersection(candidate_set, t.posting_list()).
Document at a Time (DAAT): open an iterator (cursor) at the beginning of each posting list and move the cursors forward through the posting lists simultaneously: cursor.next() moves to the next entry in the posting list; cursor.goto(docid) moves to the posting for docid, or to the first document with doc ID larger than docid. To perform an OR query we need a union: move the minimum cursor. More in the algo project. 96
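A sketch of the linear-time intersection of two docID-sorted postings lists, plus a TAAT driver that already applies the increasing-frequency ordering discussed on the next slides:

```python
def intersect(p1, p2):
    """Merge two postings lists sorted by docID in O(x + y) time."""
    answer, i, j = [], 0, 0
    while i < len(p1) and j < len(p2):
        if p1[i] == p2[j]:
            answer.append(p1[i])
            i += 1
            j += 1
        elif p1[i] < p2[j]:
            i += 1
        else:
            j += 1
    return answer

def intersect_all(postings):
    """AND of several terms: start with the shortest list so the
    intermediate candidate set stays as small as possible."""
    postings = sorted(postings, key=len)
    result = postings[0]
    for p in postings[1:]:
        result = intersect(result, p)
    return result

print(intersect_all([[1, 2, 4, 11, 31], [2, 31, 54], [1, 2, 4, 5, 31]]))
# [2, 31]
```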

97 Merging: further questions What about an arbitrary Boolean formula, e.g. (Brutus OR Caesar) AND NOT (Antony OR Cleopatra)? Can we always merge in linear time? Linear in what? Can we do better? Again, think about this in the DAAT exercise. (How can you skip efficiently?) 97

98 Improving the TAAT algorithms What is the best order for query processing? Consider a query that is an AND of t terms. For each of the t terms, get its postings, then AND them together. [Diagram: postings lists for Brutus, Calpurnia, Caesar.] Query: Brutus AND Calpurnia AND Caesar

99 Improving the TAAT algorithms Process terms in order of increasing frequency: start with the smallest set, then keep cutting further. [Diagram: the same postings lists, annotated with their lengths.]

100 The Merge in DAAT Algorithms Walk through the two postings simultaneously, in time linear in the total number of postings entries If the list lengths are x and y, the merge takes O(x+y) operations. Crucial: postings sorted by docid. 100

101 Conclusion There is a wealth of work on how to break down documents and queries into atomic units: terms. Several models have been proposed to compare the documents and queries and retrieve the ones that are most similar. Practice shows that tuning is very important in IR systems: many parameters depend on the corpus and the queries. Textual advertising lectures: how to apply IR to the selection of ads. Project: how to start testing your own ad selection based on open source search engines. 101

102 Evaluating search engines IIR Chapter 8 102

103 Measures for a search engine How fast does it index: number of documents/hour (for a given average document size). How fast does it search: latency as a function of index size. Expressiveness of the query language: ability to express complex information needs; speed on complex queries. 103

104 Measures for a search engine All of the preceding criteria are measurable: we can quantify speed/size, and we can make expressiveness precise. The key measure: user happiness. What is this? Speed of response and size of the index are factors, but blindingly fast, useless answers won't make a user happy. We need a way of quantifying user happiness. 104

105 Happiness: elusive to measure The most common proxy: relevance of search results. But how do you measure relevance? We will detail a methodology here, then examine its issues. Relevance measurement requires 3 elements: 1. a benchmark document collection; 2. a benchmark suite of queries; 3. a (usually binary) assessment of either Relevant or Nonrelevant for each query and each document. There is some work on more-than-binary assessments, but they are not the standard in classic IR; they are more common in ad evaluation: perfect, very good, fair, irrelevant, offensive. 105

106 Evaluating an IR system Note: the information need is translated into a query Relevance is assessed relative to the information need not the query E.g., Information need: I'm looking for information on whether drinking red wine is more effective at reducing your risk of heart attacks than white wine. Query: wine red white heart attack effective You evaluate whether the doc addresses the information need, not whether it has these words 106

107 Standard relevance benchmarks TREC: the National Institute of Standards and Technology (NIST) has run a large IR test bed for many years. Reuters and other benchmark doc collections are used. Retrieval tasks are specified, sometimes as queries. Human experts mark, for each query and for each doc, Relevant or Nonrelevant, or at least for the subset of docs that some system returned for that query. 107

108 Unranked retrieval evaluation: Precision and Recall Precision: fraction of retrieved docs that are relevant = P(relevant | retrieved). Recall: fraction of relevant docs that are retrieved = P(retrieved | relevant). Contingency table: retrieved and relevant = tp; retrieved and nonrelevant = fp; not retrieved but relevant = fn; not retrieved and nonrelevant = tn. Precision P = tp/(tp + fp); Recall R = tp/(tp + fn). 108
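The same definitions in code, over sets of document ids:

```python
def precision_recall(retrieved, relevant):
    """P = tp / (tp + fp) = |ret & rel| / |ret|;
       R = tp / (tp + fn) = |ret & rel| / |rel|."""
    tp = len(retrieved & relevant)
    p = tp / len(retrieved) if retrieved else 0.0
    r = tp / len(relevant) if relevant else 0.0
    return p, r

print(precision_recall({1, 2, 3, 4}, {2, 4, 9}))   # (0.5, 0.666...)
```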

109 Precision/Recall You can get high recall (but low precision) by retrieving all docs for all queries! Recall is a non-decreasing function of the number of docs retrieved In a good system, precision decreases as either the number of docs retrieved or recall increases This is not a theorem, but a result with strong empirical confirmation 109

110 Difficulties in using precision/recall We should average over large document collection/query ensembles. We need human relevance assessments, and people aren't reliable assessors. Assessments have to be binary: what about nuanced assessments? Heavily skewed by collection/authorship: results may not translate from one domain to another. 110

111 A combined measure: F A combined measure that assesses the precision/recall tradeoff is the F measure (weighted harmonic mean): F = 1 / (α(1/P) + (1−α)(1/R)) = (β² + 1)PR / (β²P + R), where β² = (1−α)/α. People usually use the balanced F1 measure, i.e., with β = 1 (or α = ½): F1 = 2PR / (P + R). The harmonic mean is a conservative average (if either value is low, the average is low). See C.J. van Rijsbergen, Information Retrieval. 111

112 F1 and other averages

113 Evaluating ranked results The system can return any number of results. By taking various numbers of the top returned documents (levels of recall), the evaluator can produce a precision-recall curve. 113

114 A precision-recall curve [Figure: a sawtooth precision-recall curve with its monotonically decreasing interpolated envelope.] 114

115 Evaluation Graphs are good, but people want summary measures! Precision at a fixed retrieval level. Precision-at-k: precision of the top k results; appropriate for ads, since all people want are good matches on the first few ads. 11-point interpolated average precision: the standard measure in the early TREC competitions: take the precision at 11 levels of recall varying from 0 to 1 in tenths, using interpolation (the value for 0 is always interpolated!), and average them; this evaluates performance at all recall levels. 115
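Sketches of both summary measures, assuming a ranked list of doc ids and a set of relevant ids:

```python
def precision_at_k(ranked, relevant, k):
    """Precision of the top k results."""
    return sum(1 for d in ranked[:k] if d in relevant) / k

def eleven_point_average_precision(ranked, relevant):
    """At each recall level 0.0, 0.1, ..., 1.0 take the interpolated
    precision: the max precision at any recall >= that level."""
    points, hits = [], 0
    for i, d in enumerate(ranked, 1):
        if d in relevant:
            hits += 1
            points.append((hits / len(relevant), hits / i))
    interp = []
    for level in (i / 10 for i in range(11)):
        ps = [p for r, p in points if r >= level]
        interp.append(max(ps) if ps else 0.0)
    return sum(interp) / 11

ranked, relevant = [3, 7, 1, 9, 12], {3, 9}
print(precision_at_k(ranked, relevant, 3))               # ~0.33
print(eleven_point_average_precision(ranked, relevant))  # ~0.77
```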

116 Thank you!

117 This talk is copyrighted. Authors retain all rights, including copyrights and distribution rights. No publication or further distribution, in full or in part, is permitted without explicit written permission.


Lecture 1: Introduction and Overview Lecture 1: Introduction and Overview Information Retrieval Computer Science Tripos Part II Simone Teufel Natural Language and Information Processing (NLIP) Group Simone.Teufel@cl.cam.ac.uk Lent 2014 1

More information

INFO 4300 / CS4300 Information Retrieval. slides adapted from Hinrich Schütze s, linked from

INFO 4300 / CS4300 Information Retrieval. slides adapted from Hinrich Schütze s, linked from INFO 4300 / CS4300 Information Retrieval slides adapted from Hinrich Schütze s, linked from http://informationretrieval.org/ IR 6: Index Compression Paul Ginsparg Cornell University, Ithaca, NY 15 Sep

More information

Search Evaluation. Tao Yang CS293S Slides partially based on text book [CMS] [MRS]

Search Evaluation. Tao Yang CS293S Slides partially based on text book [CMS] [MRS] Search Evaluation Tao Yang CS293S Slides partially based on text book [CMS] [MRS] Table of Content Search Engine Evaluation Metrics for relevancy Precision/recall F-measure MAP NDCG Difficulties in Evaluating

More information

Overview. Lecture 6: Evaluation. Summary: Ranked retrieval. Overview. Information Retrieval Computer Science Tripos Part II.

Overview. Lecture 6: Evaluation. Summary: Ranked retrieval. Overview. Information Retrieval Computer Science Tripos Part II. Overview Lecture 6: Evaluation Information Retrieval Computer Science Tripos Part II Recap/Catchup 2 Introduction Ronan Cummins 3 Unranked evaluation Natural Language and Information Processing (NLIP)

More information

CS473: Course Review CS-473. Luo Si Department of Computer Science Purdue University

CS473: Course Review CS-473. Luo Si Department of Computer Science Purdue University CS473: CS-473 Course Review Luo Si Department of Computer Science Purdue University Basic Concepts of IR: Outline Basic Concepts of Information Retrieval: Task definition of Ad-hoc IR Terminologies and

More information

Information Retrieval CS Lecture 01. Razvan C. Bunescu School of Electrical Engineering and Computer Science

Information Retrieval CS Lecture 01. Razvan C. Bunescu School of Electrical Engineering and Computer Science Information Retrieval CS 6900 Razvan C. Bunescu School of Electrical Engineering and Computer Science bunescu@ohio.edu Information Retrieval Information Retrieval (IR) is finding material of an unstructured

More information

CSCI 5417 Information Retrieval Systems Jim Martin!

CSCI 5417 Information Retrieval Systems Jim Martin! CSCI 5417 Information Retrieval Systems Jim Martin! Lecture 4 9/1/2011 Today Finish up spelling correction Realistic indexing Block merge Single-pass in memory Distributed indexing Next HW details 1 Query

More information

Information Retrieval. CS630 Representing and Accessing Digital Information. What is a Retrieval Model? Basic IR Processes

Information Retrieval. CS630 Representing and Accessing Digital Information. What is a Retrieval Model? Basic IR Processes CS630 Representing and Accessing Digital Information Information Retrieval: Retrieval Models Information Retrieval Basics Data Structures and Access Indexing and Preprocessing Retrieval Models Thorsten

More information

Index construction CE-324: Modern Information Retrieval Sharif University of Technology

Index construction CE-324: Modern Information Retrieval Sharif University of Technology Index construction CE-324: Modern Information Retrieval Sharif University of Technology M. Soleymani Fall 2017 Most slides have been adapted from: Profs. Manning, Nayak & Raghavan (CS-276, Stanford) Ch.

More information

Index construction CE-324: Modern Information Retrieval Sharif University of Technology

Index construction CE-324: Modern Information Retrieval Sharif University of Technology Index construction CE-324: Modern Information Retrieval Sharif University of Technology M. Soleymani Fall 2016 Most slides have been adapted from: Profs. Manning, Nayak & Raghavan (CS-276, Stanford) Ch.

More information

Querying Introduction to Information Retrieval INF 141 Donald J. Patterson. Content adapted from Hinrich Schütze

Querying Introduction to Information Retrieval INF 141 Donald J. Patterson. Content adapted from Hinrich Schütze Introduction to Information Retrieval INF 141 Donald J. Patterson Content adapted from Hinrich Schütze http://www.informationretrieval.org Overview Boolean Retrieval Weighted Boolean Retrieval Zone Indices

More information

Chapter 6: Information Retrieval and Web Search. An introduction

Chapter 6: Information Retrieval and Web Search. An introduction Chapter 6: Information Retrieval and Web Search An introduction Introduction n Text mining refers to data mining using text documents as data. n Most text mining tasks use Information Retrieval (IR) methods

More information

modern database systems lecture 4 : information retrieval

modern database systems lecture 4 : information retrieval modern database systems lecture 4 : information retrieval Aristides Gionis Michael Mathioudakis spring 2016 in perspective structured data relational data RDBMS MySQL semi-structured data data-graph representation

More information

Boolean Model. Hongning Wang

Boolean Model. Hongning Wang Boolean Model Hongning Wang CS@UVa Abstraction of search engine architecture Indexed corpus Crawler Ranking procedure Doc Analyzer Doc Representation Query Rep Feedback (Query) Evaluation User Indexer

More information

Information Retrieval

Information Retrieval Information Retrieval ETH Zürich, Fall 2012 Thomas Hofmann LECTURE 6 EVALUATION 24.10.2012 Information Retrieval, ETHZ 2012 1 Today s Overview 1. User-Centric Evaluation 2. Evaluation via Relevance Assessment

More information

Informa(on Retrieval

Informa(on Retrieval Introduc)on to Informa(on Retrieval CS276 Informa)on Retrieval and Web Search Pandu Nayak and Prabhakar Raghavan Lecture 8: Evalua)on Sec. 6.2 This lecture How do we know if our results are any good? Evalua)ng

More information

This lecture. Measures for a search engine EVALUATING SEARCH ENGINES. Measuring user happiness. Measures for a search engine

This lecture. Measures for a search engine EVALUATING SEARCH ENGINES. Measuring user happiness. Measures for a search engine Sec. 6.2 Introduc)on to Informa(on Retrieval CS276 Informa)on Retrieval and Web Search Pandu Nayak and Prabhakar Raghavan Lecture 8: Evalua)on This lecture How do we know if our results are any good? Evalua)ng

More information

Introduction to Information Retrieval

Introduction to Information Retrieval Introduction to Information Retrieval http://informationretrieval.org IIR 8: Evaluation & Result Summaries Hinrich Schütze Center for Information and Language Processing, University of Munich 2013-05-07

More information