
Boolean model
September 9, 2014

Outline
1. Boolean queries
2. Inverted index
3. Query processing
4. Query optimization

Taxonomy of IR models
- Classic IR models: Boolean, vector, probabilistic
- Set theoretic: fuzzy, extended Boolean, set-based
- Algebraic: generalized vector, LSI, NN
- Probabilistic: BM25, language models, Bayesian networks
- Document property: text, links, multimedia
- Semistructured text: proximal nodes, XML-based
- Multimedia: image retrieval, audio, video
- Web: PageRank, hubs and authorities (HITS)

1. Boolean queries

Boolean retrieval
The Boolean model is arguably the simplest model to base an information retrieval system on. Queries are Boolean expressions, e.g., Caesar AND Brutus. The search engine returns all documents that satisfy the Boolean expression.
Does Google use the Boolean model?

Does Google use the Boolean model?
On Google, the default interpretation of a query [w1 w2 ... wn] is w1 AND w2 AND ... AND wn.
Cases where you get hits that do not contain one of the wi:
- anchor text
- the page contains a variant of wi (morphology, spelling correction, synonym)
- long queries (n large)
- the Boolean expression generates very few hits
Simple Boolean vs. ranking of the result set: simple Boolean retrieval returns matching documents in no particular order. Google (and most well-designed Boolean engines) rank the result set: they rank good hits (according to some estimator of relevance) higher than bad hits.

2. Inverted index

Unstructured data in 1650
Which plays of Shakespeare contain the words Brutus AND Caesar, but NOT Calpurnia?
One could grep all of Shakespeare's plays for Brutus and Caesar, then strip out lines containing Calpurnia. Why is grep not the solution?
- Slow (for large collections)
- grep is line-oriented, IR is document-oriented
- NOT Calpurnia is non-trivial
- Other operations (e.g., find the word Romans near countryman) are not feasible

Term-document incidence matrix

            Anthony and  Julius  The      Hamlet  Othello  Macbeth  ...
            Cleopatra    Caesar  Tempest
Anthony          1          1       0        0       0        1
Brutus           1          1       0        1       0        0
Caesar           1          1       0        1       1        1
Calpurnia        0          1       0        0       0        0
Cleopatra        1          0       0        0       0        0
mercy            1          0       1        1       1        1
worser           1          0       1        1       1        0
...

Entry is 1 if the term occurs in the play (e.g., Calpurnia occurs in Julius Caesar) and 0 if it does not (e.g., Calpurnia does not occur in The Tempest).
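As a toy illustration (not part of the original slides), such a matrix can be built in a few lines of Python; the two mini-documents below are stand-ins for full plays:

docs = {
    "Julius Caesar": "i did enact julius caesar i was killed i the capitol brutus killed me",
    "Hamlet": "so let it be with caesar the noble brutus hath told you caesar was ambitious",
}

vocab = sorted({term for text in docs.values() for term in text.split()})
doc_names = list(docs)

# incidence[term] is a 0/1 row with one entry per document
incidence = {
    term: [1 if term in docs[name].split() else 0 for name in doc_names]
    for term in vocab
}

print(incidence["brutus"])  # [1, 1]: brutus occurs in both documents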

Incidence vectors
So we have a 0/1 vector for each term. To answer the query Brutus AND Caesar AND NOT Calpurnia:
- Take the vectors for Brutus, Caesar, and Calpurnia
- Complement the vector of Calpurnia
- Do a (bitwise) AND on the three vectors

Brutus         1 1 0 1 0 0
Caesar         1 1 0 1 1 1
NOT Calpurnia  1 0 1 1 1 1
AND            1 0 0 1 0 0
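A sketch of the same query in Python, treating each row above as an integer bit vector (a toy illustration of the bitwise view, not how a real engine stores the matrix):

brutus    = 0b110100   # 1 1 0 1 0 0
caesar    = 0b110111   # 1 1 0 1 1 1
calpurnia = 0b010000   # 0 1 0 0 0 0

n_docs = 6
mask = (1 << n_docs) - 1                        # keep only the 6 document bits
result = brutus & caesar & (~calpurnia & mask)  # AND with the complement
print(format(result, "06b"))                    # 100100, i.e. 1 0 0 1 0 0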

0/1 vectors and result of bitwise operations

            Anthony and  Julius  The      Hamlet  Othello  Macbeth  ...
            Cleopatra    Caesar  Tempest
Anthony          1          1       0        0       0        1
Brutus           1          1       0        1       0        0
Caesar           1          1       0        1       1        1
Calpurnia        0          1       0        0       0        0
Cleopatra        1          0       0        0       0        0
mercy            1          0       1        1       1        1
worser           1          0       1        1       1        0
...
result:          1          0       0        1       0        0

Answers to query

Anthony and Cleopatra, Act III, Scene ii
Agrippa [Aside to Domitius Enobarbus]: Why, Enobarbus, / When Antony found Julius Caesar dead, / He cried almost to roaring; and he wept / When at Philippi he found Brutus slain.

Hamlet, Act III, Scene ii
Lord Polonius: I did enact Julius Caesar: I was killed i' the Capitol; Brutus killed me.

Bigger collections
Consider N = 10^6 documents, each with about 1000 tokens: 10^9 tokens in total.
At an average of 6 bytes per token (including spaces and punctuation), the size of the document collection is about 6 × 10^9 bytes = 6 GB.
Assume there are M = 500,000 distinct terms in the collection. (Notice that we are making a term/token distinction.)

Can't build the incidence matrix
The matrix would have M × N = 500,000 × 10^6 = half a trillion 0s and 1s, but it has no more than one billion 1s: the matrix is extremely sparse.
What is a better representation? We only record the 1s.
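A quick back-of-envelope check of that sparsity claim in Python:

M = 500_000          # distinct terms
N = 10**6            # documents
cells = M * N        # 5e11 entries: half a trillion 0s and 1s
ones = N * 1000      # at most one 1 per token: about 1e9
print(ones / cells)  # 0.002, so at most 0.2% of the entries are 1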

Inverted index
For each term t, we store a list of all documents that contain t.

Brutus    → 1 2 4 11 31 45 173 174
Caesar    → 1 2 4 5 6 16 57 132 ...
Calpurnia → 2 31 54 101

The terms on the left make up the dictionary; the sorted docID lists on the right are the postings.
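In code, the natural representation is a mapping from each term to its sorted list of docIDs. A minimal sketch in Python, with the docIDs taken from the slide above:

index = {
    "Brutus":    [1, 2, 4, 11, 31, 45, 173, 174],
    "Caesar":    [1, 2, 4, 5, 6, 16, 57, 132],
    "Calpurnia": [2, 31, 54, 101],
}
# the keys are the dictionary; the value lists are the postings
print(index["Calpurnia"])  # [2, 31, 54, 101]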

Inverted index construction
1. Collect the documents to be indexed: Friends, Romans, countrymen. So let it be with Caesar ...
2. Tokenize the text, turning each document into a list of tokens: Friends Romans countrymen So ...
3. Do linguistic preprocessing, producing a list of normalized tokens, which are the indexing terms: friend roman countryman so ...
4. Index the documents that each term occurs in by creating an inverted index, consisting of a dictionary and postings.
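A compact sketch of steps 1-4 in Python. Real systems do genuine tokenization and linguistic preprocessing; here simple punctuation stripping and lowercasing stand in for both:

from collections import defaultdict

docs = {
    1: "I did enact Julius Caesar: I was killed i' the Capitol; Brutus killed me.",
    2: "So let it be with Caesar. The noble Brutus hath told you Caesar was ambitious:",
}

def normalize(text):
    # steps 2 and 3: split on whitespace, strip punctuation, lowercase
    return [tok.strip(".,:;!?'").lower() for tok in text.split()]

postings = defaultdict(set)           # term -> set of docIDs
for doc_id, text in docs.items():
    for term in normalize(text):      # step 4: record every occurrence
        postings[term].add(doc_id)

index = {term: sorted(ids) for term, ids in sorted(postings.items())}
print(index["brutus"])     # [1, 2]
print(index["caesar"])     # [1, 2]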

Tokenization and preprocessing
Doc 1. I did enact Julius Caesar: I was killed i' the Capitol; Brutus killed me.
Doc 2. So let it be with Caesar. The noble Brutus hath told you Caesar was ambitious:
⇒
Doc 1. i did enact julius caesar i was killed i the capitol brutus killed me
Doc 2. so let it be with caesar the noble brutus hath told you caesar was ambitious

Generate postings
From the preprocessed documents, emit one (term, docID) pair per token, in document order:
i 1, did 1, enact 1, julius 1, caesar 1, i 1, was 1, killed 1, i 1, the 1, capitol 1, brutus 1, killed 1, me 1, so 2, let 2, it 2, be 2, with 2, caesar 2, the 2, noble 2, brutus 2, hath 2, told 2, you 2, caesar 2, was 2, ambitious 2

Sort postings
Sort the (term, docID) pairs by term (and then by docID):
ambitious 2, be 2, brutus 1, brutus 2, capitol 1, caesar 1, caesar 2, caesar 2, did 1, enact 1, hath 2, i 1, i 1, i 1, it 2, julius 1, killed 1, killed 1, let 2, me 1, noble 2, so 2, the 1, the 2, told 2, you 2, was 1, was 2, with 2

Create postings lists, determine document frequency
Group the sorted pairs by term; the number of distinct documents a term occurs in is its document frequency:

term       doc. freq.  postings list
ambitious      1       2
be             1       2
brutus         2       1 2
capitol        1       1
caesar         2       1 2
did            1       1
enact          1       1
hath           1       2
i              1       1
it             1       2
julius         1       1
killed         1       1
let            1       2
me             1       1
noble          1       2
so             1       2
the            2       1 2
told           1       2
you            1       2
was            2       1 2
with           1       2
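A sketch of the sort-and-group step in Python; for brevity the pair list is an excerpt of the full 29 pairs above:

from itertools import groupby

pairs = [("i", 1), ("did", 1), ("caesar", 1), ("brutus", 1),
         ("caesar", 2), ("brutus", 2), ("caesar", 2), ("ambitious", 2)]

pairs.sort()                                  # sort by term, then by docID
dictionary = {}
for term, group in groupby(pairs, key=lambda p: p[0]):
    docs = sorted({doc_id for _, doc_id in group})   # deduplicate docIDs
    dictionary[term] = (len(docs), docs)             # (doc freq, postings)

print(dictionary["caesar"])  # (2, [1, 2])
print(dictionary["brutus"])  # (2, [1, 2])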

Split the result into dictionary and postings file

Brutus    → 1 2 4 11 31 45 173 174
Caesar    → 1 2 4 5 6 16 57 132 ...
Calpurnia → 2 31 54 101

Dictionary on the left, postings file on the right.

3. Query processing

Simple conjunctive query (two terms)
Consider the query: Brutus AND Calpurnia
To find all matching documents using the inverted index:
1. Locate Brutus in the dictionary
2. Retrieve its postings list from the postings file
3. Locate Calpurnia in the dictionary
4. Retrieve its postings list from the postings file
5. Intersect the two postings lists
6. Return the intersection to the user

Intersecting two postings lists

Brutus    → 1 2 4 11 31 45 173 174
Calpurnia → 2 31 54 101
Intersection ⇒ 2 31

Walking both lists in parallel, this is linear in the total length of the postings lists.
Note: this only works if postings lists are sorted.

Intersecting two postings lists

Intersect(p1, p2)
  answer ← ⟨⟩
  while p1 ≠ nil and p2 ≠ nil
    do if docID(p1) = docID(p2)
         then Add(answer, docID(p1))
              p1 ← next(p1)
              p2 ← next(p2)
         else if docID(p1) < docID(p2)
                then p1 ← next(p1)
                else p2 ← next(p2)
  return answer
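The same merge as runnable Python, as a sketch (postings are plain sorted lists rather than linked lists):

def intersect(p1, p2):
    """Merge two sorted postings lists in O(len(p1) + len(p2)) time."""
    answer = []
    i = j = 0
    while i < len(p1) and j < len(p2):
        if p1[i] == p2[j]:
            answer.append(p1[i])   # docID in both lists: keep it
            i += 1
            j += 1
        elif p1[i] < p2[j]:
            i += 1                 # advance the list with the smaller docID
        else:
            j += 1
    return answer

print(intersect([1, 2, 4, 11, 31, 45, 173, 174], [2, 31, 54, 101]))  # [2, 31]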

Query processing: Exercise

france → 1 2 3 4 5 7 8 9 11 12 13 14 15
paris  → 2 6 10 12 14
lear   → 12 15

Compute the hit list for ((paris AND NOT france) OR lear)
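One way to check your answer, using Python sets as a stand-in for the postings merges:

france = {1, 2, 3, 4, 5, 7, 8, 9, 11, 12, 13, 14, 15}
paris = {2, 6, 10, 12, 14}
lear = {12, 15}

# AND NOT is set difference; OR is union
print(sorted((paris - france) | lear))  # the hit list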

Boolean retrieval model: Assessment
The Boolean retrieval model can answer any query that is a Boolean expression.
- Boolean queries use AND, OR and NOT to join query terms.
- It views each document as a set of terms.
- It is precise: a document either matches the condition or it does not.
Boolean retrieval was the primary commercial retrieval tool for three decades, and many professional searchers (e.g., lawyers) still like Boolean queries: you know exactly what you are getting. Many search systems you use are also Boolean: Spotlight, email clients, intranet search, etc.

Commercially successful Boolean retrieval: Westlaw
- Largest commercial legal search service in terms of the number of paying subscribers
- Over half a million subscribers performing millions of searches a day over tens of terabytes of text data
- The service was started in 1975.
- In 2005, Boolean search (called "Terms and Connectors" by Westlaw) was still the default and was used by a large percentage of users, although ranked retrieval has been available since 1992.

Westlaw: Example queries
Information need: Information on the legal theories involved in preventing the disclosure of trade secrets by employees formerly employed by a competing company.
Query: trade secret /s disclos! /s prevent /s employe!

Westlaw: Example queries
Information need: Requirements for disabled people to be able to access a workplace.
Query: disab! /p access! /s work-site work-place (employment /3 place)

Westlaw: Example queries
Information need: Cases about a host's responsibility for drunk guests.
Query: host! /p (responsib! liab!) /p (intoxicat! drunk!) /p guest

Westlaw: Comments
- Proximity operators: /3 = within 3 words, /s = within a sentence, /p = within a paragraph
- Space is disjunction, not conjunction! (This was the default in search pre-Google.)
- Long, precise queries: incrementally developed, not like web search
- Why professional searchers often like Boolean search: precision, transparency, control
- When are Boolean queries the best way of searching? It depends on the information need, the searcher, the document collection, ...

4. Query optimization

Consider a query that is an AND of n terms, n > 2. For each of the terms, get its postings list, then AND them together.
Example query: Brutus AND Calpurnia AND Caesar
What is the best order for processing this query?

Example query: Brutus AND Calpurnia AND Caesar
Simple and effective optimization: process terms in order of increasing document frequency. Start with the shortest postings list, then keep cutting further. In this example: first Caesar, then Calpurnia, then Brutus.

Brutus    → 1 2 4 11 31 45 173 174
Calpurnia → 2 31 54 101
Caesar    → 5 31

Optimized intersection algorithm for conjunctive queries

Intersect(⟨t1, ..., tn⟩)
  terms ← SortByIncreasingFrequency(⟨t1, ..., tn⟩)
  result ← postings(first(terms))
  terms ← rest(terms)
  while terms ≠ nil and result ≠ nil
    do result ← Intersect(result, postings(first(terms)))
       terms ← rest(terms)
  return result
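A Python sketch of the optimized algorithm, reusing intersect() from the earlier sketch and using postings-list length as the document frequency:

def intersect_many(index, terms):
    # process the rarest term first so intermediate results stay small
    ordered = sorted(terms, key=lambda t: len(index[t]))
    result = index[ordered[0]]
    for term in ordered[1:]:
        if not result:
            break              # early exit: the intersection is already empty
        result = intersect(result, index[term])
    return result

index = {
    "Brutus":    [1, 2, 4, 11, 31, 45, 173, 174],
    "Calpurnia": [2, 31, 54, 101],
    "Caesar":    [5, 31],
}
print(intersect_many(index, ["Brutus", "Calpurnia", "Caesar"]))  # [31]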

More general optimization
Example query: (madding OR crowd) AND (ignoble OR strife)
- Get the document frequencies for all terms.
- Estimate the size of each OR by the sum of its frequencies (a conservative upper bound).
- Process the query in increasing order of these estimated OR sizes.
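A sketch of this estimate in Python. The document frequencies here are invented purely for illustration:

df = {"madding": 10, "crowd": 250, "ignoble": 5, "strife": 40}

disjuncts = [("madding", "crowd"), ("ignoble", "strife")]
# upper-bound each OR by the sum of its terms' document frequencies,
# then process the disjunct with the smallest estimate first
order = sorted(disjuncts, key=lambda pair: sum(df[t] for t in pair))
print(order)  # [('ignoble', 'strife'), ('madding', 'crowd')]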

Advantages and disadvantages?
Advantages:
- Easy for the system
- Users get transparency: it is easy to understand why a document was or was not retrieved
- Users get control: it is easy to determine whether the query is too specific (few results) or too broad (many results)
Disadvantages:
- The burden is on the user to formulate a good Boolean query

A search engine envisioned in 1945
The memex (memory extender) is the hypothetical proto-hypertext system that Vannevar Bush described in his 1945 Atlantic Monthly article "As We May Think". Bush envisioned the memex as a device in which individuals would compress and store all of their books, records, and communications, "mechanized so that it may be consulted with exceeding speed and flexibility". The memex would provide "an enlarged intimate supplement to one's memory". The concept of the memex influenced the development of early hypertext systems, eventually leading to the creation of the World Wide Web. However, the memex used a form of bookmark list of static microfilm pages, rather than a true hypertext system in which parts of pages would have internal structure beyond the common textual format.
