Query Languages

Boolean Queries
Keywords combined with Boolean operators:
- OR: (e1 OR e2)
- AND: (e1 AND e2)
- BUT: (e1 BUT e2): satisfy e1 but not e2
Negation is only allowed using BUT, which permits efficient use of an inverted index by filtering with another efficiently retrievable set. Naïve users have trouble with Boolean logic.

Boolean Retrieval with Inverted Indices
- Primitive keyword: retrieve the containing documents using the inverted index.
- OR: recursively retrieve e1 and e2 and take the union of the results.
- AND: recursively retrieve e1 and e2 and take the intersection of the results.
- BUT: recursively retrieve e1 and e2 and take the set difference of the results.
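The recursive evaluation above can be sketched with Python sets standing in for inverted-index posting lists (the index contents and query below are made up for illustration):

```python
# Toy inverted index: each term maps to the set of doc IDs containing it.
index = {
    "nuclear": {1, 2, 4},
    "fusion":  {2, 4, 5},
    "fission": {1, 3},
}

def retrieve(term):
    """Primitive keyword: look up the term's posting set."""
    return index.get(term, set())

def OR(s1, s2):   # union of result sets
    return s1 | s2

def AND(s1, s2):  # intersection of result sets
    return s1 & s2

def BUT(s1, s2):  # set difference: satisfy e1 but not e2
    return s1 - s2

# Evaluate (nuclear AND fusion) BUT fission
result = BUT(AND(retrieve("nuclear"), retrieve("fusion")), retrieve("fission"))
print(sorted(result))  # [2, 4]
```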

Natural Language Queries
Full-text queries given as arbitrary strings. Typically just treated as a bag of words for a vector-space model and processed using standard vector-space retrieval methods.

Phrasal Queries
Retrieve documents containing a specific phrase (an ordered list of contiguous words), e.g. "information theory". May allow intervening stop words and/or stemming, so that "buy camera" matches "buy a camera", "buying the cameras", etc.

Phrasal Retrieval with Inverted Indices
Requires an inverted index that also stores the positions of each keyword in a document. Retrieve documents and positions for each individual word, intersect the document sets, and finally check for ordered contiguity of the keyword positions. It is best to start the contiguity check with the least common word in the phrase.

Phrasal Search
- Find the set of documents D in which all keywords k1 ... km of the phrase occur (using AND query processing).
- Initialize an empty set R of retrieved documents.
- For each document d in D:
  - Get the array Pi of positions of occurrences of each ki in d.
  - Find the shortest array Ps among the Pi's.
  - For each position p of keyword ks in Ps:
    - For each keyword ki except ks, use binary search to look for position (p - s + i) in the array Pi.
    - If the correct position is found for every keyword, add d to R.
- Return R.
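The steps above can be sketched in Python over a toy positional index (the index data and helper names here are illustrative, not from any real system):

```python
import bisect

# Positional inverted index: term -> {doc_id: sorted list of word positions}.
pos_index = {
    "information": {1: [0, 7], 2: [3]},
    "theory":      {1: [1, 9], 2: [10]},
}

def _contains(sorted_positions, pos):
    """Binary search: is pos present in the sorted position list?"""
    j = bisect.bisect_left(sorted_positions, pos)
    return j < len(sorted_positions) and sorted_positions[j] == pos

def phrase_search(keywords):
    # D: documents containing every keyword (AND query processing).
    docs = set.intersection(*(set(pos_index[k]) for k in keywords))
    result = set()
    for d in docs:
        plists = [pos_index[k][d] for k in keywords]
        # Start the contiguity check from the keyword with the
        # shortest position list (the least common word).
        s = min(range(len(plists)), key=lambda i: len(plists[i]))
        for p in plists[s]:
            # If keyword k_s is at position p, keyword k_i must be at p - s + i.
            if all(_contains(plists[i], p - s + i)
                   for i in range(len(keywords)) if i != s):
                result.add(d)
                break
    return result

print(phrase_search(["information", "theory"]))  # {1}
```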

Proximity Queries
A list of words with specified maximal distance constraints between terms. Example: "dogs" and "race" within 4 words matches "...dogs will begin the race...". May also perform stemming and/or not count stop words.

Proximity Retrieval with Inverted Index
Use an approach similar to phrasal search to find documents in which all keywords occur in a context that satisfies the proximity constraints. During the binary search for the positions of the remaining keywords, find the closest position of ki to p and check that it is within the maximum allowed distance.
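A minimal sketch of this idea for a two-keyword proximity query, again over a made-up positional index (the closest occurrence is located with a binary search, then checked against the distance bound):

```python
import bisect

# Toy positional index (illustrative data only).
pos_index = {
    "dogs": {1: [0], 2: [5]},
    "race": {1: [4], 2: [20]},
}

def proximity_search(k1, k2, max_dist):
    """Docs where some occurrence of k1 is within max_dist words of k2."""
    docs = set(pos_index[k1]) & set(pos_index[k2])
    hits = set()
    for d in docs:
        p2 = pos_index[k2][d]
        for p in pos_index[k1][d]:
            # Binary search for the occurrence of k2 closest to p:
            # it is either p2[j-1] or p2[j].
            j = bisect.bisect_left(p2, p)
            for cand in p2[max(0, j - 1):j + 1]:
                if abs(cand - p) <= max_dist:
                    hits.add(d)
    return hits

print(proximity_search("dogs", "race", 4))  # {1}
```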

Pattern Matching
Allow queries that match strings rather than word tokens. This requires more sophisticated data structures and algorithms than inverted indices to retrieve efficiently.

Simple Patterns
- Prefixes: pattern that matches the start of a word. "anti" matches "antiquity", "antibody", etc.
- Suffixes: pattern that matches the end of a word. "ix" matches "fix", "matrix", etc.
- Substrings: pattern that matches a contiguous sequence of characters anywhere in a word. "rapt" matches "enrapture", "velociraptor", etc.
- Ranges: a pair of strings that matches any word lexicographically (alphabetically) between them. "tin" to "tix" matches "tip", "tire", "title", etc.

Allowing Errors
What if the query or document contains typos or misspellings? Judge the similarity of words (or arbitrary strings) using:
- Edit distance (Levenshtein distance)
- Longest Common Subsequence (LCS)
Allow proximity search with a bound on string similarity.

Edit (Levenshtein) Distance
The minimum number of character deletions, insertions, or replacements needed to make two strings equivalent.
- "misspell" to "mispell" is distance 1
- "misspell" to "mistell" is distance 2
- "misspell" to "misspelling" is distance 3
Can be computed efficiently using dynamic programming in O(mn) time, where m and n are the lengths of the two strings being compared.
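The O(mn) dynamic program can be sketched as follows (a standard textbook formulation, reproducing the slide's examples):

```python
def edit_distance(a, b):
    """Levenshtein distance via dynamic programming, O(m*n) time."""
    m, n = len(a), len(b)
    # dp[i][j] = edit distance between a[:i] and b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i          # i deletions
    for j in range(n + 1):
        dp[0][j] = j          # j insertions
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # replacement (or match)
    return dp[m][n]

print(edit_distance("misspell", "mispell"))      # 1
print(edit_distance("misspell", "misspelling"))  # 3
```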

Longest Common Subsequence (LCS)
The length of the longest subsequence of characters shared by two strings, where a subsequence of a string is obtained by deleting zero or more characters. Examples:
- "misspell" and "mispell": LCS length 7 ("mispell")
- "misspelled" and "misinterpretted": LCS length 7 (m-i-s-p-e-e-d)
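LCS length has the same O(mn) dynamic-programming shape as edit distance; a minimal sketch:

```python
def lcs_length(a, b):
    """Length of the longest common subsequence, O(m*n) dynamic programming."""
    m, n = len(a), len(b)
    # dp[i][j] = LCS length of a[:i] and b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1   # extend a shared subsequence
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

print(lcs_length("misspell", "mispell"))            # 7
print(lcs_length("misspelled", "misinterpretted"))  # 7
```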

Regular Expressions
A language for composing complex patterns from simpler ones.
- An individual character is a regex.
- Union: if e1 and e2 are regexes, then (e1 | e2) is a regex that matches whatever either e1 or e2 matches.
- Concatenation: if e1 and e2 are regexes, then e1 e2 is a regex that matches a string consisting of a substring that matches e1 immediately followed by a substring that matches e2.
- Repetition (Kleene closure): if e1 is a regex, then e1* is a regex that matches a sequence of zero or more strings that match e1.

Regular Expression Examples
- (u|e)nabl(e|ing) matches "unable", "unabling", "enable", "enabling"
- (un|en)*able matches "able", "unable", "unenable", "enununenable"

Enhanced Regexes (Perl)
- Special terms for common sets of characters, such as alphabetic, numeric, or a general wildcard.
- Special repetition operator (+) for 1 or more occurrences.
- Special optional operator (?) for 0 or 1 occurrences.
- Special repetition operator for a specific range of occurrences: {min,max}.
  - A{1,5}: one to five A's.
  - A{5,}: five or more A's.
  - A{5}: exactly five A's.

Perl Regexes
Character classes:
- \w (word char): any alphanumeric (negation: \W)
- \d (digit char): any digit (negation: \D)
- \s (space char): any whitespace (negation: \S)
- . (wildcard): anything
Anchor points:
- \b: word boundary
- ^: beginning of string
- $: end of string

Perl Regex Examples
- U.S. phone number with optional area code: /\b(\(\d{3}\)\s?)?\d{3}-\d{4}\b/
- Email address: /\b\S+@\S+(\.com|\.edu|\.gov|\.org|\.net)\b/
Note: packages are available to support Perl regexes in Java.
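These patterns can be tried directly in Python, whose re module supports the same \d, \s, \S, \b, and {m,n} syntax (the email pattern here uses \S, non-whitespace, which appears to be the intent of the slide; the sample strings are made up):

```python
import re

# U.S. phone number with optional area code.
phone = re.compile(r"\b(\(\d{3}\)\s?)?\d{3}-\d{4}\b")
print(bool(phone.search("Call (512) 555-1234 today")))  # True
print(bool(phone.search("Call 555-1234 today")))        # True

# Email address restricted to a few common top-level domains.
email = re.compile(r"\b\S+@\S+(\.com|\.edu|\.gov|\.org|\.net)\b")
print(bool(email.search("mail me at jdoe@cs.example.edu please")))  # True
print(bool(email.search("no address here")))                        # False
```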

Structural Queries
Assumes documents have structure that can be exploited in search. The structure could be:
- A fixed set of fields, e.g. title, author, abstract, etc.
- A hierarchical (recursive) tree structure, e.g.:
  book
    chapter
      title
      section
        title
        subsection
    chapter
      title
      section

Queries with Structure
Allow queries for text appearing in specific fields, e.g. "nuclear fusion" appearing in a chapter title. SFQL is a relational database query language (SQL) enhanced with full-text search:

Select abstract from journal.papers
where author contains "Teller"
and title contains "nuclear fusion"
and date < 1/1/1950

Query Operations: Relevance Feedback & Query Expansion

Relevance Feedback
After the initial retrieval results are presented, allow the user to provide feedback on the relevance of one or more of the retrieved documents. Use this feedback to reformulate the query, and produce new results based on the reformulated query. This allows a more interactive, multi-pass process.

Relevance Feedback Architecture
[Diagram: the query string is run by the IR system against the document corpus to produce ranked documents; the user's relevance feedback on those documents drives query reformulation, and the revised query is re-run to produce re-ranked documents.]

Query Reformulation
Revise the query to account for the feedback:
- Query expansion: add new terms to the query from relevant documents.
- Term reweighting: increase the weight of terms in relevant documents and decrease the weight of terms in irrelevant documents.
There are several algorithms for query reformulation.

Query Reformulation for VSR
Change the query vector using vector algebra: add the vectors for the relevant documents to the query vector, and subtract the vectors for the irrelevant documents from it. This adds both positively and negatively weighted terms to the query, as well as reweighting the initial terms.

Optimal Query
Assume that the relevant set of documents Cr is known. Then the best query, which ranks all and only the relevant documents at the top, is:

q_opt = (1/|Cr|) Σ_{dj ∈ Cr} dj − (1/(N − |Cr|)) Σ_{dj ∉ Cr} dj

where N is the total number of documents.

Standard Rocchio Method
Since the full set of relevant documents is unknown, use only the known relevant (Dr) and irrelevant (Dn) sets of documents, and include the initial query q:

q_m = α·q + (β/|Dr|) Σ_{dj ∈ Dr} dj − (γ/|Dn|) Σ_{dj ∈ Dn} dj

where α is a tunable weight for the initial query, β a tunable weight for relevant documents, and γ a tunable weight for irrelevant documents.

Ide Regular Method
Since more feedback should perhaps increase the degree of reformulation, do not normalize for the amount of feedback:

q_m = α·q + β Σ_{dj ∈ Dr} dj − γ Σ_{dj ∈ Dn} dj

with α, β, γ tunable weights for the initial query, relevant documents, and irrelevant documents, respectively.

Ide "Dec Hi" Method
Bias towards rejecting just the highest ranked of the irrelevant documents:

q_m = α·q + β Σ_{dj ∈ Dr} dj − γ·max_nonrelevant(dj)

where max_nonrelevant(dj) is the highest-ranked irrelevant document, and α, β, γ are tunable weights for the initial query, relevant documents, and the irrelevant document.
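The standard Rocchio update can be sketched with NumPy vectors over a toy three-term vocabulary (the data is made up; clipping negative term weights to zero is a common practical choice, an assumption here rather than something the slides specify):

```python
import numpy as np

def rocchio(q, rel, nonrel, alpha=1.0, beta=1.0, gamma=1.0):
    """Standard Rocchio: q_m = a*q + (b/|Dr|) sum(rel) - (g/|Dn|) sum(nonrel).
    Negative weights are clipped to zero (a common practical choice)."""
    q_m = alpha * np.asarray(q, dtype=float)
    if len(rel):
        q_m += beta * np.mean(rel, axis=0)    # centroid of relevant docs
    if len(nonrel):
        q_m -= gamma * np.mean(nonrel, axis=0)  # centroid of irrelevant docs
    return np.maximum(q_m, 0.0)

# Toy vocabulary: [apple, computer, fruit]
q = [1.0, 1.0, 0.0]
rel = np.array([[1.0, 1.0, 0.0], [0.5, 1.0, 0.0]])   # marked relevant
nonrel = np.array([[1.0, 0.0, 1.0]])                  # marked irrelevant
print(rocchio(q, rel, nonrel))  # "fruit" weight is pushed down and clipped
```

With all tunable constants set to 1, as the next slide suggests is common, the update simply moves the query toward the relevant centroid and away from the irrelevant one.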

Comparison of Methods
Overall, experimental results indicate no clear preference for any one of the specific methods. All methods generally improve retrieval performance (recall and precision) with feedback. The tunable constants are generally just set equal to 1.

Evaluating Relevance Feedback
By construction, the reformulated query will rank explicitly marked relevant documents higher and explicitly marked irrelevant documents lower. The method should not get credit for improvement on these documents, since it was told their relevance; in machine learning, this error is called testing on the training data. Evaluation should instead focus on generalization to other, unrated documents.

Fair Evaluation of Relevance Feedback
Remove from the corpus any documents for which feedback was provided, and measure recall/precision performance on the remaining residual collection. Compared to the complete corpus, the specific recall/precision numbers may decrease, since relevant documents were removed; however, relative performance on the residual collection provides fair data on the effectiveness of relevance feedback.

Why is Feedback Not Widely Used?
- Users are sometimes reluctant to provide explicit feedback.
- It results in long queries that require more computation to retrieve, and search engines process many queries with little time for each one.
- It makes it harder to understand why a particular document was retrieved.

Pseudo Feedback
Use relevance feedback methods without explicit user input: just assume the top m retrieved documents are relevant, and use them to reformulate the query. This allows query expansion to include terms that are correlated with the query terms.
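A minimal sketch of pseudo (blind) feedback, using only the relevant side of a Rocchio-style update on the assumed-relevant top-m documents (all vectors and names are illustrative):

```python
import numpy as np

def pseudo_feedback(q, doc_vectors, ranking, m=2, beta=0.5):
    """Assume the top-m ranked docs are relevant and fold them into the query."""
    top = doc_vectors[ranking[:m]]       # top-m documents, assumed relevant
    return q + beta * top.mean(axis=0)   # expand/reweight toward their centroid

docs = np.array([[1.0, 0.0, 0.0],
                 [0.8, 0.2, 0.0],
                 [0.0, 0.0, 1.0]])
q = np.array([1.0, 0.0, 0.0])
ranking = np.array([0, 1, 2])            # order from the initial retrieval
q_new = pseudo_feedback(q, docs, ranking)
print(q_new)  # the second term gains weight from the top-ranked documents
```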

Pseudo Feedback Architecture
[Diagram: the same pipeline as the relevance feedback architecture, except that the user's feedback step is replaced by pseudo feedback: the top-ranked documents are assumed relevant and fed to query reformulation, producing a revised query and re-ranked documents.]

Pseudo-Feedback Results
Found to improve performance on the TREC competition's ad-hoc retrieval task. Works even better if the top documents must also satisfy additional Boolean constraints in order to be used in feedback.

Thesaurus
A thesaurus provides information on synonyms and semantically related words and phrases. Example:
physician
  syn: croaker, doc, doctor, MD, medical, mediciner, medico, sawbones
  rel: medic, general practitioner, surgeon

Thesaurus-based Query Expansion
For each term t in a query, expand the query with synonyms and related words of t from the thesaurus. Added terms may be weighted less than the original query terms. This generally increases recall, but may significantly decrease precision, particularly with ambiguous terms: "interest rate" might expand to "interest rate fascinate evaluate".
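The expansion step can be sketched with a toy hand-built thesaurus, down-weighting the added terms as the slide suggests (the thesaurus entries and weights are made up for illustration):

```python
# Toy thesaurus: term -> list of synonyms/related words.
thesaurus = {
    "physician": ["doctor", "doc", "medico"],
    "rate": ["charge", "evaluate"],
}

def expand(query_terms, syn_weight=0.5):
    """Expand a query with thesaurus synonyms at reduced weight."""
    weighted = {t: 1.0 for t in query_terms}   # original terms keep weight 1
    for t in query_terms:
        for syn in thesaurus.get(t, []):
            # Added terms are weighted less than original query terms;
            # setdefault keeps weight 1.0 if the synonym was already a query term.
            weighted.setdefault(syn, syn_weight)
    return weighted

print(expand(["physician", "rate"]))
```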

WordNet
A more detailed database of semantic relationships between English words, developed by the famous cognitive psychologist George Miller and a team at Princeton University. It covers about 152,059 English words; nouns, adjectives, verbs, and adverbs are grouped into about 115,424 synonym sets called synsets.

WordNet Synset Relationships
- Antonym: front ↔ back
- Attribute: benevolence → good (noun to adjective)
- Pertainym: alphabetical → alphabet (adjective to noun)
- Similar: unquestioning → absolute
- Cause: kill → die
- Entailment: breathe → inhale
- Holonym: chapter → text (part-of)
- Meronym: computer → cpu (whole-of)
- Hyponym: tree → plant (specialization)
- Hypernym: fruit → apple (generalization)

WordNet Query Expansion
- Add synonyms in the same synset.
- Add hyponyms to add specialized terms.
- Add hypernyms to generalize a query.
- Add other related terms to expand the query.

Statistical Thesaurus
Existing human-developed thesauri are not easily available in all languages, and they are limited in the type and range of synonymy and semantic relations they represent. Semantically related terms can instead be discovered from statistical analysis of corpora.

Automatic Global Analysis
Determine term similarity through a precomputed statistical analysis of the complete corpus. Compute association matrices that quantify term correlations in terms of how frequently terms co-occur, then expand queries with the statistically most similar terms.

Association Matrix
An n×n matrix of correlation factors c_ij between terms w1 ... wn, where

c_ij = Σ_{dk ∈ D} f_ik · f_jk

and f_ik is the frequency of term i in document k; c_ij thus measures how often terms i and j co-occur across the corpus.

Normalized Association Matrix
A frequency-based correlation factor favors more frequent terms, so normalize the association scores:

s_ij = c_ij / (c_ii + c_jj − c_ij)

The normalized score is 1 if two terms have the same frequency in all documents.
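Both matrices can be computed in a few lines of NumPy from a term-document frequency matrix (the frequencies below are made up; terms 0 and 1 are given identical frequency profiles to exhibit the normalized score of 1):

```python
import numpy as np

# Term-document frequency matrix F: rows = terms, columns = documents.
F = np.array([[2, 0, 1],    # term 0
              [2, 0, 1],    # term 1: same frequencies as term 0
              [0, 3, 0]])   # term 2: never co-occurs with terms 0 and 1

# Association matrix: c_ij = sum_k f_ik * f_jk, i.e. C = F F^T.
C = F @ F.T

# Normalized association matrix: s_ij = c_ij / (c_ii + c_jj - c_ij).
S = C / (np.diag(C)[:, None] + np.diag(C)[None, :] - C)

print(S[0, 1])  # 1.0 -- identical frequency profiles give score 1
print(S[0, 2])  # 0.0 -- terms that never co-occur give score 0
```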

Metric Correlation Matrix
Association correlation does not account for the proximity of terms in documents, just co-occurrence frequencies within documents. Metric correlations account for term proximity:

c_ij = Σ_{ku ∈ Vi} Σ_{kv ∈ Vj} 1 / r(ku, kv)

where Vi is the set of all occurrences of term i in any document, and r(ku, kv) is the distance in words between word occurrences ku and kv (∞ if ku and kv are occurrences in different documents).

Normalized Metric Correlation Matrix
Normalize the scores to account for term frequencies:

s_ij = c_ij / (|Vi| · |Vj|)

Query Expansion with Correlation Matrix
For each term i in the query, expand the query with the n terms j having the highest values of c_ij (or s_ij). This adds semantically related terms from the neighborhood of the query terms.

Problems with Global Analysis
Term ambiguity may introduce irrelevant statistically correlated terms: "Apple computer" might expand to "Apple red fruit computer". Also, since the expansion terms are highly correlated with the query terms anyway, expansion may not retrieve many additional documents.

Automatic Local Analysis
At query time, dynamically determine similar terms based on an analysis of the top-ranked retrieved documents; that is, base the correlation analysis only on the local set of documents retrieved for the specific query. This avoids ambiguity by determining similar (correlated) terms only within relevant documents: "Apple computer" expands to "Apple computer Powerbook laptop".

Global vs. Local Analysis
Global analysis requires intensive term correlation computation only once, at system development time. Local analysis requires intensive term correlation computation for every query at run time (although the numbers of terms and documents are smaller than in global analysis). But local analysis gives better results.

Global Analysis Refinements
Only expand the query with terms that are similar to all terms in the query, using

sim(ki, Q) = Σ_{kj ∈ Q} c_ij

For example, "fruit" is not added to "Apple computer" since it is far from "computer", but "fruit" is added to "apple pie" since it is close to both "apple" and "pie". Also, use more sophisticated term weights (instead of just frequency) when computing term correlations.

Query Expansion Conclusions
Expansion of queries with related terms can improve performance, particularly recall. However, similar terms must be selected very carefully to avoid problems such as loss of precision.