Overview of Information Retrieval and Organization. CSC 575 Intelligent Information Retrieval



How much information? Google processes ~100 PB a day on 1+ million servers (an estimated 15-20 exabytes stored). The Wayback Machine holds 15+ PB and grows by 100+ TB/month. Facebook stores 300+ PB of user data and adds 600 TB/day. YouTube stores ~1000 PB of video and serves 4 billion views/day. CERN's Large Hadron Collider generates 15 PB a year. The NSA stores an estimated 2+ exabytes. "640K ought to be enough for anybody."

Information Overload. "The greatest problem of today is how to teach people to ignore the irrelevant, how to refuse to know things, before they are suffocated. For too many facts are as bad as none at all." (W.H. Auden)

Information Retrieval. Information Retrieval (IR) is finding material (usually documents) of an unstructured nature (usually text) that satisfies an information need from within large collections (usually stored on computers). The most prominent example: Web search engines.

Web Search System (diagram): a Web spider/crawler builds the document corpus; the user's query string goes to the IR system, which returns ranked documents (1. Page1, 2. Page2, 3. Page3, ...).

IR vs. Database Systems. IR emphasizes effective, efficient retrieval of unstructured (or semi-structured) data. IR systems typically have very simple schemas. Query languages emphasize free text and Boolean combinations of keywords. Matching is more complex than with structured data (the semantics is less obvious): it is easy to retrieve the wrong objects, so we need to measure the accuracy of retrieval. There is less focus on concurrency control and recovery (although update is still important).

IR on the Web vs. Classic IR. Input: the publicly accessible Web. Goal: retrieve high-quality pages that are relevant to the user's need, whether static (text, audio, images, etc.) or dynamically generated (mostly database access). What's different about the Web: heterogeneity, lack of stability, high duplication, high linkage, and lack of a quality standard.

Profile of Web Users. They make poor queries: short (about 2 terms on average), imprecise, with sub-optimal syntax (80% of queries use no operator). There is wide variance in needs, expectations, and knowledge of the domain. They are impatient: 85% look over only one result screen, and 78% of queries are never modified.

Web Search Systems. General-purpose search engines, either direct (Google, Yahoo, Bing, Ask) or meta-search (WebCrawler, Search.com, etc.). Hierarchical directories (Yahoo and other portals), with databases mostly built by hand. Specialized search engines. Personalized search agents. Social tagging systems.

Web Search by the Numbers.

Web Search by the Numbers. 91% of users say they find what they are looking for when using search engines; 73% stated that the information they found was trustworthy and accurate; 66% said that search engines are fair and provide unbiased information; 55% say that search engine results and quality have gotten better over time; 93% of online activities begin with a search engine; 39% of customers come from a search engine (Source: MarketingCharts). Over 100 billion searches are conducted each month globally, and 82.6% of internet users use search. 70% to 80% of users ignore paid search ads and focus on the free organic results (Source: UserCentric). 18% of all clicks on organic search results go to the number 1 position (Source: SlingShot SEO). Source: Pew Research.

Cognitive (Human) Aspects of IR. Satisfying an information need: types of information needs; specifying information needs (queries); the process of information access; search strategies; sensemaking. Relevance. Modeling the user.

Cognitive (Human) Aspects of IR. Three phases: asking of a question, construction of an answer, and assessment of the answer. These are part of an iterative process.

Question Asking. The person asking = the user, who is in a particular frame of mind, a cognitive state: aware of a gap in their knowledge, but perhaps unable to fully define this gap. Paradox of IR: if the user knew the question to ask, there would often be no work to do. "The need to describe that which you do not know in order to find it" (Roland Hjerppe). The query is the external expression of this ill-defined state.

Question Answering. Say the question answerer is human. Can they translate the user's ill-defined question into a better one? Do they know the answer themselves? Are they able to verbalize this answer? Will the user understand this verbalization? Can they provide the needed background? What if the answerer is a computer system?

Why Don't Users Get What They Want? Example: the user need is to get rid of mice in the basement. The user request (a translation problem): "What's the best way to trap mice?" The query to the IR system (subject to polysemy and synonymy): "mouse trap". The results: computer supplies, software, etc.

Assessing the Answer. How well does it answer the question? Is it a complete answer? Partial? Background information? Hints for further exploration? How relevant is it to the user? Relevance feedback: for each document retrieved, the user responds with a relevance assessment, either binary (+ or -) or a utility assessment (between 0 and 1).

Key Issues in the Information Lifecycle (diagram): information moves from active to semi-active to inactive states across stages of creation, utilization, retention/mining, and disposition, spanning activities such as authoring, modifying, using, creating, organizing, indexing, accessing, filtering, storing, retrieval, distribution, networking, searching, and discard.

Information Retrieval as a Process. Text representation (indexing): given a text document, identify the concepts that describe the content and how well they describe it. Representing the information need (query formulation): describe and refine information needs as explicit queries. Comparing representations (retrieval): compare text and query representations to determine which documents are potentially relevant. Evaluating retrieved text (feedback): present documents to the user and modify the query based on feedback.

Information Retrieval as a Process (diagram): the information need and the document objects each undergo representation, yielding a query and indexed objects; comparison produces retrieved objects, which are judged relevant or not, and evaluation/feedback flows back into the query.

Keyword Search. The simplest notion of relevance is that the query string appears verbatim in the document. A slightly less strict notion is that the words in the query appear frequently in the document, in any order (bag of words).
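As a concrete illustration of these two notions of keyword relevance, here is a minimal sketch (the document and query strings are invented for illustration, not taken from the course) that checks a verbatim phrase match and, separately, counts how often the query's words appear in any order:

```python
def verbatim_match(query, doc):
    """Strictest notion: the query string appears verbatim in the document."""
    return query.lower() in doc.lower()

def bag_of_words_score(query, doc):
    """Looser notion: count query words appearing anywhere in the document."""
    doc_words = doc.lower().split()
    return sum(doc_words.count(w) for w in query.lower().split())

doc = "the mouse trap snapped but the mouse escaped into the basement"
print(verbatim_match("mouse trap", doc))      # True: the exact phrase occurs
print(bag_of_words_score("trap mouse", doc))  # 3: 'trap' once + 'mouse' twice, order ignored
```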

Problems with Keywords. Keyword search may not retrieve relevant documents that use synonymous terms ("restaurant" vs. "café", "PRC" vs. "China"). It may also retrieve irrelevant documents that contain ambiguous terms: "bat" (baseball vs. mammal), "Apple" (company vs. fruit), "bit" (unit of data vs. act of biting).

Query Languages. A way to express the question (information need). Types: Boolean, natural language, stylized natural language, form-based (GUI), spoken-language interfaces, and others.

Ordering/Ranking of Retrieved Documents. The pure Boolean retrieval model has no ordering: the query is a Boolean expression that a document either satisfies or does not, e.g., information AND (retrieval OR organization). In practice, systems may order results chronologically or by the total number of hits on query terms. Most systems use best-match or fuzzy methods: vector-space models with tf.idf, probabilistic methods, PageRank. What about personalization?

Sec. 1.1. Example: Basic Retrieval Process. Which plays of Shakespeare contain the words Brutus AND Caesar but NOT Calpurnia? One could grep all of Shakespeare's plays for Brutus and Caesar, then strip out lines containing Calpurnia. Why is that not the answer? It is slow for large corpora, other operations (e.g., find the word Romans near countrymen) are not feasible, and it does not support ranked retrieval (returning the best documents), which is covered in later lectures.

Sec. 1.1. Term-Document Incidence Matrix (1 if the play contains the word, 0 otherwise):

Term       | Antony and Cleopatra | Julius Caesar | The Tempest | Hamlet | Othello | Macbeth
Antony     |          1           |       1       |      0      |   0    |    0    |    1
Brutus     |          1           |       1       |      0      |   1    |    0    |    0
Caesar     |          1           |       1       |      0      |   1    |    1    |    1
Calpurnia  |          0           |       1       |      0      |   0    |    0    |    0
Cleopatra  |          1           |       0       |      0      |   0    |    0    |    0
mercy      |          1           |       0       |      1      |   1    |    1    |    1
worser     |          1           |       0       |      1      |   1    |    1    |    0

Query: Brutus AND Caesar BUT NOT Calpurnia.

Sec. 1.1. Incidence Vectors: the Basic Boolean Retrieval Model. We have a 0/1 vector for each term. To answer the query, take the vectors for Brutus, Caesar, and Calpurnia (complemented) and bitwise AND them: 110100 AND 110111 AND 101111 = 100100. The more general vector-space model allows weights other than 1 and 0 for term occurrences and provides the ability to do partial matching with query keywords.
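A minimal sketch of this Boolean model over the toy Shakespeare matrix above; the play names and 0/1 values are copied from the slide, and the helper function is illustrative:

```python
# Boolean retrieval over the term-document incidence matrix from the slide
# (plays as columns, terms as 0/1 rows).
plays = ["Antony and Cleopatra", "Julius Caesar", "The Tempest",
         "Hamlet", "Othello", "Macbeth"]

incidence = {
    "Brutus":    [1, 1, 0, 1, 0, 0],
    "Caesar":    [1, 1, 0, 1, 1, 1],
    "Calpurnia": [0, 1, 0, 0, 0, 0],
}

def boolean_and_not(term_a, term_b, term_not):
    """Answer 'term_a AND term_b AND NOT term_not' by combining 0/1 vectors."""
    result = []
    for a, b, c in zip(incidence[term_a], incidence[term_b], incidence[term_not]):
        result.append(a & b & (1 - c))   # complement the NOT term, then AND
    return [play for play, bit in zip(plays, result) if bit]

print(boolean_and_not("Brutus", "Caesar", "Calpurnia"))
# ['Antony and Cleopatra', 'Hamlet'] -> matches the 100100 vector on the slide
```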

IR System Architecture (diagram): the user's need and feedback pass through the user interface and query operations; text operations produce a logical view of the text; indexing, coordinated by the text database manager, builds an inverted file over the text database; searching and ranking match the query against the index to produce retrieved and ranked documents.

IR System Components. Text operations form index words (tokens), including stopword removal and stemming. Indexing constructs an inverted index of word-to-document pointers. Searching retrieves documents that contain a given query token from the inverted index. Ranking scores all retrieved documents according to a relevance metric.

IR System Components (continued). The user interface manages interaction with the user: query input and document output, relevance feedback, and visualization of results. Query operations transform the query to improve retrieval: query expansion using a thesaurus, and query transformation using relevance feedback.

Sec. 1.1. Organization/Indexing Challenges. Consider N = 1 million documents, each with about 1000 words. At an average of 6 bytes/word (including spaces and punctuation), that is 6 GB of data in the documents. Say there are M = 500K distinct terms among these. A 500K x 1M matrix has half a trillion 0's and 1's, so practically we can't build the matrix. But it has no more than one billion 1's (why? because 1 million documents times at most 1000 words each gives at most 10^9 term occurrences), i.e., the matrix is extremely sparse. What's a better representation? We record only the 1 positions (a sparse matrix representation).

Sec. 1.2. Inverted Index. For each term t, we must store a list of all documents that contain t, identifying each by a docID (a document serial number). For example: Brutus -> 1, 2, 4, 11, 31, 45, 173, 174; Caesar -> 1, 2, 4, 5, 6, 16, 57, 132; Calpurnia -> 2, 31, 54, 101. What happens if the word Caesar is added to document 14? What about repeated words? More on inverted indexes later!
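A small sketch of how such postings lists answer a Boolean AND query, using the docIDs from the slide above. The linear merge-style intersection is the standard textbook approach for sorted postings lists; the function names here are my own:

```python
def intersect(p1, p2):
    """Intersect two sorted postings lists with a single linear merge pass."""
    answer, i, j = [], 0, 0
    while i < len(p1) and j < len(p2):
        if p1[i] == p2[j]:
            answer.append(p1[i])
            i += 1
            j += 1
        elif p1[i] < p2[j]:
            i += 1
        else:
            j += 1
    return answer

postings = {
    "Brutus":    [1, 2, 4, 11, 31, 45, 173, 174],
    "Caesar":    [1, 2, 4, 5, 6, 16, 57, 132],
    "Calpurnia": [2, 31, 54, 101],
}

# Brutus AND Caesar
both = intersect(postings["Brutus"], postings["Caesar"])
print(both)                                                  # [1, 2, 4]
# ... AND NOT Calpurnia: drop any doc that appears in Calpurnia's postings
print([d for d in both if d not in postings["Calpurnia"]])   # [1, 4]
```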

Sec. 1.2. Inverted Index Construction (diagram): documents to be indexed ("Friends, Romans, countrymen.") pass through a tokenizer to produce a token stream (Friends, Romans, Countrymen), then through linguistic modules to produce modified tokens (friend, roman, countryman), which the indexer turns into an inverted index with postings lists (e.g., friend -> 2, 4; roman -> 1, 2; countryman -> 13, 16).

Initial Stages of Text Processing. Tokenization: cut the character sequence into word tokens, dealing with cases like "John's" and "a state-of-the-art solution". Normalization: map text and query terms to the same form; you want U.S.A. and USA to match. Stemming: we may wish different forms of a root to match, e.g., authorize and authorization. Stop words: we may omit very common words (or not), e.g., the, a, to, of.
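A toy sketch of these stages feeding into index construction. It assumes a simple regex tokenizer, lowercase normalization, a crude suffix-stripping "stemmer", and a tiny stop-word list; all of these are illustrative stand-ins, not the course's actual components:

```python
import re
from collections import defaultdict

STOP_WORDS = {"the", "a", "to", "of", "and"}   # illustrative stop list

def tokenize(text):
    """Cut the character sequence into word tokens."""
    return re.findall(r"[A-Za-z']+", text)

def normalize(token):
    """Map tokens to a canonical form (here: lowercase, drop apostrophes)."""
    return token.lower().replace("'", "")

def stem(token):
    """Very crude illustrative stemmer: strip a few common suffixes."""
    for suffix in ("ization", "ation", "ing", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

def build_index(docs):
    """Build an inverted index: term -> sorted list of docIDs."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for tok in tokenize(text):
            term = stem(normalize(tok))
            if term and term not in STOP_WORDS:
                index[term].add(doc_id)
    return {term: sorted(ids) for term, ids in index.items()}

docs = {1: "Friends, Romans, countrymen.", 2: "Romans authorized the friend."}
print(build_index(docs))
```

A real system would use proper linguistic modules (e.g., a Porter-style stemmer) at the stemming step; the point here is only the pipeline order: tokenize, normalize, stem, drop stop words, then index.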

Some Features of Modern IR Systems: relevance ranking, natural-language (free-text) query capability, Boolean and proximity operators, term weighting, query formulation assistance, visual browsing interfaces, query by example, filtering, and distributed architecture.

Intelligent IR. Taking into account the meaning of the words used. Taking into account the context of the user's request. Adapting to the user based on direct or indirect feedback (search personalization). Taking into account the authority and quality of the source. Taking into account semantic relationships among objects (e.g., concept hierarchies, ontologies). Intelligent IR interfaces. Information filtering agents.

Other Intelligent IR Tasks: automated document categorization, automated document clustering, information filtering, information routing, recommending information or products, information extraction, information integration, question answering, and social network analysis.

Information System Evaluation. IR systems are often components of larger systems. We might evaluate several aspects: assistance in formulating queries, speed of retrieval, resources required, presentation of documents, and ability to find relevant documents. Evaluation is generally comparative (system A vs. system B, etc.). The most common evaluation: retrieval effectiveness.

Sec. 8.6.2. Measuring User Happiness. Issue: who is the user we are trying to make happy? It depends on the setting. Web search engine: the user finds what s/he wants and returns to the engine; we can measure the rate of return users, or whether the user completes a task (search as a means, not an end); see Russell, http://dmrussell.googlepages.com/jcdl-talk-june-2007-short.pdf. Web site: the user finds what s/he wants and/or buys, and selects search results; we might measure time to purchase, or the fraction of searchers who become buyers.

Sec. 8.1. Happiness: Elusive to Measure. The most common proxy: relevance of search results. But how do you measure relevance? Relevance measurement requires three elements: 1. a benchmark document collection; 2. a benchmark suite of queries; 3. a (usually binary) assessment of Relevant or Nonrelevant for each query and each document. There is some work on more-than-binary assessments, but this is not the standard.

Sec. 8.2. Standard Relevance Benchmarks. TREC: the National Institute of Standards and Technology (NIST) has run a large IR test bed for many years. Reuters and other benchmark document collections are also used. Retrieval tasks are specified, sometimes as queries. Human experts mark each document, for each query, as Relevant or Nonrelevant, or at least do so for the subset of documents that some system returned for that query.

Sec. 8.3. Unranked Retrieval Evaluation: Precision and Recall. Precision: the fraction of retrieved documents that are relevant = P(relevant | retrieved). Recall: the fraction of relevant documents that are retrieved = P(retrieved | relevant).

               Relevant   Nonrelevant
Retrieved      tp         fp
Not Retrieved  fn         tn

Precision P = tp / (tp + fp). Recall R = tp / (tp + fn).
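A small sketch of these definitions, treating the retrieved and relevant results as sets of docIDs (the example IDs are made up for illustration):

```python
def precision_recall(retrieved, relevant):
    """Compute precision and recall from sets of docIDs."""
    tp = len(retrieved & relevant)   # relevant docs that were retrieved
    fp = len(retrieved - relevant)   # retrieved but not relevant
    fn = len(relevant - retrieved)   # relevant but not retrieved
    precision = tp / (tp + fp) if retrieved else 0.0
    recall = tp / (tp + fn) if relevant else 0.0
    return precision, recall

retrieved = {1, 2, 5, 8}
relevant = {2, 5, 7, 9, 11}
print(precision_recall(retrieved, relevant))
# (0.5, 0.4): 2 of the 4 retrieved docs are relevant; 2 of the 5 relevant docs were retrieved
```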

Retrieved vs. Relevant Documents (Venn diagram of the retrieved and relevant document sets, illustrating high recall vs. high precision): Recall = |Retrieved ∩ Relevant| / |Relevant|.

Retrieved vs. Relevant Documents (same Venn diagram): Precision = |Retrieved ∩ Relevant| / |Retrieved|.

Sec. 8.4. Evaluating Ranked Results. The system can return any number of results. By taking various numbers of the top returned documents (levels of recall), the evaluator can produce a precision-recall curve. Averaging over queries: a precision-recall graph for one query isn't a very sensible thing to look at; you need to average performance over a whole set of queries.
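A sketch of how an evaluator can trace out precision-recall points for one query by cutting a ranked result list at increasing depths (the ranking and relevance judgments below are invented for illustration):

```python
def precision_recall_points(ranked, relevant):
    """Precision and recall after each rank cutoff k = 1..len(ranked)."""
    points, hits = [], 0
    for k, doc in enumerate(ranked, start=1):
        if doc in relevant:
            hits += 1
        points.append((hits / len(relevant), hits / k))   # (recall, precision)
    return points

ranked = [3, 7, 12, 5, 9, 21]   # system output, best first
relevant = {3, 5, 9}            # human relevance judgments
for recall, precision in precision_recall_points(ranked, relevant):
    print(f"recall={recall:.2f}  precision={precision:.2f}")
```

Averaging such per-query curves over many queries (for example by interpolating precision at standard recall levels) yields the averaged precision-recall curve the slide refers to.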

Precision/Recall Curves. There is a tradeoff between precision and recall, so we measure precision at different levels of recall. (Figure: points plotted with precision on the y-axis against recall on the x-axis.)

Precision/Recall Curves. It can be difficult to determine which of two hypothetical result sets is better. (Figure: two sets of points plotted on the same precision vs. recall axes.)

Sec. 8.3. Difficulties in Using Precision/Recall. We should average over large document-collection/query ensembles. We need human relevance assessments, and people aren't reliable assessors. Assessments have to be binary (what about nuanced assessments?). Results are heavily skewed by collection and authorship, and may not translate from one domain to another.