Modern Information Retrieval: Modelling
Saif Rababah
Introduction
- IR systems usually adopt index terms to process queries.
- An index term is:
  - a keyword or group of selected words, or
  - any word (more general).
- Stemming might be used, e.g. connecting, connection, connections all reduce to connect.
- An inverted file is built for the chosen index terms.
Introduction
- Indexing results in records like
  Doc12: napoleon, france, revolution, emperor
  or (weighted terms)
  Doc12: napoleon-8, france-6, revolution-4, emperor-7
- Finding all documents about Napoleon would involve looking at every document's index record (possibly thousands or millions).
- (Assume that Doc12 references another file which contains other details about the document.)
Introduction
- Instead, the information is inverted:
  napoleon: doc12, doc56, doc87, doc99
  or (weighted)
  napoleon: doc12-8, doc56-3, doc87-5, doc99-2
- The inverted file contains one record per index term.
- The inverted file is organised so that a given index term can be found quickly.
Introduction
- To find documents satisfying the query "napoleon and revolution":
  - use the inverted file to find the set of documents indexed by napoleon -> doc12, doc56, doc87, doc99
  - similarly find the set of documents indexed by revolution -> doc12, doc20, doc45, doc99
  - intersect these sets -> doc12, doc99
  - go to the document file to retrieve title, authors, location for these documents.
- A minimal sketch of this lookup is shown below.
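The following sketch (not from the slides; the dictionaries and the `boolean_and` helper are illustrative) answers the query "napoleon AND revolution" by intersecting the posting lists stored in an inverted file and then looking up document details in a separate document file:

```python
# Inverted file: one posting list (set of document ids) per index term.
inverted_file = {
    "napoleon":   {"doc12", "doc56", "doc87", "doc99"},
    "revolution": {"doc12", "doc20", "doc45", "doc99"},
}

# Hypothetical document file with the per-document details mentioned in the slide.
document_file = {
    "doc12": {"title": "Napoleon and the Revolution"},
    "doc99": {"title": "Emperor of France"},
}

def boolean_and(term1, term2):
    """Intersect the posting lists of two index terms."""
    return inverted_file.get(term1, set()) & inverted_file.get(term2, set())

hits = boolean_and("napoleon", "revolution")      # {'doc12', 'doc99'}
for doc_id in sorted(hits):
    print(doc_id, document_file.get(doc_id, {}).get("title"))
```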
Introduction
[Figure: documents are represented by index terms; the user's information need is expressed as a query; retrieval matches the query against the index terms and ranks the resulting documents.]
Introduction
- Matching at the index-term level is quite imprecise.
- No surprise that users are frequently dissatisfied.
- Since most users have no training in query formulation, the problem is even worse.
- Hence the frequent dissatisfaction of Web users.
- Deciding relevance is therefore a critical issue for IR systems: ranking.
Introduction
- A ranking is an ordering of the retrieved documents that (hopefully) reflects their relevance to the user query.
- A ranking is based on fundamental premises regarding the notion of relevance, such as:
  - common sets of index terms
  - sharing of weighted terms
  - likelihood of relevance
- Each set of premises leads to a distinct IR model.
IR Models
[Taxonomy of IR models, organised by user task]
- Retrieval (ad hoc, filtering):
  - Classic models: Boolean, vector, probabilistic
  - Set theoretic: fuzzy, extended Boolean
  - Algebraic: generalized vector, latent semantic indexing, neural networks
  - Probabilistic: inference network, belief network
  - Structured models: non-overlapping lists, proximal nodes
- Browsing: flat, structure guided, hypertext
IR Models
- The IR model, the logical view of the docs, and the retrieval task are distinct aspects of the system.
- Retrieval task vs. logical view of documents:
  - Index terms: classic, set theoretic, algebraic, probabilistic
  - Full text: classic, set theoretic, algebraic, probabilistic
  - Full text + structure: structured
- Browsing task vs. logical view of documents:
  - Index terms: flat
  - Full text: flat, hypertext
  - Full text + structure: structure guided, hypertext
Retrieval: Ad Hoc vs. Filtering
- Ad hoc retrieval: the collection is fixed in size while different queries (Q1, Q2, Q3, Q4, Q5, ...) are posed against it.
Retrieval: Ad Hoc vs. Filtering
- Filtering: the queries (user profiles) are fixed while a stream of documents arrives; incoming documents are matched against each profile (docs filtered for user 1, docs filtered for user 2, ...).
Classic IR Models - Basic Concepts
- Each document is represented by a set of representative keywords or index terms.
- An index term is a document word useful for recalling the document's main themes.
- Usually, index terms are nouns, because nouns have meaning by themselves.
- However, search engines assume that all words are index terms (full text representation).
Classic IR Models - Basic Concepts
- Not all terms are equally useful for representing the document contents: less frequent terms allow identifying a narrower set of documents.
- The importance of the index terms is represented by weights associated with them.
- Let
  - ki be an index term
  - dj be a document
  - wij be a weight associated with the pair (ki, dj).
- The weight wij quantifies the importance of the index term for describing the document contents.
Classic IR Models - Basic Concepts
- ki is an index term, dj is a document, and t is the number of index terms.
- K = (k1, k2, ..., kt) is the set of all index terms.
- wij >= 0 is a weight associated with the pair (ki, dj); wij = 0 indicates that the term does not belong to the document.
- vec(dj) = (w1j, w2j, ..., wtj) is the weighted vector associated with the document dj.
- gi(vec(dj)) = wij is a function which returns the weight associated with the pair (ki, dj).
The Boolean Model
- Simple model based on set theory.
- Queries specified as Boolean expressions:
  - precise semantics
  - neat formalism
  - q = ka ∧ (kb ∨ ¬kc)
- Terms are either present or absent, thus wij ∈ {0,1}.
- Consider q = ka ∧ (kb ∨ ¬kc):
  - vec(qdnf) = (1,1,1) ∨ (1,1,0) ∨ (1,0,0)
  - vec(qcc) = (1,1,0) is a conjunctive component.
The Boolean Model
- q = ka ∧ (kb ∨ ¬kc)
[Figure: Venn diagram over ka, kb, kc showing the regions (1,0,0), (1,1,0), and (1,1,1) covered by the query.]
- sim(q,dj) = 1 if ∃ vec(qcc) such that (vec(qcc) ∈ vec(qdnf)) ∧ (∀ki, gi(vec(dj)) = gi(vec(qcc)))
  sim(q,dj) = 0 otherwise
The Boolean Model
Example:
- Terms: K1, ..., K8.
- Documents:
  1. D1 = {K1, K2, K3, K4, K5}
  2. D2 = {K1, K2, K3, K4}
  3. D3 = {K2, K4, K6, K8}
  4. D4 = {K1, K3, K5, K7}
  5. D5 = {K4, K5, K6, K7, K8}
  6. D6 = {K1, K2, K3, K4}
- Query: K1 ∧ (K2 ∨ ¬K3)
- Answer: {D1, D2, D4, D6} ∩ ({D1, D2, D3, D6} ∪ {D3, D5}) = {D1, D2, D6}
- A small script reproducing this example appears below.
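A minimal sketch (not from the slides) that reproduces the example above by evaluating the query K1 AND (K2 OR NOT K3) with set operations:

```python
# Each document is the set of index terms it contains.
docs = {
    "D1": {"K1", "K2", "K3", "K4", "K5"},
    "D2": {"K1", "K2", "K3", "K4"},
    "D3": {"K2", "K4", "K6", "K8"},
    "D4": {"K1", "K3", "K5", "K7"},
    "D5": {"K4", "K5", "K6", "K7", "K8"},
    "D6": {"K1", "K2", "K3", "K4"},
}
all_docs = set(docs)

def having(term):
    """Set of documents containing the given index term."""
    return {d for d, terms in docs.items() if term in terms}

# K1 AND (K2 OR NOT K3): NOT is the complement with respect to the collection.
answer = having("K1") & (having("K2") | (all_docs - having("K3")))
print(sorted(answer))   # ['D1', 'D2', 'D6']
```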
Drawbacks of the Boolean Model
- Retrieval is based on a binary decision criterion with no notion of partial matching.
- No ranking of the documents is provided (absence of a grading scale).
- The information need has to be translated into a Boolean expression, which most users find awkward.
- The Boolean queries formulated by users are most often too simplistic.
- As a consequence, the Boolean model frequently returns either too few or too many documents in response to a user query.
The Vector Model
- Use of binary weights is too limiting.
- Non-binary weights provide consideration for partial matches.
- These term weights are used to compute a degree of similarity between a query and each document.
- The ranked set of documents provides for better matching.
The Vector Model
Define:
- wij > 0 whenever ki ∈ dj
- wiq >= 0 is the weight associated with the pair (ki, q)
- vec(dj) = (w1j, w2j, ..., wtj) and vec(q) = (w1q, w2q, ..., wtq)
- To each term ki is associated a unitary vector vec(i).
- The unitary vectors vec(i) and vec(j) are assumed to be orthonormal (i.e., index terms are assumed to occur independently within the documents).
- The t unitary vectors vec(i) form an orthonormal basis for a t-dimensional space.
- In this space, queries and documents are represented as weighted vectors.
Vector Model Measurements
- There are several similarity measures used to compute the degree of similarity between a document and the query. We will use two of them:
  - Inner product similarity
  - Cosine similarity
The Vector Model
[Figure: document vector dj and query vector q in term space, with angle θ between them.]
Cosine Similarity:
  sim(q,dj) = cos(θ) = (vec(dj) · vec(q)) / (|dj| × |q|) = (Σi wij × wiq) / (|dj| × |q|)
- Since wij >= 0 and wiq >= 0, 0 <= sim(q,dj) <= 1.
- A document is retrieved even if it matches the query terms only partially.
- A minimal sketch of computing this measure is given below.
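A minimal sketch (not from the slides; `cosine_sim` is an illustrative helper) of the cosine similarity between a weighted query vector and a weighted document vector:

```python
import math

def cosine_sim(query_weights, doc_weights):
    """sim(q, dj) = sum(wij * wiq) / (|dj| * |q|), assuming the same term order."""
    dot = sum(wq * wd for wq, wd in zip(query_weights, doc_weights))
    q_norm = math.sqrt(sum(w * w for w in query_weights))
    d_norm = math.sqrt(sum(w * w for w in doc_weights))
    if q_norm == 0 or d_norm == 0:
        return 0.0
    return dot / (q_norm * d_norm)

# Example: d1 = (1, 0, 1) and q = (1, 1, 1), as in Example I below.
print(round(cosine_sim([1, 1, 1], [1, 0, 1]), 2))   # 0.82
```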
The Vector Model
- sim(q,dj) = (Σi wij × wiq) / (|dj| × |q|)
- How to compute the weights wij and wiq?
- A good weight must take into account two effects:
  - quantification of intra-document content (similarity): the tf factor, the term frequency within a document
  - quantification of inter-document separation (dissimilarity): the idf factor, the inverse document frequency
- wij = tf(i,j) × idf(i)
The Vector Model
Let
- N be the total number of docs in the collection
- ni be the number of docs which contain ki
- freq(i,j) be the raw frequency of ki within dj.
- A normalized tf factor is given by
  f(i,j) = freq(i,j) / max_l(freq(l,j))
  where the maximum is computed over all terms which occur within the document dj.
- The idf factor is computed as
  idf(i) = log(N/ni)
  The log is used to make the values of tf and idf comparable. It can also be interpreted as the amount of information associated with the term ki.
The Vector Model
- The best term-weighting schemes use weights which are given by
  wij = f(i,j) × log(N/ni)
  This strategy is called a tf-idf weighting scheme.
- For the query term weights, a suggestion is
  wiq = (0.5 + 0.5 × freq(i,q) / max_l(freq(l,q))) × log(N/ni)
- The vector model with tf-idf weights is a good ranking strategy for general collections.
- The vector model is usually as good as the known ranking alternatives. It is also simple and fast to compute.
The Vector Model
Example:
- A collection includes 10,000 documents.
- The term A appears 20 times in a particular document.
- The maximum appearance of any term in this document is 50.
- The term A appears in 2,000 of the collection documents.
- f(i,j) = freq(i,j) / max_l(freq(l,j)) = 20/50 = 0.4
- idf(i) = log(N/ni) = log(10,000/2,000) = log(5) = 2.32 (using a base-2 logarithm)
- wij = f(i,j) × log(N/ni) = 0.4 × 2.32 = 0.93
- The short script below reproduces these numbers.
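A minimal sketch (not from the slides) reproducing the worked example, assuming the base-2 logarithm implied by log(5) ≈ 2.32; `tf_idf_weight` is an illustrative helper:

```python
import math

def tf_idf_weight(freq_ij, max_freq_j, N, n_i):
    """wij = f(i,j) * idf(i), with f(i,j) = freq(i,j)/max_l freq(l,j) and idf(i) = log2(N/ni)."""
    f_ij = freq_ij / max_freq_j
    idf_i = math.log2(N / n_i)
    return f_ij * idf_i

# Term A: 20 occurrences, most frequent term in the document occurs 50 times,
# 10,000 documents in the collection, term A appears in 2,000 of them.
w = tf_idf_weight(freq_ij=20, max_freq_j=50, N=10_000, n_i=2_000)
print(round(w, 2))   # 0.93
```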
The Vector Model
Inner Product
- In this measure we use the same equations as in cosine similarity to compute the term weights in docs and queries, but the similarity coefficient is simply
  SC(Q, Di) = Σ(j=1..t) wqj × dij
  where SC stands for Similarity Coefficient.
- You can find a full example on the inner product on the website (www.aabu.edu.jo - go to Course Description of IR).
- A minimal sketch appears below.
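A minimal sketch (not from the slides; `inner_product` is an illustrative helper) of the inner-product similarity coefficient SC(Q, Di) = Σj wqj × dij:

```python
def inner_product(query_weights, doc_weights):
    """Sum of the pairwise products of query and document term weights."""
    return sum(wq * wd for wq, wd in zip(query_weights, doc_weights))

# Example: d5 = (1, 2, 4) and q = (1, 2, 3), as in Example III below.
print(inner_product([1, 2, 3], [1, 2, 4]))   # 17
```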
The Vector Model
Advantages:
- term weighting improves the quality of the answer set
- partial matching allows retrieval of docs that approximate the query conditions
- the cosine ranking formula sorts documents according to their degree of similarity to the query
Disadvantages:
- assumes independence of index terms (??); not clear that this is bad, though
The Vector Model: Example I
[Figure: documents d1-d7 positioned in the term space spanned by k1, k2, k3.]

        k1  k2  k3   q·dj   |dj|   sim(dj,q)
   d1    1   0   1     2    1.41     0.82
   d2    1   0   0     1    1.00     0.58
   d3    0   1   1     2    1.41     0.82
   d4    1   0   0     1    1.00     0.58
   d5    1   1   1     3    1.73     1.00
   d6    1   1   0     2    1.41     0.82
   d7    0   1   0     1    1.00     0.58
   q     1   1   1                 |q| = 1.73
The Vector Model: Example II
[Figure: documents d1-d7 positioned in the term space spanned by k1, k2, k3.]

        k1  k2  k3   q·dj
   d1    1   0   1     4
   d2    1   0   0     1
   d3    0   1   1     5
   d4    1   0   0     1
   d5    1   1   1     6
   d6    1   1   0     3
   d7    0   1   0     2
   q     1   2   3
The Vector Model: Example III
[Figure: documents d1-d7 positioned in the term space spanned by k1, k2, k3.]

        k1  k2  k3   q·dj
   d1    2   0   1     5
   d2    1   0   0     1
   d3    0   1   3    11
   d4    2   0   0     2
   d5    1   2   4    17
   d6    1   2   0     5
   d7    0   5   0    10
   q     1   2   3

The short script below reproduces the inner-product scores of Example III.
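A minimal sketch (not from the slides) reproducing the q·dj scores of Example III, with weighted document vectors and a weighted query, and printing the documents in ranked order:

```python
# Document vectors over the terms (k1, k2, k3), as in Example III.
docs = {
    "d1": [2, 0, 1],
    "d2": [1, 0, 0],
    "d3": [0, 1, 3],
    "d4": [2, 0, 0],
    "d5": [1, 2, 4],
    "d6": [1, 2, 0],
    "d7": [0, 5, 0],
}
q = [1, 2, 3]

# Inner-product score q·dj for each document, then sort by decreasing score.
scores = {name: sum(wq * wd for wq, wd in zip(q, vec)) for name, vec in docs.items()}
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(name, score)   # d5 17, d3 11, d7 10, d1 5, d6 5, d4 2, d2 1
```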