Refining searches


Refine initially: query
- Commonly, query expansion: add synonyms
  - Improves recall; hurt precision?
  - Sometimes done automatically
- Modify based on prior searches
  - Not automatic
  - All prior searches vs. your prior searches
  - Example: Yahoo! (Google does too)

Refining after search
- Use user feedback
- Approximate feedback with the first results: pseudo-feedback (sketched in code below)
- Example: Yahoo! assist
- Change the ranking of the current results, or search again with a modified query

Explicit user feedback
- User must participate
- User marks (some) relevant results
- User changes the order of results
- Pros and cons?

Explicit user feedback (continued)
- Can be more nuanced than relevant / not relevant
- Can be less accurate than relevant / not relevant
- Example: user moves the 10th item to first
  - Says the 10th item is better than the first 9
  - Does not say which, if any, of the first 9 are relevant

User feedback in classic vector model
- User marks the top p documents for relevance (small p is typical)
- Construct new weights for the terms in the query vector: modifies the query
- Could be used just on the initial results, to re-rank them
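The pseudo-feedback idea above can be made concrete with a small sketch: treat the top-ranked results as if the user had marked them relevant and add their most prominent terms to the query. This is only an illustration of the idea; the term-counting scheme, the cutoffs k and extra_terms, and the toy documents are assumptions, not something prescribed by the slides.

```python
from collections import Counter

def pseudo_feedback_expand(query_terms, ranked_docs, k=5, extra_terms=3):
    """Expand a query by pretending the top-k results are relevant (pseudo-feedback).

    query_terms : list of terms in the original query
    ranked_docs : list of documents (each a list of terms), best first
    k           : number of top results treated as pseudo-relevant
    extra_terms : number of new terms appended to the query
    """
    counts = Counter()
    for doc in ranked_docs[:k]:            # top-k results stand in for user judgments
        counts.update(doc)
    # most frequent terms in the pseudo-relevant docs that are not already in the query
    candidates = [t for t, _ in counts.most_common() if t not in query_terms]
    return list(query_terms) + candidates[:extra_terms]

# toy usage
docs = [["ship", "boat", "harbor"], ["boat", "sail", "harbor"], ["car", "road"]]
print(pseudo_feedback_expand(["boat"], docs, k=2, extra_terms=2))
# -> ['boat', 'harbor', 'ship']
```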

Deriving the new query for the vector model
- For a collection C of n documents, let C_r denote the set of all relevant docs in the collection (perfect knowledge)
- Goal: the optimal query vector
  q_opt = (1/|C_r|) * (sum of all vectors d_j in C_r) - (1/(n - |C_r|)) * (sum of all vectors d_k not in C_r)
  i.e., the difference of the two centroids

Deriving the new query for the vector model: Rocchio algorithm
- Given a query q and relevance judgments for a subset of the retrieved docs
- Let D_r denote the set of docs judged relevant and D_nr the set of docs judged not relevant
- Modified query (sketched in code below):
  q_new = α*q + (β/|D_r|) * (sum of all vectors d_j in D_r) - (γ/|D_nr|) * (sum of all vectors d_k in D_nr)
  for tunable weights α, β, γ

Remarks on the new query
- α: importance of the original query
- β: importance of the effect of terms in relevant docs
- γ: importance of the effect of terms in docs not relevant
- Usually the terms of docs not relevant are least important; reasonable values: α = 1, β = .75, γ = .15
- Reweighting terms leads to long queries: many more non-zero elements in the query vector q_new; can reweight only the most important (frequent?) terms
- Most useful to improve recall
- Users don't like it: work + wait for new results

Simple example of user feedback in the vector model
- Query q over four terms, two relevant docs d1 and d2, one non-relevant doc d3, with α = β = γ = 1:
  q_new = q + (1/2)*(d1 + d2) - d3
- Term weights change, and a new term appears in the query
- Observe: can get negative weights

Re-ranking example: status?
- The Google experiment discussed in class (it only affects repeats of the same search) is now the SearchWiki feature for Google accounts
- Algorithms are usually based on machine learning: learn a ranking function that best matches the partial ranking given

Comparing rankings
- The Kendall Tau measure compares two orderings of the same set of n objects:
  - Let A = # pairs whose orders agree in the two orderings
  - Let I = # pairs whose orders disagree in the two orderings (= # inversions)
- Kendall's Tau(order1, order2) = (A - I)/(A + I): an ordinal correlation
- Since A + I = (1/2)*n*(n-1), this equals 1 - I/((1/4)*n*(n-1))
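A minimal sketch of the Rocchio update and the Kendall tau comparison described above, with queries and documents represented as term-to-weight dictionaries; the default weights mirror the "reasonable values" on the slide, while the data layout and the toy inputs are assumptions made for illustration.

```python
from itertools import combinations

def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """q_new = alpha*q + (beta/|D_r|)*sum(D_r) - (gamma/|D_nr|)*sum(D_nr).

    query       : dict term -> weight (the original query vector q)
    relevant    : list of dicts, the docs judged relevant (D_r)
    nonrelevant : list of dicts, the docs judged not relevant (D_nr)
    """
    def centroid(docs):
        c = {}
        for d in docs:
            for t, w in d.items():
                c[t] = c.get(t, 0.0) + w
        n = max(len(docs), 1)
        return {t: w / n for t, w in c.items()}

    q_new = {t: alpha * w for t, w in query.items()}
    for t, w in centroid(relevant).items():
        q_new[t] = q_new.get(t, 0.0) + beta * w
    for t, w in centroid(nonrelevant).items():
        q_new[t] = q_new.get(t, 0.0) - gamma * w
    return q_new            # note: weights can come out negative, as the slide observes

def kendall_tau(order1, order2):
    """Kendall's tau = (A - I)/(A + I) over all pairs of the n objects."""
    pos1 = {x: i for i, x in enumerate(order1)}
    pos2 = {x: i for i, x in enumerate(order2)}
    agree = disagree = 0
    for x, y in combinations(order1, 2):
        if (pos1[x] - pos1[y]) * (pos2[x] - pos2[y]) > 0:
            agree += 1
        else:
            disagree += 1
    return (agree - disagree) / (agree + disagree)

print(rocchio({"boat": 1.0}, [{"boat": 1.0, "harbor": 1.0}], [{"car": 1.0}]))
print(kendall_tau([1, 2, 3, 4], [1, 3, 2, 4]))   # one inversion out of 6 pairs -> 4/6
```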

Implicit user feedback
- Click-throughs
- Use as relevance judgments
- Use for re-ranking: when the user clicks a result, move it ahead of all results the user didn't click that come before it
- Problems? Better?
- Single-user feedback vs. group feedback: compare

Recommender Systems
- Items, users; recommend items to users
- Recommend new items based on similarity to items that:
  - the user liked in the past: content-based
  - were liked by other users similar to this user: collaborative filtering
  - were just liked by other users: the easier case
- Documents matching a search = items?

Recommender system attributes
- Need explicit or implicit ratings by the user; a purchase is a 0/1 rating (movie tickets, books)
- Have focused-category examples: music, courses, restaurants
  - hard to cross categories with content-based methods
  - easier to cross categories with collaborative methods; do users share tastes across categories?

Content-based recommendation
- Items must have characteristics the user values
- An item is described by the values of its characteristics: model each item as a vector of weights of characteristics, much like vector-based IR
- The user can give explicit preferences for certain characteristics

Content-based example (sketched in code below)
- User bought book 1 and book 2 (what if they were actually rated?)
- Average the books bought into a profile vector over the characteristics (1st person, romance, mystery, sci-fi)
- Score new books A and B by dot product with the profile: score(A), score(B)
- Decide a threshold for recommendation

Example with explicit user preferences
- How to use the scores of the books bought?
- Try a preference vector p where component k = the user's preference for characteristic k when given, and = the average of component k over the books bought when the user's preference is "don't care"
- Preferences can be negative (dislikes)
- New scores: p . A, p . B
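A sketch of the content-based scoring walked through above: items are vectors of characteristic weights, the user profile is either the average of the items bought or an explicit preference vector, and new items are scored by dot product against that profile and compared to a threshold. The characteristic ordering, the numeric weights, and the threshold value are made up for illustration; only the method comes from the slides.

```python
# characteristics, in order: (1st person, romance, mystery, sci-fi) -- illustrative weights
def average(vectors):
    """Component-wise average of equal-length item vectors."""
    return [sum(col) / len(vectors) for col in zip(*vectors)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

book1 = [1.0, 0.0, 0.8, 0.0]                # items the user bought (0/1 "ratings")
book2 = [0.0, 0.0, 1.0, 0.2]
new_a = [0.9, 0.1, 0.7, 0.0]                # candidate items to score
new_b = [0.0, 1.0, 0.0, 0.0]

profile = average([book1, book2])           # profile = average of books bought
scores = {"A": dot(profile, new_a), "B": dot(profile, new_b)}
threshold = 0.5                             # recommend items scoring above the threshold
print(scores, [name for name, s in scores.items() if s > threshold])

# explicit preferences: the user overrides some components (negative = dislikes),
# and the purchase average is kept where the user "doesn't care"
pref = [1.0, -1.0, profile[2], profile[3]]
print({"A": dot(pref, new_a), "B": dot(pref, new_b)})
```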

Content-based: issues
- Vector-based is one alternative; the major alternatives are based on machine learning
- For the vector-based approach:
  - how to build the preference vector
  - how to combine the vectors of the items rated by the user (our example had only 0/1 ratings)
  - how to include explicit user preferences
  - what metric to use for similarity between new items and the preference vector: normalization? threshold?

Limitations of content-based
- Can only recommend items similar to those the user rated highly
- New users: insufficient number of rated items
- Only considers features explicitly associated with items; does not include attributes of the user

Collaborative Filtering
- Recommend new items liked by other users similar to this user
- Need items already rated by the user and by other users
- Don't need characteristics of items: each rating by an individual user becomes a characteristic
- Can combine with item characteristics: hybrid content/collaborative

Method types (see the Adomavicius and Tuzhilin paper)
- Memory-based
  - Similar to the vector model
  - Use the (user x item) matrix and a similarity function
  - Prediction based on previously rated items
- Model-based
  - Machine-learning methods
  - Model of probabilities of (users, items)

Memory-based: preliminaries
- Notation:
  - r(u,i) = rating of item i by user u
  - I_u = set of items rated by user u
  - I_u,v = set of items rated by both users u and v
  - U_i,j = set of users that rated both items i and j
- Adjust scales for user differences
  - Average rating by user u: r_avg(u) = (1/|I_u|) * Σ_{i in I_u} r(u,i)
  - Adjusted ratings: r_adj(u,i) = r(u,i) - r_avg(u)

One memory-based method: user similarities (sketched in code below)
- Similarity between users u and v: the Pearson correlation coefficient
  sim(u,v) = [ Σ_{i in I_u,v} r_adj(u,i)*r_adj(v,i) ] / [ Σ_{i in I_u,v} r_adj(u,i)^2 * Σ_{i in I_u,v} r_adj(v,i)^2 ]^(1/2)
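The preliminaries and the Pearson user similarity above translate directly into code. In this sketch the ratings live in a nested dict (user -> item -> rating); the data values are hypothetical, but the averaging, the adjusted ratings, and the similarity follow the formulas on the slides.

```python
import math

ratings = {                                  # hypothetical user -> item -> rating data
    "u1": {"b1": 5, "b2": 3, "b3": 4},
    "u2": {"b1": 2, "b2": 4, "b3": 1},
    "u3": {"b1": 4, "b3": 5},
}

def avg_rating(u):
    """r_avg(u): mean of u's ratings over I_u."""
    vals = ratings[u].values()
    return sum(vals) / len(vals)

def r_adj(u, i):
    """Adjusted rating r_adj(u,i) = r(u,i) - r_avg(u)."""
    return ratings[u][i] - avg_rating(u)

def sim_users(u, v):
    """Pearson correlation over I_{u,v}, the items rated by both u and v."""
    common = set(ratings[u]) & set(ratings[v])
    if not common:
        return 0.0
    num = sum(r_adj(u, i) * r_adj(v, i) for i in common)
    den = math.sqrt(sum(r_adj(u, i) ** 2 for i in common) *
                    sum(r_adj(v, i) ** 2 for i in common))
    return num / den if den else 0.0

print(sim_users("u1", "u2"), sim_users("u1", "u3"))
```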

One memory-based method: item similarities
- Similarity between items i and j: vectors of the ratings by the users in U_i,j; cosine measure using adjusted ratings
  sim(i,j) = [ Σ_{u in U_i,j} r_adj(u,i)*r_adj(u,j) ] / [ Σ_{u in U_i,j} r_adj(u,i)^2 * Σ_{u in U_i,j} r_adj(u,j)^2 ]^(1/2)

Predicting a user's rating of a new item: user-based (sketched in code below)
- For item i not rated by user u:
  r_pred(u,i) = r_avg(u) + [ Σ_{v in S} sim(u,v)*r_adj(v,i) ] / [ Σ_{v in S} sim(u,v) ]
- S can be all users, or just the users most similar to u

Predicting a user's rating of a new item: item-based
- For item i not rated by user u:
  r_item-pred(u,i) = [ Σ_{j in T} sim(i,j)*r(u,j) ] / [ Σ_{j in T} sim(i,j) ]
- T can be all items, or just the items most similar to i
- The prediction uses only u's ratings, but the similarities use other users' ratings

Collaborative filtering example
- A table of ratings and of adjusted ratings for users u1-u4 over books 1-4; user u4's rating of book 4 is unknown and is to be predicted
- Compute similarities to u4 from the adjusted ratings: sim(u1,u4) = .894, sim(u2,u4) = -.447, sim(u3,u4) likewise
- Predict r(u4, book 4) = r_avg(u4) + [ sim(u1,u4)*r_adj(u1, book 4) + sim(u2,u4)*r_adj(u2, book 4) + sim(u3,u4)*r_adj(u3, book 4) ] / [ sim(u1,u4) + sim(u2,u4) + sim(u3,u4) ]

Limitations
- May not have enough ratings for new users
- New items may not be rated by enough users
- Need a critical mass of users
- All similarities are based on user ratings
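A sketch of the user-based prediction above, continuing the ratings/avg_rating/r_adj/sim_users helpers from the previous sketch; restricting S to users who actually rated item i, and falling back to the user's average when no such neighbor exists, are practical assumptions rather than part of the slide.

```python
def predict_user_based(u, i, neighbors=None):
    """r_pred(u,i) = r_avg(u) + sum_v sim(u,v)*r_adj(v,i) / sum_v sim(u,v), v in S."""
    S = neighbors if neighbors is not None else list(ratings)
    S = [v for v in S if v != u and i in ratings[v]]   # only neighbors who rated item i
    if not S:
        return avg_rating(u)                           # no usable neighbors: fall back
    num = sum(sim_users(u, v) * r_adj(v, i) for v in S)
    den = sum(sim_users(u, v) for v in S)
    return avg_rating(u) + (num / den if den else 0.0)

# predict u3's rating for b2, which u3 has not rated
print(predict_user_based("u3", "b2"))
```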

Recommendation techniques and search: content-based
- Content-based recommendation is analogous to query refinement with user feedback:
  - item <-> document
  - characteristic <-> term
  - user preferences <-> initial query
  - ratings for previous items <-> user ratings of relevance of the initial results

Recommendation techniques and search: collaborative filtering
- Analogy with product recommendation? Consider users' behavior on the same search, i.e., the same query:
  - item <-> search result
  - rating <-> clicked / not clicked on
- Predict whether the user will click on a result based on the behavior of similar users; user similarity based on what both have clicked on for this search
- More generally: predictions of the best results based on notions of user similarity; hybrid content and collaborative approaches
