Learning to Rank for Information Retrieval. Tie-Yan Liu Lead Researcher Microsoft Research Asia
1 Learning to Rank for Information Retrieval Tie-Yan Liu, Lead Researcher, Microsoft Research Asia 4/20/2008 Tie-Yan Liu, Tutorial at WWW 2008
2 The Speaker Tie-Yan Liu Lead Researcher, Microsoft Research Asia. Ph.D., Tsinghua University, China. Research interests: learning to rank, large-scale graph learning. ~70 papers: SIGIR (9), WWW (3), ICML (3), KDD (2), etc. Winner of the Most Cited Paper Award, JVCIR. Senior PC member of SIGIR. Co-chair, SIGIR Workshop on Learning to Rank (2007 and 2008). Speaker, tutorial on learning to rank, SIGIR 2008 (to be given in July), AIRS 2008, etc.
3 This Tutorial Learning to rank for information retrieval, but not other generic ranking problems. Supervised learning, but not unsupervised or semi-supervised learning. Mostly discriminative learning, not generative learning. Learning in vector space, but not on graphs or other structured data. Mainly based on papers published at SIGIR, WWW, ICML, KDD, and NIPS in the past five years; papers at other conferences and journals may not be covered comprehensively in this tutorial.
4 Background Knowledge Required Information retrieval: we will briefly introduce, but not comprehensively review, the IR literature. Machine learning: we assume you are familiar with regression, classification, and their learning algorithms, such as SVM, Boosting, and neural networks; no comprehensive introduction to these algorithms and their principles will be given. Probability theory. Linear algebra. Optimization.
5 Outline Ranking in Information Retrieval Learning to Rank Overview Pointwise approach Pairwise approach Listwise approach Summary
6 Ranking in Information Retrieval
7 We are Overwhelmed by a Flood of Information
8 Information Explosion 2008?
10 Search Engine as a Tool
11 Inside Search Engine [Diagram: a query q (e.g., {www2008}) is issued against an indexed document repository D; the ranking model produces a ranked list of documents d_q1, d_q2, ..., d_qk, e.g., www2008.org, www2008.org/submissions/index.html, ...elcome/www2008]
12 IR Evaluation Objective: evaluate the effectiveness of a ranking model. A standard test set contains a large number of queries, their associated documents, and the labels (relevance judgments) of these documents. A measure evaluates the effectiveness of a ranking model for a particular query; averaging the measure over the entire test set represents the expected effectiveness of the model.
13 Widely-used Judgments Binary judgment Relevant vs. Irrelevant Multi-level ratings Perfect > Excellent > Good > Fair > Bad Pairwise preferences Document A is more relevant than document B with respect to query q
14 Evaluation Measures MAP (Mean Average Precision). NDCG (Normalized Discounted Cumulative Gain). MRR (Mean Reciprocal Rank): for query q_i, let r_i be the rank position of its first relevant document; MRR is the average of 1/r_i over all queries. WTA (Winner Takes All): 1 if the top-ranked document is relevant, 0 otherwise; averaged over all queries.
15 MAP Precision at position n: P@n = #{relevant documents in top n results} / n. Average precision: AP = ( Σ_n P@n · I{document at position n is relevant} ) / #{relevant documents}. MAP: AP averaged over all queries in the test set.
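The P@n and AP definitions above can be sketched in a few lines of Python (a minimal sketch; the function name and the binary-relevance input format are my own choices):

```python
def average_precision(relevance):
    """relevance: binary relevance (1/0) of each ranked result, in rank order."""
    num_rel = sum(relevance)
    if num_rel == 0:
        return 0.0
    hits, total = 0, 0.0
    for n, rel in enumerate(relevance, start=1):
        if rel:
            hits += 1
            total += hits / n   # P@n, accumulated at each relevant position
    return total / num_rel
```

MAP is then simply the mean of `average_precision` over the queries in the test set.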
16 NDCG NDCG at position n: N(n) = Z_n Σ_{j=1}^{n} (2^{r(j)} − 1) / log(1 + j), where Z_n is the normalization constant (so that the ideal ranking gets N(n) = 1), 2^{r(j)} − 1 is the gain, cumulated over positions j, and 1/log(1 + j) is the position discount. NDCG is averaged over all queries in the test set.
17 G: Gain
Relevance rating: Value (gain)
Perfect: 31 = 2^5 − 1
Excellent: 15 = 2^4 − 1
Good: 7 = 2^3 − 1
Fair: 3 = 2^2 − 1
Bad: 0 = 2^0 − 1
18 CG: Cumulative Gain If each rank position is treated equally. Query = {abc}:
URL: gain, cumulative gain
#1: 31, 31
#2: 3, 34
#3: 15, 49
#4: 15, 64
#5: 15, 79
19 DCG: Discounted CG Discounting factor: log(2) / log(1 + rank), i.e., 1, 0.63, 0.50, 0.43, 0.39 for ranks 1–5.
URL: gain, discounted cumulative gain
#1: 31, 31 = 31×1
#2: 3, 32.9 = 31 + 3×0.63
#3: 15, 40.4 = 32.9 + 15×0.50
#4: 15, 46.9 = 40.4 + 15×0.43
#5: 15, 52.7 = 46.9 + 15×0.39
20 Ideal Ordering: Max DCG Sort the documents according to their labels.
URL: gain, Max DCG
#1: 31, 31 = 31×1
#2: 15, 40.5 = 31 + 15×0.63
#3: 15, 48.0 = 40.5 + 15×0.50
#4: 15, 54.5 = 48.0 + 15×0.43
#5: 15, 60.4 = 54.5 + 15×0.39
21 NDCG: Normalized DCG Normalized by the Max DCG.
URL: DCG / Max DCG = NDCG
#1: 31 / 31 = 1.00
#2: 32.9 / 40.5 = 0.81
#3: 40.4 / 48.0 = 0.84
#4: 46.9 / 54.5 = 0.86
#5: 52.7 / 60.4 = 0.87
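The G → CG → DCG → NDCG walkthrough above can be sketched as follows (a minimal sketch; note that 1/log2(1 + j) equals the slide's discount log(2)/log(1 + j), and the rating scale Perfect = 5 down to Bad = 0 is taken from the gain slide):

```python
import math

def dcg_at_n(ratings, n):
    """ratings: graded relevance r(j) in ranked order; gain is 2^r - 1."""
    return sum((2 ** r - 1) / math.log2(1 + j)
               for j, r in enumerate(ratings[:n], start=1))

def ndcg_at_n(ratings, n):
    """Normalize DCG@n by the Max DCG of the ideal (sorted) ordering."""
    ideal = dcg_at_n(sorted(ratings, reverse=True), n)
    return dcg_at_n(ratings, n) / ideal if ideal > 0 else 0.0
```

With the example ranking above (ratings 5, 2, 4, 4, 4), `dcg_at_n` reproduces the 31, 32.9, 40.4, ... column and `ndcg_at_n` the 1.00, 0.81, 0.84, ... column.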
22 More about Evaluation Measures Query-level: averaged over all test queries; the measure is bounded, and every query contributes equally to the average. Position: position discount is used throughout. Non-smooth: computed based on the ranks of documents; with small changes to the parameters of the ranking model, the scores change smoothly, but the ranks do not change until one document's score passes another's.
23 Conventional Ranking Models
Category: example models
Boolean model
Similarity-based models: Vector space model, Latent Semantic Indexing
Probabilistic models: BM25 model, Language model for IR
Hyperlink-based models: HITS, PageRank
24 Discussions on Conventional Ranking Models For a particular model: parameter tuning is usually difficult, especially when there are many parameters to tune. For comparison between two models: given a test set, it is difficult to compare two models when one is over-tuned (over-fitting) and the other is not. For a collection of models: there are hundreds of models proposed in the literature, and it is non-trivial to combine them effectively.
25 Machine Learning Can Help Machine learning is an effective tool To automatically tune parameters To combine multiple evidences To avoid over-fitting (regularization framework, structural risk minimization, etc.)
26 Learning to Rank for IR Overview Pointwise Approach Pairwise Approach Listwise Approach
27 Framework of Learning to Rank for IR [Diagram: training data consists of queries q^(1), ..., q^(m), each with associated documents d_1^(i), d_2^(i), ..., d_n^(i) and labels (multiple-level ratings, e.g., 4, 2, 1). The learning system outputs a model f(q, d, w). For test data (d_1, ?), (d_2, ?), ..., (d_n, ?), the ranking system sorts the documents d_i1, d_i2, ..., d_in by their scores f(q, d_i1, w), f(q, d_i2, w), ..., f(q, d_in, w).]
28 Learning to Rank Algorithms Query refinement (WWW 2008) ListNet (ICML 2007) SVM-MAP (SIGIR 2007) Nested Ranker (SIGIR 2006) LambdaRank (NIPS 2006) Frank (SIGIR 2007) MPRank (ICML 2007) MHR (SIGIR 2007) RankBoost (JMLR 2003) LDM (SIGIR 2005) RankNet (ICML 2005) Ranking SVM (ICANN 1999) Discriminative model for IR (SIGIR 2004) IRSVM (SIGIR 2006) SVM Structure (JMLR 2005) GPRank (LR4IR 2007) QBRank (NIPS 2007) GBRank (SIGIR 2007) McRank (NIPS 2007) SoftRank (LR4IR 2007) AdaRank (SIGIR 2007) CCA (SIGIR 2007) ListMLE (ICML 2008) RankCosine (IP&M 2007) Supervised Rank Aggregation (WWW 2007) Relational ranking (WWW 2008)
29 Categorization: Relation to Conventional Machine Learning With certain assumptions, learning to rank can be reduced to conventional learning problems Regression Assume labels to be scores Classification Assume labels to have no orders Pairwise classification Assume document pairs to be independent of each other
30 Categorization: Relation to Conventional Machine Learning Ranking in IR has unique properties Query determines the logical structure of the ranking problem. Loss function should be defined on ranked lists w.r.t. a query, since IR measures are computed at query level. Relative order is important No need to predict category, or value of f(x). Position sensitive Top-ranked objects are more important. Rank based evaluation Learning objective is non-smooth and non-differentiable.
31 Categorization: Basic Unit of Learning Pointwise Input: single documents Output: scores or class labels Pairwise Input: document pairs Output: partial order preference Listwise Input: document collections Output: ranked document list
32 Categorization of Learning to Rank Algorithms
Pointwise approach, reduced to classification or regression: Discriminative model for IR (SIGIR 2004), McRank (NIPS 2007).
Pairwise approach, reduced to classification or regression: Ranking SVM (ICANN 1999), RankBoost (JMLR 2003), LDM (SIGIR 2005), RankNet (ICML 2005), Frank (SIGIR 2007), GBRank (SIGIR 2007), QBRank (NIPS 2007), MPRank (ICML 2007); making use of unique properties of ranking for IR: IRSVM (SIGIR 2006).
Listwise approach, making use of unique properties of ranking for IR: LambdaRank (NIPS 2006), AdaRank (SIGIR 2007), SVM-MAP (SIGIR 2007), SoftRank (LR4IR 2007), GPRank (LR4IR 2007), CCA (SIGIR 2007), RankCosine (IP&M 2007), ListNet (ICML 2007), ListMLE (ICML 2008).
33 Learning to Rank for IR Overview Pointwise Approach Pairwise Approach Listwise Approach
34 Overview of Pointwise Approach Reduce ranking to regression or classification on single documents. Examples: Discriminative model for IR. McRank: learning to rank using multi-class classification and gradient boosting.
35 Discriminative Model for IR (R. Nallapati, SIGIR 2004) Features extracted for each document. Treat relevant documents as positive examples and irrelevant documents as negative examples.
36 Discriminative Model for IR Learning algorithms: Max Entropy, Support Vector Machines. Experimental results: comparable with language models (LM).
37 McRank (P. Li, et al. NIPS 2007) The loss in terms of DCG is upper bounded by the multi-class classification error. Multi-class classification: learn P(y_i = k). Multiple ordinal classification: learn P(y_i ≤ k). Regression: min Σ_i (f(x_i) − y_i)².
38 Different Reductions From classification to ranking: learn a classifier for each category; convert classifier outputs to probabilities using the logistic function; then convert classification to ranking. Experimental results: Multiple ordinal classification > Multi-class classification > Regression.
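The "convert classification to ranking" step combines the per-class probabilities of each document into a single score. A minimal sketch of the expected-grade scoring used in McRank (the uniform grade values 0..K−1 are my simplification; the paper allows any monotone weighting):

```python
def mcrank_score(class_probs, grade_values=None):
    """class_probs: P(y = k) for k = 0..K-1 for one document.
    Returns the expected relevance grade, used as the ranking score."""
    if grade_values is None:
        grade_values = range(len(class_probs))
    return sum(p * g for p, g in zip(class_probs, grade_values))
```

Documents are then sorted by descending `mcrank_score`.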
39 Discussions Assumption: relevance is absolute and query-independent; e.g., documents associated with different queries are put into the same category if their labels are all Perfect. However, in practice relevance is query-dependent: an irrelevant document for a popular query might have a higher term frequency than a relevant document for a rare query. Simply pooling documents from different queries can therefore hurt the training process.
40 Learning to Rank for IR Overview Pointwise Approach Pairwise Approach Listwise Approach
41 Overview of Pairwise Approach No longer assume absolute relevance. Reduce ranking to classification on document pairs w.r.t. the same query. Examples: RankNet, FRank, RankBoost, Ranking SVM, etc. Transform: a query q^(i) with labeled documents (d_1^(i), 5), (d_2^(i), 3), ..., (d_n^(i), 2) becomes document pairs (d_1^(i), d_2^(i)), (d_1^(i), d_n^(i)), ..., (d_2^(i), d_n^(i)) with preferences 5 > 3, 5 > 2, 3 > 2.
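The pairwise transform above can be sketched directly (a minimal sketch; the function name and the (doc_id, label) input format are my own choices):

```python
from itertools import combinations

def to_preference_pairs(labeled_docs):
    """labeled_docs: list of (doc_id, relevance_label) for one query.
    Returns (preferred, other) pairs; ties generate no pair."""
    pairs = []
    for (d1, l1), (d2, l2) in combinations(labeled_docs, 2):
        if l1 > l2:
            pairs.append((d1, d2))   # d1 should rank above d2
        elif l2 > l1:
            pairs.append((d2, d1))
    return pairs
```

Note that pairs are only formed within a query, never across queries.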
42 RankNet (C. Burges, et al. ICML 2005) Target probability P*_ij: 1 means d_i should rank before d_j; 0 means the reverse. Modeled probability: P_ij = exp(o_ij) / (1 + exp(o_ij)), where o_ij = f(x_i) − f(x_j). Cross entropy as the loss function: C_ij = −P*_ij log P_ij − (1 − P*_ij) log(1 − P_ij).
43 RankNet (2) Use a neural network as the model and gradient descent as the algorithm to optimize the cross-entropy loss. Evaluation is on single documents: output a relevance score for each document w.r.t. a new query.
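The cross-entropy pairwise loss can be sketched as follows (a minimal sketch using the modeled probability P_ij = sigmoid(s_i − s_j) from the previous slide; the epsilon guard is my own addition):

```python
import math

def ranknet_loss(s_i, s_j, p_target):
    """Cross entropy between the target P*_ij and P_ij = sigmoid(s_i - s_j)."""
    o = s_i - s_j
    p = 1.0 / (1.0 + math.exp(-o))
    eps = 1e-12   # guard against log(0)
    return -(p_target * math.log(p + eps)
             + (1.0 - p_target) * math.log(1.0 - p + eps))
```

For p_target = 0.5 the minimum sits at o = 0 with value log 2, illustrating the "non-zero minimum loss" property discussed on the FRank slides below.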
44 FRank (M. Tsai, T. Liu, et al. SIGIR 2007) Similar setting to that of RankNet. New loss function, the fidelity loss: F_ij = 1 − ( sqrt(P*_ij · P_ij) + sqrt((1 − P*_ij)(1 − P_ij)) ).
45 FRank (2) RankNet (cross-entropy loss): non-zero minimum loss, unbounded, convex. FRank (fidelity loss): zero minimum loss, bounded within [0,1], non-convex. Experimental results show that FRank can outperform RankNet in many cases.
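The contrast between the two losses is easy to verify in code. A minimal sketch of the fidelity loss, using the same RankNet-style modeled probability:

```python
import math

def fidelity_loss(s_i, s_j, p_target):
    """F_ij = 1 - ( sqrt(P* P) + sqrt((1 - P*)(1 - P)) ),
    with P = sigmoid(s_i - s_j) as in RankNet."""
    p = 1.0 / (1.0 + math.exp(-(s_i - s_j)))
    return 1.0 - (math.sqrt(p_target * p)
                  + math.sqrt((1.0 - p_target) * (1.0 - p)))
```

Unlike the cross-entropy loss, it reaches exactly zero whenever the modeled probability matches the target, and never exceeds 1.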
46 RankBoost (Y. Freund, et al. JMLR 2003) Use AdaBoost to perform pairwise classification. Given: initial distribution D_1 over X × X. For t = 1, ..., T: train a weak ranker h_t: X → R using distribution D_t; choose α_t ∈ R; update D_{t+1}(x_0, x_1) = D_t(x_0, x_1) exp(α_t (h_t(x_0) − h_t(x_1))) / Z_t, where Z_t = Σ_{x_0, x_1} D_t(x_0, x_1) exp(α_t (h_t(x_0) − h_t(x_1))). Output the final ranking: H(x) = Σ_{t=1}^{T} α_t h_t(x).
47 Weak Learners Thresholding on feature values. Note: many theoretical properties of AdaBoost are inherited by RankBoost.
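The distribution update from the previous slide can be sketched in isolation (a minimal sketch; the dict-based pair representation is my own choice, with the convention that in each pair (x_0, x_1) the second element should be ranked above the first):

```python
import math

def rankboost_update(dist, h, alpha):
    """One RankBoost round: dist maps (x0, x1) -> weight, h is the weak
    ranker's score function.  Pairs that h mis-orders gain weight."""
    new = {pair: w * math.exp(alpha * (h(pair[0]) - h(pair[1])))
           for pair, w in dist.items()}
    z = sum(new.values())                      # normalizer Z_t
    return {pair: w / z for pair, w in new.items()}
```

As in AdaBoost, correctly ordered pairs (h(x_1) > h(x_0)) shrink in weight and mis-ordered ones grow, so later weak rankers focus on the hard pairs.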
48 Ranking SVM (R. Herbrich, et al. ICANN 1999) Use SVM to perform pairwise classification: min_w (1/2)||w||² + C Σ_i ξ_i, s.t. z_i ⟨w, x_i^(1) − x_i^(2)⟩ ≥ 1 − ξ_i, ξ_i ≥ 0, with ranking function f(x; ŵ) = ⟨ŵ, x⟩. Treat x^(1) − x^(2) as a positive instance of learning (z_i = +1 when x^(1) should rank above x^(2)); use SVM to perform binary classification on these instances to learn the model parameter w, then use w for testing.
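The objective above can be optimized by any SVM solver on the difference vectors; as a self-contained illustration, here is a crude subgradient-descent sketch (my own simplification, not the dual solver used in the original paper; learning rate and epoch count are arbitrary):

```python
import numpy as np

def train_ranking_svm(pair_diffs, C=1.0, lr=0.01, epochs=500):
    """pair_diffs: rows x^(1) - x^(2), each a positive instance meaning the
    first document should rank above the second.  Minimizes
    0.5 * ||w||^2 + C * sum_i max(0, 1 - <w, diff_i>) by subgradient descent."""
    X = np.asarray(pair_diffs, dtype=float)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        margins = X @ w
        violated = X[margins < 1.0]              # hinge-active pairs
        grad = w - C * violated.sum(axis=0)      # subgradient of the objective
        w -= lr * grad
    return w
```

At test time, documents are simply scored by f(x) = ⟨w, x⟩ and sorted.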
49 Testing
50 Other Work in this Category Linear discriminant model for information retrieval (LDM), (J. Gao, H. Qi, et al. SIGIR 2005) A Regression Framework for Learning Ranking Functions using Relative Relevance Judgments (GBRank), (Z. Zheng, H. Zha, et al. SIGIR 2007) A General Boosting Method and its Application to Learning Ranking Functions for Web Search (QBRank), (Z. Zheng, H. Zha, et al. NIPS 2007) Magnitude-preserving Ranking Algorithms (MPRank), (C. Cortes, M. Mohri, et al. ICML 2007)
51 Discussions Progress made as compared to the pointwise approach: no longer assumes absolute relevance. However, the unique properties of ranking in IR have not been fully modeled.
52 Problems with Pairwise Approach (1) The number of instance pairs varies according to query. Example with two queries in total: the same error in terms of pairwise classification (780/790 = 98.73% of pairs correct) corresponds to very different errors in terms of query-level evaluation (99% vs. 50%).
53 Problems with Pairwise Approach (2) Negative effects of making errors on top positions: the same error in terms of pairwise classification yields different errors in terms of position-discounted evaluation. (p: perfect, g: good, b: bad) Ideal: p g g b b b b. Ranking 1: g p g b b b b (one wrong pair) is worse; ranking 2: p g b g b b b (one wrong pair) is better.
54 As a Result Users only see a ranked list of documents, not document pairs. It is not clear how the pairwise loss correlates with IR evaluation measures, which are defined at the query level. [Figure: pairwise loss vs. (1 − NDCG@5) on a TREC dataset.]
55 Incremental Solution to Problems (1) IRSVM: change the loss of Ranking SVM (Y. Cao, J. Xu, T. Liu, et al. SIGIR 2006) by introducing a query-level normalizer.
56 Experimental Results [Figure: Web data]
57 Theoretical Analysis (Y. Lan, T. Liu, et al. ICML 2008) Stability theory for a query-level generalization bound, defined in terms of: the loss function, the learning instances, the ground truth, the loss on a single query, and the risks on a query set.
58 Theoretical Analysis (2) Stability represents the degree of change in the loss when randomly removing a query and its associated documents from the training set. Generalization ability: if an algorithm has query-level stability, then with high probability the expected query-level risk can be bounded by the empirical risk plus a term depending on the number of queries and the stability coefficient.
59 Generalization Bounds for Ranking SVM and IRSVM When the number of document pairs per query varies largely, the generalization bound of IRSVM is much tighter than that of Ranking SVM.
60 Distribution of Pair Numbers [Figure: Web data] The variance of the number of pairs per query is large, so IRSVM has a better generalization ability than Ranking SVM.
61 Incremental Solution to Problems (2) IRSVM: change the loss of Ranking SVM (Y. Cao, J. Xu, T. Liu, et al. SIGIR 2006) with an implicit position discount: a larger weight for the more critical types of pairs, learned from a labeled set.
62 Experimental Results [Figure: Web data]
63 More Fundamental Solution: Listwise Approach to Ranking Directly operate on ranked lists. Directly optimize IR evaluation measures: AdaRank, SVM-MAP, SoftRank, LambdaRank, RankGP, ... Define listwise loss functions: RankCosine, ListNet, ListMLE, ...
64 Contextual Ads SIGIR 2008 workshop on learning to rank for IR. Paper submission due: May 16. Organizers: Tie-Yan Liu, Hang Li, Chengxiang Zhai. We organized a successful workshop on the same topic at SIGIR 2007 (the largest workshop, with 100+ registrations).
65 Contextual Ads (2) SIGIR 2008 tutorial on learning to rank for IR. Speaker: Tie-Yan Liu. Differences from this WWW tutorial: includes more new algorithms (SIGIR 2008, ICML 2008, etc.) and discusses the whole pipeline of learning to rank (feature extraction, ground truth mining, loss function, ranking function, post-processing, etc.).
66 Contextual Ads (3) LETOR: Benchmark Dataset for Learning to Rank. downloads till now. Released by Tie-Yan Liu, Jun Xu, et al.
67 Next: Listwise Approach to Learning to Rank
68 Learning to Rank for IR Overview Pointwise Approach Pairwise Approach Listwise Approach
69 Overview of Listwise Approach Instead of reducing ranking to regression or classification, perform learning directly on the document list: treat ranked lists as learning instances. Two major types: directly optimize IR evaluation measures; define listwise loss functions.
70 Directly Optimize IR Evaluation Measures It is a natural idea to directly optimize what is used to evaluate the ranking results. However, it is non-trivial: evaluation measures such as NDCG are usually non-smooth and non-differentiable since they depend on the ranks. It is challenging to optimize such objective functions, since most optimization techniques were developed for smooth and differentiable cases.
71 Tackle the Challenges Use optimization technologies designed for smooth and differentiable problems: embed the evaluation measure in the logic of an existing optimization technique (AdaRank); optimize a smooth and differentiable upper bound of the evaluation measure (SVM-MAP); soften (approximate) the evaluation measure so as to make it smooth and differentiable (SoftRank). Use optimization technologies designed for non-smooth and non-differentiable problems: genetic programming (RankGP); directly define the gradient (LambdaRank).
72 AdaRank (J. Xu, et al. SIGIR 2007) Follows the optimization process of AdaBoost: use the loss function to update the distribution over learning instances and to determine the parameter α.
73 AdaRank (2) Use Boosting to perform the optimization; embed the IR evaluation measure in the distribution update of Boosting, so that learning focuses on the hard queries. A weak learner that ranks important queries more correctly receives a larger weight α.
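One round of the measure-driven update can be sketched as follows (a simplified sketch: the real AdaRank re-weights queries using the measure of the *combined* ranker built so far, whereas for brevity this version re-weights by the selected weak ranker's measure alone):

```python
import math

def adarank_round(query_weights, measures):
    """query_weights: current distribution P_t over queries.
    measures: the IR measure E(q, h_t) in [0, 1] of the chosen weak ranker
    on each query.  Returns (alpha_t, updated query distribution)."""
    num = sum(p * (1 + e) for p, e in zip(query_weights, measures))
    den = sum(p * (1 - e) for p, e in zip(query_weights, measures))
    alpha = 0.5 * math.log(num / den)
    # queries the ranker handles poorly get larger weight next round
    unnorm = [math.exp(-alpha * e) for e in measures]
    z = sum(unnorm)
    return alpha, [u / z for u in unnorm]
```

The exponential re-weighting is what makes the next weak ranker concentrate on the hard queries mentioned above.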
74 Theoretical Analysis The training error is continuously reduced during the learning phase when a certain per-round quantity is < 1. The analysis assumes linearity of the IR evaluation measure.
75 Practical Solution In practice we cannot guarantee that the per-round quantity is < 1, so AdaRank might not converge. Solution: when a weak ranker is selected, test whether the quantity is < 1; if not, select the second-best weak ranker.
76 Experimental Results [Figure: TD2003 data]
77 SVM-MAP (Y. Yue, et al. SIGIR 2007) Use the SVM framework for optimization; incorporate IR evaluation measures in the listwise constraints: the score w.r.t. the true labeling should be higher than that w.r.t. any incorrect labeling, and the sum of slacks ξ_i upper bounds Σ_i (1 − E(y, y')). Formally: min (1/2)||w||² + (C/N) Σ_i ξ_i, s.t. ∀ y' ≠ y: F(q_i, d_i, y) ≥ F(q_i, d_i, y') + (1 − E(y', y)) − ξ_i.
78 Challenges in SVM-MAP For Average Precision, the true labeling is a ranking where the relevant documents are all ranked at the front; an incorrect labeling is any other ranking. There is an exponential number of rankings, and thus an exponential number of constraints!
79 Structural SVM Training STEP 1: Perform optimization on only the current working set of constraints. STEP 2: Use the model learned in STEP 1 to find the most violated constraint in the exponential set of constraints. STEP 3: If the constraint returned in STEP 2 is more violated than the most violated constraint in the working set, add it to the working set. Steps 1–3 are guaranteed to loop for at most a polynomial number of iterations [Tsochantaridis et al. 2005].
80 Finding the Most Violated Constraint The most violated y' is argmax_{y' ∈ Y} [ (1 − E(y', y)) + F(q, d, y') ]. If the relevance at each position is fixed, E(y', y) is the same; and if the documents are sorted by their scores in descending order, F(q, d, y') is maximized. Strategy: sort the relevant and irrelevant documents to form a perfect ranking; then, starting from the perfect ranking, swap adjacent relevant and irrelevant documents to find the optimal interleaving of the two sorted lists. For real cases the complexity is O(n log n).
81 Experimental Results [Table: Mean Average Precision of SVM-MAP vs. other SVM methods (SVM-ROC, SVM-ACC, SVM-ACC2, SVM-ACC3, SVM-ACC4) on TREC 9 and TREC 10 Indri and Submissions datasets, also without the best run.]
82 SoftRank (M. Taylor, et al. LR4IR 2007) Key idea: avoid sorting by treating scores as random variables. Steps: construct the score distribution; map the score distribution to a rank distribution; compute the expected NDCG (SoftNDCG), which is smooth and differentiable; optimize SoftNDCG by gradient descent.
83 Construct Score Distribution Consider the score s_j of each document as a random variable, modeled as a Gaussian.
84 From Scores to Rank Distribution Compute the probability of document i being ranked before document j, then the expected rank, and finally the rank distribution, which defines the probability of a document being ranked at each particular position.
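These steps can be sketched as follows (a minimal sketch under the assumption of equal variance sigma² for all scores: the difference s_i − s_j is then Gaussian with variance 2·sigma², giving the pairwise probability pi_ij, and each document's rank distribution is built up by adding opponents one at a time):

```python
import math

def rank_distribution(means, sigma=1.0):
    """Each score s_j ~ N(means[j], sigma^2).  Returns, for each document,
    the probability of being at each rank position (0 = top)."""
    def phi(x):                       # standard normal CDF
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    n = len(means)
    dists = []
    for j in range(n):
        p = [1.0]                     # document j alone sits at rank 0
        for i in range(n):
            if i == j:
                continue
            pi = phi((means[i] - means[j]) / (sigma * math.sqrt(2.0)))
            # opponent i beats j with prob pi, pushing j one rank down
            p = [(p[r - 1] if r > 0 else 0.0) * pi
                 + (p[r] if r < len(p) else 0.0) * (1.0 - pi)
                 for r in range(len(p) + 1)]
        dists.append(p)
    return dists
```

SoftNDCG then replaces the hard position discount of each document by its expectation under this rank distribution.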
85 Soft NDCG From hard to soft: since SoftNDCG depends on the rank distribution, the optimization can be carried out once the rank distribution is known.
86 Experimental Results [Figure; here G_n denotes the measure at position n]
87 RankGP (J. Yeh, et al. LR4IR 2007) RankGP = preprocessing + learning + prediction. Preprocessing: query-based normalization of features. Learning: single-population GP, widely used to optimize non-smooth, non-differentiable objectives. Prediction.
88 Ranking Function Learning Define the ranking function as a tree and use GP to learn the optimal tree. Evolution mechanisms: crossover, mutation, reproduction, tournament selection. Fitness function: an IR evaluation measure (MAP).
89 Experimental Results [Figure: TD2003 data]
90 LambdaRank (C. Burges, et al. NIPS 2006) Basic idea: IR evaluation measures are non-smooth and non-differentiable, and it is usually much easier to specify rules determining how to change the rank order of documents than to construct a smooth loss function. Example: two relevant documents D_1 and D_2, with WTA as the evaluation measure; the rules describe how to move the documents to achieve WTA = 1.
91 Lambda Function, Instead of Loss Function Instead of explicitly defining the loss function C, directly define its gradient: ∂C/∂s_j = λ_j(s_1, l_1, ..., s_n, l_n), where s denotes the scores and l the labels. A particular choice of λ was shown to effectively optimize NDCG.
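One commonly cited form of that λ (a sketch, with signs arranged by my convention so that the returned value is added to the more relevant document's lambda and subtracted from the other's) scales the RankNet pairwise gradient by the NDCG change from swapping the two documents:

```python
import math

def lambda_for_pair(s_i, s_j, delta_ndcg):
    """Doc i is more relevant than doc j; s_i, s_j are current model scores.
    The pair's push is large when the pair is mis-ordered (s_i < s_j) and
    when swapping the two documents would change NDCG a lot."""
    return abs(delta_ndcg) / (1.0 + math.exp(s_i - s_j))
```

Summing these pair contributions per document yields the "force" that moves each document up or down at every gradient step, without ever writing down the loss itself.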
92 Experimental Results [Figure: Web data]
93 Other Work in this Category A combined component approach for finding collection-adapted ranking functions based on genetic programming (CCA), (H. Almeida, M. Goncalves, et al. SIGIR 2007). A Markov random field model for term dependencies, (D. Metzler, W. Croft. SIGIR 2005).
94 Discussions Progress has been made as compared to the pointwise and pairwise approaches. Unsolved problems: Is the upper bound in SVM-MAP tight enough? Is SoftNDCG really correlated with NDCG? Is there any theoretical justification of the correlation between the implicit loss function used in LambdaRank and NDCG? Multiple measures are not necessarily consistent with each other: which one should be optimized?
95 Define Listwise Loss Function Indirectly optimize IR evaluation measures while considering the properties of ranking for IR. Examples: RankCosine uses the cosine similarity between score vectors as the loss function; ListNet uses the KL divergence between permutation probability distributions as the loss function; ListMLE uses the negative log likelihood of the ground-truth permutation as the loss function. Plus theoretical analysis of listwise loss functions.
96 RankCosine (T. Qin, T. Liu, et al. IP&M 2007) The loss is defined via the cosine similarity between the predicted score vector and the ground-truth score vector. Properties: the loss is computed using all documents of a query, and is insensitive to the number of documents / document pairs per query; correct ranking at the top can be emphasized by setting high ground-truth scores for top documents; the loss is bounded.
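A minimal sketch of such a cosine-based loss (written here as (1 − cos)/2 so that it is bounded in [0, 1], matching the boundedness property above; the exact scaling is my own choice):

```python
import math

def cosine_loss(truth_scores, model_scores):
    """Half of (1 - cosine similarity) between the ground-truth score
    vector and the model's score vector for one query; bounded in [0, 1]."""
    dot = sum(a * b for a, b in zip(truth_scores, model_scores))
    norm_t = math.sqrt(sum(a * a for a in truth_scores))
    norm_m = math.sqrt(sum(b * b for b in model_scores))
    return 0.5 * (1.0 - dot / (norm_t * norm_m))
```

Because the loss depends only on the angle between the two score vectors, it is invariant to rescaling the model's scores, and the number of documents per query does not change its range.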
97 Optimization Use an additive model as the ranking function, sum the loss over queries, and use gradient descent for optimization.
98 Experimental Results [Figure]
99 More Investigation on Loss Functions A simple example: function f: f(A)=3, f(B)=0, f(C)=1, giving ranking ACB; function h: h(A)=4, h(B)=6, h(C)=3, giving ranking BAC; ground truth g: g(A)=6, g(B)=4, g(C)=3, giving ranking ABC. Question: which function is closer to the ground truth? Based on pointwise similarity: sim(f,g) < sim(g,h). Based on pairwise similarity: sim(f,g) = sim(g,h). Based on cosine similarity between score vectors f: <3,0,1>, g: <6,4,3>, h: <4,6,3>: sim(f,g) = 0.85 and sim(g,h) = 0.93, so h looks closer. However, according to position discount, f should be closer to g!
100 Permutation Probability Distribution Question: how to represent a ranked list? Solution: map the ranked list to a permutation probability distribution, a more informative representation, since permutations and ranked lists are in one-to-one correspondence.
101 Luce Model: Defining Permutation Probability The probability of permutation π is defined as P_s(π) = Π_{j=1}^{n} φ(s_{π(j)}) / Σ_{k=j}^{n} φ(s_{π(k)}). Example: P_f(ABC) = [φ(f(A)) / (φ(f(A)) + φ(f(B)) + φ(f(C)))] × [φ(f(B)) / (φ(f(B)) + φ(f(C)))] × [φ(f(C)) / φ(f(C))], i.e., P(A ranked No.1) × P(B ranked No.2 | A ranked No.1) × P(C ranked No.3 | A ranked No.1, B ranked No.2), where P(B ranked No.2 | A ranked No.1) = P(B ranked No.1) / (1 − P(A ranked No.1)).
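The product formula above translates directly into code (a minimal sketch; the dict-based score representation is my own choice):

```python
import math

def luce_probability(perm, scores, phi=math.exp):
    """P_s(perm) under the Luce model: product over positions j of
    phi(s_{perm[j]}) divided by the sum of phi over the remaining items."""
    vals = [phi(scores[d]) for d in perm]
    p = 1.0
    for j in range(len(vals)):
        p *= vals[j] / sum(vals[j:])
    return p
```

Summing `luce_probability` over all n! permutations gives 1, i.e., it is a proper distribution over ranked lists.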
102 Properties of Luce Model Nice properties: continuous, differentiable, and convex w.r.t. the scores s. Suppose g(A) = 6, g(B) = 4, g(C) = 3; then P(ABC) is largest and P(CBA) is smallest, and starting from the ranking ABC, swapping any two documents decreases the probability, e.g., P(ABC) > P(ACB). Translation invariant when φ(s) = exp(s); scale invariant when φ(s) = a·s.
103 Distance between Ranked Lists With φ = exp, use the KL divergence between the two permutation probability distributions to measure the difference between ranked lists: dis(f,g) = 0.46 is smaller than dis(g,h), so f is closer to g, consistent with the position-discount intuition.
104 ListNet (Z. Cao, T. Qin, T. Liu, et al. ICML 2007) Loss function: KL divergence between the two permutation probability distributions (φ = exp), computed in practice over top-k subgroups G(j_1, ..., j_k) of permutations. Model: neural network. Algorithm: gradient descent.
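For the widely used k = 1 special case, the top-one probability of each document is just a softmax over the scores, and the KL-divergence loss reduces to a cross entropy between the two softmax distributions. A minimal sketch of this top-1 case (the full ListNet loss uses top-k permutation probabilities):

```python
import math

def listnet_top1_loss(truth_scores, model_scores):
    """Cross entropy between the top-one distributions induced by the
    ground-truth scores and the model scores (phi = exp)."""
    def top1(xs):
        m = max(xs)
        exps = [math.exp(x - m) for x in xs]   # numerically stable softmax
        z = sum(exps)
        return [e / z for e in exps]
    return -sum(pt * math.log(pm)
                for pt, pm in zip(top1(truth_scores), top1(model_scores)))
```

By Gibbs' inequality the loss is minimized exactly when the model's top-one distribution matches the ground truth's.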
105 Experimental Results [Figure: training performance on the TD2003 dataset; the listwise loss (ListNet) is more correlated with the evaluation measure than the pairwise loss (RankNet).]
106 Experimental Results [Figure]
107 Limitations of ListNet Still too complex in practice: O(n · n!). The performance of ListNet depends on the mapping function from ground-truth labels to scores, but ground truths are permutations (ranked lists) or groups of permutations, not naturally scores.
108 Solution Never assume the ground truth to be scores. Given the scores output by the ranking model, compute the likelihood of a ground-truth permutation, and maximize it to learn the model parameters. The complexity is reduced from O(n · n!) to O(n).
109 Likelihood Loss The permutation likelihood under the Luce model yields the likelihood loss: the negative log likelihood of the ground-truth permutation given the model's scores.
110 ListMLE Algorithm (F. Xia, T. Liu, et al. ICML 2008) Loss function: likelihood loss. Model: neural network. Algorithm: stochastic gradient descent.
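The likelihood loss evaluates the Luce-model probability of the single ground-truth permutation, which is why only O(n) work is needed. A minimal sketch with φ = exp (the index-based permutation representation is my own choice):

```python
import math

def listmle_loss(model_scores, truth_perm):
    """Negative log likelihood of the ground-truth permutation under the
    Luce model with phi = exp.  truth_perm lists document indices in
    ground-truth order."""
    vals = [math.exp(model_scores[i]) for i in truth_perm]
    suffix, loglik = 0.0, 0.0
    for v in reversed(vals):          # suffix sums give O(n) evaluation
        suffix += v
        loglik += math.log(v / suffix)
    return -loglik
```

The loss is smallest when the model's scores order the documents exactly as the ground-truth permutation does.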
111 Experimental Results [Figure: TD2004 dataset]
112 Listwise Ranking Framework Input space X: elements are sets of objects to be ranked. Output space Y: elements are permutations of the objects. Joint probability distribution P_XY. Hypothesis space H. Expected loss defined accordingly.
113 True Loss in Listwise Ranking To analyze the theoretical properties of listwise loss functions, the true loss of ranking must be defined. The true loss describes the difference between a given ranked list (permutation) and the ground-truth ranked list (permutation). Ideally, the true loss should be cost-sensitive; for simplicity, we investigate the 0–1 loss.
114 Surrogate Loss in Listwise Approach The widely used ranking function sorts documents by their scores, and the corresponding empirical loss is defined with the 0–1 true loss. Challenge: due to the sorting function and the 0–1 loss, the empirical loss is non-differentiable; to tackle the problem, a surrogate loss is used.
115 Surrogate Loss Minimization Framework RankCosine, ListNet, and ListMLE can all be fitted into this framework of surrogate loss minimization.
116 Evaluation of Surrogate Loss Continuity, differentiability, and convexity; computational efficiency; statistical consistency; soundness. These properties have been well studied in classification, but not sufficiently in ranking.
117 Continuity, Differentiability, Convexity, Efficiency
Cosine loss (RankCosine): continuous, differentiable, not convex; O(n).
Cross-entropy loss (ListNet): continuous, differentiable, convex; O(n · n!).
Likelihood loss (ListMLE): continuous, differentiable, convex; O(n).
118 Statistical Consistency When minimizing the surrogate loss is equivalent to minimizing the expected 0–1 loss, we say the surrogate loss function is consistent. A theory for verifying consistency in ranking requires: the ranking of an object is inherently determined by itself; and, starting from a ground-truth permutation, the loss increases after exchanging the positions of two objects, with the speed of increase sensitive to the positions of the objects.
119 Statistical Consistency (2) It has been proven (F. Xia, T. Liu, et al. ICML 2008) that the cosine loss, the cross-entropy loss, and the likelihood loss are all statistically consistent.
120 Soundness Cosine loss is not very sound. Suppose we have two documents with D2 preferred to D1. [Figure: cosine loss plotted over the score plane (g1, g2); the line g1 = g2 separates the incorrect-ranking region (g1 > g2) from the correct-ranking region (g2 > g1).]
121 Soundness (2) Cross-entropy loss is not very sound. Suppose we have two documents with D2 preferred to D1. [Figure: cross-entropy loss over the score plane (g1, g2); the line g1 = g2 separates the incorrect-ranking region from the correct-ranking region.]
122 Soundness (3) Likelihood loss is sound. Suppose we have two documents with D2 preferred to D1. [Figure: likelihood loss over the score plane (g1, g2); the line g1 = g2 separates the incorrect-ranking region from the correct-ranking region.]
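The two-document comparison can be reproduced numerically (the score values here are hypothetical choices of my own, not the exact figures from the slides): the cosine loss can assign a larger loss to a correctly ordered score vector than to an incorrectly ordered one, whereas the likelihood loss always penalizes the incorrect ordering more.

```python
import math

# Two documents, with D2 preferred to D1; ground-truth scores favor D2.
TARGETS = [1.0, 2.0]

def cosine_loss(scores):
    """Cosine loss between model scores and ground-truth scores."""
    dot = sum(s * t for s, t in zip(scores, TARGETS))
    ns = math.sqrt(sum(s * s for s in scores))
    nt = math.sqrt(sum(t * t for t in TARGETS))
    return 0.5 * (1.0 - dot / (ns * nt))

def likelihood_loss(scores):
    """Negative log probability that D2 is ranked first (Plackett-Luce)."""
    g1, g2 = scores
    return math.log(1.0 + math.exp(g1 - g2))

correct_extreme = [-5.0, 1.0]   # ranks D2 first, by a wide margin: correct
incorrect       = [2.0, 1.0]    # ranks D1 first: incorrect

# Cosine loss can penalize a correct ranking more than an incorrect one...
assert cosine_loss(correct_extreme) > cosine_loss(incorrect)
# ...while likelihood loss always prefers the correct ranking.
assert likelihood_loss(correct_extreme) < likelihood_loss(incorrect)
```

This is the sense in which the likelihood loss is called sound: its value depends only on the score margin in favor of the correct ordering.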
123 Discussion Progress has been made compared to the pointwise and pairwise approaches. The likelihood loss seems to be one of the best listwise loss functions according to the above discussion. In real ranking problems, the true loss should be cost-sensitive; this needs further investigation.
124 Summary Ranking is a new application area, and also a new machine learning problem. Learning to rank has evolved from the pointwise and pairwise approaches to the listwise approach.
125 Overview of algorithms, from pointwise to pairwise to listwise (* papers published by our group): Ranking SVM, RankBoost, ListNet, ListMLE, SVM-MAP, AdaRank, RankCosine, SoftRank, RankGP, CCA, LambdaRank, Supervised Rank Aggregation, MPRank, RankNet, MHR, QBRank, FRank, LDM, IRSVM, GBRank, Relational Ranking, Rank Refinement, Discriminative IR model, McRank.
126 Future Work From loss function to ranking function: beyond ranking functions on single documents; beyond a single ranking model for all kinds of queries. From the learning process to preprocessing and post-processing: automatic ground-truth mining; feature extraction and selection, which might be even more important in practice than selecting learning algorithms; dedup and diversification are helpful. From discriminative learning to generative learning. From algorithms to learning theory: the true loss for ranking; the generalization ability of the listwise approach.
127 You May Want to Read Ground Truth Mining: Active Exploration for Learning Rankings from Clickthrough Data (F. Radlinski, T. Joachims, KDD 2007); Query Chains: Learning to Rank from Implicit Feedback (F. Radlinski, T. Joachims, KDD 2005). Feature Selection: Feature Selection for Ranking (X. Geng, T.-Y. Liu, et al., SIGIR 2007). Ranking Function: Multiple Hyperplane Ranker (T. Qin, T.-Y. Liu, et al., SIGIR 2007); Multiple Nested Ranker (I. Matveeva, C.J.C. Burges, et al., SIGIR 2006); Supervised Rank Aggregation (Y.-T. Liu, T.-Y. Liu, et al., WWW 2007); Learning to Rank Relational Objects (T. Qin, T.-Y. Liu, et al., WWW 2008); Ranking Refinement and Its Application to Information Retrieval (R. Jin, H. Valizadegan, et al., WWW 2008).
128 References R. Herbrich, T. Graepel, et al. Support Vector Learning for Ordinal Regression, ICANN 1999. T. Joachims, Optimizing Search Engines Using Clickthrough Data, KDD 2002. Y. Freund, R. Iyer, et al. An Efficient Boosting Algorithm for Combining Preferences, JMLR 2003. R. Nallapati, Discriminative Models for Information Retrieval, SIGIR 2004. J. Gao, H. Qi, et al. Linear Discriminant Model for Information Retrieval, SIGIR 2005. D. Metzler, W. Croft. A Markov Random Field Model for Term Dependencies, SIGIR 2005. C.J.C. Burges, T. Shaked, et al. Learning to Rank Using Gradient Descent, ICML 2005. I. Tsochantaridis, T. Joachims, et al. Large Margin Methods for Structured and Interdependent Output Variables, JMLR 2005. F. Radlinski, T. Joachims, Query Chains: Learning to Rank from Implicit Feedback, KDD 2005. I. Matveeva, C.J.C. Burges, et al. High Accuracy Retrieval with Multiple Nested Ranker, SIGIR 2006. C.J.C. Burges, R. Ragno, et al. Learning to Rank with Nonsmooth Cost Functions, NIPS 2006. Y. Cao, J. Xu, et al. Adapting Ranking SVM to Document Retrieval, SIGIR 2006. Z. Cao, T. Qin, et al. Learning to Rank: From Pairwise Approach to Listwise Approach, ICML 2007.
129 References (2) T. Qin, T.-Y. Liu, et al. Multiple Hyperplane Ranker, SIGIR 2007. T.-Y. Liu, J. Xu, et al. LETOR: Benchmark Dataset for Research on Learning to Rank for Information Retrieval, LR4IR 2007. M.-F. Tsai, T.-Y. Liu, et al. FRank: A Ranking Method with Fidelity Loss, SIGIR 2007. Z. Zheng, H. Zha, et al. A General Boosting Method and Its Application to Learning Ranking Functions for Web Search, NIPS 2007. Z. Zheng, H. Zha, et al. A Regression Framework for Learning Ranking Functions Using Relative Relevance Judgments, SIGIR 2007. M. Taylor, J. Guiver, et al. SoftRank: Optimising Non-Smooth Rank Metrics, LR4IR 2007. J.-Y. Yeh, J.-Y. Lin, et al. Learning to Rank for Information Retrieval Using Genetic Programming, LR4IR 2007. T. Qin, T.-Y. Liu, et al. Query-level Loss Functions for Information Retrieval, Information Processing and Management, 2008. X. Geng, T.-Y. Liu, et al. Feature Selection for Ranking, SIGIR 2007.
130 References (3) Y. Yue, T. Finley, et al. A Support Vector Method for Optimizing Average Precision, SIGIR 2007. Y.-T. Liu, T.-Y. Liu, et al. Supervised Rank Aggregation, WWW 2007. J. Xu and H. Li, AdaRank: A Boosting Algorithm for Information Retrieval, SIGIR 2007. F. Radlinski, T. Joachims. Active Exploration for Learning Rankings from Clickthrough Data, KDD 2007. P. Li, C. Burges, et al. McRank: Learning to Rank Using Multiple Classification and Gradient Boosting, NIPS 2007. C. Cortes, M. Mohri, et al. Magnitude-Preserving Ranking Algorithms, ICML 2007. H. Almeida, M. Goncalves, et al. A Combined Component Approach for Finding Collection-Adapted Ranking Functions Based on Genetic Programming, SIGIR 2007. T. Qin, T.-Y. Liu, et al. Learning to Rank Relational Objects and Its Application to Web Search, WWW 2008. R. Jin, H. Valizadegan, et al. Ranking Refinement and Its Application to Information Retrieval, WWW 2008. F. Xia, T.-Y. Liu, et al. Listwise Approach to Learning to Rank: Theory and Algorithm, ICML 2008. Y. Lan, T.-Y. Liu, et al. On Generalization Ability of Learning to Rank Algorithms, ICML 2008.
131 Acknowledgement Hang Li (Microsoft Research Asia), Wei-Ying Ma (Microsoft Research Asia), Jun Xu (Microsoft Research Asia), Tao Qin (Tsinghua University), Soumen Chakrabarti (IIT Bombay).
More information