Recommendation and Advertising. Shannon Quinn (with thanks to J. Leskovec, A. Rajaraman, and J. Ullman of Stanford University)

Similar documents
CS 124/LINGUIST 180 From Languages to Information

Thanks to Jure Leskovec, Anand Rajaraman, Jeff Ullman

CS246: Mining Massive Datasets Jure Leskovec, Stanford University

CS246: Mining Massive Datasets Jure Leskovec, Stanford University

Mining of Massive Datasets Jure Leskovec, Anand Rajaraman, Jeff Ullman Stanford University Infinite data. Filtering data streams

Introduction to Data Mining

CS246: Mining Massive Datasets Jure Leskovec, Stanford University

CS246: Mining Massive Datasets Jure Leskovec, Stanford University

BBS654 Data Mining. Pinar Duygulu

CS 124/LINGUIST 180 From Languages to Information

COMP 465: Data Mining Recommender Systems

CS246: Mining Massive Datasets Jure Leskovec, Stanford University

CS 124/LINGUIST 180 From Languages to Information

CS246: Mining Massive Datasets Jure Leskovec, Stanford University

CS 5614: (Big) Data Management Systems. B. Aditya Prakash Lecture #16: Recommendation Systems

CS246: Mining Massive Datasets Jure Leskovec, Stanford University

Mining of Massive Datasets Jure Leskovec, Anand Rajaraman, Jeff Ullman Stanford University

Data Mining Techniques

Data Mining Techniques

Real-time Recommendations on Spark. Jan Neumann, Sridhar Alla (Comcast Labs) DC Spark Interactive Meetup East May

Recommendation Systems

Recommender Systems Collaborative Filtering and Matrix Factorization

Machine Learning and Data Mining. Collaborative Filtering & Recommender Systems. Kalev Kask

CSE 258 Lecture 8. Web Mining and Recommender Systems. Extensions of latent-factor models, (and more on the Netflix prize)

CSE 158 Lecture 8. Web Mining and Recommender Systems. Extensions of latent-factor models, (and more on the Netflix prize)

Recommender Systems - Content, Collaborative, Hybrid

Recommender Systems New Approaches with Netflix Dataset

Recommender Systems. Collaborative Filtering & Content-Based Recommending

CS224W Project: Recommendation System Models in Product Rating Predictions

Collaborative Filtering Applied to Educational Data Mining

Recommender System. What is it? How to build it? Challenges. R package: recommenderlab

Recommendation Algorithms: Collaborative Filtering. CSE 6111 Presentation Advanced Algorithms Fall Presented by: Farzana Yasmeen

Data Mining Lecture 2: Recommender Systems

Recommender Systems - Introduction. Data Mining Lecture 2: Recommender Systems

Recommender Systems 6CCS3WSN-7CCSMWAL

Web Personalization & Recommender Systems

Performance Comparison of Algorithms for Movie Rating Estimation

CPSC 340: Machine Learning and Data Mining. Recommender Systems Fall 2017

Using Social Networks to Improve Movie Rating Predictions

Web Personalisation and Recommender Systems

CS 572: Information Retrieval

CS246: Mining Massive Datasets Jure Leskovec, Stanford University

Recommender Systems. Techniques of AI

Knowledge Discovery and Data Mining 1 (VO) ( )

Web Personalization & Recommender Systems

General Instructions. Questions

COMP6237 Data Mining Making Recommendations. Jonathon Hare

Machine Learning using MapReduce

CSE 6242 A / CX 4242 DVA. March 6, Dimension Reduction. Guest Lecturer: Jaegul Choo

Collaborative filtering models for recommendations systems

CSE 6242 A / CS 4803 DVA. Feb 12, Dimension Reduction. Guest Lecturer: Jaegul Choo

Mining Web Data. Lijun Zhang

CS246: Mining Massive Datasets Jure Leskovec, Stanford University

Map-Reduce and Adwords Problem

A Brief Look at Optimization

Use of KNN for the Netflix Prize Ted Hong, Dimitris Tsamis Stanford University

CSC2420 Fall 2012: Algorithm Design, Analysis and Theory An introductory (i.e. foundational) level graduate course.

Introduction to Data Science Lecture 8 Unsupervised Learning. CS 194 Fall 2015 John Canny

CS 345A Data Mining Lecture 1. Introduction to Web Mining

Reddit Recommendation System Daniel Poon, Yu Wu, David (Qifan) Zhang CS229, Stanford University December 11 th, 2011

Part 12: Advanced Topics in Collaborative Filtering. Francesco Ricci

By Atul S. Kulkarni Graduate Student, University of Minnesota Duluth. Under The Guidance of Dr. Richard Maclin

Weighted Alternating Least Squares (WALS) for Movie Recommendations) Drew Hodun SCPD. Abstract

Using Data Mining to Determine User-Specific Movie Ratings

Matrix-Vector Multiplication by MapReduce. From Rajaraman / Ullman- Ch.2 Part 1

CSE 6242 / CX October 9, Dimension Reduction. Guest Lecturer: Jaegul Choo

Mining Web Data. Lijun Zhang

CS246: Mining Massive Datasets Jure Leskovec, Stanford University

Classic Enterprise. Digital Marketing. Technology and Your Business. Pankaj

Hybrid Recommendation System Using Clustering and Collaborative Filtering

Sparse Estimation of Movie Preferences via Constrained Optimization

Sublinear Algorithms for Big Data Analysis

Part 11: Collaborative Filtering. Francesco Ricci

CS249: ADVANCED DATA MINING

Recommendation System for Netflix

DS504/CS586: Big Data Analytics Big Data Clustering Prof. Yanhua Li

Predicting User Ratings Using Status Models on Amazon.com

An Empirical Comparison of Collaborative Filtering Approaches on Netflix Data

CSE 547: Machine Learning for Big Data Spring Problem Set 2. Please read the homework submission policies.

Collaborative Filtering using Weighted BiPartite Graph Projection A Recommendation System for Yelp

CS246: Mining Massive Datasets Jure Leskovec, Stanford University

Jeff Howbert Introduction to Machine Learning Winter

Online Bipartite Matching: A Survey and A New Problem

A probabilistic model to resolve diversity-accuracy challenge of recommendation systems

THE goal of a recommender system is to make predictions

Towards a hybrid approach to Netflix Challenge

Lecture on Modeling Tools for Clustering & Regression

CSE 546 Machine Learning, Autumn 2013 Homework 2

How to maximize the revenue of market

7. Decision or classification trees

Collaborative Filtering for Netflix

Gene expression & Clustering (Chapter 10)

Personalize Movie Recommendation System CS 229 Project Final Writeup

arxiv: v4 [cs.ir] 28 Jul 2016

Recommender System Optimization through Collaborative Filtering

Predictive Indexing for Fast Search

CS246: Mining Massive Datasets Jure Leskovec, Stanford University

Part 11: Collaborative Filtering. Francesco Ricci

Big Data Analytics CSCI 4030

Instructor: Stefan Savev

Transcription:

Recommendation and Advertising Shannon Quinn (with thanks to J. Leskovec, A. Rajaraman, and J. Ullman of Stanford University)

Lecture breakdown. Part 1: Advertising (bipartite matching, AdWords). Part 2: Recommendation (collaborative filtering, latent factor models).

Part 1: Advertising on the Web

Example: Bipartite Matching. [Figure: bipartite graph with boys 1-4 on the left and girls a-d on the right.] Nodes: boys and girls; edges: preferences. Goal: match boys to girls so that the maximum number of preferences is satisfied.

Example: Bipartite Matching. M = {(1,a), (2,b), (3,d)} is a matching. Cardinality of the matching = |M| = 3.

Example: Bipartite Matching. M = {(1,c), (2,b), (3,d), (4,a)} is a perfect matching. Perfect matching: all vertices of the graph are matched. Maximum matching: a matching that contains the largest possible number of matches.

Matching Algorithm. Problem: find a maximum matching for a given bipartite graph (a perfect one if it exists). There is a polynomial-time offline algorithm based on augmenting paths (Hopcroft & Karp 1973; see http://en.wikipedia.org/wiki/Hopcroft-Karp_algorithm). But what if we do not know the entire graph upfront?

Online Graph Matching Problem. Initially, we are given the set of boys. In each round, one girl's choices are revealed, i.e., that girl's edges are revealed. At that time, we have to decide to either pair the girl with a boy, or not pair the girl with any boy. Example application: assigning tasks to servers.

Greedy Algorithm. Greedy algorithm for the online graph matching problem: pair the new girl with any eligible boy; if there is none, do not pair the girl. How good is the algorithm? (A sketch follows below.)
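To make the greedy rule concrete, here is a minimal Python sketch (an illustration, not code from the lecture): girls arrive one at a time with their lists of acceptable boys, and each is paired with the first still-unmatched boy, if any.

    def greedy_online_matching(arrivals):
        """arrivals: list of (girl, [acceptable boys]) in arrival order."""
        matched_boys = set()
        matching = []
        for girl, boys in arrivals:
            for boy in boys:
                if boy not in matched_boys:   # pair with any eligible boy
                    matched_boys.add(boy)
                    matching.append((girl, boy))
                    break                     # girl is paired; next arrival
        return matching

    # Made-up worst case: greedy commits boy 1 to girl a, blocking girl b.
    print(greedy_online_matching([("a", [1, 2]), ("b", [1])]))
    # -> [('a', 1)]: greedy finds 1 match; the optimum (a-2, b-1) finds 2.

This tiny instance already shows greedy achieving half the optimum, previewing the competitive-ratio discussion on the next slide.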

Competitive Ratio. For input I, suppose greedy produces matching M_greedy while an optimal matching is M_opt. Competitive ratio = min over all possible inputs I of (|M_greedy| / |M_opt|), i.e., greedy's worst performance over all possible inputs I.

Worst-case Scenario. [Figure: a bipartite graph on which greedy commits to (1,a) and (2,b) early, after which the remaining girls cannot be matched, while an optimal matching pairs everyone; greedy achieves only half the optimal matching here.]

History of Web Advertising. Banner ads (1995-2001): the initial form of web advertising. Popular websites charged $X for every 1,000 impressions of the ad, called the CPM rate (cost per thousand impressions; "mille" is Latin for thousand). Modeled similarly to TV and magazine ads. From untargeted to demographically targeted. Low click-through rates and low ROI for advertisers.

Performance-based Advertising. Introduced by Overture around 2000: advertisers bid on search keywords; when someone searches for that keyword, the highest bidder's ad is shown, and the advertiser is charged only if the ad is clicked on. A similar model was adopted by Google, with some changes, around 2002, called AdWords.

Ads vs. Search Results

Web 2.0. Performance-based advertising works! It is a multi-billion-dollar industry. Interesting problem: what ads to show for a given query? (Today's lecture.) If I am an advertiser, which search terms should I bid on and how much should I bid? (Not the focus of today's lecture.)

AdWords Problem. Given: 1. A set of bids by advertisers for search queries. 2. A click-through rate for each advertiser-query pair. 3. A budget for each advertiser (say, for one month). 4. A limit on the number of ads to be displayed with each search query. Respond to each search query with a set of advertisers such that: 1. The size of the set is no larger than the limit on the number of ads per query. 2. Each advertiser has bid on the search query. 3. Each advertiser has enough budget left to pay for the ad if it is clicked upon.

AdWords Problem. A stream of queries arrives at the search engine: q1, q2, ... Several advertisers bid on each query. When query q_i arrives, the search engine must pick a subset of advertisers whose ads are shown. Goal: maximize the search engine's revenues. Simple solution: instead of raw bids, use the expected revenue per click (i.e., bid x CTR). Clearly we need an online algorithm!

The AdWords Innovation.
Advertiser | Bid   | CTR  | Bid x CTR (expected revenue)
A          | $1.00 | 1%   | 1 cent
B          | $0.75 | 2%   | 1.5 cents
C          | $0.50 | 2.5% | 1.125 cents

The AdWords Innovation (sorted by expected revenue).
Advertiser | Bid   | CTR  | Bid x CTR
B          | $0.75 | 2%   | 1.5 cents
C          | $0.50 | 2.5% | 1.125 cents
A          | $1.00 | 1%   | 1 cent

Complications: Budget. Two complications: budgets, and the CTR of an ad is unknown. Each advertiser has a limited budget, and the search engine guarantees that the advertiser will not be charged more than their daily budget.

Complications: CTR. Each ad has a different likelihood of being clicked: e.g., one advertiser bids $2 with click probability 0.1, another bids $1 with click probability 0.5. Click-through rate (CTR) is measured historically. Very hard problem: exploration vs. exploitation. Exploit: should we keep showing an ad for which we have good estimates of click-through rate, or Explore: shall we show a brand-new ad to get a better sense of its click-through rate?

BALANCE Algorithm [MSVV]. BALANCE algorithm by Mehta, Saberi, Vazirani, and Vazirani: for each query, pick the advertiser with the largest unspent budget; break ties arbitrarily (but in a deterministic way).

Example: BALANCE. Two advertisers A and B. A bids on query x; B bids on x and y. Both have budgets of $4. Query stream: x x x x y y y y. BALANCE choice: A B A B B B _ _ (the last two queries go unserved). Optimal: A A A A B B B B. In general: for BALANCE on two advertisers, competitive ratio = 3/4. (A sketch reproducing this example follows below.)
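Below is a minimal Python sketch of BALANCE on this exact example, assuming every served ad costs $1 and ties are broken in favor of the first-listed advertiser; the function and data layout are illustrative, not from the lecture.

    def balance(queries, bidders, budgets):
        """bidders: query -> list of advertisers; budgets: advertiser -> dollars."""
        spent = {adv: 0 for adv in budgets}
        choices = []
        for q in queries:
            eligible = [a for a in bidders[q] if budgets[a] - spent[a] >= 1]
            if not eligible:
                choices.append(None)          # no advertiser has budget left
                continue
            # Largest unspent budget wins; max() keeps the first one on ties.
            pick = max(eligible, key=lambda a: budgets[a] - spent[a])
            spent[pick] += 1
            choices.append(pick)
        return choices

    stream = list("xxxxyyyy")
    print(balance(stream, {"x": ["A", "B"], "y": ["B"]}, {"A": 4, "B": 4}))
    # -> ['A', 'B', 'A', 'B', 'B', 'B', None, None]: revenue 6 vs. optimal 8 = 3/4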

BALANCE: General Result. In the general case, the worst competitive ratio of BALANCE is 1 - 1/e = approx. 0.63. Interestingly, no online algorithm has a better competitive ratio!

Part 2: Recommender Systems

Recommendations. Examples: search vs. recommendations. Items: products, web sites, blogs, news items, ...

Sidenote: The Long Tail. [Figure: the long-tail distribution of item popularity. Source: Chris Anderson (2004).]

Formal Model. X = set of customers. S = set of items. Utility function u: X x S -> R, where R = set of ratings. R is a totally ordered set, e.g., 0-5 stars, or a real number in [0,1].

Utility Matrix. [Table: example utility matrix over movies Avatar, LOTR, Matrix, and Pirates for users Alice, Bob, Carol, and David, with ratings on a 0-1 scale; most entries are unknown (blank).]

Key Problems. (1) Gathering known ratings for the matrix: how to collect the data in the utility matrix. (2) Extrapolating unknown ratings from the known ones: we are mainly interested in high unknown ratings; we are not interested in knowing what you don't like, but what you like. (3) Evaluating extrapolation methods: how to measure the success/performance of recommendation methods.

(1) Gathering Ratings. Explicit: ask people to rate items. Doesn't work well in practice, as people can't be bothered. Implicit: learn ratings from user actions, e.g., a purchase implies a high rating. What about low ratings?

(2) Extrapolating Utilities. Key problem: the utility matrix U is sparse; most people have not rated most items. Cold start: new items have no ratings; new users have no history. Three approaches to recommender systems: 1) content-based, 2) collaborative, 3) latent factor based.

Content-based Recommendations. Main idea: recommend items to customer x similar to previous items rated highly by x. Example: movie recommendations, i.e., recommend movies with the same actor(s), director, genre, ... For websites, blogs, and news: recommend other sites with similar content.

Plan of Action. [Figure: from the items a user likes (red circles and triangles), build item profiles, infer a user profile, and match it against the catalog to recommend new items.]

Item Profiles. For each item, create an item profile. A profile is a set (vector) of features. Movies: author, title, actor, director, ... Text: the set of important words in the document. How to pick important features? The usual heuristic from text mining is TF-IDF (term frequency x inverse document frequency); term = feature, document = item. (A sketch follows below.)
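As an illustration of TF-IDF item profiles, here is a hedged Python sketch on three toy documents; the documents and the max-TF normalization are assumptions, not the lecture's data.

    import math
    from collections import Counter

    docs = {
        "doc1": "star wars space opera space battles",
        "doc2": "romantic comedy in space",
        "doc3": "courtroom drama legal battles",
    }

    def tf_idf_profiles(docs):
        n = len(docs)
        tokenized = {d: text.split() for d, text in docs.items()}
        # Document frequency: in how many docs does each term appear?
        df = Counter(t for words in tokenized.values() for t in set(words))
        profiles = {}
        for d, words in tokenized.items():
            tf = Counter(words)
            max_tf = max(tf.values())
            # Normalized term frequency times inverse document frequency.
            profiles[d] = {t: (tf[t] / max_tf) * math.log(n / df[t]) for t in tf}
        return profiles

    for doc, profile in tf_idf_profiles(docs).items():
        print(doc, {t: round(w, 2) for t, w in profile.items()})

Each resulting vector is the item profile; rare, document-specific terms get the largest weights.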

Pros: Content-based Approach. +: No need for data on other users; no cold-start or sparsity problems. +: Able to recommend to users with unique tastes. +: Able to recommend new and unpopular items; no first-rater problem. +: Able to provide explanations of recommended items by listing the content features that caused an item to be recommended.

Cons: Content-based Approach. -: Finding the appropriate features is hard, e.g., for images, movies, music. -: Recommendations for new users: how to build a user profile? -: Overspecialization: never recommends items outside the user's content profile; people might have multiple interests. -: Unable to exploit quality judgments of other users.

Collaborative Filtering. Consider user x. Find a set N of other users whose ratings are similar to x's ratings. Estimate x's ratings based on the ratings of the users in N.

Item-Item CF (|N| = 2). [Figure: a ratings matrix of 6 movies (rows) by 12 users (columns); blank = unknown rating; entries are ratings between 1 and 5.]

Item-Item CF (|N| = 2). [Figure: the same matrix with a "?" entry: estimate the rating of movie 1 by user 5.]

Item-Item CF (|N| = 2). [Figure: the matrix annotated with a similarity column sim(1,m) for movies 1-6: 1.00, -0.18, 0.41, -0.10, -0.31, 0.59.] Here we use Pearson correlation as similarity: 1) subtract the mean rating m_i from each movie i; for movie 1, m_1 = (1+3+5+5+4)/5 = 3.6, so row 1 becomes [-2.6, 0, -0.6, 0, 0, 1.4, 0, 0, 1.4, 0, 0.4, 0]; 2) compute cosine similarities between rows. Neighbor selection: identify movies similar to movie 1 that were rated by user 5.

Item-Item CF (|N| = 2). Compute similarity weights: s_{1,3} = 0.41, s_{1,6} = 0.59.

Item-Item CF (|N| = 2). Predict by taking a weighted average: r_{1,5} = (0.41*2 + 0.59*3) / (0.41 + 0.59) = 2.6. In general: r_ix = ( sum over j in N(i;x) of s_ij * r_jx ) / ( sum over j in N(i;x) of s_ij ). (A sketch of this step follows below.)
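The prediction step is a one-liner in Python; this sketch plugs in the two neighbor values from the slide (similarity 0.41 with rating 2, similarity 0.59 with rating 3).

    def predict(neighbors):
        """neighbors: (similarity, rating) pairs for the most similar items
        that the target user has rated; returns the weighted average."""
        num = sum(s * r for s, r in neighbors)
        den = sum(s for s, _ in neighbors)
        return num / den

    print(predict([(0.41, 2), (0.59, 3)]))   # -> 2.59, i.e., ~2.6 as above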

CF: Common Practice. Before: r_xi = ( sum over j in N(i;x) of s_ij * r_xj ) / ( sum over j in N(i;x) of s_ij ). Define the similarity s_ij of items i and j. Select the k nearest neighbors N(i; x): the items most similar to i that were rated by x. Estimate the rating r_xi as the baseline plus a weighted average of deviations: r_xi = b_xi + ( sum over j in N(i;x) of s_ij * (r_xj - b_xj) ) / ( sum over j in N(i;x) of s_ij ), where b_xi = mu + b_x + b_i is the baseline estimate for r_xi; mu = overall mean movie rating; b_x = rating deviation of user x = (avg. rating of user x) - mu; b_i = rating deviation of movie i. (A sketch follows below.)
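A minimal Python sketch of this common-practice estimate, with all numbers as illustrative stand-ins rather than lecture data:

    def baseline(mu, b_user, b_item):
        return mu + b_user + b_item           # b_xi = mu + b_x + b_i

    def predict(mu, b_user, b_item, neighbors):
        """neighbors: (s_ij, r_xj, b_xj) triples for similar items rated by x."""
        num = sum(s * (r - b) for s, r, b in neighbors)
        den = sum(s for s, _, _ in neighbors)
        return baseline(mu, b_user, b_item) + num / den

    # Hypothetical: user rates 0.2 below the 3.7 global mean, item 0.5 above.
    print(predict(3.7, -0.2, 0.5, [(0.6, 4.5, 4.1), (0.4, 3.0, 3.5)]))   # 4.04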

Item-Item vs. User-User. [Table: the small Avatar/LOTR/Matrix/Pirates utility matrix for Alice, Bob, Carol, and David.] In practice, it has been observed that item-item often works better than user-user. Why? Items are simpler; users have multiple tastes.

Pros/Cons of Collaborative Filtering. + Works for any kind of item; no feature selection needed. - Cold start: need enough users in the system to find a match. - Sparsity: the user/ratings matrix is sparse; it is hard to find users that have rated the same items. - First rater: cannot recommend an item that has not been previously rated (new items, esoteric items). - Popularity bias: cannot recommend items to someone with unique taste; tends to recommend popular items.

Hybrid Methods. Implement two or more different recommenders and combine their predictions, perhaps using a linear model. Add content-based methods to collaborative filtering: item profiles for the new-item problem; demographics to deal with the new-user problem.

Problems with Error Measures. A narrow focus on accuracy sometimes misses the point: prediction diversity, prediction context, order of predictions. In practice, we care only about predicting high ratings: RMSE might penalize a method that does well for high ratings and badly for others.

Collaborative Filtering: Complexity. The expensive step is finding the k most similar customers: O(|X|), where X is the set of customers. Too expensive to do at runtime; could pre-compute, but naive pre-computation takes time O(k * |X|). We already know how to do this: near-neighbor search in high dimensions (LSH), clustering, dimensionality reduction.

Tip: Add Data. Leverage all the data. Don't try to reduce data size in an effort to make fancy algorithms work; simple methods on large data do best. Add more data, e.g., add IMDB data on genres. More data beats better algorithms: http://anand.typepad.com/datawocky/2008/03/more-data-usual.html

The Netflix Prize. Training data: 100 million ratings, 480,000 users, 17,770 movies; 6 years of data: 2000-2005. Test data: the last few ratings of each user (2.8 million). Evaluation criterion: root mean square error (RMSE). Netflix's system RMSE: 0.9514. Competition: 2,700+ teams; a $1 million prize for a 10% improvement over Netflix.

The Netflix Utility Matrix R. Matrix R: 480,000 users x 17,770 movies.

BellKor Recommender System. The winner of the Netflix Challenge! Multi-scale modeling of the data: combine top-level, regional modeling of the data with a refined, local view. Global: overall deviations of users/movies. Factorization: addressing regional effects. Collaborative filtering: extract local patterns.

Modeling Local & Global Effects. Global: the mean movie rating is 3.7 stars; The Sixth Sense is 0.5 stars above average; Joe rates 0.2 stars below average. Baseline estimation: Joe will rate The Sixth Sense 3.7 + 0.5 - 0.2 = 4 stars. Local neighborhood (CF/NN): Joe didn't like the related movie Signs. Final estimate: Joe will rate The Sixth Sense 3.8 stars.

Modeling Local & Global Effects. In practice we get better estimates if we model deviations: r^_xi = b_xi + ( sum over j in N(i;x) of s_ij * (r_xj - b_xj) ) / ( sum over j in N(i;x) of s_ij ), where b_xi = mu + b_x + b_i is the baseline estimate for r_xi; mu = overall mean rating; b_x = rating deviation of user x = (avg. rating of user x) - mu; b_i = (avg. rating of movie i) - mu. Problems/issues: 1) similarity measures are arbitrary; 2) pairwise similarities neglect interdependencies among users; 3) taking a weighted average can be restricting. Solution: instead of s_ij, use weights w_ij that we estimate directly from the data.

Recommendations via Optimization. Goal: make good recommendations. Quantify goodness using RMSE: lower RMSE means better recommendations. We want to make good recommendations on items the user has not yet seen; we can't really do this directly! Instead, let's build a system that works well on the known (user, item) ratings, and hope the system will also predict the unknown ratings well.

Recommendations via Optimization. Idea: set the values w such that they work well on known (user, item) ratings. How to find such values w? Define an objective function and solve the optimization problem: find the w_ij that minimize SSE on the training data: J(w) = sum over (x,i) of ( [ b_xi + sum over j in N(i;x) of w_ij * (r_xj - b_xj) ] - r_xi )^2, where the bracketed term is the predicted rating and r_xi is the true rating. Think of w as a vector of numbers. (See the sketch below.)
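For concreteness, here is a small Python sketch that evaluates J(w); the data layout (true rating, baseline, and neighbor residuals keyed by item pair) and all numbers are assumptions for illustration.

    def J(w, training):
        """w: dict (i, j) -> weight; training: list of
        (r_xi, b_xi, [(key, r_xj, b_xj), ...]) with key = (i, j)."""
        sse = 0.0
        for r_xi, b_xi, neighbors in training:
            pred = b_xi + sum(w[key] * (r_xj - b_xj)
                              for key, r_xj, b_xj in neighbors)
            sse += (pred - r_xi) ** 2
        return sse

    w = {("i", "j1"): 0.3, ("i", "j2"): 0.5}
    training = [(4.0, 3.5, [(("i", "j1"), 5.0, 3.8), (("i", "j2"), 3.0, 3.2)])]
    print(J(w, training))   # -> 0.0576

Minimizing J(w), e.g., by gradient descent over the w_ij, replaces the arbitrary similarities with weights learned from the data.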

Latent Factor Models (e.g., SVD). [Figure: movies placed in a two-dimensional latent space; one axis runs from "geared towards females" to "geared towards males", the other from "serious" to "funny". Movies shown include The Color Purple, Amadeus, Braveheart, Sense and Sensibility, Ocean's 11, Lethal Weapon, The Lion King, The Princess Diaries, Independence Day, and Dumb and Dumber.]

Latent Factor Models. [Figure: the same movie map, with the axes labeled Factor 1 and Factor 2.]

Back to Our Problem. We want to minimize SSE on unseen test data. Idea: minimize SSE on the training data. We want a large k (number of factors) to capture all the signals, but SSE on the test data begins to rise beyond some k. This is a classical example of overfitting: with too much freedom (too many free parameters) the model starts fitting noise; it fits the training data too well and thus does not generalize well to unseen test data.

Dealing with Missing Entries. To solve overfitting we introduce regularization: allow a rich model where there is sufficient data, and shrink aggressively where data are scarce: min over P, Q of [ sum over training (x,i) of (r_xi - q_i . p_x)^2 ] + lambda_1 * sum over x of ||p_x||^2 + lambda_2 * sum over i of ||q_i||^2, where the first term is the error and the lambda terms penalize the length of the factors; lambda_1, lambda_2 are user-set regularization parameters. Note: we do not care about the raw value of the objective function; we care about the P, Q that achieve the minimum of the objective. (A sketch of this objective follows below.)
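A minimal Python/NumPy sketch of evaluating this regularized objective over the observed entries only (toy data; the lambda values are arbitrary):

    import numpy as np

    def objective(R_obs, P, Q, lam1, lam2):
        """R_obs: dict (x, i) -> rating; P rows are p_x, Q rows are q_i."""
        sse = sum((r - Q[i] @ P[x]) ** 2 for (x, i), r in R_obs.items())
        reg = lam1 * np.sum(P ** 2) + lam2 * np.sum(Q ** 2)
        return sse + reg

    rng = np.random.default_rng(0)
    P, Q = rng.normal(size=(3, 2)), rng.normal(size=(4, 2))  # 3 users, 4 items
    R_obs = {(0, 1): 4.0, (1, 2): 3.0, (2, 0): 5.0}
    print(objective(R_obs, P, Q, lam1=0.1, lam2=0.1))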

The Effect of Regularization. [Figure, shown over several animation steps: the movies and users in the two-factor space ("serious"/"funny" vs. "geared towards females"/"geared towards males"); as the regularized objective min over P, Q of [ sum over training of (r_xi - q_i . p_x)^2 ] + lambda * [ sum over x of ||p_x||^2 + sum over i of ||q_i||^2 ] (error plus length penalty) is minimized, factors backed by little data shrink towards the origin.]

Stochastic Gradient Descent. Want to find matrices P and Q: min over P, Q of [ sum over training of (r_xi - q_i . p_x)^2 ] + lambda_1 * sum over x of ||p_x||^2 + lambda_2 * sum over i of ||q_i||^2. Gradient descent: initialize P and Q (e.g., using SVD, pretending the missing ratings are 0); then repeatedly update P <- P - eta * grad_P and Q <- Q - eta * grad_Q. How to compute the gradient of a matrix? Compute the gradient of every element independently: grad_Q is the gradient/derivative of matrix Q, with entries grad q_if, where q_if is entry f of row q_i of matrix Q. Observation: computing full gradients is slow! (A sketch of the stochastic variant follows below.)
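Here is a minimal Python/NumPy sketch of the stochastic variant: instead of forming full gradients, it updates p_x and q_i from one observed rating at a time. The data, hyperparameters, and random initialization (instead of the SVD initialization mentioned above) are illustrative assumptions.

    import numpy as np

    def sgd_factorize(R_obs, n_users, n_items, k=2, eta=0.05, lam=0.1, epochs=500):
        rng = np.random.default_rng(0)
        P = rng.normal(scale=0.1, size=(n_users, k))   # user factors p_x
        Q = rng.normal(scale=0.1, size=(n_items, k))   # item factors q_i
        for _ in range(epochs):
            for (x, i), r in R_obs.items():
                err = r - Q[i] @ P[x]                  # r_xi - q_i . p_x
                # Per-entry gradient steps (factor of 2 folded into eta):
                P[x], Q[i] = (P[x] + eta * (err * Q[i] - lam * P[x]),
                              Q[i] + eta * (err * P[x] - lam * Q[i]))
        return P, Q

    R_obs = {(0, 0): 5.0, (0, 1): 3.0, (1, 0): 4.0, (1, 2): 1.0, (2, 1): 2.0}
    P, Q = sgd_factorize(R_obs, n_users=3, n_items=3)
    print(round(float(Q[0] @ P[0]), 2))  # estimate of r_00: near 5, shrunk a bit

The same loop extends to the bias model on the next slide by also updating the bias parameters b_x and b_i.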

Fitting the New Model. Solve: min over Q, P of [ sum over (x,i) in R of ( r_xi - (mu + b_x + b_i + q_i . p_x) )^2 ] + lambda_1 * sum over i of ||q_i||^2 + lambda_2 * sum over x of ||p_x||^2 + lambda_3 * sum over x of b_x^2 + lambda_4 * sum over i of b_i^2, where the first term measures goodness of fit and the lambda terms are regularization. Use stochastic gradient descent to find the parameters. Note: both the biases b_x, b_i and the interactions q_i, p_x are treated as parameters (we estimate them). lambda is selected via grid search on a validation set.

Performance of Various Methods. [Figure: RMSE vs. millions of parameters for three methods: CF (no time bias), basic latent factors, and latent factors with biases; RMSE falls from roughly 0.91 towards 0.89 as latent factors and then biases are added.]

Performance of Various Methods. Global average: 1.1296. User average: 1.0651. Movie average: 1.0533. Netflix: 0.9514. Basic collaborative filtering: 0.94. Collaborative filtering++: 0.91. Latent factors: 0.90. Latent factors + biases: 0.89. Grand Prize: 0.8563.