CS246: Mining Massive Datasets Jure Leskovec, Stanford University

Similar documents
COMP 465: Data Mining Recommender Systems

Recommendation and Advertising. Shannon Quinn (with thanks to J. Leskovec, A. Rajaraman, and J. Ullman of Stanford University)

Data Mining Techniques

CS 124/LINGUIST 180 From Languages to Information

Introduction to Data Mining

Recommendation Systems

Recommender Systems New Approaches with Netflix Dataset

CSE 258 Lecture 8. Web Mining and Recommender Systems. Extensions of latent-factor models, (and more on the Netflix prize)

Machine Learning and Data Mining. Collaborative Filtering & Recommender Systems. Kalev Kask

CSE 158 Lecture 8. Web Mining and Recommender Systems. Extensions of latent-factor models, (and more on the Netflix prize)

CS 5614: (Big) Data Management Systems. B. Aditya Prakash Lecture #16: Recommendation Systems

Reddit Recommendation System Daniel Poon, Yu Wu, David (Qifan) Zhang CS229, Stanford University December 11th, 2011

CS 572: Information Retrieval

Collaborative Filtering Applied to Educational Data Mining

Recommender Systems Collaborative Filtering and Matrix Factorization

General Instructions. Questions

Thanks to Jure Leskovec, Anand Rajaraman, Jeff Ullman

Mining of Massive Datasets Jure Leskovec, Anand Rajaraman, Jeff Ullman Stanford University Infinite data. Filtering data streams

Concept of Curve Fitting Difference with Interpolation

Performance Comparison of Algorithms for Movie Rating Estimation

Additive Regression Applied to a Large-Scale Collaborative Filtering Problem

CSE 547: Machine Learning for Big Data Spring Problem Set 2. Please read the homework submission policies.

Real-time Recommendations on Spark. Jan Neumann, Sridhar Alla (Comcast Labs) DC Spark Interactive Meetup East May

CptS 570 Machine Learning Project: Netflix Competition. Parisa Rashidi Vikramaditya Jakkula. Team: MLSurvivors. Wednesday, December 12, 2007

An Empirical Comparison of Collaborative Filtering Approaches on Netflix Data

CS224W Project: Recommendation System Models in Product Rating Predictions

Yelp Recommendation System

CPSC 340: Machine Learning and Data Mining. Recommender Systems Fall 2017

Singular Value Decomposition, and Application to Recommender Systems

Collaborative Filtering for Netflix

CSE 546 Machine Learning, Autumn 2013 Homework 2

By Atul S. Kulkarni Graduate Student, University of Minnesota Duluth. Under The Guidance of Dr. Richard Maclin

Weighted Alternating Least Squares (WALS) for Movie Recommendations. Drew Hodun SCPD. Abstract

Use of KNN for the Netflix Prize Ted Hong, Dimitris Tsamis Stanford University

THE goal of a recommender system is to make predictions

Large Scale Data Analysis Using Deep Learning

CS6716 Pattern Recognition

BBS654 Data Mining. Pinar Duygulu

Advances in Collaborative Filtering

Predicting Popular Xbox games based on Search Queries of Users

CS294-1 Assignment 2 Report

CS249: ADVANCED DATA MINING

Bayesian Personalized Ranking for Las Vegas Restaurant Recommendation

Lecture on Modeling Tools for Clustering & Regression

Homework 4: Clustering, Recommenders, Dim. Reduction, ML and Graph Mining (due November 19th, 2014, 2:30pm, in class, hard-copy please)

HMC CS 158, Fall 2017 Problem Set 3 Programming: Regularized Polynomial Regression

Sparse Estimation of Movie Preferences via Constrained Optimization

Using Social Networks to Improve Movie Rating Predictions

Factorization Meets the Neighborhood: a Multifaceted Collaborative Filtering Model

Partitioning Data. IRDS: Evaluation, Debugging, and Diagnostics. Cross-Validation. Cross-Validation for parameter tuning

Neural Network Optimization and Tuning / Spring 2018 / Recitation 3

Progress Report: Collaborative Filtering Using Bregman Co-clustering

CS570: Introduction to Data Mining

Missing Data. Where did it go?

Gradient Descent Optimization Algorithms for Deep Learning Batch gradient descent Stochastic gradient descent Mini-batch gradient descent

Hotel Recommendation Based on Hybrid Model

Recommender Systems 6CCS3WSN-7CCSMWAL

Supervised Random Walks

Seminar Collaborative Filtering. KDD Cup. Ziawasch Abedjan, Arvid Heise, Felix Naumann

Sampling PCA, enhancing recovered missing values in large scale matrices. Luis Gabriel De Alba Rivera 80555S

Assignment 5: Collaborative Filtering

Comparison of Variational Bayes and Gibbs Sampling in Reconstruction of Missing Values with Probabilistic Principal Component Analysis

CPSC 340: Machine Learning and Data Mining. Probabilistic Classification Fall 2017

The exam is closed book, closed notes except your one-page (two-sided) cheat sheet.

CPSC 340: Machine Learning and Data Mining. Principal Component Analysis Fall 2017

CPSC 340: Machine Learning and Data Mining

Recommender System Optimization through Collaborative Filtering

Recommendation System Using Yelp Data CS 229 Machine Learning Jia Le Xu, Yingran Xu

Towards a hybrid approach to Netflix Challenge

Improving the way neural networks learn Srikumar Ramalingam School of Computing University of Utah

FMA901F: Machine Learning Lecture 3: Linear Models for Regression. Cristian Sminchisescu

Introduction to Deep Learning

CSC 411: Lecture 02: Linear Regression

Web Personalisation and Recommender Systems

Ranking by Alternating SVM and Factorization Machine

CSE 6242 A / CS 4803 DVA. Feb 12, Dimension Reduction. Guest Lecturer: Jaegul Choo

Plankton Classification Using ConvNets

Lecture 26: Missing data

CSE 5243 INTRO. TO DATA MINING

Non-negative Matrix Factorization for Multimodal Image Retrieval

ECS289: Scalable Machine Learning

CPSC 535 Assignment 1: Introduction to Matlab and Octave

Machine Learning Classifiers and Boosting

Transcription:

We need your help with our research on human interpretable machine learning. Please complete a survey at http://stanford.io/1wpokco. It should be fun and take about 1 min to complete. Thanks a lot for your help! CS246: Mining Massive Datasets, Jure Leskovec, Stanford University, http://cs246.stanford.edu

Training movie rating data: 100 million ratings, 480,000 users, 17,770 movies; 6 years of data (2000-2005). Test movie rating data: the last few ratings of each user (2.8 million ratings). Evaluation criterion: Root Mean Square Error,
$\text{RMSE} = \sqrt{\frac{1}{|R|} \sum_{(i,x) \in R} (\hat{r}_{xi} - r_{xi})^2}$.
Netflix's own system scored RMSE = 0.9514. Competition: 2,700+ teams; a $1 million prize for a 10% improvement over Netflix.
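The metric is easy to compute directly; here is a minimal sketch in Python (the array names are illustrative, not from the slides):

```python
import numpy as np

def rmse(preds, truth):
    """Root Mean Square Error over the rated (i, x) pairs in R."""
    preds = np.asarray(preds, dtype=float)
    truth = np.asarray(truth, dtype=float)
    return np.sqrt(np.mean((preds - truth) ** 2))

# Example: predictions vs. true ratings for five (item, user) pairs.
print(rmse([3.5, 4.0, 2.0, 5.0, 3.0], [4, 4, 1, 5, 3]))  # 0.5
```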

Training Data (labels known publicly): 100 million ratings. Held-Out Data (labels known only to Netflix): 3 million ratings, split into a Quiz Set (1.5m ratings, scores posted on the leaderboard) and a Test Set (1.5m ratings, scores known only to Netflix and used to determine the final winner).

[Figure: the utility matrix R, 17,770 movies by 480,000 users, sparsely filled with 1-5 star ratings.]

Matrix R is split into a Training Data Set (known entries) and a Test Data Set (held-out entries, marked "?"; e.g. the entry $r_{3,6}$, the true rating of user 3 on item 6). Performance is measured by
$\text{RMSE} = \sqrt{\frac{1}{|R|} \sum_{(i,x) \in R} (\hat{r}_{xi} - r_{xi})^2}$,
where $r_{xi}$ is the true rating of user x on item i and $\hat{r}_{xi}$ is the predicted rating.

The winner of the Netflix Challenge used multi-scale modeling of the data, combining top-level, regional modeling with a refined, local view. Global: overall deviations of users/movies. Factorization: addressing regional effects. Collaborative filtering: extracting local patterns.

Global example: the mean movie rating is 3.7 stars; The Sixth Sense is 0.5 stars above average; Joe rates 0.2 stars below average. Baseline estimate: Joe will rate The Sixth Sense 4 stars (3.7 + 0.5 - 0.2). Local neighborhood (CF/NN): Joe didn't like the related movie Signs, so the final estimate is lowered: Joe will rate The Sixth Sense 3.8 stars.

Collaborative filtering is the earliest and most popular recommendation method. Derive unknown ratings from those of similar movies (the item-item variant): define a similarity measure $s_{ij}$ of items i and j, select the k nearest neighbors $N(i;x)$, the items most similar to i that were rated by x, and compute
$\hat{r}_{xi} = \frac{\sum_{j \in N(i;x)} s_{ij} \, r_{xj}}{\sum_{j \in N(i;x)} s_{ij}}$,
where $s_{ij}$ is the similarity of items i and j, $r_{xj}$ is the rating of user x on item j, and $N(i;x)$ is the set of items similar to item i that were rated by x.
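A minimal sketch of this item-item rule, assuming a precomputed item-item similarity matrix `sim` and a ratings matrix `R` (items by users) with `np.nan` for missing entries; all names are illustrative:

```python
import numpy as np

def predict_item_item(R, sim, x, i, k=2):
    """r_xi = sum_{j in N(i;x)} s_ij * r_xj / sum_{j in N(i;x)} s_ij,
    with N(i;x) = the k items most similar to i that user x rated."""
    rated = np.where(~np.isnan(R[:, x]))[0]            # items rated by user x
    rated = rated[rated != i]
    neighbors = rated[np.argsort(-sim[i, rated])][:k]  # top-k by similarity
    s = sim[i, neighbors]
    return np.dot(s, R[neighbors, x]) / s.sum()

# Toy example: 3 items, 2 users; user 0 has not rated item 0.
R = np.array([[np.nan, 4.0],
              [3.0,    5.0],
              [5.0,    4.0]])
sim = np.array([[1.0, 0.8, 0.4],
                [0.8, 1.0, 0.5],
                [0.4, 0.5, 1.0]])
print(predict_item_item(R, sim, x=0, i=0))  # (0.8*3 + 0.4*5) / 1.2 = 3.67
```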

In practice we get better estimates if we model deviations from a baseline:
$\hat{r}_{xi} = b_{xi} + \frac{\sum_{j \in N(i;x)} s_{ij} (r_{xj} - b_{xj})}{\sum_{j \in N(i;x)} s_{ij}}$,
where the baseline estimate for $r_{xi}$ is $b_{xi} = \mu + b_x + b_i$, with $\mu$ the overall mean rating, $b_x$ the rating deviation of user x = (avg. rating of user x) $- \mu$, and $b_i$ = (avg. rating of movie i) $- \mu$.
Problems/issues with CF: 1) similarity measures are arbitrary; 2) pairwise similarities neglect interdependencies among users; 3) taking a weighted average can be restricting. Solution: instead of $s_{ij}$, use weights $w_{ij}$ that we estimate directly from data.
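The baseline quantities can be estimated directly from the training ratings. A sketch, with ratings stored as (user, item, rating) triples (this layout is an assumption, not from the slides):

```python
import numpy as np
from collections import defaultdict

def baselines(ratings):
    """Return mu, b_x (per-user deviation), b_i (per-item deviation):
    b_x = (avg rating of user x) - mu,  b_i = (avg rating of item i) - mu."""
    mu = np.mean([r for _, _, r in ratings])
    by_user, by_item = defaultdict(list), defaultdict(list)
    for x, i, r in ratings:
        by_user[x].append(r)
        by_item[i].append(r)
    b_x = {x: np.mean(rs) - mu for x, rs in by_user.items()}
    b_i = {i: np.mean(rs) - mu for i, rs in by_item.items()}
    return mu, b_x, b_i

ratings = [("joe", "signs", 3), ("joe", "memento", 4), ("ann", "signs", 5)]
mu, b_x, b_i = baselines(ratings)
print(mu + b_x["joe"] + b_i["signs"])  # baseline estimate b_{joe,signs}
```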

Use a weighted sum rather than a weighted average:
$\hat{r}_{xi} = b_{xi} + \sum_{j \in N(i;x)} w_{ij} (r_{xj} - b_{xj})$.
A few notes: $N(i;x)$ is the set of movies rated by user x that are similar to movie i; $w_{ij}$ is the interpolation weight (some real number); we allow $\sum_{j \in N(i;x)} w_{ij} \neq 1$; $w_{ij}$ models the interaction between pairs of movies (it does not depend on user x).

How to set $w_{ij}$? Remember, the error metric is $\sqrt{\frac{1}{|R|} \sum_{(i,x) \in R} (\hat{r}_{xi} - r_{xi})^2}$, or equivalently the SSE: $\sum_{(i,x) \in R} (\hat{r}_{xi} - r_{xi})^2$. Find the $w_{ij}$ that minimize SSE on the training data! $w_{ij}$ models the relationship between item i and its neighbors j, and can be learned/estimated based on x and all other users that rated i. Why is this a good idea?

Goal: make good recommendations. Quantify goodness using RMSE: lower RMSE means better recommendations. We really want to make good recommendations on items the user has not yet seen, but we can't evaluate that directly! So build a system that works well on the known (user, item) ratings, and hope it will also predict the unknown ratings well.

Idea: set the values w so that they work well on the known (user, item) ratings. How to find such values? Define an objective function and solve the optimization problem: find the $w_{ij}$ that minimize SSE on the training data,
$J(w) = \sum_{(x,i) \in R} \Big( \big[ b_{xi} + \sum_{j \in N(i;x)} w_{ij} (r_{xj} - b_{xj}) \big] - r_{xi} \Big)^2$,
where the bracketed term is the predicted rating and $r_{xi}$ is the true rating. Think of w as a vector of numbers.

A simple way to minimize a function $f(x)$: compute the derivative $\nabla f$; start at some point y and evaluate $\nabla f(y)$; make a step in the reverse direction of the gradient, $y := y - \eta \nabla f(y)$; repeat until converged.
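For instance, a few iterations of this rule on a one-dimensional function (the function and step size are illustrative):

```python
def grad_descent(f_prime, y, eta=0.1, eps=1e-8, max_iter=10_000):
    """Repeat y <- y - eta * f'(y) until the step is tiny."""
    for _ in range(max_iter):
        step = eta * f_prime(y)
        y -= step
        if abs(step) < eps:
            break
    return y

# Minimize f(y) = (y - 3)^2, whose derivative is 2(y - 3).
print(grad_descent(lambda y: 2 * (y - 3), y=0.0))  # converges to ~3.0
```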

We have the optimization problem, now what? Gradient descent: iterate until convergence: $w \leftarrow w - \eta \nabla_w J$, where $\eta$ is the learning rate and $\nabla_w J$ is the gradient (the derivative evaluated on the data):
$\frac{\partial J(w)}{\partial w_{ij}} = 2 \sum_{(x,i) \in R} \Big( \big[ b_{xi} + \sum_{k \in N(i;x)} w_{ik} (r_{xk} - b_{xk}) \big] - r_{xi} \Big) (r_{xj} - b_{xj})$ for $j \in N(i;x)$, and $\frac{\partial J(w)}{\partial w_{ij}} = 0$ otherwise.
Note: we fix movie i, go over all $r_{xi}$, and for every movie $j \in N(i;x)$ we compute $\frac{\partial J(w)}{\partial w_{ij}}$. In pseudocode:
while |w_new - w_old| > ε:
  w_old = w_new
  w_new = w_old - η · ∇J(w_old)
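A sketch of one full gradient step on the interpolation weights under simplifying assumptions: a dense weight matrix `w`, ratings in a dict keyed by (user, item), fixed neighbor sets `N`, and precomputed baselines `b`. All of these names and layouts are mine, not the slides':

```python
import numpy as np

def grad_step_w(w, ratings, N, b, eta=0.01):
    """One step of w <- w - eta * grad J(w) for
    J(w) = sum_(x,i) ([b_xi + sum_{j in N(i;x)} w_ij (r_xj - b_xj)] - r_xi)^2.
    ratings: {(x, i): r_xi}; N: {(i, x): neighbor items j rated by x};
    b: {(x, i): baseline b_xi}; w: dense (items x items) weight matrix."""
    grad = np.zeros_like(w)
    for (x, i), r_xi in ratings.items():
        pred = b[(x, i)] + sum(w[i, j] * (ratings[(x, j)] - b[(x, j)])
                               for j in N[(i, x)])
        for j in N[(i, x)]:
            grad[i, j] += 2.0 * (pred - r_xi) * (ratings[(x, j)] - b[(x, j)])
    return w - eta * grad
```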

So far: $\hat{r}_{xi} = b_{xi} + \sum_{j \in N(i;x)} w_{ij} (r_{xj} - b_{xj})$. The weights $w_{ij}$ are derived from their role in the objective, with no use of an arbitrary similarity measure ($w_{ij} \neq s_{ij}$), and they explicitly account for interrelationships among the neighboring movies. Next: the latent factor model, which extracts regional correlations. [Pipeline: global effects, then factorization, then CF/NN.]

Leaderboard so far (RMSE): Global average: 1.1296. User average: 1.0651. Movie average: 1.0533. Netflix: 0.9514. Basic collaborative filtering: 0.94. CF + biases + learned weights: 0.91. Grand Prize: 0.8563.

[Figure: movies placed in a two-dimensional space, one axis running from "geared towards females" (The Color Purple, Sense and Sensibility, The Princess Diaries) to "geared towards males" (Braveheart, Lethal Weapon, Ocean's 11), the other from "serious" (Amadeus, Braveheart) to "funny" (Dumb and Dumber, Independence Day, The Lion King).]

SVD on the Netflix data: $R \approx Q \cdot P^T$. For now, assume we can approximate the rating matrix R (users by items) as a product of "thin" matrices Q (items by factors) and $P^T$ (factors by users). R has missing entries, but let's ignore that for now: we will want the reconstruction error to be small on the known ratings, and we don't care about the values produced for the missing ones. (Recall SVD: $A = U \Sigma V^T$.) [Figure: R factored into Q and $P^T$, with small example entries.]

How to estimate the missing rating of user x for item i? Read it off the factorization:
$\hat{r}_{xi} = q_i \cdot p_x = \sum_f q_{if} \, p_{xf}$,
where $q_i$ is row i of Q and $p_x$ is column x of $P^T$.
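In code this is a single dot product; a tiny sketch with made-up factor values:

```python
import numpy as np

Q = np.array([[1.1, -0.2],    # q_i: one row of factors per item
              [0.3,  0.8]])
P = np.array([[0.9,  0.4],    # p_x: one row of factors per user
              [-0.5, 1.2]])

def predict(x, i):
    """r_xi = q_i . p_x = sum_f q_if * p_xf"""
    return Q[i] @ P[x]

print(predict(x=0, i=1))  # 0.3*0.9 + 0.8*0.4 = 0.59
```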

[Figure: the same two-dimensional movie map, now labeled with latent axes Factor 1 ("geared towards males" vs. "geared towards females") and Factor 2 ("serious" vs. "funny").]

Remember SVD: $A = U \Sigma V^T$, where A is the m x n input data matrix, U holds the left singular vectors, V the right singular vectors, and $\Sigma$ the singular values. So in our case, "SVD on Netflix data" means $R \approx Q \cdot P^T$ with $A = R$, $Q = U$, and $P^T = \Sigma V^T$, giving $\hat{r}_{xi} = q_i \cdot p_x$.
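For intuition, on a small fully observed matrix the factorization can be formed exactly as the slide says, by folding $\Sigma$ into the right factor (a sketch; with missing entries this direct route is unavailable):

```python
import numpy as np

A = np.array([[5.0, 4.0, 1.0],
              [4.0, 5.0, 1.0],
              [1.0, 1.0, 5.0]])

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                          # keep the top-k factors
Q = U[:, :k]                   # items x factors
Pt = np.diag(s[:k]) @ Vt[:k]   # factors x users
print(np.round(Q @ Pt, 2))     # best rank-k approximation of A (in SSE)
```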

We already know that SVD gives the minimum reconstruction error (sum of squared errors):
$\min_{U, \Sigma, V} \sum_{ij \in A} \big( A_{ij} - [U \Sigma V^T]_{ij} \big)^2$.
Note two things. First, SSE and RMSE are monotonically related: $\text{RMSE} = \frac{1}{c} \sqrt{\text{SSE}}$. Great news: SVD is minimizing RMSE! Second, a complication: the sum in the SVD error term is over all entries (no-rating is interpreted as zero-rating), but our R has missing entries!

SVD isn't defined when entries are missing! Use specialized methods to find P, Q:
$\min_{P,Q} \sum_{(i,x) \in R} (r_{xi} - q_i \cdot p_x)^2$, with $\hat{r}_{xi} = q_i \cdot p_x$.
Notes: we no longer require the columns of P, Q to be orthogonal or unit length; P and Q map users/movies to a latent space. This was the most popular model among Netflix contestants.

Our goal is to find P and Q such that
$\min_{P,Q} \sum_{(i,x) \in R} (r_{xi} - q_i \cdot p_x)^2$.

We want to minimize SSE on the unseen test data. Idea: minimize SSE on the training data; we would like a large k (number of factors) to capture all the signals. But SSE on test data begins to rise once k grows past a small value. This is a classical example of overfitting: with too much freedom (too many free parameters) the model starts fitting noise. That is, it fits the training data too well and thus does not generalize to unseen test data.

To solve overfitting we introduce regularization: allow a rich model where there is sufficient data, and shrink aggressively where data are scarce:
$\min_{P,Q} \sum_{\text{training}} (r_{xi} - q_i \cdot p_x)^2 + \lambda_1 \sum_x \lVert p_x \rVert^2 + \lambda_2 \sum_i \lVert q_i \rVert^2$,
an "error" term plus "length" terms, where $\lambda_1, \lambda_2$ are user-set regularization parameters. Note: we do not care about the raw value of the objective function; we care about the P, Q that achieve its minimum.

[Figure: the two-dimensional movie map revisited under the regularized objective $\min_{P,Q} \sum_{\text{training}} (r_{xi} - q_i \cdot p_x)^2 + \lambda \big[ \sum_x \lVert p_x \rVert^2 + \sum_i \lVert q_i \rVert^2 \big]$ ("min factors error + λ length"): as λ grows, the factor vectors of sparsely rated movies shrink toward the origin, while movies with plenty of data keep rich representations.]

Want to find matrices P and Q minimizing
$\min_{P,Q} \sum_{\text{training}} (r_{xi} - q_i \cdot p_x)^2 + \lambda_1 \sum_x \lVert p_x \rVert^2 + \lambda_2 \sum_i \lVert q_i \rVert^2$.
Gradient descent: initialize P and Q (using SVD, pretending the missing ratings are 0); then iterate $P \leftarrow P - \eta \nabla P$ and $Q \leftarrow Q - \eta \nabla Q$, where $\nabla Q$ is the gradient/derivative of matrix Q: $\nabla Q = [\nabla q_{if}]$ with
$\nabla q_{if} = \sum_{x,i} -2 (r_{xi} - q_i \cdot p_x) \, p_{xf} + 2 \lambda_2 q_{if}$.
Here $q_{if}$ is entry f of row $q_i$ of matrix Q. How to compute the gradient of a matrix? Compute the gradient of every element independently! Observation: computing gradients is slow!

Gradient Descent (GD) vs. Stochastic GD (SGD). Observation: the full gradient is a sum of per-rating contributions, $\nabla Q = \sum_{x,i} \nabla Q(r_{xi})$, where $\nabla Q(r_{xi})$ is the gradient of the (regularized) error for the single rating $r_{xi}$. GD takes a step using the full sum, $Q \leftarrow Q - \eta \nabla Q$; SGD instead takes a (smaller) step for each individual rating, $Q \leftarrow Q - \mu \nabla Q(r_{xi})$.

Convergence of GD vs. SGD. [Figure: value of the objective function vs. iteration/step.] GD improves the value of the objective function at every step; SGD improves the value too, but in a noisy way. GD takes fewer steps to converge, but each step takes much longer to compute. In practice, SGD is much faster!

Stochastic gradient descent: initialize P and Q (using SVD, pretending the missing ratings are 0), then iterate over the ratings (multiple times if necessary) and update the factors. For each $r_{xi}$:
$\varepsilon_{xi} = 2 (r_{xi} - q_i \cdot p_x)$ (derivative of the error)
$q_i \leftarrow q_i + \mu_1 (\varepsilon_{xi} \, p_x - 2 \lambda_2 \, q_i)$ (update equation)
$p_x \leftarrow p_x + \mu_2 (\varepsilon_{xi} \, q_i - 2 \lambda_1 \, p_x)$ (update equation)
where $\mu$ is the learning rate. Two for-loops: repeat until convergence; for each $r_{xi}$, compute the gradient and take a step as above.
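Putting the update equations together, a minimal SGD trainer for the regularized latent-factor model, with random initialization instead of an SVD warm start and illustrative hyperparameter values:

```python
import numpy as np

def train_sgd(ratings, n_users, n_items, k=10, mu=0.01, lam=0.1, epochs=50):
    """ratings: list of (x, i, r_xi) triples. SGD on
    sum (r_xi - q_i . p_x)^2 + lam * (sum ||p_x||^2 + sum ||q_i||^2)."""
    rng = np.random.default_rng(0)
    P = rng.normal(scale=0.1, size=(n_users, k))    # p_x = P[x]
    Q = rng.normal(scale=0.1, size=(n_items, k))    # q_i = Q[i]
    for _ in range(epochs):
        for x, i, r in ratings:
            qi, px = Q[i].copy(), P[x].copy()
            eps = 2 * (r - qi @ px)                 # derivative of the error
            Q[i] += mu * (eps * px - 2 * lam * qi)  # update equations
            P[x] += mu * (eps * qi - 2 * lam * px)
    return P, Q

ratings = [(0, 0, 5), (0, 1, 3), (1, 0, 4), (1, 2, 1)]
P, Q = train_sgd(ratings, n_users=2, n_items=3)
print(Q[0] @ P[0])  # reconstructed rating of user 0 on item 0
```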

Koren, Bell, Volinsky, IEEE Computer, 2009.

$\hat{r}_{xi} = \mu + b_x + b_i + q_i \cdot p_x$: overall mean, user bias, movie bias, user-movie interaction. The baseline predictor ($\mu + b_x + b_i$) separates users and movies, benefits from insights into users' behavior, and was among the main practical contributions of the competition. The user-movie interaction ($q_i \cdot p_x$) characterizes the matching between users and movies, attracts most research in the field, and benefits from algorithmic and mathematical innovations. Here $\mu$ = overall mean rating, $b_x$ = bias of user x, $b_i$ = bias of movie i.

We have expectations on the rating by user x of movie i, even without estimating x's attitude towards movies like i: the rating scale of user x; the values of other ratings the user gave recently (day-specific mood, anchoring, multi-user accounts); the (recent) popularity of movie i; selection bias, related to the number of ratings a user gave on the same day ("frequency").

$\hat{r}_{xi} = \mu + b_x + b_i + q_i \cdot p_x$ (overall mean rating + bias for user x + bias for movie i + user-movie interaction). Example: mean rating $\mu = 3.7$. You are a critical reviewer: your ratings are 1 star lower than the mean, so $b_x = -1$. Star Wars gets a mean rating 0.5 higher than the average movie, so $b_i = +0.5$. Predicted rating for you on Star Wars: $3.7 - 1 + 0.5 = 3.2$.

Solve
$\min_{P,Q,b} \sum_{(x,i) \in R} \big( r_{xi} - (\mu + b_x + b_i + q_i \cdot p_x) \big)^2 + \lambda_1 \sum_i \lVert q_i \rVert^2 + \lambda_2 \sum_x \lVert p_x \rVert^2 + \lambda_3 \sum_x b_x^2 + \lambda_4 \sum_i b_i^2$
(goodness of fit plus regularization) with stochastic gradient descent to find the parameters. Note: both the biases $b_x, b_i$ and the interactions $q_i, p_x$ are treated as parameters (we estimate them). $\lambda$ is selected via grid search on a validation set.
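A sketch of that grid search's outer loop, assuming a `train` function like the SGD sketch above and an `rmse` helper evaluated on a held-out validation set (all names illustrative):

```python
def grid_search_lambda(train, rmse, train_set, valid_set,
                       grid=(0.001, 0.01, 0.1, 1.0)):
    """Fit one model per candidate regularization strength; keep the lambda
    with the lowest validation RMSE (a single shared lambda, for brevity)."""
    best_lam, best_err = None, float("inf")
    for lam in grid:
        model = train(train_set, lam=lam)
        err = rmse(model, valid_set)
        if err < best_err:
            best_lam, best_err = lam, err
    return best_lam, best_err
```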

[Figure: RMSE vs. number of parameters (1 to 1000 million, log scale) for CF (no time bias), basic latent factors, and latent factors with biases; RMSE drops from roughly 0.91 toward 0.89 as biases are added and model size grows.]

Leaderboard (RMSE): Global average: 1.1296. User average: 1.0651. Movie average: 1.0533. Netflix: 0.9514. Basic collaborative filtering: 0.94. Collaborative filtering++: 0.91. Latent factors: 0.90. Latent factors + biases: 0.89. Grand Prize: 0.8563.

Temporal effects: a sudden rise in the average movie rating (early 2004), due to improvements in the Netflix GUI and a change in the meaning of a rating. Movie age: users prefer new movies without any other reasons, and older movies are just inherently better than newer ones. [Y. Koren, Collaborative filtering with temporal dynamics, KDD '09]

Original model: $\hat{r}_{xi} = \mu + b_x + b_i + q_i \cdot p_x$. Add time dependence to the biases: $\hat{r}_{xi} = \mu + b_x(t) + b_i(t) + q_i \cdot p_x$, making the parameters $b_x$ and $b_i$ depend on time: (1) parameterize the time-dependence by linear trends; (2) bin the timeline, with each bin corresponding to 10 consecutive weeks. One can also add temporal dependence to the factors: $p_x(t)$ is the user preference vector on day t. [Y. Koren, Collaborative filtering with temporal dynamics, KDD '09]
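A sketch of the binned, time-dependent movie bias $b_i(t) = b_i + b_{i,\text{Bin}(t)}$ with 10-week bins, as on the slide (the data layout is an assumption):

```python
from collections import defaultdict

BIN_DAYS = 70  # one bin = 10 consecutive weeks

def time_bin(t_days):
    """Map a rating date (days since the start of the dataset) to its bin."""
    return t_days // BIN_DAYS

b_i = defaultdict(float)       # static movie bias, learned as before
b_i_bin = defaultdict(float)   # per-(movie, bin) offset, also learned by SGD

def movie_bias(i, t_days):
    """b_i(t) = b_i + b_{i, Bin(t)}"""
    return b_i[i] + b_i_bin[(i, time_bin(t_days))]
```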

[Figure: RMSE vs. number of parameters (1 to 10,000 million, log scale) for CF (no time bias), basic latent factors, CF (time bias), latent factors with biases, + linear time factors, + per-day user biases, and + CF; RMSE falls from roughly 0.92 toward 0.876 as temporal terms are added.]

Leaderboard (RMSE): Global average: 1.1296. User average: 1.0651. Movie average: 1.0533. Netflix: 0.9514. Basic collaborative filtering: 0.94. Collaborative filtering++: 0.91. Latent factors: 0.90. Latent factors + biases: 0.89. Latent factors + biases + time: 0.876. Still no prize! Getting desperate... time to try a kitchen-sink approach! Grand Prize: 0.8563.

The June 26th submission triggers the 30-day "last call".

Team Ensemble is formed: a group of other teams on the leaderboard joins forces, relying on combining their models, and quickly also gets a qualifying score over 10%. BellKor continues to get small improvements in their scores and realizes they are in direct competition with team Ensemble. Strategy: both teams carefully monitor the leaderboard; the only sure way to check for an improvement is to submit a set of predictions, which alerts the other team to your latest score.

1/8/016 Jure Leskovec, Stanford C6: Mining Massive Datasets 6 Submissions limited to 1 a day Only 1 final submission could be made in the last h hours before deadline BellKor team member in Austria notices (by chance) that Ensemble posts a score that is slightly better than BellKor s Frantic last hours for both teams Much computer time on final optimization Carefully calibrated to end about an hour before deadline Final submissions BellKor submits a little early (on purpose), 0 mins before deadline Ensemble submits their final entry 0 mins later.and everyone waits.

What's the moral of the story? Submit early!

Some slides and plots borrowed from Yehuda Koren, Robert Bell and Padhraic Smyth. Further reading: Y. Koren, Collaborative filtering with temporal dynamics, KDD '09; http://www.research.att.com/~volinsky/netflix/bpc.html; http://www.the-ensemble.com/