CS246: Mining Massive Datasets Jure Leskovec, Stanford University

Similar documents
CS246: Mining Massive Datasets Jure Leskovec, Stanford University

Thanks to Jure Leskovec, Anand Rajaraman, Jeff Ullman

Mining of Massive Datasets Jure Leskovec, Anand Rajaraman, Jeff Ullman Stanford University Infinite data. Filtering data streams

CS246: Mining Massive Datasets Jure Leskovec, Stanford University

CS 124/LINGUIST 180 From Languages to Information

Introduction to Data Mining

BBS654 Data Mining. Pinar Duygulu

CS 124/LINGUIST 180 From Languages to Information

CS 124/LINGUIST 180 From Languages to Information

CS 5614: (Big) Data Management Systems. B. Aditya Prakash Lecture #16: Recommendation Systems

Mining of Massive Datasets Jure Leskovec, Anand Rajaraman, Jeff Ullman Stanford University

Recommendation and Advertising. Shannon Quinn (with thanks to J. Leskovec, A. Rajaraman, and J. Ullman of Stanford University)

CS246: Mining Massive Datasets Jure Leskovec, Stanford University

Data Mining Techniques

COMP 465: Data Mining Recommender Systems

CS246: Mining Massive Datasets Jure Leskovec, Stanford University

Data Mining Techniques

Real-time Recommendations on Spark. Jan Neumann, Sridhar Alla (Comcast Labs) DC Spark Interactive Meetup East May

CS246: Mining Massive Datasets Jure Leskovec, Stanford University

CS246: Mining Massive Datasets Jure Leskovec, Stanford University

Machine Learning and Data Mining. Collaborative Filtering & Recommender Systems. Kalev Kask

Recommender Systems Collaborative Filtering and Matrix Factorization

Recommendation Systems

Recommender Systems. Collaborative Filtering & Content-Based Recommending

Recommender Systems - Content, Collaborative, Hybrid

Web Personalisation and Recommender Systems

Recommender Systems 6CCS3WSN-7CCSMWAL

Data Mining Lecture 2: Recommender Systems

Recommender Systems - Introduction. Data Mining Lecture 2: Recommender Systems

Use of KNN for the Netflix Prize Ted Hong, Dimitris Tsamis Stanford University

CS 345A Data Mining Lecture 1. Introduction to Web Mining

CS224W Project: Recommendation System Models in Product Rating Predictions

Knowledge Discovery and Data Mining 1 (VO) ( )

Performance Comparison of Algorithms for Movie Rating Estimation

Using Social Networks to Improve Movie Rating Predictions

CS 572: Information Retrieval

Recommender Systems New Approaches with Netflix Dataset

COMP6237 Data Mining Making Recommendations. Jonathon Hare

CS249: ADVANCED DATA MINING

CSE 258 Lecture 8. Web Mining and Recommender Systems. Extensions of latent-factor models, (and more on the Netflix prize)

CS246: Mining Massive Datasets Jure Leskovec, Stanford University

Recommender Systems. Techniques of AI

CSE 158 Lecture 8. Web Mining and Recommender Systems. Extensions of latent-factor models, (and more on the Netflix prize)

Recommender Systems (RSs)

Recommender System. What is it? How to build it? Challenges. R package: recommenderlab

Towards a hybrid approach to Netflix Challenge

Machine Learning using MapReduce

arxiv: v4 [cs.ir] 28 Jul 2016

A probabilistic model to resolve diversity-accuracy challenge of recommendation systems

Distributed Itembased Collaborative Filtering with Apache Mahout. Sebastian Schelter twitter.com/sscdotopen. 7.

CS246: Mining Massive Datasets Jure Leskovec, Stanford University

Recommender System Optimization through Collaborative Filtering

Collaborative filtering models for recommendations systems

Part 11: Collaborative Filtering. Francesco Ricci

Part 11: Collaborative Filtering. Francesco Ricci

CS535 Big Data Fall 2017 Colorado State University 10/10/2017 Sangmi Lee Pallickara Week 8- A.

Part 12: Advanced Topics in Collaborative Filtering. Francesco Ricci

By Atul S. Kulkarni Graduate Student, University of Minnesota Duluth. Under The Guidance of Dr. Richard Maclin

Recommendation Algorithms: Collaborative Filtering. CSE 6111 Presentation Advanced Algorithms Fall Presented by: Farzana Yasmeen

CPSC 340: Machine Learning and Data Mining. Recommender Systems Fall 2017

Predicting User Ratings Using Status Models on Amazon.com

CS 229 Final Project - Using machine learning to enhance a collaborative filtering recommendation system for Yelp

Web Personalization & Recommender Systems

Web Personalization & Recommender Systems

Collaborative Filtering using Weighted BiPartite Graph Projection A Recommendation System for Yelp

CS435 Introduction to Big Data Spring 2018 Colorado State University. 3/21/2018 Week 10-B Sangmi Lee Pallickara. FAQs. Collaborative filtering

DS504/CS586: Big Data Analytics Big Data Clustering Prof. Yanhua Li

Personalize Movie Recommendation System CS 229 Project Final Writeup

Neighborhood-Based Collaborative Filtering

A PROPOSED HYBRID BOOK RECOMMENDER SYSTEM

Reddit Recommendation System Daniel Poon, Yu Wu, David (Qifan) Zhang CS229, Stanford University December 11 th, 2011

CS246: Mining Massive Datasets Jure Leskovec, Stanford University

Using Data Mining to Determine User-Specific Movie Ratings

Property1 Property2. by Elvir Sabic. Recommender Systems Seminar Prof. Dr. Ulf Brefeld TU Darmstadt, WS 2013/14

Movie Recommender System - Hybrid Filtering Approach

Mining Web Data. Lijun Zhang

Hybrid Recommendation System Using Clustering and Collaborative Filtering

Jeff Howbert Introduction to Machine Learning Winter

General Instructions. Questions

The influence of social filtering in recommender systems

Recommender Systems. Nivio Ziviani. Junho de Departamento de Ciência da Computação da UFMG

Mining Web Data. Lijun Zhang

COSC6376 Cloud Computing Homework 1 Tutorial

Collaborative Filtering and Recommender Systems. Definitions. .. Spring 2009 CSC 466: Knowledge Discovery from Data Alexander Dekhtyar..

Computational Intelligence Meets the NetFlix Prize

CS246: Mining Massive Datasets Jure Leskovec, Stanford University

Introduction to Data Science Lecture 8 Unsupervised Learning. CS 194 Fall 2015 John Canny

International Journal of Advance Engineering and Research Development. A Facebook Profile Based TV Shows and Movies Recommendation System

An Empirical Comparison of Collaborative Filtering Approaches on Netflix Data

Recommendation System for Netflix

CptS 570 Machine Learning Project: Netflix Competition. Parisa Rashidi Vikramaditya Jakkula. Team: MLSurvivors. Wednesday, December 12, 2007

Information Retrieval: Retrieval Models

Collaborative Filtering Applied to Educational Data Mining

Document Information

CS246: Mining Massive Datasets Jure Leskovec, Stanford University

Sparse Estimation of Movie Preferences via Constrained Optimization

vector space retrieval many slides courtesy James Amherst

Weighted Alternating Least Squares (WALS) for Movie Recommendations) Drew Hodun SCPD. Abstract

Instructor: Stefan Savev

City, University of London Institutional Repository. This version of the publication may differ from the final published version.

Transcription:

CS246: Mining Massive Datasets Jure Leskovec, Stanford University http://cs246.stanford.edu

Customer X: buys a Metallica CD, buys a Megadeth CD. Customer Y: does a search on Metallica. The recommender system suggests Megadeth, based on data collected from customer X.

Examples: Search, Recommendations. Items: products, web sites, blogs, news items, ...

Shelf space is a scarce commodity for traditional retailers (also: TV networks, movie theaters, ...). The Web enables near-zero-cost dissemination of information about products: from scarcity to abundance. More choice necessitates better filters: recommendation engines. How Into Thin Air made Touching the Void a bestseller: http://www.wired.com/wired/archive/12.10/tail.html

[Figure: the long tail of product popularity. Source: Chris Anderson (2004)]

Read http://www.wired.com/wired/archive/12.10/tail.html to learn more!

Types of recommendations: editorial; simple aggregates (Top 10, Most Popular, Recent Uploads); tailored to individual users (Amazon, Netflix, ...).

C = set of Customers, S = set of Items. Utility function u: C × S → R, where R = set of ratings. R is a totally ordered set, e.g., 0-5 stars, or a real number in [0,1].
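
As a concrete illustration of the utility matrix, here is a minimal sketch, not taken from the slides, of one way to store u as a sparse structure that keeps only the known ratings (the users, items, and rating values below are made up):

```python
from typing import Dict, Optional

# Sparse utility matrix: utility[customer][item] = rating.
# Missing keys mean "unknown rating" to be extrapolated later.
utility: Dict[str, Dict[str, float]] = {
    "Alice": {"Avatar": 1.0},
    "Bob": {"LOTR": 0.5, "Pirates": 0.3},
}

def rating(user: str, item: str) -> Optional[float]:
    """Return the known rating u(user, item), or None if it is unknown."""
    return utility.get(user, {}).get(item)

print(rating("Bob", "LOTR"))      # 0.5
print(rating("Alice", "Matrix"))  # None: unknown, the recommender must estimate it
```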

Example utility matrix (ratings on a 0-1 scale; blank = unknown):
         Avatar   LOTR   Matrix   Pirates
Alice      1               0.2
Bob                0.5               0.3
Carol     0.2               1
David                                0.4

Key problems: (1) gathering known ratings for the matrix; (2) extrapolating unknown ratings from the known ones (we are mainly interested in high unknown ratings); (3) evaluating extrapolation methods.

Gathering ratings. Explicit: ask people to rate items; doesn't work well in practice, people can't be bothered. Implicit: learn ratings from user actions, e.g., a purchase implies a high rating. But what about low ratings?

Key problem: the matrix U is sparse; most people have not rated most items. Cold start: new items have no ratings, new users have no history. Three approaches: content-based, collaborative, and hybrid.

Main idea: recommend to customer x items similar to previous items rated highly by x. Examples: movie recommendations (recommend movies with the same actor(s), director, genre, ...); websites, blogs, news (recommend other sites with similar content).

[Diagram: content-based plan of action. Items the user likes (shown as red circles and triangles) are used to build a user profile from item profiles; candidate items are then matched against the user profile to produce recommendations.]

For each item, create an item profile. A profile is a set (vector) of features. Movies: author, title, actor, director, ... Text: the set of "important" words in the document. How to pick important features? The usual heuristic from text mining is TF-IDF (Term Frequency * Inverse Document Frequency). Term = feature, document = item.

f_ij = frequency of term (feature) i in document (item) j. TF_ij = f_ij / max_k f_kj (note: we normalize TF to discount for longer documents). n_i = number of docs that mention term i, N = total number of docs, IDF_i = log(N / n_i). TF-IDF score: w_ij = TF_ij * IDF_i. Doc profile = the set of words with the highest TF-IDF scores, together with their scores.
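
A minimal sketch of this TF-IDF profile construction, assuming a toy two-document corpus and a natural-log IDF (both illustrative choices, not from the slides):

```python
import math
from collections import Counter

# Toy corpus: item id -> list of terms (the documents are illustrative).
docs = {
    "d1": "the matrix is a science fiction movie".split(),
    "d2": "the lord of the rings is a fantasy movie".split(),
}

N = len(docs)                                                          # total number of docs
doc_freq = Counter(t for words in docs.values() for t in set(words))  # n_i per term

def tf_idf_profile(words, top_k=5):
    counts = Counter(words)
    max_f = max(counts.values())
    scores = {}
    for term, f in counts.items():
        tf = f / max_f                       # TF_ij = f_ij / max_k f_kj
        idf = math.log(N / doc_freq[term])   # IDF_i = log(N / n_i)
        scores[term] = tf * idf              # w_ij = TF_ij * IDF_i
    # Doc profile = the top-k words by TF-IDF score, with their scores
    return dict(sorted(scores.items(), key=lambda kv: -kv[1])[:top_k])

print(tf_idf_profile(docs["d1"]))
```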

User profile possibilities: a weighted average of the rated item profiles; variation: weight by the difference from the item's average rating. Prediction heuristic: given user profile x and item profile i, estimate u(x,i) = cos(x,i) = (x · i) / (||x|| ||i||). We need an efficient method to find items with high utility: LSH!
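
A small sketch of that cosine prediction heuristic, with profiles held as feature-to-weight dicts (the feature names and weights are illustrative):

```python
import math

def cosine(x: dict, i: dict) -> float:
    """Estimate u(x, i) as the cosine between a user profile x and an item profile i."""
    dot = sum(w * i.get(f, 0.0) for f, w in x.items())
    norm_x = math.sqrt(sum(w * w for w in x.values()))
    norm_i = math.sqrt(sum(w * w for w in i.values()))
    return dot / (norm_x * norm_i) if norm_x and norm_i else 0.0

user_profile = {"sci-fi": 0.9, "action": 0.4}   # e.g., weighted average of liked item profiles
item_profile = {"sci-fi": 0.7, "action": 0.7, "romance": 0.1}
print(cosine(user_profile, item_profile))       # higher value = stronger recommendation
```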

+: No need for data on other users: no cold-start or sparsity problems. +: Able to recommend to users with unique tastes. +: Able to recommend new and unpopular items: no first-rater problem. +: Can provide explanations of recommended items by listing the content features that caused an item to be recommended.

-: Finding the appropriate features is hard, e.g., for images, movies, music. -: Overspecialization: never recommends items outside the user's content profile; people might have multiple interests; unable to exploit quality judgments of other users. -: Recommendations for new users: how to build a user profile?

Collaborative filtering (user-user). Consider user x. Find a set N of other users whose ratings are similar to x's ratings. Estimate x's ratings based on the ratings of the users in N.

Let r_x be the vector of user x's ratings. Jaccard similarity measure: problem, it ignores the rating values. Cosine similarity measure: sim(x,y) = cos(r_x, r_y); problem, it treats missing ratings as negative. Pearson correlation coefficient: sim(x,y) = Σ_{s in S_xy} (r_xs - r̄_x)(r_ys - r̄_y) / ( sqrt(Σ_{s in S_xy} (r_xs - r̄_x)^2) sqrt(Σ_{s in S_xy} (r_ys - r̄_y)^2) ), where S_xy = the set of items rated by both users x and y, and r̄_x, r̄_y are the users' mean ratings.

Intuitively we want sim(A, B) > sim(A, C). Jaccard similarity: 1/5 < 2/4, so it fails. Cosine similarity: 0.386 > 0.322, but it considers missing ratings as "negative". Solution: subtract the (row) mean. Centered cosine, sim(A,B) vs. sim(A,C): 0.092 > -0.559. Notice that cosine similarity is correlation when the data is centered at 0.
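
A sketch comparing the three measures on toy rating vectors in the spirit of the example above (the vectors and exact outputs are illustrative; the point is that only the centered cosine clearly ranks B above C):

```python
import math

# Toy rating dicts; items a user did not rate are simply absent.
A = {"HP1": 4, "TW": 5, "SW1": 1}
B = {"HP1": 5, "HP2": 5, "HP3": 4}
C = {"TW": 2, "SW1": 4, "SW2": 5}

def jaccard(x, y):
    return len(set(x) & set(y)) / len(set(x) | set(y))

def cosine(x, y, center=False):
    if center:  # subtract each user's mean rating (centered cosine, Pearson-style)
        x = {k: v - sum(x.values()) / len(x) for k, v in x.items()}
        y = {k: v - sum(y.values()) / len(y) for k, v in y.items()}
    dot = sum(x[k] * y[k] for k in set(x) & set(y))   # missing ratings contribute 0
    nx = math.sqrt(sum(v * v for v in x.values()))
    ny = math.sqrt(sum(v * v for v in y.values()))
    return dot / (nx * ny) if nx and ny else 0.0

print(jaccard(A, B), jaccard(A, C))             # ignores the rating values
print(cosine(A, B), cosine(A, C))               # treats missing ratings as 0 ("negative")
print(cosine(A, B, True), cosine(A, C, True))   # centered: clearly separates B from C
```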

Let r_x be the vector of user x's ratings, and let N be the set of the k users most similar to x who have rated item i. Possibilities for the prediction of item i for user x: r_xi = (1/k) Σ_{y in N} r_yi, or r_xi = Σ_{y in N} sim(x,y) r_yi / Σ_{y in N} sim(x,y). Other options? Many tricks are possible.
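
A sketch of the similarity-weighted version of this prediction, assuming ratings are stored as per-user dicts as in the earlier sketches and that some similarity function (e.g., the centered cosine) is passed in:

```python
def predict_user_user(x, i, ratings, sim, k=2):
    """Predict r_xi as a similarity-weighted average over the k most similar raters of item i."""
    candidates = [y for y in ratings if y != x and i in ratings[y]]
    top = sorted(candidates, key=lambda y: sim(ratings[x], ratings[y]), reverse=True)[:k]
    if not top:
        return None
    weights = [sim(ratings[x], ratings[y]) for y in top]
    if sum(weights) <= 0:                                   # fall back to the plain average
        return sum(ratings[y][i] for y in top) / len(top)
    return sum(w * ratings[y][i] for w, y in zip(weights, top)) / sum(weights)

# Illustrative usage with a crude overlap-based similarity:
ratings = {"A": {"HP1": 4}, "B": {"HP1": 5, "TW": 4}, "C": {"HP1": 1, "TW": 2}}
overlap = lambda rx, ry: len(set(rx) & set(ry))
print(predict_user_user("A", "TW", ratings, overlap))       # estimate of A's rating of TW
```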

The expensive step is finding the k most similar customers: O(|C|). Too expensive to do at runtime; could pre-compute, but naive pre-computation takes time O(N |C|). Stay tuned for how to do it faster! Clustering or partitioning can be used as alternatives, but quality degrades.

So far: user-user collaborative filtering. Another view: item-item. For item i, find other similar items, and estimate the rating for i based on the ratings for those similar items. Can use the same similarity metrics and prediction functions as in the user-user model: r_ui = Σ_{j in N(i;u)} s_ij r_uj / Σ_{j in N(i;u)} s_ij, where s_ij = similarity of items i and j, r_uj = rating of user u on item j, and N(i;u) = the set of items rated by u that are similar to i.

[Figure: toy utility matrix with 6 movies (rows) and 12 users (columns); a blank entry is an unknown rating, known ratings are integers between 1 and 5.]

[Figure: the same matrix; the task is to estimate the rating of movie 1 by user 5, marked "?".]

Neighbor selection: identify movies similar to movie 1 that were rated by user 5.

Compute similarity weights: s_13 = 0.41, s_16 = 0.59.

Predict by taking the weighted average: r_15 = (0.41*2 + 0.59*3) / (0.41 + 0.59) = 2.6. In general, r_ix = Σ_j s_ij r_jx / Σ_j s_ij.
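
A quick arithmetic check of that weighted average, using the similarity weights and neighbor ratings from the example above:

```python
s_13, s_16 = 0.41, 0.59   # similarity of movie 1 to movies 3 and 6
r_35, r_65 = 2, 3         # user 5's ratings of movies 3 and 6
print((s_13 * r_35 + s_16 * r_65) / (s_13 + s_16))   # 2.59, i.e. about 2.6
```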

Before: define a similarity measure s_ij of items i and j; select the k nearest neighbors N(i; u), the items most similar to i that were rated by u; estimate the rating r_ui as the weighted average r_ui = Σ_{j in N(i;u)} s_ij r_uj / Σ_{j in N(i;u)} s_ij. Better: work with deviations from a baseline, r_ui = b_ui + ( Σ_{j in N(i;u)} s_ij (r_uj - b_uj) ) / ( Σ_{j in N(i;u)} s_ij ), where b_ui = μ + b_u + b_i is the baseline estimate for r_ui, μ = overall mean movie rating, b_u = rating deviation of user u = (average rating of user u) - μ, and b_i = rating deviation of movie i.
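
A minimal sketch of this baseline-corrected item-item prediction, assuming the item-item similarities s, the global mean mu, and the per-user/per-item deviations have already been computed (the container names are illustrative):

```python
def predict_item_item(u, i, ratings, s, mu, b_user, b_item, k=2):
    """Item-item CF with baseline estimates b_ui = mu + b_u + b_i."""
    baseline = lambda user, item: mu + b_user[user] + b_item[item]
    rated = [j for j in ratings[u] if j != i]                            # items u has rated
    neighbors = sorted(rated, key=lambda j: s[i][j], reverse=True)[:k]   # N(i; u)
    num = sum(s[i][j] * (ratings[u][j] - baseline(u, j)) for j in neighbors)
    den = sum(s[i][j] for j in neighbors)
    return baseline(u, i) + (num / den if den else 0.0)
```

Interpolating deviations from the baseline, rather than raw ratings, corrects for users who rate systematically high or low and for movies that are broadly liked or disliked.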

[Example utility matrix over Avatar, LOTR, Matrix, Pirates for users Alice, Bob, Carol, David (most entries unknown).] In practice, it has been observed that item-item often works better than user-user. Why? Items are "simpler", while users have multiple tastes.

+: Works for any kind of item; no feature selection needed. -: Cold start: need enough users in the system to find a match. -: Sparsity: the user/ratings matrix is sparse, so it is hard to find users that have rated the same items. -: First rater: cannot recommend an item that has not previously been rated (new items, esoteric items). -: Popularity bias: cannot recommend items to someone with unique taste; tends to recommend popular items.

Hybrid methods: implement two or more different recommenders and combine their predictions, perhaps using a linear model. Or add content-based methods to collaborative filtering: item profiles to deal with the new-item problem, demographics to deal with the new-user problem.
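
For instance, a hybrid score could be a weighted sum of a content-based prediction and a collaborative-filtering prediction. A minimal sketch (the weights are illustrative and would normally be fit on held-out ratings):

```python
def hybrid_predict(u, i, content_pred, cf_pred, w_content=0.4, w_cf=0.6):
    """Linear blend of two recommenders' predicted ratings for user u and item i."""
    return w_content * content_pred(u, i) + w_cf * cf_pred(u, i)

# Illustrative usage with two stub predictors:
print(hybrid_predict("Alice", "Matrix", lambda u, i: 3.5, lambda u, i: 4.0))  # 3.8
```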

[Figure: a users-by-movies rating matrix with the known ratings filled in.]

[Figure: the same users-by-movies matrix with some known ratings withheld as a test data set, shown as "?".]

Compare predictions with known ratings. Root-mean-square error (RMSE). Precision at top 10: % of those in the top 10. Rating of top 10: the average rating assigned to the top 10 predictions. Rank correlation: Spearman's correlation r_s between the system's and the user's complete rankings. Another approach: a 0/1 model. Coverage: the number of items/users for which the system can make predictions. Precision: accuracy of the predictions. Receiver operating characteristic (ROC): the tradeoff curve between false positives and false negatives.
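
A sketch of two of these measures: RMSE follows the standard definition, and "precision at top 10" is computed here under one reasonable reading (overlap between the system's top 10 and the user's true top 10), which the slide does not pin down:

```python
import math

def rmse(predicted, actual):
    """Root-mean-square error over the (user, item) pairs with known test ratings."""
    keys = [k for k in actual if k in predicted]
    return math.sqrt(sum((predicted[k] - actual[k]) ** 2 for k in keys) / len(keys))

def precision_at_10(predicted, actual):
    """Fraction of the system's top-10 items that also appear in the true top 10."""
    top_pred = sorted(predicted, key=predicted.get, reverse=True)[:10]
    top_true = sorted(actual, key=actual.get, reverse=True)[:10]
    return len(set(top_pred) & set(top_true)) / len(top_pred)

# Illustrative usage: keys are (user, item) pairs, values are ratings.
actual = {("u1", "m1"): 4, ("u1", "m2"): 2}
predicted = {("u1", "m1"): 3.5, ("u1", "m2"): 2.5}
print(rmse(predicted, actual))            # 0.5
print(precision_at_10(predicted, actual))
```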

A narrow focus on accuracy sometimes misses the point: prediction diversity, prediction context, the order of predictions. In practice, we care only about predicting high ratings: RMSE might penalize a method that does well for high ratings and badly for the others.

Leverage all the Netflix data. Don't try to reduce the data size in an effort to make fancy algorithms work; simple methods on large data do best. Add more data, e.g., add IMDB data on genres. More data beats better algorithms: http://anand.typepad.com/datawocky/2008/03/more-data-usual.html

A common problem that comes up in many settings: given a large number N of vectors in some high-dimensional space (M dimensions), find the pairs of vectors that have high similarity, e.g., user profiles or item profiles. We already know how to do this! Near-neighbor search in high dimensions (LSH), dimensionality reduction.
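
As a reminder of the LSH idea for cosine similarity, here is a minimal random-hyperplane sketch (not from the slides; the dimensions and vectors are illustrative): vectors whose bit signatures agree on many positions are likely to have a small angle between them.

```python
import random

def random_hyperplanes(dim, n_bits, seed=0):
    rng = random.Random(seed)
    return [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_bits)]

def signature(vec, planes):
    """One bit per hyperplane: which side of the hyperplane the vector falls on."""
    return [1 if sum(p * x for p, x in zip(plane, vec)) >= 0 else 0 for plane in planes]

planes = random_hyperplanes(dim=3, n_bits=16)
a, b = [1.0, 0.9, 0.1], [0.9, 1.0, 0.0]
agree = sum(x == y for x, y in zip(signature(a, planes), signature(b, planes)))
print(agree / 16)   # high agreement suggests high cosine similarity; bucket by signature to find candidate pairs
```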

Training data: 100 million ratings, 480,000 users, 17,770 movies; 6 years of data (2000-2005). Test data: the last few ratings of each user (2.8 million). Evaluation criterion: root-mean-square error (RMSE); Netflix's Cinematch RMSE: 0.9514. Competition: 2,700+ teams; a $1 million prize for a 10% improvement on Cinematch.

Next topic: recommendations via latent factor models. [Figure: "Overview of Coffee Varieties", products plotted by exoticness/price against complexity of flavor, ranging from popular roasts and blends to flavored and exotic varieties. The bubbles represent products sized by sales volume; products close to each other are recommended to each other.]

[Figure (BellKor team): movies placed in a two-dimensional latent factor space. One axis runs from "serious" to "escapist", the other from "geared towards females" to "geared towards males". Examples: The Color Purple, Amadeus, Braveheart, Sense and Sensibility, Ocean's 11, Lethal Weapon, Dave, The Princess Diaries, The Lion King, Independence Day, Dumb and Dumber, Gus.]

Koren, Bell, Volinsky, IEEE Computer, 2009