Mining of Massive Datasets Jure Leskovec, Anand Rajaraman, Jeff Ullman Stanford University Infinite data. Filtering data streams

Similar documents
Thanks to Jure Leskovec, Anand Rajaraman, Jeff Ullman

CS246: Mining Massive Datasets Jure Leskovec, Stanford University

Introduction to Data Mining

BBS654 Data Mining. Pinar Duygulu

CS246: Mining Massive Datasets Jure Leskovec, Stanford University

Mining of Massive Datasets Jure Leskovec, Anand Rajaraman, Jeff Ullman Stanford University

CS246: Mining Massive Datasets Jure Leskovec, Stanford University

CS 124/LINGUIST 180 From Languages to Information

CS 124/LINGUIST 180 From Languages to Information

CS 124/LINGUIST 180 From Languages to Information

CS 5614: (Big) Data Management Systems. B. Aditya Prakash Lecture #16: Recommendation Systems

Recommendation and Advertising. Shannon Quinn (with thanks to J. Leskovec, A. Rajaraman, and J. Ullman of Stanford University)

Instructor: Dr. Mehmet Aktaş. Mining of Massive Datasets Jure Leskovec, Anand Rajaraman, Jeff Ullman Stanford University

Real-time Recommendations on Spark. Jan Neumann, Sridhar Alla (Comcast Labs) DC Spark Interactive Meetup East May

Recommender Systems. Collaborative Filtering & Content-Based Recommending

Recommender Systems - Content, Collaborative, Hybrid

CS 345A Data Mining Lecture 1. Introduction to Web Mining

Recommender Systems 6CCS3WSN-7CCSMWAL

Big Data Analytics CSCI 4030

CS246: Mining Massive Datasets Jure Leskovec, Stanford University

COMP 465: Data Mining Recommender Systems

Recommendation Systems

Introduction to Data Mining

Knowledge Discovery and Data Mining 1 (VO) ( )

CS246: Mining Massive Datasets Jure Leskovec, Stanford University

Recommender System. What is it? How to build it? Challenges. R package: recommenderlab

Data Mining Lecture 2: Recommender Systems

Towards a hybrid approach to Netflix Challenge

Recommender Systems - Introduction. Data Mining Lecture 2: Recommender Systems

COMP6237 Data Mining Making Recommendations. Jonathon Hare

CS246: Mining Massive Datasets Jure Leskovec, Stanford University

Part 11: Collaborative Filtering. Francesco Ricci

CS246: Mining Massive Datasets Jure Leskovec, Stanford University

Using Social Networks to Improve Movie Rating Predictions

Use of KNN for the Netflix Prize Ted Hong, Dimitris Tsamis Stanford University

Big Data Analytics CSCI 4030

Big Data Analytics CSCI 4030

Mining Web Data. Lijun Zhang

Machine Learning using MapReduce

Recommender Systems. Techniques of AI

Part 12: Advanced Topics in Collaborative Filtering. Francesco Ricci

3 announcements: Thanks for filling out the HW1 poll HW2 is due today 5pm (scans must be readable) HW3 will be posted today

Mining Web Data. Lijun Zhang

CS246: Mining Massive Datasets Jure Leskovec, Stanford University

CS246: Mining Massive Datasets Jure Leskovec, Stanford University

CS246: Mining Massive Datasets Jure Leskovec, Stanford University

High dim. data. Graph data. Infinite data. Machine learning. Apps. Locality sensitive hashing. Filtering data streams.

CS224W Project: Recommendation System Models in Product Rating Predictions

Mining Social Network Graphs

Recommendation Algorithms: Collaborative Filtering. CSE 6111 Presentation Advanced Algorithms Fall Presented by: Farzana Yasmeen

CS249: ADVANCED DATA MINING

Data Mining Techniques

Part 11: Collaborative Filtering. Francesco Ricci

Data Mining Techniques

CS 229 Final Project - Using machine learning to enhance a collaborative filtering recommendation system for Yelp

CS246: Mining Massive Datasets Jure Leskovec, Stanford University

The influence of social filtering in recommender systems

Problem 1: Complexity of Update Rules for Logistic Regression

CS246: Mining Massive Datasets Jure Leskovec, Stanford University

Matrix-Vector Multiplication by MapReduce. From Rajaraman / Ullman- Ch.2 Part 1

Predicting User Ratings Using Status Models on Amazon.com

Feature Extractors. CS 188: Artificial Intelligence Fall Some (Vague) Biology. The Binary Perceptron. Binary Decision Rule.

Performance Comparison of Algorithms for Movie Rating Estimation

DS504/CS586: Big Data Analytics Big Data Clustering Prof. Yanhua Li

arxiv: v4 [cs.ir] 28 Jul 2016

Web Personalization & Recommender Systems

Web Personalization & Recommender Systems

International Journal of Advance Engineering and Research Development. A Facebook Profile Based TV Shows and Movies Recommendation System

Text classification II CE-324: Modern Information Retrieval Sharif University of Technology

CPSC 340: Machine Learning and Data Mining. Non-Parametric Models Fall 2016

CPSC 340: Machine Learning and Data Mining. Recommender Systems Fall 2017

CSE 5243 INTRO. TO DATA MINING

General Instructions. Questions

Near Neighbor Search in High Dimensional Data (1) Dr. Anwar Alhenshiri

Recommender Systems New Approaches with Netflix Dataset

A probabilistic model to resolve diversity-accuracy challenge of recommendation systems

Instructor: Stefan Savev

Recommender Systems (RSs)

CS246: Mining Massive Datasets Jure Leskovec, Stanford University.

Jeff Howbert Introduction to Machine Learning Winter

CS570: Introduction to Data Mining

CSE 258 Lecture 8. Web Mining and Recommender Systems. Extensions of latent-factor models, (and more on the Netflix prize)

Distributed Itembased Collaborative Filtering with Apache Mahout. Sebastian Schelter twitter.com/sscdotopen. 7.

CS535 Big Data Fall 2017 Colorado State University 10/10/2017 Sangmi Lee Pallickara Week 8- A.

CS435 Introduction to Big Data Spring 2018 Colorado State University. 3/21/2018 Week 10-B Sangmi Lee Pallickara. FAQs. Collaborative filtering

Hybrid Recommendation System Using Clustering and Collaborative Filtering

Collaborative Filtering using Weighted BiPartite Graph Projection A Recommendation System for Yelp

CSE 158 Lecture 8. Web Mining and Recommender Systems. Extensions of latent-factor models, (and more on the Netflix prize)

A PROPOSED HYBRID BOOK RECOMMENDER SYSTEM

CS246: Mining Massive Datasets Jure Leskovec, Stanford University

vector space retrieval many slides courtesy James Amherst

Feature Extractors. CS 188: Artificial Intelligence Fall Nearest-Neighbor Classification. The Perceptron Update Rule.

Recommender Systems. Nivio Ziviani. Junho de Departamento de Ciência da Computação da UFMG

Nearest Neighbor with KD Trees

Neighborhood-Based Collaborative Filtering

Finding Similar Items:Nearest Neighbor Search

Classification: Feature Vectors

Machine Learning and Data Mining. Collaborative Filtering & Recommender Systems. Kalev Kask

COSC160: Detection and Classification. Jeremy Bolton, PhD Assistant Teaching Professor

Collaborative Filtering using Euclidean Distance in Recommendation Engine

Transcription:

Note to other teachers and users of these slides: We would be delighted if you found our material useful in giving your own lectures. Feel free to use these slides verbatim, or to modify them to fit your own needs. If you make use of a significant portion of these slides in your own lecture, please include this message, or a link to our web site: http://www.mmds.org

Mining of Massive Datasets. Jure Leskovec, Anand Rajaraman, Jeff Ullman. Stanford University. http://www.mmds.org

Course map:
High dim. data: Locality sensitive hashing; Clustering; Dimensionality reduction
Graph data: PageRank, SimRank; Community Detection; Spam Detection
Infinite data: Filtering data streams; Web advertising; Queries on streams
Machine learning: SVM; Decision Trees; Perceptron, kNN
Apps: Recommender systems; Association Rules; Duplicate document detection

Customer X buys a Metallica CD and a Megadeth CD. Customer Y does a search on Metallica; the recommender system suggests Megadeth from the data collected about customer X. Examples: search, recommendations. Items: products, web sites, blogs, news items, ...

Shelf space is a scarce commodity for traditional retailers. Also: TV networks, movie theaters, ... The Web enables near-zero-cost dissemination of information about products: from scarcity to abundance. More choice necessitates better filters: recommendation engines. How Into Thin Air made Touching the Void a bestseller: http://www.wired.com/wired/archive/.0/tail.html Source: Chris Anderson (2004)

Read http://www.wired.com/wired/archive/.0/tail.html to learn more!

Types of recommendations: Editorial and hand curated: lists of favorites, lists of essential items. Simple aggregates: Top 10, Most Popular, Recent Uploads. Tailored to individual users: Amazon, Netflix, ...

Formal model: X = set of Customers, S = set of Items. Utility function u: X × S → R, where R = set of ratings; R is a totally ordered set, e.g., 0-5 stars or a real number in [0,1].

(Example utility matrix: users Alice, Bob, Carol, David rating the movies Avatar, LOTR, Matrix, Pirates; most entries are blank.)
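
A minimal sketch (in Python, not from the slides) of how such a sparse utility matrix might be stored; the users and movies follow the slide's example, but the rating values are hypothetical:

```python
# Sparse utility matrix u: X x S -> R as a dict of dicts.
# A missing entry means "not rated", which is different from a rating of 0.
utility = {
    "Alice": {"Avatar": 1.0, "Matrix": 0.2},   # hypothetical ratings in [0, 1]
    "Bob":   {"Avatar": 0.5, "LOTR": 0.3},
    "Carol": {"Matrix": 1.0, "Pirates": 0.4},
    "David": {"LOTR": 0.2},
}

def rating(user, item):
    """Return u(user, item), or None if the rating is unknown."""
    return utility.get(user, {}).get(item)

print(rating("Alice", "Matrix"))   # 0.2
print(rating("Alice", "Pirates"))  # None -> this is what we want to predict
```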

Three key problems: (1) Gathering known ratings for the matrix: how to collect the data in the utility matrix. (2) Extrapolating unknown ratings from the known ones: we are mainly interested in high unknown ratings; we are not interested in knowing what you don't like but what you like. (3) Evaluating extrapolation methods: how to measure the success/performance of recommendation methods.

Gathering ratings: Explicit: ask people to rate items. Doesn't work well in practice, people can't be bothered. Implicit: learn ratings from user actions, e.g., a purchase implies a high rating. But what about low ratings?

Key problem: the utility matrix U is sparse; most people have not rated most items. Cold start: new items have no ratings; new users have no history. Three approaches to recommender systems: (1) Content-based, (2) Collaborative, (3) Latent factor based. Today: content-based and collaborative.

Content-based recommendations. Main idea: recommend to customer x items similar to previous items rated highly by x. Examples: movie recommendations: recommend movies with the same actor(s), director, genre, ... Websites, blogs, news: recommend other sites with similar content.

(Diagram: from items the user likes, build item profiles, match them against the user profile, and recommend similar items, e.g., red circles and triangles.)

Item profiles. For each item, create an item profile. A profile is a set (vector) of features. Movies: author, title, actor, director, ... Text: the set of important words in the document. How to pick important features? The usual heuristic from text mining is TF-IDF (term frequency * inverse document frequency); term = feature, document = item.

Let $f_{ij}$ = frequency of term (feature) $i$ in doc (item) $j$, $n_i$ = number of docs that mention term $i$, and $N$ = total number of docs. Then $TF_{ij} = f_{ij} / \max_k f_{kj}$ (note: we normalize TF to discount for longer documents) and $IDF_i = \log(N / n_i)$. TF-IDF score: $w_{ij} = TF_{ij} \times IDF_i$. Doc profile = set of words with highest TF-IDF scores, together with their scores.
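
A minimal sketch of this TF-IDF computation; the toy corpus and tokenization are made up for illustration:

```python
import math
from collections import Counter

docs = [
    "the matrix is a science fiction film".split(),
    "the lord of the rings is a fantasy film".split(),
    "pirates of the caribbean is an adventure film".split(),
]

N = len(docs)
doc_freq = Counter(term for doc in docs for term in set(doc))  # n_i

def tf_idf(doc):
    """w_ij = TF_ij * IDF_i, with TF normalized by the doc's max term frequency."""
    counts = Counter(doc)
    max_f = max(counts.values())
    return {term: (f / max_f) * math.log(N / doc_freq[term])
            for term, f in counts.items()}

# Doc profile: the words with the highest TF-IDF scores.
profile = tf_idf(docs[0])
print(sorted(profile.items(), key=lambda kv: -kv[1])[:3])
```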

User profiles. Possibilities: a weighted average of the rated item profiles. Variation: weight by difference from the item's average rating. Prediction heuristic: given user profile $x$ and item profile $i$, estimate
$u(x, i) = \cos(x, i) = \frac{x \cdot i}{\|x\| \cdot \|i\|}$

Pros of the content-based approach: +: No need for data on other users; no cold-start or sparsity problems. +: Able to recommend to users with unique tastes. +: Able to recommend new & unpopular items; no first-rater problem. +: Able to provide explanations: can explain recommendations by listing the content features that caused an item to be recommended.
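
A minimal sketch of this cosine prediction heuristic; the feature space and the profile vectors are hypothetical:

```python
import math

def cos_sim(x, i):
    """u(x, i) = (x . i) / (|x| * |i|); 0.0 if either vector is all zeros."""
    dot = sum(a * b for a, b in zip(x, i))
    norm = math.sqrt(sum(a * a for a in x)) * math.sqrt(sum(b * b for b in i))
    return dot / norm if norm else 0.0

user_profile = [0.8, 0.1, 0.0]  # hypothetical weights over (sci-fi, fantasy, adventure)
item_profiles = {"Matrix": [1.0, 0.0, 0.0], "Pirates": [0.0, 0.2, 0.9]}

for item, vec in item_profiles.items():
    print(item, round(cos_sim(user_profile, vec), 3))
# Recommend the items whose profiles have the highest cosine with the user profile.
```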

Cons of the content-based approach: -: Finding the appropriate features is hard, e.g., for images, movies, music. -: Recommendations for new users: how to build a user profile? -: Overspecialization: never recommends items outside the user's content profile; people might have multiple interests; unable to exploit the quality judgments of other users.

Collaborative filtering: harnessing the quality judgments of other users.

Collaborative filtering. Consider user x. Find a set N of other users whose ratings are similar to x's ratings. Estimate x's ratings based on the ratings of the users in N.

Finding similar users: let $r_x$ be the vector of user x's ratings. Example: $r_x$ = [*, _, _, *, ***], $r_y$ = [*, _, **, **, _]. As sets (of rated items): $r_x$ = {1, 4, 5}, $r_y$ = {1, 3, 4}. As points (star counts): $r_x$ = [1, 0, 0, 1, 3], $r_y$ = [1, 0, 2, 2, 0].

Jaccard similarity measure. Problem: ignores the value of the rating. Cosine similarity measure: $sim(x, y) = \cos(r_x, r_y) = \frac{r_x \cdot r_y}{\|r_x\| \cdot \|r_y\|}$. Problem: treats missing ratings as negative. Pearson correlation coefficient: with $S_{xy}$ = items rated by both users x and y, and $\bar{r}_x, \bar{r}_y$ the average ratings of x and y:
$sim(x, y) = \frac{\sum_{s \in S_{xy}} (r_{xs} - \bar{r}_x)(r_{ys} - \bar{r}_y)}{\sqrt{\sum_{s \in S_{xy}} (r_{xs} - \bar{r}_x)^2} \sqrt{\sum_{s \in S_{xy}} (r_{ys} - \bar{r}_y)^2}}$

Similarity metric example. Cosine similarity: $sim(x, y) = \frac{\sum_i r_{xi} r_{yi}}{\sqrt{\sum_i r_{xi}^2} \sqrt{\sum_i r_{yi}^2}}$. Intuitively we want sim(A, B) > sim(A, C). Jaccard similarity: 1/5 < 2/4. Cosine similarity: 0.380 > 0.322, but it considers missing ratings as negative. Solution: subtract the (row) mean; then sim(A, B) vs. sim(A, C) gives 0.092 > -0.559. Notice that cosine similarity is correlation when the data is centered at 0.

From a similarity metric to recommendations (see the sketch below): let $r_x$ be the vector of user x's ratings, and let N be the set of the k users most similar to x who have rated item i. Prediction for item i of user x: $r_{xi} = \frac{1}{k} \sum_{y \in N} r_{yi}$, or, better, a similarity-weighted average with shorthand $s_{xy} = sim(x, y)$:
$r_{xi} = \frac{\sum_{y \in N} s_{xy} \cdot r_{yi}}{\sum_{y \in N} s_{xy}}$
Other options? Many other tricks are possible.
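
A minimal sketch of centered (Pearson) cosine similarity and the similarity-weighted prediction; the rating vectors for users A, B, C are reconstructed to match the numbers quoted above (0 marks a missing rating):

```python
import math

def centered(r):
    """Subtract the user's mean rating from each rated entry; keep 0 for missing."""
    rated = [v for v in r if v != 0]
    mean = sum(rated) / len(rated)
    return [v - mean if v != 0 else 0.0 for v in r]

def cos_sim(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

A = [4, 0, 0, 5, 1, 0, 0]
B = [5, 5, 4, 0, 0, 0, 0]
C = [0, 0, 0, 2, 4, 5, 0]

print(round(cos_sim(A, B), 3), round(cos_sim(A, C), 3))        # 0.38 0.322
print(round(cos_sim(centered(A), centered(B)), 3),
      round(cos_sim(centered(A), centered(C)), 3))             # 0.092 -0.559

def predict(neighbors):
    """r_xi = sum(s_xy * r_yi) / sum(s_xy) over the k most similar users y."""
    num = sum(s * r for s, r in neighbors)
    den = sum(s for s, _ in neighbors)
    return num / den if den else None

print(predict([(0.9, 4), (0.5, 3)]))  # hypothetical neighbors -> 3.64...
```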

Item-item collaborative filtering. So far: user-user collaborative filtering. Another view: item-item. For item i, find other similar items; estimate the rating for item i based on the ratings for similar items. Can use the same similarity metrics and prediction functions as in the user-user model:
$r_{xi} = \frac{\sum_{j \in N(i;x)} s_{ij} \cdot r_{xj}}{\sum_{j \in N(i;x)} s_{ij}}$
where $s_{ij}$ = similarity of items i and j, $r_{xj}$ = rating of user x on item j, and N(i;x) = the set of items rated by x that are similar to i.

(Example: a utility matrix of 12 users and 6 movies; blank = unknown rating, entries are ratings between 1 and 5.)

Estimate the rating of movie 1 by user 5. Here we use Pearson correlation as the similarity: (1) Subtract the mean rating $m_i$ from each movie i; e.g., $m_1$ = (1+3+5+5+4)/5 = 3.6, so row 1 becomes [-2.6, 0, -0.6, 0, 0, 1.4, 0, 0, 1.4, 0, 0.4, 0]. (2) Compute cosine similarities between rows: sim(1, m) = 1.00, -0.18, 0.41, -0.10, -0.31, 0.59 for m = 1, ..., 6. Neighbor selection: identify the movies similar to movie 1 that were rated by user 5.

Compute the similarity weights: $s_{1,3}$ = 0.41, $s_{1,6}$ = 0.59. Predict by taking a weighted average (user 5 rated movie 3 with 2 and movie 6 with 3):
$r_{1,5} = \frac{0.41 \cdot 2 + 0.59 \cdot 3}{0.41 + 0.59} = 2.6$
In general: $r_{ix} = \frac{\sum_{j \in N(i;x)} s_{ij} r_{jx}}{\sum_{j \in N(i;x)} s_{ij}}$
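
A minimal sketch reproducing this worked item-item prediction; only the two neighbors from the example are shown, the rest of the utility matrix is omitted:

```python
# Item-item prediction for movie 1, user 5.
# N(1; 5) = {movie 3, movie 6}, with Pearson similarities from the example.
neighbors = [
    (0.41, 2),  # s_{1,3} = 0.41; user 5 rated movie 3 with 2
    (0.59, 3),  # s_{1,6} = 0.59; user 5 rated movie 6 with 3
]

num = sum(s * r for s, r in neighbors)
den = sum(s for s, _ in neighbors)
print(round(num / den, 1))  # 2.6
```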

Item-item CF with a baseline. Define the similarity $s_{ij}$ of items i and j. Select the k nearest neighbors N(i; x): as before, the items most similar to i that were rated by x. Estimate the rating $r_{xi}$ as a weighted average of deviations from the baseline estimates:
$r_{xi} = b_{xi} + \frac{\sum_{j \in N(i;x)} s_{ij} \cdot (r_{xj} - b_{xj})}{\sum_{j \in N(i;x)} s_{ij}}$
where the baseline estimate for $r_{xi}$ is $b_{xi} = \mu + b_x + b_i$, with $\mu$ = overall mean movie rating, $b_x$ = rating deviation of user x = (avg. rating of user x) - $\mu$, and $b_i$ = rating deviation of movie i.

(Example utility matrix: Alice, Bob, Carol, David rating Avatar, LOTR, Matrix, Pirates.)

In practice, it has been observed that item-item often works better than user-user. Why? Items are simpler; users have multiple tastes.
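
A minimal sketch of the baseline estimate $b_{xi} = \mu + b_x + b_i$ on a small hypothetical set of (user, item, rating) triples:

```python
from collections import defaultdict

# Hypothetical (user, item, rating) triples for illustration.
ratings = [("A", "i1", 4), ("A", "i2", 5), ("B", "i1", 2), ("B", "i3", 3), ("C", "i2", 4)]

mu = sum(r for _, _, r in ratings) / len(ratings)  # overall mean rating

by_user, by_item = defaultdict(list), defaultdict(list)
for u, i, r in ratings:
    by_user[u].append(r)
    by_item[i].append(r)

def baseline(u, i):
    """b_xi = mu + b_x + b_i; deviations default to 0 for unseen users/items."""
    b_x = sum(by_user[u]) / len(by_user[u]) - mu if u in by_user else 0.0
    b_i = sum(by_item[i]) / len(by_item[i]) - mu if i in by_item else 0.0
    return mu + b_x + b_i

print(round(baseline("A", "i1"), 2))  # 3.9 = 3.6 + 0.9 - 0.6
```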

Pros/cons of collaborative filtering: +: Works for any kind of item; no feature selection needed. -: Cold start: need enough users in the system to find a match. -: Sparsity: the user/ratings matrix is sparse; it is hard to find users that have rated the same items. -: First rater: cannot recommend an item that has not previously been rated (new items, esoteric items). -: Popularity bias: cannot recommend items to someone with unique taste; tends to recommend popular items.

Hybrid methods: implement two or more different recommenders and combine their predictions, perhaps using a linear model. Add content-based methods to collaborative filtering: item profiles for the new-item problem, demographics for the new-user problem.

Remaining topics: evaluation, error metrics, complexity/speed.

(Figure: a users x movies utility matrix of known ratings.)

(Figure: the same utility matrix with some known entries withheld and marked "?", forming the test data set.)

Compare predictions with the known ratings. Root-mean-square error (RMSE):
$RMSE = \sqrt{\frac{1}{|T|} \sum_{(x,i) \in T} (\hat{r}_{xi} - r_{xi})^2}$
where $\hat{r}_{xi}$ is the predicted rating, $r_{xi}$ is the true rating of x on i, and T is the test set. Precision at top 10: % of those in the top 10. Rank correlation: Spearman's correlation between the system's and the user's complete rankings. Another approach: 0/1 model. Coverage: the number of items/users for which the system can make predictions. Precision: accuracy of predictions. Receiver operating characteristic (ROC): the tradeoff curve between false positives and false negatives.
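
A minimal sketch of RMSE over a held-out test set of (predicted, true) rating pairs; the numbers are hypothetical:

```python
import math

# (predicted, true) rating pairs on the held-out test set -- hypothetical values.
pairs = [(3.8, 4), (2.6, 3), (4.9, 5), (1.2, 2)]

rmse = math.sqrt(sum((p - t) ** 2 for p, t in pairs) / len(pairs))
print(round(rmse, 3))  # 0.461
```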

Problems with error measures: a narrow focus on accuracy sometimes misses the point: prediction diversity, prediction context, the order of predictions. In practice, we care only to predict high ratings: RMSE might penalize a method that does well for high ratings and badly for others.

Complexity: the expensive step is finding the k most similar customers: O(|X|). Too expensive to do at runtime; we could pre-compute, but naïve pre-computation takes time O(k · |X|), where X is the set of customers. We already know how to do this: near-neighbor search in high dimensions (LSH), clustering, dimensionality reduction.

Tip: leverage all the data. Don't try to reduce data size in an effort to make fancy algorithms work; simple methods on large data do best. Add more data, e.g., add IMDB data on genres. More data beats better algorithms: http://anand.typepad.com/datawocky/008/0/more-data-usual.html