Data Mining Techniques


Data Mining Techniques. CS 6220, Spring 2017. Lecture by Jan-Willem van de Meent (credit: Andrew Ng, Alex Smola, Yehuda Koren, Stanford CS246)

Project

Project Deadlines
- Feb: Form teams
- 7 Feb: Submit abstract (one paragraph)
- Mar: Submit proposals
- Mar: Milestone (exploratory analysis)
- Mar: Milestone (statistical analysis)
- 6 Apr (Sun): Submit reports
- Apr (Fri): Submit peer reviews

Project Reports
Rough guideline for contents:
- Introduction / motivation
- Exploratory analysis (if applicable)
- Data mining analysis
- Discussion of results

Project Review
One review per person (randomly assigned). Reviews should discuss aspects of the report:
- Clarity (is the writing clear?)
- Technical merit (are the methods valid?)
- Reproducibility (is it clear how the results were obtained?)
- Discussion (are the results interpretable?)

Final Exam

Topic List
http://www.ccs.neu.edu/home/jwvdm/teaching/cs6/spring7/final-topics.html
Emphasis is on post-midterm topics, but some pre-midterm topics are included.

Recommender Systems

The Long Tail (from: https://www.wired.com///tail/)


Problem Setting


Problem Setting Task: Predict user preferences for unseen items

Content-based Filtering
[figure: movies placed on two axes, serious vs. escapist and geared towards females vs. geared towards males: The Color Purple, Amadeus, Braveheart, Sense and Sensibility, Ocean's, Lethal Weapon, Dave, The Princess Diaries, The Lion King, Independence Day, Gus, Dumb and Dumber]
Idea: Predict the rating using item features on a per-user basis, or using user features on a per-item basis.

Collaborative Filtering
[figure: a sparse user-item rating matrix, with Joe's row of ratings highlighted]
Idea: Predict the rating based on similarity to other users.

Problem Setting
Task: Predict user preferences for unseen items
- Content-based filtering: model user/item features
- Collaborative filtering: exploit the implicit similarity of users and items

Recommender Systems
- Movie recommendation (Netflix)
- Related product recommendation (Amazon)
- Web page ranking (Google)
- Social recommendation (Facebook)
- Priority inbox & spam filtering (Google)
- Online dating (OkCupid)
- Computational advertising (everyone)

Challenges
- Scalability: millions of objects, 100s of millions of users
- Cold start: changing user base, changing inventory
- Imbalanced dataset: user activity and item reviews are power-law distributed; ratings are not missing at random

Running Example: Netflix Data
[table: training data as (user, movie, date, score) tuples; test data as (user, movie, date) tuples with the score withheld]
Released as part of the $1M competition by Netflix in 2006; the prize was awarded to BellKor in 2009.

Running Yardstick: RMSE

\mathrm{rmse}(S) = \sqrt{ \frac{1}{|S|} \sum_{(i,u) \in S} \left( \hat{r}_{ui} - r_{ui} \right)^2 }

(doesn't tell you how to actually do recommendation)
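The yardstick itself is easy to compute. A minimal sketch in plain Python, using dicts keyed by (item, user) pairs to represent the set S (that representation is an assumption, not something fixed by the slides):

```python
import math

def rmse(predicted, actual):
    """RMSE over the evaluation set S, given as dicts that map
    (item, user) pairs to ratings; the keys of `actual` define S."""
    sq_errors = [(predicted[iu] - r) ** 2 for iu, r in actual.items()]
    return math.sqrt(sum(sq_errors) / len(sq_errors))

# Two predictions, each off by exactly one star:
print(rmse({(0, 0): 4.0, (1, 0): 2.0}, {(0, 0): 5.0, (1, 0): 3.0}))  # 1.0
```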

Content-based Filtering

Item-based Features


Per-user Regression
Learn a set of regression coefficients for each user:

w_u = \arg\min_w \| r_u - X w \|^2
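A sketch of the per-user fit on made-up numbers. A tiny ridge penalty `lam` is added to keep the solve well-conditioned; the penalty is my addition, not part of the objective above:

```python
import numpy as np

# Hypothetical item-feature matrix X (items x features) and one user's
# ratings r_u over those items; the numbers are illustrative.
X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
r_u = np.array([4.0, 2.0, 6.0])

lam = 1e-6  # tiny ridge term, only for numerical stability
w_u = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ r_u)

print(np.round(w_u, 3))      # this user's regression coefficients
print(np.round(X @ w_u, 3))  # reconstructed ratings, close to r_u
```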

User Bias and Item Popularity


Bias
[figure: example ratings for Moonrise Kingdom]
Problem: some movies are universally loved or hated, and some users are pickier than others.
Solution: introduce a per-movie and per-user bias.
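A minimal sketch of the bias idea on toy data. Estimating the biases by simple per-user and per-movie averages is a simplification; in practice the global mean mu, the user bias b_u and the movie bias b_i are fit jointly with the rest of the model:

```python
import numpy as np

# Toy ratings as (user, movie, rating) triples; the data is made up.
ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (1, 1, 2.0)]

mu = np.mean([r for _, _, r in ratings])                       # global mean
b_u = {u: np.mean([r for uu, _, r in ratings if uu == u]) - mu
       for u in {u for u, _, _ in ratings}}                    # picky vs. generous users
b_i = {i: np.mean([r for _, ii, r in ratings if ii == i]) - mu
       for i in {i for _, i, _ in ratings}}                    # loved vs. hated movies

def predict(u, i):
    """Baseline prediction: global mean plus user and movie bias."""
    return mu + b_u[u] + b_i[i]

print(predict(0, 0))  # generous user, well-liked movie -> above the mean
```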

Collaborative Filtering

Neighborhood Based Methods
[figure: a sparse user-item rating matrix, with Joe's row of ratings highlighted]
Users and items form a bipartite graph (edges are ratings).

Neighborhood Based Methods
- (user, user) similarity: predict the rating from the average of the k-nearest users; good if the item base is small or changes rapidly
- (item, item) similarity: predict the rating from the average of the k-nearest items; good if the user base is small or changes rapidly

Parzen-Window Style CF
- Define a similarity s_ij between items
- Find the set ε_k(i; u) of the k nearest neighbors of i that were rated by user u
- Predict the rating using a weighted average over this set
How should we define s_ij?

Pearson Correlation Coefficient
[figure: two items' rating vectors across users, with many entries missing]

s_{ij} = \frac{\mathrm{Cov}[r_{ui}, r_{uj}]}{\mathrm{Std}[r_{ui}] \, \mathrm{Std}[r_{uj}]}

(item, item) similarity
Empirical estimate of the Pearson correlation coefficient:

\hat{\rho}_{ij} = \frac{\sum_{u \in U(i,j)} (r_{ui} - b_{ui})(r_{uj} - b_{uj})}{\sqrt{\sum_{u \in U(i,j)} (r_{ui} - b_{ui})^2 \sum_{u \in U(i,j)} (r_{uj} - b_{uj})^2}}

Regularize towards 0 for small support:

s_{ij} = \frac{|U(i,j)|}{|U(i,j)| + \lambda} \, \hat{\rho}_{ij}

Regularize towards the baseline for small neighborhoods.
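A sketch of the shrunk item-item similarity. Two simplifications are assumed: the mean over co-rating users stands in for the baselines b_ui, and the shrinkage constant `lam` is an arbitrary choice:

```python
import numpy as np

def shrunk_similarity(R, i, j, lam=25.0):
    """Shrunk Pearson similarity between items i and j.
    R: users x items array with np.nan for missing ratings.
    Uses the mean over co-rating users in place of the baselines b_ui,
    and shrinks towards 0 when the support U(i, j) is small."""
    both = ~np.isnan(R[:, i]) & ~np.isnan(R[:, j])  # users in U(i, j)
    n = int(both.sum())
    if n < 2:
        return 0.0
    di = R[both, i] - R[both, i].mean()
    dj = R[both, j] - R[both, j].mean()
    denom = np.sqrt((di ** 2).sum() * (dj ** 2).sum())
    rho = float((di * dj).sum() / denom) if denom > 0 else 0.0
    return n / (n + lam) * rho

R = np.array([[1.0, 2.0],
              [2.0, 3.0],
              [3.0, 4.0]])
print(shrunk_similarity(R, 0, 1))  # rho = 1, shrunk by a factor 3 / (3 + 25)
```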

Similarity for binary labels
Pearson correlation is not meaningful for binary labels (e.g. views, purchases, clicks).

Jaccard similarity:

s_{ij} = \frac{m_{ij}}{m_i + m_j - m_{ij}}

Observed / expected ratio:

s_{ij} = \frac{\text{observed}}{\text{expected}} = \frac{m_{ij}}{m_i m_j / m}

where m_i is the number of users acting on i, m_{ij} the number of users acting on both i and j, and m the total number of users.
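Both binary similarities are one-liners; a sketch with made-up counts:

```python
def jaccard(m_i, m_j, m_ij):
    """Fraction of the users acting on i or j who acted on both."""
    return m_ij / (m_i + m_j - m_ij)

def observed_expected(m_i, m_j, m_ij, m):
    """Observed co-occurrences over those expected under independence."""
    return m_ij / (m_i * m_j / m)

# Out of m = 100 users: 20 acted on i, 10 acted on j, 5 acted on both.
print(jaccard(20, 10, 5))                 # 5 / 25 = 0.2
print(observed_expected(20, 10, 5, 100))  # 5 / 2 = 2.5
```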

Matrix Factorization Methods

Matrix Factorization
[figure: example ratings for Moonrise Kingdom]
Idea: pose recommendation as a (biased) matrix factorization problem.

Matrix Factorization
[figure: a users × items rating matrix approximated by the product of two low-rank factor matrices; a low-rank SVD approximation]

Prediction
[figure: a missing rating is filled in from the low-rank SVD approximation]

SVD with missing values
Pose as a regression problem over the observed ratings, and regularize using the Frobenius norm:

\min_{W, X} \sum_{\text{observed } (u,i)} \left( r_{ui} - w_u^\top x_i \right)^2 + \lambda \left( \| W \|_F^2 + \| X \|_F^2 \right)

Alternating Least Squares
Alternate between two ridge regressions:
- Regress w_u given X; the L2-regularized problem has the closed-form solution w = (X^T X + \lambda I)^{-1} X^T y (remember ridge regression?)
- Regress x_i given W

Stochastic Gradient Descent
Update the factors one observed rating at a time. There is no need for locking: multiple cores can apply updates asynchronously (Recht, Ré, and Wright: Hogwild!).
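Each update touches only one row of W and one row of X, which is why lock-free asynchronous updates rarely collide on sparse data. A single-threaded sketch of the update (learning rate and penalty are arbitrary, and biases are omitted):

```python
import numpy as np

def sgd_step(W, X, u, i, r_ui, lr=0.05, lam=0.0):
    """One SGD step on the squared error of a single observed rating,
    updating only row u of W and row i of X."""
    err = r_ui - W[u] @ X[i]
    w_old = W[u].copy()                      # gradient uses the old w_u
    W[u] += lr * (err * X[i] - lam * W[u])
    X[i] += lr * (err * w_old - lam * X[i])

# Fit a single rating r = 4 to convergence.
W = np.full((1, 1), 0.1)
X = np.full((1, 1), 0.1)
for _ in range(500):
    sgd_step(W, X, 0, 0, 4.0)
print(round(float(W[0] @ X[0]), 2))  # -> 4.0
```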

Sampling Bias

Ratings are not given at random
[figure: rating distributions for Netflix ratings, Yahoo! music ratings, and Yahoo! survey answers]

Ratings are not given at random
[figure: two users × movies matrices: the ratings r_ui, modeled by matrix factorization, and the observation indicators c_ui, modeled by regression]

Temporal Effects

Changes in user behavior
Example: Netflix changed its rating labels.

Movies get better with time?

Temporal Effects
Solution: model temporal effects in the biases, not in the weights.

Netflix Prize

Netflix Prize
Training data: 100 million ratings, 480,000 users, 17,770 movies; 6 years of data (2000-2005)
Test data: the last few ratings of each user (2.8 million)
Evaluation criterion: root mean square error (RMSE); Netflix's own system scored RMSE 0.9514
Competition: 2,700+ teams; a $1 million prize for a 10% improvement over Netflix's result

Improvements
[figure: factor models, RMSE vs. millions of parameters, for NMF, BiasSVD, and SVD++]
- Do SGD, but also learn the biases μ, b_u and b_i
- Account for who rated what: ratings are not missing at random
- Model temporal effects
Still pretty far from the 0.8563 grand prize target.

Winning Solution from BellKor

Last days
The June 26th submission triggers a 30-day last call.

BellKor fends off competitors by a hair


Ratings aren't everything
[figure: Netflix's interface then vs. now]
Only the simpler submodels (SVD, RBMs) were ever implemented; ratings eventually proved to be only weakly informative.