CS246: Mining Massive Datasets Jure Leskovec, Stanford University
1 We need your help with our research on human-interpretable machine learning. Please complete a survey; it should be fun and take about 1 min to complete. Thanks a lot for your help! CS246: Mining Massive Datasets, Jure Leskovec, Stanford University
2 1/28/2016 Jure Leskovec, Stanford CS246: Mining Massive Datasets. Training movie rating data: 100 million ratings, 480,000 users, 17,770 movies; 6 years of data (2000-2005). Test movie rating data: last few ratings of each user (2.8 million ratings). Evaluation criterion: Root Mean Square Error, RMSE = √( (1/|R|) Σ_{(i,x)∈R} (r̂_xi − r_xi)² ). Netflix's system RMSE: 0.9514. Competition: 2,700+ teams; $1 million prize for 10% improvement on Netflix's system.
3 Training Data (labels known publicly): 100 million ratings. Held-Out Data (labels only known to Netflix): 3 million ratings, split into a Quiz Set (1.5m ratings, scores posted on the leaderboard) and a Test Set (1.5m ratings, scores known only to Netflix and used in determining the final winner).
4 Matrix R: 17,700 movies × 480,000 users.
5 Matrix R (17,700 movies × 480,000 users) is split into a Training Data Set and a Test Data Set (entries marked '?'). An entry such as r_3,6 is the true rating of user x on item i. RMSE = √( (1/|R|) Σ_{(i,x)∈R} (r̂_xi − r_xi)² ), where r̂_xi is the predicted rating and r_xi the true rating.
6 The winner of the Netflix Challenge. Multi-scale modeling of the data: combine a top-level, "regional" modeling of the data with a refined, local view. Global: overall deviations of users/movies. Factorization: addressing "regional" effects. Collaborative filtering: extract local patterns. [Diagram: Global effects → Factorization → Collaborative filtering]
7 Global: mean movie rating is 3.7 stars; The Sixth Sense is 0.5 stars above avg.; Joe rates 0.2 stars below avg. Baseline estimation: Joe will rate The Sixth Sense 4 stars. Local neighborhood (CF/NN): Joe didn't like the related movie Signs. Final estimate: Joe will rate The Sixth Sense 3.8 stars.
8 Earliest and most popular collaborative filtering method. Derive unknown ratings from those of "similar" movies (item-item variant). Define a similarity measure s_ij of items i and j, and select the k nearest neighbors N(i; x): the items most similar to i that were rated by x. Estimate the rating as a similarity-weighted average: r̂_xi = Σ_{j∈N(i;x)} s_ij · r_xj / Σ_{j∈N(i;x)} s_ij, where s_ij = similarity of items i and j, r_xj = rating of user x on item j, and N(i;x) = set of items similar to item i that were rated by x.
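A minimal sketch of this item-item prediction, assuming ratings held in a small dict of dicts and (arbitrarily) cosine similarity as s_ij; the data and function names are illustrative, not from the slides:

```python
import math

# Hypothetical toy data: ratings[user][item] = rating
ratings = {
    "x": {"A": 5, "B": 3, "C": 4},
    "y": {"A": 4, "B": 2, "C": 4, "D": 5},
    "z": {"A": 1, "B": 5, "D": 2},
}

def cosine_sim(i, j):
    """Cosine similarity s_ij, computed over users who rated both items."""
    common = [u for u in ratings if i in ratings[u] and j in ratings[u]]
    if not common:
        return 0.0
    dot = sum(ratings[u][i] * ratings[u][j] for u in common)
    ni = math.sqrt(sum(ratings[u][i] ** 2 for u in common))
    nj = math.sqrt(sum(ratings[u][j] ** 2 for u in common))
    return dot / (ni * nj)

def predict(x, i, k=2):
    """r̂_xi = sum_j s_ij * r_xj / sum_j s_ij over the k most similar items j rated by x."""
    sims = sorted(((cosine_sim(i, j), j) for j in ratings[x] if j != i), reverse=True)[:k]
    num = sum(s * ratings[x][j] for s, j in sims)
    den = sum(s for s, j in sims)
    return num / den if den else 0.0
```

Because the prediction is a weighted average of x's own ratings, it always lands inside the range of the neighbor ratings.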
9 In practice we get better estimates if we model deviations: r̂_xi = b_xi + Σ_{j∈N(i;x)} s_ij (r_xj − b_xj) / Σ_{j∈N(i;x)} s_ij, with baseline estimate b_xi = μ + b_x + b_i, where μ = overall mean rating, b_x = rating deviation of user x = (avg. rating of user x) − μ, and b_i = (avg. rating of movie i) − μ. Problems/issues with CF: 1) similarity measures are "arbitrary"; 2) pairwise similarities neglect interdependencies among users; 3) taking a weighted average can be restricting. Solution: instead of s_ij use weights w_ij that we estimate directly from data.
10 Use a weighted sum rather than a weighted avg.: r̂_xi = b_xi + Σ_{j∈N(i;x)} w_ij (r_xj − b_xj). A few notes: N(i; x) is the set of movies rated by user x that are similar to movie i; w_ij is the interpolation weight (some real number); we allow Σ_{j∈N(i;x)} w_ij ≠ 1; w_ij models the interaction between pairs of movies (it does not depend on user x).
11 r̂_xi = b_xi + Σ_{j∈N(i;x)} w_ij (r_xj − b_xj). How to set w_ij? Remember, the error metric is RMSE = √( (1/|R|) Σ_{(i,x)∈R} (r̂_xi − r_xi)² ), or equivalently SSE = Σ_{(i,x)∈R} (r̂_xi − r_xi)². Find the w_ij that minimize SSE on training data! They model the relationships between item i and its neighbors j, and can be learned/estimated based on x and all other users that rated i. Why is this a good idea?
12 Goal: make good recommendations. Quantify goodness using RMSE: lower RMSE means better recommendations. We want to make good recommendations on items that the user has not yet seen. Can't really do this! So let's build a system that works well on known (user, item) ratings, and hope that the system will also predict the unknown ratings well.
13 Idea: set the values w so that they work well on known (user, item) ratings. How to find such values w? Define an objective function and solve the optimization problem: find the w_ij that minimize SSE on training data, J(w) = Σ_{(x,i)∈R} ( [b_xi + Σ_{j∈N(i;x)} w_ij (r_xj − b_xj)] − r_xi )², where the bracketed term is the predicted rating and r_xi the true rating. Think of w as a vector of numbers.
14 A simple way to minimize a function f(x): compute the derivative (gradient) ∇f. Start at some point y and evaluate ∇f(y). Make a step in the reverse direction of the gradient: y = y − ∇f(y). Repeat until converged.
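These steps can be sketched on a one-dimensional toy function; the function and step size below are illustrative choices, not from the slides:

```python
def gradient_descent(grad, y0, eta=0.1, eps=1e-8, max_iter=10000):
    """Repeatedly step against the gradient: y <- y - eta * grad(y)."""
    y = y0
    for _ in range(max_iter):
        step = eta * grad(y)
        y -= step
        if abs(step) < eps:  # converged: steps have become negligible
            break
    return y

# f(y) = (y - 3)^2, so f'(y) = 2(y - 3); the minimum is at y = 3
minimum = gradient_descent(lambda y: 2 * (y - 3), y0=0.0)
```

With a small enough learning rate eta, each step shrinks the distance to the minimizer by a constant factor (here 0.8 per step).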
15 We have the optimization problem, now what? Gradient descent: iterate until convergence: w ← w − η·∇_w J, where ∇_w J is the gradient (the derivative evaluated on the data): [∇_w J]_ij = ∂J(w)/∂w_ij = Σ_{(x,i)∈R} 2 ( [b_xi + Σ_{k∈N(i;x)} w_ik (r_xk − b_xk)] − r_xi ) (r_xj − b_xj) for j ∈ N(i; x); otherwise ∂J(w)/∂w_ij = 0. η is the learning rate. Note: we fix movie i, go over all r_xi, and for every movie j ∈ N(i; x) compute ∂J(w)/∂w_ij. In pseudocode: while |w_new − w_old| > ε: w_old = w_new; w_new = w_old − η·∇w_old.
16 So far: r̂_xi = b_xi + Σ_{j∈N(i;x)} w_ij (r_xj − b_xj). The weights w_ij are derived based on their role; no use of an arbitrary similarity measure (w_ij ≠ s_ij), and they explicitly account for interrelationships among the neighboring movies. Next: latent factor model — extract "regional" correlations. [Global effects → Factorization → CF/NN]
17 RMSE so far: Global average: 1.1296; User average: 1.0651; Movie average: 1.0533; Netflix: 0.9514; Basic collaborative filtering: 0.94; CF+Biases+learned weights: 0.91; Grand Prize: 0.8563.
18 [Figure: movies placed in a two-dimensional space, one axis from "geared towards females" to "geared towards males", the other from "serious" to "funny": The Color Purple, Amadeus, Braveheart, Sense and Sensibility, Ocean's 11, Lethal Weapon, The Lion King, The Princess Diaries, Independence Day, Dumb and Dumber]
19 SVD on "Netflix data": R ≈ Q · P^T. For now let's assume we can approximate the rating matrix R as a product of "thin" matrices: Q (items × factors) and P^T (factors × users). R has missing entries, but let's ignore that for now! Basically, we want the reconstruction error to be small on the known ratings, and we don't care about the values on the missing ones. (SVD: A = U Σ V^T)
20 How to estimate the missing rating of user x for item i? r̂_xi = q_i · p_x = Σ_f q_if · p_xf, where q_i = row i of Q and p_x = column x of P^T.
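The dot-product estimate can be written out directly; the 2-factor matrices below are made-up illustrative values:

```python
# Hypothetical 2-factor model: one row of Q per item, one row of P per user
Q = [[1.2, -0.5],   # q_0
     [0.3,  0.8]]   # q_1
P = [[0.9,  0.4],   # p_0
     [-0.2, 1.1]]   # p_1

def predict_rating(i, x):
    """r̂_xi = q_i · p_x = sum_f q_if * p_xf"""
    return sum(qf * pf for qf, pf in zip(Q[i], P[x]))
```

For example, the estimate for user 1 on item 0 is 1.2·(−0.2) + (−0.5)·1.1 = −0.79 (on real data, ratings come out in the 1-5 range because P and Q are fit to the known ratings).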
23 [Figure: the same movie map redrawn in latent factor space: Factor 1 runs from "geared towards females" to "geared towards males", Factor 2 from "funny" to "serious"; movies shown: The Color Purple, Amadeus, Braveheart, Sense and Sensibility, Ocean's 11, Lethal Weapon, The Lion King, The Princess Diaries, Independence Day, Dumb and Dumber]
25 Remember SVD: A (m × n) = U Σ V^T, where A = input data matrix, U = left singular vectors, V = right singular vectors, and Σ = singular values. So in our case, SVD on Netflix data R ≈ Q · P^T means: A = R, Q = U, P^T = Σ V^T, and r̂_xi = q_i · p_x.
26 We already know that SVD gives the minimum reconstruction error (sum of squared errors): min_{U,Σ,V} Σ_{ij} (A_ij − [U Σ V^T]_ij)². Note two things: SSE and RMSE are monotonically related, RMSE = (1/c)·√SSE, so great news: SVD is minimizing RMSE! Complication: the sum in the SVD error term is over all entries (no rating is interpreted as zero rating), but our R has missing entries!
27 SVD isn't defined when entries are missing! Use specialized methods to find P, Q: min_{P,Q} Σ_{(i,x)∈R} (r_xi − q_i · p_x)², with r̂_xi = q_i · p_x. Note: we don't require the columns of P, Q to be orthogonal or unit length. P, Q map users/movies to a latent space. This was the most popular model among Netflix contestants.
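The key change from plain SVD is that the error is summed only over known entries. A small sketch of that masked objective, using None for missing ratings (the representation is an illustrative choice):

```python
def masked_sse(R, Q, P):
    """Sum of squared errors over *known* ratings only (None marks a missing entry).

    R is items x users, Q is items x factors, P is users x factors.
    """
    sse = 0.0
    for i, row in enumerate(R):
        for x, r in enumerate(row):
            if r is not None:                       # skip missing entries entirely
                pred = sum(qf * pf for qf, pf in zip(Q[i], P[x]))
                sse += (r - pred) ** 2
    return sse
```

A factorization method for this problem minimizes exactly this quantity (plus regularization), rather than treating missing entries as zeros the way SVD would.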
28
29 Our goal is to find P and Q such that: min_{P,Q} Σ_{(i,x)∈R} (r_xi − q_i · p_x)².
30 We want to minimize SSE for unseen test data. Idea: minimize SSE on training data. We want a large k (# of factors) to capture all the signals. But SSE on test data begins to rise beyond some k. This is a classical example of overfitting: with too much freedom (too many free parameters) the model starts fitting noise. That is, it fits the training data too well and thus does not generalize well to unseen test data.
31 To solve overfitting we introduce regularization: allow a rich model where there is sufficient data; shrink aggressively where data are scarce. min_{P,Q} Σ_{training} (r_xi − q_i · p_x)² + λ1 Σ_x ‖p_x‖² + λ2 Σ_i ‖q_i‖² ("error" plus "length" terms); λ1, λ2 are user-set regularization parameters. Note: we do not care about the raw value of the objective function; we care about the P, Q that achieve the minimum of the objective.
32 [Figure: the movie map in latent factor space (Factor 1: geared towards females ↔ males; Factor 2: funny ↔ serious) illustrating the regularized objective min_{P,Q} Σ_{training} (r_xi − q_i · p_x)² + λ1 Σ_x ‖p_x‖² + λ2 Σ_i ‖q_i‖², i.e. "min over factors of error + λ·length"; movies shown include The Color Purple, Amadeus, Braveheart, Sense and Sensibility, Ocean's 11, Lethal Weapon, The Lion King, The Princess Diaries, Independence Day, Dumb and Dumber]
36 We want to find matrices P and Q: min_{P,Q} Σ_{training} (r_xi − q_i · p_x)² + λ1 Σ_x ‖p_x‖² + λ2 Σ_i ‖q_i‖². Gradient descent: initialize P and Q (using SVD, pretending the missing ratings are 0), then do gradient descent: P ← P − η·∇P, Q ← Q − η·∇Q, where ∇Q is the gradient/derivative of matrix Q: ∇Q = [∇q_if] and ∇q_if = Σ_{x,i} −2 (r_xi − q_i · p_x) p_xf + 2 λ2 q_if. Here q_if is entry f of row q_i of matrix Q. How to compute the gradient of a matrix? Compute the gradient of every element independently! Observation: computing the gradients is slow!
37 Gradient Descent (GD) vs. Stochastic GD (SGD). Observation: ∇Q = [∇q_if], where ∇q_if = Σ_{x,i} −2 (r_xi − q_i · p_x) p_xf + 2 λ2 q_if sums a contribution from every rating, so we can write ∇Q = Σ_{x,i} ∇Q(r_xi). GD steps with the full sum: Q ← Q − η·∇Q. SGD instead evaluates the gradient for each individual rating and makes a step: Q ← Q − μ·∇Q(r_xi).
38 Convergence of GD vs. SGD. [Plot: value of the objective function vs. iteration/step] GD improves the value of the objective function at every step. SGD improves the value, but in a "noisy" way. GD takes fewer steps to converge, but each step takes much longer to compute. In practice, SGD is much faster!
39 Stochastic gradient descent: initialize P and Q (using SVD, pretending the missing ratings are 0). Then iterate over the ratings (multiple times if necessary) and update the factors. For each r_xi: ε_xi = 2 (r_xi − q_i · p_x) (derivative of the error); q_i ← q_i + μ1 (ε_xi p_x − 2 λ2 q_i) (update equation); p_x ← p_x + μ2 (ε_xi q_i − 2 λ1 p_x) (update equation); μ1, μ2 are learning rates. Two for-loops: until convergence: for each r_xi: compute the gradient and do a "step" as above.
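A minimal Python sketch of these SGD updates (plain lists, random initialization instead of the SVD warm start; the function name and hyperparameter values are illustrative):

```python
import random

def sgd_factor(ratings, n_users, n_items, k=2, mu1=0.01, mu2=0.01,
               lam1=0.1, lam2=0.1, epochs=200, seed=0):
    """SGD for min sum (r_xi - q_i.p_x)^2 + lam1*sum|p_x|^2 + lam2*sum|q_i|^2.

    ratings is a list of (user, item, rating) triples over the known entries.
    """
    rng = random.Random(seed)
    Q = [[rng.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_items)]
    P = [[rng.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n_users)]
    for _ in range(epochs):
        for x, i, r in ratings:                     # iterate over the known ratings
            err = 2 * (r - sum(q * p for q, p in zip(Q[i], P[x])))   # eps_xi
            for f in range(k):
                qif, pxf = Q[i][f], P[x][f]
                Q[i][f] += mu1 * (err * pxf - 2 * lam2 * qif)   # q_i update
                P[x][f] += mu2 * (err * qif - 2 * lam1 * pxf)   # p_x update
    return P, Q
```

Each pass touches every known rating once, so an epoch is linear in |R| — this per-rating update is what makes SGD practical at Netflix scale.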
40 Koren, Bell, Volinsky, IEEE Computer, 2009
41
42 [Model components: user bias, movie bias, user-movie interaction.] Baseline predictor: separates users and movies; benefits from insights into users' behavior; among the main practical contributions of the competition. User-movie interaction: characterizes the matching between users and movies; attracts most research in the field; benefits from algorithmic and mathematical innovations. μ = overall mean rating; b_x = bias of user x; b_i = bias of movie i.
43 We have expectations on the rating by user x of movie i, even without estimating x's attitude towards movies like i: the rating scale of user x; the values of other ratings the user gave recently (day-specific mood, anchoring, multi-user accounts); the (recent) popularity of movie i; selection bias, related to the number of ratings the user gave on the same day ("frequency").
44 r̂_xi = μ + b_x + b_i + q_i · p_x: overall mean rating + bias for user x + bias for movie i + user-movie interaction. Example: mean rating µ = 3.7. You are a critical reviewer: your ratings are 1 star lower than the mean, b_x = −1. Star Wars gets a mean rating 0.5 higher than the average movie: b_i = +0.5. Predicted rating for you on Star Wars: 3.7 − 1 + 0.5 = 3.2.
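The full predictor is one line of code; the slide's example corresponds to the call below with the interaction term set to zero (the function name is illustrative):

```python
def predict_biased(mu, b_x, b_i, q_i, p_x):
    """r̂_xi = mu + b_x + b_i + q_i · p_x"""
    return mu + b_x + b_i + sum(q * p for q, p in zip(q_i, p_x))

# Slide example: mean 3.7, critical user bias -1, Star Wars bias +0.5,
# interaction term q_i · p_x taken as 0
r = predict_biased(3.7, -1.0, 0.5, [0.0], [0.0])
```

In the trained model q_i · p_x would add the personalized correction on top of this baseline.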
45 Solve: min_{Q,P,b} Σ_{(x,i)∈R} ( r_xi − (μ + b_x + b_i + q_i · p_x) )² ("goodness of fit") + λ1 Σ_i ‖q_i‖² + λ2 Σ_x ‖p_x‖² + λ3 Σ_x b_x² + λ4 Σ_i b_i² ("regularization"); λ is selected via grid search on a validation set. Use stochastic gradient descent to find the parameters. Note: both the biases b_x, b_i and the interactions q_i, p_x are treated as parameters (we estimate them).
46 [Plot: RMSE vs. millions of parameters for CF (no time bias), Basic Latent Factors, and Latent Factors w/ Biases]
47 Global average: 1.1296; User average: 1.0651; Movie average: 1.0533; Netflix: 0.9514; Basic collaborative filtering: 0.94; Collaborative filtering++: 0.91; Latent factors: 0.90; Latent factors+Biases: 0.89; Grand Prize: 0.8563.
48
49 Sudden rise in the average movie rating (early 2004): improvements in the Netflix GUI; the meaning of a rating changed. Movie age: users prefer new movies without any reasons; older movies are just inherently better than newer ones. Y. Koren, Collaborative filtering with temporal dynamics, KDD '09
50 Original model: r_xi = µ + b_x + b_i + q_i · p_x. Add time dependence to the biases: r_xi = µ + b_x(t) + b_i(t) + q_i · p_x; make the parameters b_x and b_i depend on time: (1) parameterize the time-dependence by linear trends, or (2) use bins, where each bin corresponds to 10 consecutive weeks. Add temporal dependence to the factors: p_x(t) = user preference vector on day t. Y. Koren, Collaborative filtering with temporal dynamics, KDD '09
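The binned variant of b_i(t) is easy to sketch: map each day to a bin of 10 consecutive weeks and add a per-bin offset to the static movie bias (the function names and the dict representation of the learned offsets are illustrative):

```python
BIN_WEEKS = 10  # each bin spans 10 consecutive weeks, as on the slide

def time_bin(day):
    """Map a day index (days since the start of the data) to a bias bin."""
    return day // (7 * BIN_WEEKS)

def movie_bias(b_i, b_i_bins, day):
    """b_i(t) = static movie bias + a learned per-bin temporal offset."""
    return b_i + b_i_bins.get(time_bin(day), 0.0)
```

The per-bin offsets would be fit by the same SGD as the other parameters; the linear-trend variant (1) would instead replace the bin lookup with a term proportional to time.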
51 [Plot: RMSE vs. millions of parameters for CF (no time bias), Basic Latent Factors, CF (time bias), Latent Factors w/ Biases, + Linear time factors, + Per-day user biases, + CF]
52 Global average: 1.1296; User average: 1.0651; Movie average: 1.0533; Netflix: 0.9514; Basic collaborative filtering: 0.94; Collaborative filtering++: 0.91; Latent factors: 0.90; Latent factors+Biases: 0.89; Latent factors+Biases+Time: 0.876. Still no prize! Getting desperate. Try a "kitchen sink" approach! Grand Prize: 0.8563.
53 1/8/016 Jure Leskovec, Stanford C6: Mining Massive Datasets
54 June 26th submission triggers a 30-day "last call".
55 Ensemble team formed: a group of other teams on the leaderboard forms a new team, relying on combining their models; they quickly also get a qualifying score of over 10%. BellKor: continue to get small improvements in their scores; realize they are in direct competition with team Ensemble. Strategy: both teams carefully monitor the leaderboard. The only sure way to check for improvement is to submit a set of predictions, but this alerts the other team of your latest score.
56 Submissions were limited to 1 a day, and only 1 final submission could be made in the last 24 hours before the deadline. A BellKor team member in Austria notices (by chance) that Ensemble has posted a score that is slightly better than BellKor's. Frantic last hours for both teams: much computer time on final optimization, carefully calibrated to end about an hour before the deadline. Final submissions: BellKor submits a little early (on purpose), 40 mins before the deadline; Ensemble submits their final entry 20 mins later. And everyone waits...
57
58
59 What's the moral of the story? Submit early!
60 We need your help with our research on human-interpretable machine learning. Please complete a survey; it should be fun and take about 1 min to complete. Thanks a lot for your help!
61 Some slides and plots borrowed from Yehuda Koren, Robert Bell and Padhraic Smyth. Further reading: Y. Koren, Collaborative filtering with temporal dynamics, KDD '09
More informationCS246: Mining Massive Datasets Jure Leskovec, Stanford University
CS246: Mining Massive Datasets Jure Leskovec, Stanford University http://cs246.stanford.edu 2/24/2014 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu 2 High dim. data
More informationPredicting Popular Xbox games based on Search Queries of Users
1 Predicting Popular Xbox games based on Search Queries of Users Chinmoy Mandayam and Saahil Shenoy I. INTRODUCTION This project is based on a completed Kaggle competition. Our goal is to predict which
More informationCS294-1 Assignment 2 Report
CS294-1 Assignment 2 Report Keling Chen and Huasha Zhao February 24, 2012 1 Introduction The goal of this homework is to predict a users numeric rating for a book from the text of the user s review. The
More informationCS246: Mining Massive Datasets Jure Leskovec, Stanford University
CS46: Mining Massive Datasets Jure Leskovec, Stanford University http://cs46.stanford.edu /7/ Jure Leskovec, Stanford C46: Mining Massive Datasets Many real-world problems Web Search and Text Mining Billions
More informationCS246: Mining Massive Datasets Jure Leskovec, Stanford University
CS246: Mining Massive Datasets Jure Leskovec, Stanford University http://cs246.stanford.edu [Kumar et al. 99] 2/13/2013 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu
More informationCS249: ADVANCED DATA MINING
CS249: ADVANCED DATA MINING Classification Evaluation and Practical Issues Instructor: Yizhou Sun yzsun@cs.ucla.edu April 24, 2017 Homework 2 out Announcements Due May 3 rd (11:59pm) Course project proposal
More informationBayesian Personalized Ranking for Las Vegas Restaurant Recommendation
Bayesian Personalized Ranking for Las Vegas Restaurant Recommendation Kiran Kannar A53089098 kkannar@eng.ucsd.edu Saicharan Duppati A53221873 sduppati@eng.ucsd.edu Akanksha Grover A53205632 a2grover@eng.ucsd.edu
More informationLecture on Modeling Tools for Clustering & Regression
Lecture on Modeling Tools for Clustering & Regression CS 590.21 Analysis and Modeling of Brain Networks Department of Computer Science University of Crete Data Clustering Overview Organizing data into
More informationHomework 4: Clustering, Recommenders, Dim. Reduction, ML and Graph Mining (due November 19 th, 2014, 2:30pm, in class hard-copy please)
Virginia Tech. Computer Science CS 5614 (Big) Data Management Systems Fall 2014, Prakash Homework 4: Clustering, Recommenders, Dim. Reduction, ML and Graph Mining (due November 19 th, 2014, 2:30pm, in
More informationHMC CS 158, Fall 2017 Problem Set 3 Programming: Regularized Polynomial Regression
HMC CS 158, Fall 2017 Problem Set 3 Programming: Regularized Polynomial Regression Goals: To open up the black-box of scikit-learn and implement regression models. To investigate how adding polynomial
More informationSparse Estimation of Movie Preferences via Constrained Optimization
Sparse Estimation of Movie Preferences via Constrained Optimization Alexander Anemogiannis, Ajay Mandlekar, Matt Tsao December 17, 2016 Abstract We propose extensions to traditional low-rank matrix completion
More informationUsing Social Networks to Improve Movie Rating Predictions
Introduction Using Social Networks to Improve Movie Rating Predictions Suhaas Prasad Recommender systems based on collaborative filtering techniques have become a large area of interest ever since the
More informationCS246: Mining Massive Datasets Jure Leskovec, Stanford University
CS246: Mining Massive Datasets Jure Leskovec, Stanford University http://cs246.stanford.edu 2/25/2013 Jure Leskovec, Stanford CS246: Mining Massive Datasets, http://cs246.stanford.edu 3 In many data mining
More informationFactorization Meets the Neighborhood: a Multifaceted Collaborative Filtering Model
Factorization Meets the Neighborhood: a Multifaceted Collaborative Filtering Model Yehuda Koren AT&T Labs Research 180 Park Ave, Florham Park, NJ 07932 yehuda@research.att.com ABSTRACT Recommender systems
More informationAdvances in Collaborative Filtering
Advances in Collaborative Filtering Yehuda Koren and Robert Bell 1 Introduction Collaborative filtering (CF) methods produce user specific recommendations of items based on patterns of ratings or usage
More informationPartitioning Data. IRDS: Evaluation, Debugging, and Diagnostics. Cross-Validation. Cross-Validation for parameter tuning
Partitioning Data IRDS: Evaluation, Debugging, and Diagnostics Charles Sutton University of Edinburgh Training Validation Test Training : Running learning algorithms Validation : Tuning parameters of learning
More informationNeural Network Optimization and Tuning / Spring 2018 / Recitation 3
Neural Network Optimization and Tuning 11-785 / Spring 2018 / Recitation 3 1 Logistics You will work through a Jupyter notebook that contains sample and starter code with explanations and comments throughout.
More informationProgress Report: Collaborative Filtering Using Bregman Co-clustering
Progress Report: Collaborative Filtering Using Bregman Co-clustering Wei Tang, Srivatsan Ramanujam, and Andrew Dreher April 4, 2008 1 Introduction Analytics are becoming increasingly important for business
More informationCS570: Introduction to Data Mining
CS570: Introduction to Data Mining Classification Advanced Reading: Chapter 8 & 9 Han, Chapters 4 & 5 Tan Anca Doloc-Mihu, Ph.D. Slides courtesy of Li Xiong, Ph.D., 2011 Han, Kamber & Pei. Data Mining.
More informationMissing Data. Where did it go?
Missing Data Where did it go? 1 Learning Objectives High-level discussion of some techniques Identify type of missingness Single vs Multiple Imputation My favourite technique 2 Problem Uh data are missing
More informationGradient Descent Optimization Algorithms for Deep Learning Batch gradient descent Stochastic gradient descent Mini-batch gradient descent
Gradient Descent Optimization Algorithms for Deep Learning Batch gradient descent Stochastic gradient descent Mini-batch gradient descent Slide credit: http://sebastianruder.com/optimizing-gradient-descent/index.html#batchgradientdescent
More informationHotel Recommendation Based on Hybrid Model
Hotel Recommendation Based on Hybrid Model Jing WANG, Jiajun SUN, Zhendong LIN Abstract: This project develops a hybrid model that combines content-based with collaborative filtering (CF) for hotel recommendation.
More informationRecommender Systems 6CCS3WSN-7CCSMWAL
Recommender Systems 6CCS3WSN-7CCSMWAL http://insidebigdata.com/wp-content/uploads/2014/06/humorrecommender.jpg Some basic methods of recommendation Recommend popular items Collaborative Filtering Item-to-Item:
More informationSupervised Random Walks
Supervised Random Walks Pawan Goyal CSE, IITKGP September 8, 2014 Pawan Goyal (IIT Kharagpur) Supervised Random Walks September 8, 2014 1 / 17 Correlation Discovery by random walk Problem definition Estimate
More informationSeminar Collaborative Filtering. KDD Cup. Ziawasch Abedjan, Arvid Heise, Felix Naumann
Seminar Collaborative Filtering KDD Cup Ziawasch Abedjan, Arvid Heise, Felix Naumann 2 Collaborative Filtering Recommendation systems 3 Recommendation systems 4 Recommendation systems 5 Recommendation
More informationSampling PCA, enhancing recovered missing values in large scale matrices. Luis Gabriel De Alba Rivera 80555S
Sampling PCA, enhancing recovered missing values in large scale matrices. Luis Gabriel De Alba Rivera 80555S May 2, 2009 Introduction Human preferences (the quality tags we put on things) are language
More informationAssignment 5: Collaborative Filtering
Assignment 5: Collaborative Filtering Arash Vahdat Fall 2015 Readings You are highly recommended to check the following readings before/while doing this assignment: Slope One Algorithm: https://en.wikipedia.org/wiki/slope_one.
More informationComparison of Variational Bayes and Gibbs Sampling in Reconstruction of Missing Values with Probabilistic Principal Component Analysis
Comparison of Variational Bayes and Gibbs Sampling in Reconstruction of Missing Values with Probabilistic Principal Component Analysis Luis Gabriel De Alba Rivera Aalto University School of Science and
More informationCPSC 340: Machine Learning and Data Mining. Probabilistic Classification Fall 2017
CPSC 340: Machine Learning and Data Mining Probabilistic Classification Fall 2017 Admin Assignment 0 is due tonight: you should be almost done. 1 late day to hand it in Monday, 2 late days for Wednesday.
More informationThe exam is closed book, closed notes except your one-page (two-sided) cheat sheet.
CS 189 Spring 2015 Introduction to Machine Learning Final You have 2 hours 50 minutes for the exam. The exam is closed book, closed notes except your one-page (two-sided) cheat sheet. No calculators or
More informationCPSC 340: Machine Learning and Data Mining. Principal Component Analysis Fall 2017
CPSC 340: Machine Learning and Data Mining Principal Component Analysis Fall 2017 Assignment 3: 2 late days to hand in tonight. Admin Assignment 4: Due Friday of next week. Last Time: MAP Estimation MAP
More informationCPSC 340: Machine Learning and Data Mining
CPSC 340: Machine Learning and Data Mining Fundamentals of learning (continued) and the k-nearest neighbours classifier Original version of these slides by Mark Schmidt, with modifications by Mike Gelbart.
More informationRecommender System Optimization through Collaborative Filtering
Recommender System Optimization through Collaborative Filtering L.W. Hoogenboom Econometric Institute of Erasmus University Rotterdam Bachelor Thesis Business Analytics and Quantitative Marketing July
More informationRecommendation System Using Yelp Data CS 229 Machine Learning Jia Le Xu, Yingran Xu
Recommendation System Using Yelp Data CS 229 Machine Learning Jia Le Xu, Yingran Xu 1 Introduction Yelp Dataset Challenge provides a large number of user, business and review data which can be used for
More informationTowards a hybrid approach to Netflix Challenge
Towards a hybrid approach to Netflix Challenge Abhishek Gupta, Abhijeet Mohapatra, Tejaswi Tenneti March 12, 2009 1 Introduction Today Recommendation systems [3] have become indispensible because of the
More informationImproving the way neural networks learn Srikumar Ramalingam School of Computing University of Utah
Improving the way neural networks learn Srikumar Ramalingam School of Computing University of Utah Reference Most of the slides are taken from the third chapter of the online book by Michael Nielson: neuralnetworksanddeeplearning.com
More informationFMA901F: Machine Learning Lecture 3: Linear Models for Regression. Cristian Sminchisescu
FMA901F: Machine Learning Lecture 3: Linear Models for Regression Cristian Sminchisescu Machine Learning: Frequentist vs. Bayesian In the frequentist setting, we seek a fixed parameter (vector), with value(s)
More informationIntroduction to Deep Learning
ENEE698A : Machine Learning Seminar Introduction to Deep Learning Raviteja Vemulapalli Image credit: [LeCun 1998] Resources Unsupervised feature learning and deep learning (UFLDL) tutorial (http://ufldl.stanford.edu/wiki/index.php/ufldl_tutorial)
More informationCSC 411: Lecture 02: Linear Regression
CSC 411: Lecture 02: Linear Regression Raquel Urtasun & Rich Zemel University of Toronto Sep 16, 2015 Urtasun & Zemel (UofT) CSC 411: 02-Regression Sep 16, 2015 1 / 16 Today Linear regression problem continuous
More informationWeb Personalisation and Recommender Systems
Web Personalisation and Recommender Systems Shlomo Berkovsky and Jill Freyne DIGITAL PRODUCTIVITY FLAGSHIP Outline Part 1: Information Overload and User Modelling Part 2: Web Personalisation and Recommender
More informationRanking by Alternating SVM and Factorization Machine
Ranking by Alternating SVM and Factorization Machine Shanshan Wu, Shuling Malloy, Chang Sun, and Dan Su shanshan@utexas.edu, shuling.guo@yahoo.com, sc20001@utexas.edu, sudan@utexas.edu Dec 10, 2014 Abstract
More informationCSE 6242 A / CS 4803 DVA. Feb 12, Dimension Reduction. Guest Lecturer: Jaegul Choo
CSE 6242 A / CS 4803 DVA Feb 12, 2013 Dimension Reduction Guest Lecturer: Jaegul Choo CSE 6242 A / CS 4803 DVA Feb 12, 2013 Dimension Reduction Guest Lecturer: Jaegul Choo Data is Too Big To Do Something..
More informationPlankton Classification Using ConvNets
Plankton Classification Using ConvNets Abhinav Rastogi Stanford University Stanford, CA arastogi@stanford.edu Haichuan Yu Stanford University Stanford, CA haichuan@stanford.edu Abstract We present the
More informationLecture 26: Missing data
Lecture 26: Missing data Reading: ESL 9.6 STATS 202: Data mining and analysis December 1, 2017 1 / 10 Missing data is everywhere Survey data: nonresponse. 2 / 10 Missing data is everywhere Survey data:
More informationCSE 5243 INTRO. TO DATA MINING
CSE 5243 INTRO. TO DATA MINING Cluster Analysis: Basic Concepts and Methods Huan Sun, CSE@The Ohio State University 09/25/2017 Slides adapted from UIUC CS412, Fall 2017, by Prof. Jiawei Han 2 Chapter 10.
More informationNon-negative Matrix Factorization for Multimodal Image Retrieval
Non-negative Matrix Factorization for Multimodal Image Retrieval Fabio A. González PhD Machine Learning 2015-II Universidad Nacional de Colombia F. González NMF for MM IR ML 2015-II 1 / 54 Outline 1 The
More informationECS289: Scalable Machine Learning
ECS289: Scalable Machine Learning Cho-Jui Hsieh UC Davis Sept 22, 2016 Course Information Website: http://www.stat.ucdavis.edu/~chohsieh/teaching/ ECS289G_Fall2016/main.html My office: Mathematical Sciences
More informationCPSC 535 Assignment 1: Introduction to Matlab and Octave
CPSC 535 Assignment 1: Introduction to Matlab and Octave The goal of this assignment is to become familiar with Matlab and Octave as tools for scientific investigation. Matlab, a product of Mathworks,
More informationMachine Learning Classifiers and Boosting
Machine Learning Classifiers and Boosting Reading Ch 18.6-18.12, 20.1-20.3.2 Outline Different types of learning problems Different types of learning algorithms Supervised learning Decision trees Naïve
More information