Cross-validation for detecting and preventing overfitting
Andrew W. Moore / Anna Goldenberg, School of Computer Science, Carnegie Mellon University. Copyright 2001, Andrew W. Moore. April 1st, 2004.

We want to learn a model from data. For example: can we learn a function f(x) that fits our data, where y = f(x) + noise?
Which is best: the linear, quadratic, or piecewise-linear fit? Why not choose the method with the best fit to the data?

What do we really want? Not the best fit to the data we already have. The question that matters is: how well are you going to predict future data drawn from the same distribution?
The test set method (linear function example):
1. Randomly choose 30% of the data to be in a test set.
2. The remainder is a training set.
3. Learn the model from the training set.
4. Estimate your future performance with the test set.

How do we tell how good the prediction is? Use the MSE (Mean Squared Error). With three test points, where (x_i, y_i^o) is an observed point and y_i^p is the model's prediction at x_i:

$$\mathrm{MSE} = \frac{(y_1^p - y_1^o)^2 + (y_2^p - y_2^o)^2 + (y_3^p - y_3^o)^2}{3}$$
In general, for n test points:

$$\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(\text{prediction}_i - \text{observation}_i\right)^2$$

The test set method (linear function example): follow steps 1-4 above and report the mean squared error on the test set.
The test set method (quadratic function): randomly choose 30% of the data for the test set, fit the quadratic model on the training set, and estimate future performance with the test-set MSE. The test set method (piecewise-linear function): the same procedure with the piecewise-linear model.
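To make the recipe concrete, here is a minimal sketch of the test-set method in Python. The synthetic data, the 70/30 split, and the use of numpy's polyfit/interp for the three model classes are illustrative assumptions, not the dataset or code from the original slides.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D regression data: y = f(x) + noise (illustrative only).
x = np.sort(rng.uniform(0.0, 10.0, 60))
y = np.sin(x) + 0.5 * x + rng.normal(0.0, 0.3, x.size)

# 1. Randomly choose 30% of the data to be in a test set.
test_mask = np.zeros(x.size, dtype=bool)
test_mask[rng.choice(x.size, size=int(0.3 * x.size), replace=False)] = True
# 2. The remainder is a training set.
x_tr, y_tr = x[~test_mask], y[~test_mask]
x_te, y_te = x[test_mask], y[test_mask]

def mse(pred, obs):
    # MSE = (1/n) * sum_i (prediction_i - observation_i)^2
    return float(np.mean((pred - obs) ** 2))

# 3. Learn each model from the training set.
# 4. Estimate future performance with the test set.
for name, degree in [("linear", 1), ("quadratic", 2)]:
    coeffs = np.polyfit(x_tr, y_tr, degree)
    print(name, mse(np.polyval(coeffs, x_te), y_te))

# Piecewise linear ("join the dots"): interpolate through the training points.
print("piecewise", mse(np.interp(x_te, x_tr, y_tr), y_te))
```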
The test set method. Good news: it is very, very simple, and you can simply choose the method with the best test-set score. Bad news: it wastes data (the best model is estimated using 30% less data), and if we don't have much data, our test set might just be lucky or unlucky. We say the test-set estimator of performance has high variance.
LOOCV (Leave-one-out Cross-Validation). For k = 1 to R (R = number of records):
1. Let (x_k, y_k) be the k-th record.
2. Temporarily remove (x_k, y_k) from the dataset.
3. Train on the remaining R-1 datapoints.
4. Note your error on (x_k, y_k).
When you've done all points, report the mean error. For the linear model, MSE_LOOCV = 2.12.
LOOCV for the quadratic function: MSE_LOOCV = 0.962. LOOCV for "join the dots" (the piecewise-linear model): MSE_LOOCV = 3.33.
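A small sketch of the LOOCV loop, assuming generic fit/predict callbacks; the data and model classes below are again illustrative, not those behind the numbers above.

```python
import numpy as np

def loocv_mse(x, y, fit, predict):
    # For k = 1..R: temporarily remove record k, train on the
    # remaining R-1 points, and note the squared error on record k.
    sq_errors = []
    for k in range(len(x)):
        keep = np.arange(len(x)) != k
        model = fit(x[keep], y[keep])
        sq_errors.append((predict(model, x[k]) - y[k]) ** 2)
    # When you've done all points, report the mean error.
    return float(np.mean(sq_errors))

# Hypothetical usage: compare LOOCV scores of two model classes.
rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, 10.0, 30))
y = np.sin(x) + 0.5 * x + rng.normal(0.0, 0.3, x.size)

for name, degree in [("linear", 1), ("quadratic", 2)]:
    score = loocv_mse(x, y,
                      fit=lambda xs, ys, d=degree: np.polyfit(xs, ys, d),
                      predict=np.polyval)
    print(name, score)
```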
Which kind of cross-validation?
- Test-set. Downside: variance, an unreliable estimate of future performance. Upside: cheap.
- Leave-one-out. Downside: expensive; has some weird behavior. Upside: doesn't waste data.
Can we get the best of both worlds?

k-fold Cross-Validation. Randomly break the dataset into k partitions (in our example we'll have k = 3 partitions, colored red, green, and blue).
For the red partition: train on all the points not in the red partition, and find the test-set sum of errors on the red points. For the green partition: train on all the points not in the green partition, and find the sum of errors on the green points. For the blue partition: likewise. Then report the mean error. For the linear model, MSE_3FOLD = 2.05.
For the quadratic model, MSE_3FOLD = 1.11. For the piecewise-linear model, MSE_3FOLD = 2.93.
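The same procedure as a sketch. The fold-assignment scheme here (a random permutation taken modulo k) is one simple choice among many, an assumption rather than the slides' exact method.

```python
import numpy as np

def kfold_mse(x, y, fit, predict, k=3, seed=0):
    # Randomly break the dataset into k partitions.
    rng = np.random.default_rng(seed)
    fold = rng.permutation(len(x)) % k
    sq_errors = np.empty(len(x))
    # For each partition: train on all the points NOT in that partition,
    # then record the errors on the held-out points.
    for f in range(k):
        held_out = fold == f
        model = fit(x[~held_out], y[~held_out])
        sq_errors[held_out] = (predict(model, x[held_out]) - y[held_out]) ** 2
    # Then report the mean error.
    return float(sq_errors.mean())
```

For example, `kfold_mse(x, y, lambda xs, ys: np.polyfit(xs, ys, 2), np.polyval, k=3)` runs the 3-fold procedure above for a quadratic model on whatever data x and y hold.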
Which kind of cross-validation?
- Test-set. Downside: variance, an unreliable estimate of future performance. Upside: cheap.
- Leave-one-out. Downside: expensive; has some weird behavior. Upside: doesn't waste data. (But note: algorithmic tricks can make leave-one-out cheap for some model classes.)
- 10-fold. Downside: wastes 10% of the data; 10 times more expensive than the test-set method. Upside: only wastes 10%, and is only 10 times more expensive instead of R times.
- 3-fold. Downside: wastes more data than 10-fold; more expensive than the test-set method. Upside: slightly better than test-set.
- R-fold. Identical to leave-one-out.
CV-based Model Selection. We're trying to decide which algorithm to use. We train each machine and make a table: for each candidate model f_1, f_2, ..., f_6, record the training error (TRAINERR) and the 10-fold CV error (10-FOLD-CV-ERR), then mark our choice.

Alternatives to CV-based model selection:
- AIC (Akaike Information Criterion): asymptotically equivalent to LOOCV.
- BIC (Bayesian Information Criterion): used for finding the best structure rather than for classification; asymptotically equivalent to a carefully chosen k-fold.
- VC-dimension (Vapnik-Chervonenkis dimension): only directly applicable to choosing classifiers; wildly conservative.
Their advantage over CV: you only need the training error. There are many alternatives, including proper Bayesian approaches.
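A sketch of building such a table, assuming the kfold_mse helper from the k-fold sketch above and taking polynomial degrees 1 through 6 as the candidate models f_1 through f_6 (an illustrative choice, not the models from the slides).

```python
import numpy as np
# Assumes kfold_mse from the k-fold sketch above is in scope.

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0.0, 10.0, 40))
y = np.sin(x) + 0.5 * x + rng.normal(0.0, 0.3, x.size)

table = []
for d in range(1, 7):
    coeffs = np.polyfit(x, y, d)
    train_err = float(np.mean((np.polyval(coeffs, x) - y) ** 2))
    cv_err = kfold_mse(x, y,
                       fit=lambda xs, ys, d=d: np.polyfit(xs, ys, d),
                       predict=np.polyval, k=10)
    table.append((d, train_err, cv_err))
    print(f"f_{d}: TRAINERR={train_err:.3f}  10-FOLD-CV-ERR={cv_err:.3f}")

# Choice: pick by CV error; training error alone always favors the
# most complex model.
best_degree = min(table, key=lambda row: row[2])[0]
print("chosen model: degree", best_degree)
```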
Other cross-validation variants. You can do leave-all-pairs-out, or leave-all-n-tuples-out, if feeling resourceful; or k-folds where each fold is an independently chosen subset of the data.

Cross-validation is useful for:
- Preventing overfitting
- Comparing different learning algorithms
- Choosing the number of hidden units in a neural net
- Feature selection (see later)
- Choosing a polynomial degree
Cross-validation for classification. Instead of computing the sum of squared errors on a test set, you should compute the total number of misclassifications on the test set. But there's a more sensitive alternative: compute log P(all test outputs | all test inputs, your model).

Cross-validation for classification is useful for:
- Choosing the pruning parameter for decision trees
- Feature selection (see later)
- Choosing what kind of Gaussian to use in a Gaussian-based Bayes classifier
- Choosing which classifier to use
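Minimal sketches of both classification scores; the clipping constant, the binary label encoding, and the example numbers are assumptions for illustration.

```python
import numpy as np

def misclassification_count(y_pred, y_true):
    # The total number of misclassifications on a test set.
    return int(np.sum(np.asarray(y_pred) != np.asarray(y_true)))

def test_log_likelihood(p1, y_true):
    # The more sensitive alternative:
    # log P(all test outputs | all test inputs, your model),
    # where p1[i] is the model's predicted P(y_i = 1 | x_i).
    p1 = np.clip(np.asarray(p1, dtype=float), 1e-12, 1 - 1e-12)  # avoid log(0)
    y = np.asarray(y_true)
    return float(np.sum(np.where(y == 1, np.log(p1), np.log(1.0 - p1))))

# Hypothetical usage with a small two-class test set:
y_true = np.array([1, 0, 1, 1, 0])
p1 = np.array([0.9, 0.2, 0.6, 0.8, 0.4])   # model's P(y = 1 | x)
print(misclassification_count(p1 > 0.5, y_true))
print(test_log_likelihood(p1, y_true))
```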
Feature Selection. Suppose you have a learning algorithm LA and a set of input attributes {X_1, X_2, ..., X_m}. You expect that LA will only find some subset of the attributes useful. Question: how can we use cross-validation to find a useful subset? Four ideas (another fun area in which Andrew has spent a lot of time):
- Forward selection
- Backward elimination
- Hill climbing
- Stochastic search (simulated annealing or genetic algorithms)
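A sketch of the first idea, forward selection, with the CV scorer passed in as a callback. The 5-fold least-squares scorer and the toy data are hypothetical helpers, not part of the original lecture.

```python
import numpy as np

def forward_selection(X, y, cv_score):
    # Greedily add, one at a time, the attribute whose inclusion gives
    # the best (lowest) CV score; stop when no attribute improves it.
    n_features = X.shape[1]
    selected, best = [], np.inf
    while len(selected) < n_features:
        candidates = [j for j in range(n_features) if j not in selected]
        scores = {j: cv_score(X[:, selected + [j]], y) for j in candidates}
        j_best = min(scores, key=scores.get)
        if scores[j_best] >= best:
            break
        selected.append(j_best)
        best = scores[j_best]
    return selected, best

def linear_cv_mse(X, y, k=5, seed=0):
    # Hypothetical scorer: k-fold CV MSE of a least-squares linear model.
    rng = np.random.default_rng(seed)
    fold = rng.permutation(len(y)) % k
    errs = np.empty(len(y))
    for f in range(k):
        te = fold == f
        A = np.column_stack([X[~te], np.ones((~te).sum())])
        w, *_ = np.linalg.lstsq(A, y[~te], rcond=None)
        A_te = np.column_stack([X[te], np.ones(te.sum())])
        errs[te] = (A_te @ w - y[te]) ** 2
    return float(errs.mean())

# Hypothetical usage: only columns 0 and 3 actually matter.
rng = np.random.default_rng(3)
X = rng.normal(size=(60, 8))
y = 2.0 * X[:, 0] - 1.0 * X[:, 3] + rng.normal(0.0, 0.5, 60)
print(forward_selection(X, y, linear_cv_mse))
```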
Very serious warning: intensive use of cross-validation can overfit. How? Imagine a dataset with 50 records and 1000 attributes. You try 1000 linear models, each one using one of the attributes. The best of those 1000 looks good! But it would have looked good even if the output had been purely random. What can be done about it? Hold out an additional test set before doing any model selection. The best model should perform well even on the additional test set.
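A small simulation of this warning, under the assumption that 5-fold CV scores each single-attribute model: with purely random outputs, the best of 1000 still looks good, while fresh held-out data reveals the illusion.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 1000
X = rng.normal(size=(n, p))
y = rng.normal(size=n)              # the output is purely random noise

def cv_mse_single(xj, y, k=5):
    # 5-fold CV MSE of a one-attribute linear model.
    fold = rng.permutation(n) % k
    errs = np.empty(n)
    for f in range(k):
        te = fold == f
        slope, intercept = np.polyfit(xj[~te], y[~te], 1)
        errs[te] = (slope * xj[te] + intercept - y[te]) ** 2
    return errs.mean()

# Try 1000 linear models, each using one attribute; keep the best.
scores = [cv_mse_single(X[:, j], y) for j in range(p)]
best_j = int(np.argmin(scores))
print("best-of-1000 CV MSE:", scores[best_j])   # looks deceptively good
print("variance of y (chance level):", y.var())

# The fix: the winning model must also do well on additional held-out
# data that played no part in the selection.
X_hold, y_hold = rng.normal(size=(n, p)), rng.normal(size=n)
slope, intercept = np.polyfit(X[:, best_j], y, 1)
print("MSE on held-out data:",
      np.mean((slope * X_hold[:, best_j] + intercept - y_hold) ** 2))
```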
What you should know:
- Why you can't use training-set error to estimate the quality of your learning algorithm on your data
- Why you can't use training-set error to choose the learning algorithm
- Test-set cross-validation
- Leave-one-out cross-validation
- k-fold cross-validation
- Feature selection methods