k-nearest Neighbors + Model Selection

10-601 Introduction to Machine Learning, Machine Learning Department, School of Computer Science, Carnegie Mellon University. k-nearest Neighbors + Model Selection. Matt Gormley, Lecture 5, Jan. 30, 2019.

Reminders. Homework 2: Decision Trees. Out: Wed, Jan 23. Due: Wed, Feb 6 at 11:59pm. Today's Poll: http://p5.mlcourse.org

K-NEAREST NEIGHBORS

Chalkboard: k-nearest Neighbors; the Nearest Neighbor classifier; KNN for binary classification. A minimal code sketch follows below.
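To make the chalkboard material concrete, here is a minimal k-NN sketch in NumPy (not the course's reference implementation); the function name, the Euclidean distance choice, and the toy data are illustrative assumptions.

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_query, k=1):
    """Predict the label of x_query by majority vote over its k nearest neighbors."""
    # Euclidean distance from the query point to every training example (O(MN))
    dists = np.sqrt(((X_train - x_query) ** 2).sum(axis=1))
    # Indices of the k closest training examples
    nearest = np.argsort(dists)[:k]
    # Majority vote over their labels (k=1 reduces to the Nearest Neighbor classifier)
    return Counter(y_train[nearest]).most_common(1)[0][0]

# Tiny binary-classification example with made-up data
X_train = np.array([[1.0, 1.0], [1.2, 0.8], [4.0, 4.2], [4.1, 3.9]])
y_train = np.array([-1, -1, +1, +1])
print(knn_predict(X_train, y_train, np.array([4.0, 4.0]), k=3))  # expected: +1
```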

KNN: Remarks. Distance Functions: KNN requires a distance function. The most common choice is Euclidean distance, but other choices are just fine (e.g. Manhattan distance).

KNN: Remarks. In-Class Exercises: 1. How can we handle ties for even values of k? 2. What is the inductive bias of KNN?

KNN: Remarks. Computational Efficiency: Suppose we have N training examples and each one has M features. Computational complexity for the special case where k = 1:

    Task                         Naive     k-d Tree
    Train                        O(1)      ~ O(M N log N)
    Predict (one test example)   O(MN)     ~ O(2^M log N) on average

Problem: the k-d tree is very fast for small M, but very slow for large M. In practice: use stochastic approximations (very fast, and empirically often as good).
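As an illustration of the naive-vs-k-d-tree tradeoff (not taken from the lecture), the sketch below compares a brute-force nearest-neighbor scan with a SciPy k-d tree query on synthetic data.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
X_train = rng.normal(size=(10000, 3))   # N = 10000 examples, M = 3 features
x_query = rng.normal(size=3)

# Naive 1-NN: O(MN) per query, scanning every training example
naive_idx = np.argmin(((X_train - x_query) ** 2).sum(axis=1))

# k-d tree 1-NN: build once (~O(M N log N)), then query in ~O(log N) for small M
tree = cKDTree(X_train)
dist, tree_idx = tree.query(x_query, k=1)

print(naive_idx, int(tree_idx))  # both approaches find the same nearest neighbor
```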

KNN: Remarks. Theoretical Guarantees: Cover & Hart (1967). Let h(x) be a Nearest Neighbor (k=1) binary classifier. As the number of training examples N goes to infinity, error_true(h) < 2 x (Bayes Error Rate). "In this sense, it may be said that half the classification information in an infinite sample set is contained in the nearest neighbor." Very informally, the Bayes Error Rate can be thought of as "the best you could possibly do."
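For reference, the binary-case statement of the Cover and Hart (1967) result can be written as below; the intermediate bound 2R*(1 - R*) is the standard form of their theorem rather than something shown on the slide.

```latex
% Let R^* be the Bayes error rate and R_{NN} the asymptotic (N -> infinity)
% error of the 1-nearest-neighbor classifier. For binary classification:
\[
  R^* \;\le\; \lim_{N \to \infty} \mathrm{error}_{\mathrm{true}}(h_{NN})
      \;=\; R_{NN} \;\le\; 2 R^* (1 - R^*) \;\le\; 2 R^*
\]
```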

Decision Boundary Example. Dataset: outputs {+,-}; features x_1 and x_2. In-Class Exercise. Question 1: (a) Can a k-nearest Neighbor classifier with k=1 achieve zero training error on this dataset? (b) If Yes, draw the learned decision boundary. If No, why not? Question 2: (a) Can a Decision Tree classifier achieve zero training error on this dataset? (b) If Yes, draw the learned decision boundary. If No, why not? (Two blank x_1 vs. x_2 plots are provided on the slide for sketching.)

KNN ON FISHER IRIS DATA

Fisher Iris Dataset. Fisher (1936) used 150 measurements of flowers from 3 different species: Iris setosa (0), Iris virginica (1), Iris versicolor (2), collected by Anderson (1936).

    Species  Sepal Length  Sepal Width  Petal Length  Petal Width
    0        4.3           3.0          1.1           0.1
    0        4.9           3.6          1.4           0.1
    0        5.3           3.7          1.5           0.2
    1        4.9           2.4          3.3           1.0
    1        5.7           2.8          4.1           1.3
    1        6.3           3.3          4.7           1.6
    1        6.7           3.0          5.0           1.7

Full dataset: https://en.wikipedia.org/wiki/iris_flower_data_set

Fisher Iris Dataset. Fisher (1936) used 150 measurements of flowers from 3 different species: Iris setosa (0), Iris virginica (1), Iris versicolor (2), collected by Anderson (1936). Two of the four features have been deleted, so that the input space is 2D.

    Species  Sepal Length  Sepal Width
    0        4.3           3.0
    0        4.9           3.6
    0        5.3           3.7
    1        4.9           2.4
    1        5.7           2.8
    1        6.3           3.3
    1        6.7           3.0

Full dataset: https://en.wikipedia.org/wiki/iris_flower_data_set
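The decision-boundary experiments summarized on the following slides can be reproduced roughly along these lines; this is a hedged sketch using scikit-learn rather than the course's plotting code, and it keeps only the two sepal features to match the 2D setup above.

```python
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier

iris = load_iris()
X = iris.data[:, :2]   # keep only sepal length and sepal width (2D input space)
y = iris.target

for k in (1, 5, 75, 150):
    knn = KNeighborsClassifier(n_neighbors=k).fit(X, y)
    # Training accuracy as k grows from 1 (Nearest Neighbor) toward N (majority vote)
    print(f"k={k:3d}  train accuracy = {knn.score(X, y):.3f}")
```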

KNN on Fisher Iris Data: a sequence of decision-boundary plots on the 2D iris data as k varies, including the special case of the Nearest Neighbor classifier (k = 1) and the special case of a simple Majority Vote over the whole training set (k = N).

KNN ON GAUSSIAN DATA

KNN on Gaussian Data: a sequence of decision-boundary plots on a synthetic 2D Gaussian dataset, again showing how the decision boundary changes as k varies.

K-NEAREST NEIGHBORS

Questions: (1) How could k-nearest Neighbors (KNN) be applied to regression? (2) Can we do better than majority vote? (e.g. distance-weighted KNN) (3) Where does the Cover & Hart (1967) Bayes error rate bound come from? A sketch of the first two ideas follows below.
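One plausible answer to the first two questions is sketched below, assuming inverse-distance weighting (a common choice, not necessarily the one the lecture intends): for regression, replace the majority vote with a weighted average of the neighbors' real-valued targets.

```python
import numpy as np

def knn_regress(X_train, y_train, x_query, k=5, eps=1e-12):
    """Distance-weighted k-NN regression: predict a weighted average of the
    k nearest targets, with weights proportional to 1 / distance."""
    dists = np.sqrt(((X_train - x_query) ** 2).sum(axis=1))
    nearest = np.argsort(dists)[:k]
    weights = 1.0 / (dists[nearest] + eps)   # closer neighbors count more
    return np.average(y_train[nearest], weights=weights)

# Toy 1D regression example with made-up data
X_train = np.array([[0.0], [1.0], [2.0], [3.0]])
y_train = np.array([0.0, 1.0, 4.0, 9.0])
print(knn_regress(X_train, y_train, np.array([1.5]), k=2))  # between 1.0 and 4.0
```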

KNN Learning Objectives. You should be able to:
- Describe a dataset as points in a high dimensional space [CIML]
- Implement k-nearest Neighbors with O(N) prediction
- Describe the inductive bias of a k-NN classifier and relate it to feature scale [a la CIML]
- Sketch the decision boundary for a learning algorithm (compare k-NN and DT)
- State Cover & Hart (1967)'s large sample analysis of a nearest neighbor classifier
- Invent "new" k-NN learning algorithms capable of dealing with even k
- Explain computational and geometric examples of the curse of dimensionality

MODEL SELECTION

Model Selection. WARNING: In some sense, our discussion of model selection is premature. The models we have considered thus far are fairly simple. The models, and the many decisions available to the data scientist wielding them, will grow to be much more complex than what we've seen so far.

Model Selection
Statistics:
- Def: a model defines the data generation process (i.e. a set or family of parametric probability distributions)
- Def: model parameters are the values that give rise to a particular probability distribution in the model family
- Def: learning (aka. estimation) is the process of finding the parameters that best fit the data
- Def: hyperparameters are the parameters of a prior distribution over parameters
Machine Learning:
- Def: (loosely) a model defines the hypothesis space over which learning performs its search
- Def: model parameters are the numeric values or structure selected by the learning algorithm that give rise to a hypothesis
- Def: the learning algorithm defines the data-driven search over the hypothesis space (i.e. the search for good parameters)
- Def: hyperparameters are the tunable aspects of the model that the learning algorithm does not select

Model Selection. Example: Decision Tree
- model = set of all possible trees, possibly restricted by some hyperparameters (e.g. max depth)
- parameters = structure of a specific decision tree
- learning algorithm = ID3, CART, etc.
- hyperparameters = max depth, threshold for splitting criterion, etc.

Model Selection. Example: k-nearest Neighbors
- model = set of all possible nearest neighbors classifiers
- parameters = none (KNN is an instance-based or non-parametric method)
- learning algorithm = for the naïve setting, just storing the data
- hyperparameters = k, the number of neighbors to consider

Model Selection. Example: Perceptron
- model = set of all linear separators
- parameters = vector of weights (one for each feature)
- learning algorithm = mistake-based updates to the parameters
- hyperparameters = none (unless using some variant such as averaged perceptron)

Model Selection. If learning is all about picking the best parameters, how do we pick the best hyperparameters?

Model Selection. Two very similar definitions:
- Def: model selection is the process by which we choose the best model from among a set of candidates
- Def: hyperparameter optimization is the process by which we choose the best hyperparameters from among a set of candidates (it could be called a special case of model selection)
Both assume access to a function capable of measuring the quality of a model. Both are typically done outside the main training algorithm: training is typically treated as a black box.

Example of Hyperparameter Optimization. Chalkboard: special cases of k-nearest Neighbors; choosing k with validation data; choosing k with cross-validation. A sketch of the validation-data approach follows below.
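A minimal sketch of choosing k with held-out validation data; the 70/30 split and the candidate values of k are illustrative assumptions, not lecture material.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
# Hold out 30% of the training data as a validation set (illustrative split)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

best_k, best_acc = None, -1.0
for k in (1, 3, 5, 9, 15, 31):
    # Fit on the training split, evaluate on the validation split
    acc = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train).score(X_val, y_val)
    if acc > best_acc:
        best_k, best_acc = k, acc
print(f"selected k = {best_k} with validation accuracy {best_acc:.3f}")
```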

Cross-Validation. Cross-validation is a method of estimating loss on held-out data. Input: training data, learning algorithm, loss function (e.g. 0/1 error). Output: an estimate of the loss function on held-out data. Key idea: rather than just a single validation set, use many! (The error estimate is more stable, but computation is slower.) The dataset D = {(x^(1), y^(1)), ..., (x^(N), y^(N))} is divided into folds, e.g. Fold 1 through Fold 4. Algorithm: divide the data into folds (e.g. 4), then
1. Train on folds {1,2,3} and predict on {4}
2. Train on folds {1,2,4} and predict on {3}
3. Train on folds {1,3,4} and predict on {2}
4. Train on folds {2,3,4} and predict on {1}
Concatenate all the predictions and evaluate the loss (almost equivalent to averaging the loss over the folds). A code sketch appears below.
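A minimal sketch of the 4-fold procedure above, applied to choosing k for k-NN; the dataset, the 0/1 loss, and the candidate values of k are illustrative, and the code follows the "concatenate all the predictions" variant.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import KFold
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
kf = KFold(n_splits=4, shuffle=True, random_state=0)

for k in (1, 3, 5, 9):
    errors = np.empty_like(y)
    for train_idx, val_idx in kf.split(X):
        knn = KNeighborsClassifier(n_neighbors=k).fit(X[train_idx], y[train_idx])
        # 0/1 error on the held-out fold; predictions from all folds are concatenated
        errors[val_idx] = (knn.predict(X[val_idx]) != y[val_idx]).astype(int)
    print(f"k={k}  cross-validation error = {errors.mean():.3f}")
```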

Model Selection. WARNING (again): this section is only scratching the surface! There are lots of methods for hyperparameter optimization (to talk about later):
- Grid search
- Random search
- Bayesian optimization
- Graduate-student descent
Main Takeaway: model selection / hyperparameter optimization is just another form of learning. A grid-search sketch appears below.
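For instance, grid search over k can be written with scikit-learn's GridSearchCV; this is an illustrative sketch rather than part of the lecture, and the parameter grid is an arbitrary choice.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# Exhaustively try each candidate k, scoring each by 5-fold cross-validation
search = GridSearchCV(KNeighborsClassifier(),
                      param_grid={"n_neighbors": [1, 3, 5, 9, 15]},
                      cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```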

Model Selection Learning Objectives. You should be able to:
- Plan an experiment that uses training, validation, and test datasets to predict the performance of a classifier on unseen data (without cheating)
- Explain the difference between (1) training error, (2) validation error, (3) cross-validation error, (4) test error, and (5) true error
- For a given learning technique, identify the model, learning algorithm, parameters, and hyperparameters
- Define "instance-based learning" or "nonparametric methods"
- Select an appropriate algorithm for optimizing (aka. learning) hyperparameters

THE PERCEPTRON ALGORITHM

Perceptron: History. Imagine you are trying to build a new machine learning technique... your name is Frank Rosenblatt, and the year is 1957.

Linear Models for Classification. Key idea: try to learn this hyperplane directly. Looking ahead: we'll see a number of commonly used Linear Classifiers. These include: Perceptron, Logistic Regression, Naïve Bayes (under certain conditions), and Support Vector Machines. Directly modeling the hyperplane would use a decision function h(x) = sign(θ^T x), for y ∈ {-1, +1}.
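A hedged NumPy sketch of such a decision function; folding the bias into θ via a constant feature and the particular numbers are illustrative assumptions.

```python
import numpy as np

def linear_predict(theta, x):
    """Linear decision function h(x) = sign(theta^T x), returning a label in {-1, +1}."""
    return 1 if theta @ x >= 0 else -1

# Fold a bias term into theta by appending a constant 1 feature to x
theta = np.array([2.0, -1.0, 0.5])    # [w1, w2, b]
x = np.array([1.0, 3.0, 1.0])         # [x1, x2, 1]
print(linear_predict(theta, x))       # 2*1 - 1*3 + 0.5 = -0.5, so the label is -1
```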

Geometry In-Class Exercise. Draw a picture of the region corresponding to: (expression given on the slide). Draw the vector w = [w_1, w_2].

Chalkboard: Visualizing Dot-Products: vectors in 2D; lines in 2D; adding a bias term; the definition of orthogonality; vector projection; the hyperplane definition; half-space definitions.
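For reference, the hyperplane and half-space definitions sketched on the chalkboard can be summarized as follows; this is a standard formulation, not a transcription of the board.

```latex
% Dot product, hyperplane, and half-spaces in terms of a weight vector w and bias b
\[
  w^\top x = \sum_{m=1}^{M} w_m x_m, \qquad
  \text{hyperplane: } \{\, x : w^\top x + b = 0 \,\}
\]
\[
  \text{half-spaces: } \{\, x : w^\top x + b > 0 \,\} \text{ and } \{\, x : w^\top x + b < 0 \,\},
  \quad \text{with } w \text{ orthogonal to the hyperplane.}
\]
```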