Lecture 19: Decision trees


Reading: Section 8.1. STATS 202: Data mining and analysis. November 10, 2017.

Decision trees, 10,000 foot view

1. Find a partition of the space of predictors.
2. Predict a constant in each set of the partition.
3. The partition is defined by splitting the range of one predictor at a time. It can be represented as a decision tree. Not all partitions are possible.

[Figure: a partition of the (X1, X2) plane by split points t1, ..., t4 into regions R1, ..., R5, shown alongside the equivalent decision tree.]
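As a quick illustration (not from the slides), the sketch below fits a small regression tree to two synthetic predictors with scikit-learn. The printed rules are axis-aligned splits of the form Xj <= s, and the prediction is a constant in each resulting region; the data, seed, and max_leaf_nodes setting are arbitrary choices for the example.

```python
# Minimal sketch (not from the lecture): a small regression tree on two synthetic
# predictors, showing the axis-aligned partition and the constant per region.
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 2))                               # two predictors X1, X2
y = np.where(X[:, 0] < 0.5, 1.0, 3.0) + 0.1 * rng.standard_normal(200)

tree = DecisionTreeRegressor(max_leaf_nodes=4).fit(X, y)
print(export_text(tree, feature_names=["X1", "X2"]))         # axis-aligned splits Xj <= s
print(tree.predict([[0.2, 0.9]]))                            # the constant for that region
```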

Example: Predicting a baseball player's salary

[Figure: a regression tree that splits on Years < 4.5 and then Hits < 117.5, partitioning the (Years, Hits) plane into regions R1, R2, R3 with predicted log-salaries 5.11, 6.00, and 6.74.]

The prediction for a point in region R_i is the average of the training points in R_i.

How is a decision tree built?

Start with a single region R1 (the entire input space), and iterate:

1. Select a region R_k, a predictor X_j, and a splitting point s, such that splitting R_k with the criterion X_j < s produces the largest decrease in RSS:
$$\sum_{m=1}^{T} \sum_{x_i \in R_m} (y_i - \bar{y}_{R_m})^2$$
2. Redefine the regions with this additional split.

Terminate when there are 5 observations or fewer in each region (or use a different stopping criterion).

This grows the tree from the root towards the leaves: a top-down, greedy approach.
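To make the greedy step concrete, here is a minimal numpy sketch (my own illustration, not the lecture's code) of choosing one split: for every predictor j and every candidate split point s, compute the RSS of the two half-regions produced by X_j < s and keep the best pair.

```python
# Minimal sketch (assumed implementation): the greedy step of recursive binary
# splitting -- find the predictor j and split point s with the smallest total RSS.
import numpy as np

def rss(y):
    """Residual sum of squares around the mean of y (0 for an empty region)."""
    return 0.0 if len(y) == 0 else float(np.sum((y - y.mean()) ** 2))

def best_split(X, y):
    """Return (j, s, total_rss) for the split X_j < s that minimizes RSS(left) + RSS(right)."""
    best = (None, None, np.inf)
    for j in range(X.shape[1]):
        values = np.unique(X[:, j])                  # sorted unique values of X_j
        for s in (values[:-1] + values[1:]) / 2:     # midpoints as candidate cuts
            left = X[:, j] < s
            total = rss(y[left]) + rss(y[~left])
            if total < best[2]:
                best = (j, s, total)
    return best

# Tiny example: one predictor with an obvious jump between 3 and 10.
X = np.array([[1.0], [2.0], [3.0], [10.0], [11.0], [12.0]])
y = np.array([1.0, 1.1, 0.9, 5.0, 5.2, 4.9])
print(best_split(X, y))   # best cut is X_0 < 6.5

# To grow a full tree, repeatedly apply best_split to the region whose split gives
# the largest RSS decrease, stopping when every region has 5 or fewer observations.
```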

A decision tree for baseball salaries

[Figure: a large regression tree for log-salary with 12 terminal nodes, splitting on Years, Hits, RBI, Putouts, Walks, and Runs.]

How do we control overfitting?

Idea 1: Find the optimal subtree by cross validation. But there are too many possible subtrees, so we would still overfit.

Idea 2: Stop growing the tree when the RSS doesn't drop by more than a threshold with any new cut. But in our greedy algorithm, it is possible to find good cuts after bad ones, so stopping early can miss them.

How do we control overfitting?

Solution: Prune a large tree from the leaves to the root.

Weakest link pruning:

Starting with the initial full tree T0, replace a subtree with a leaf node to obtain a new tree T1. Select the subtree to prune by minimizing the increase in error, RSS(T1) - RSS(T0).

Iterate this pruning to obtain a sequence T0, T1, T2, ..., Tm, where Tm is the tree with a single leaf node.

Select the optimal tree Ti by cross validation.

How do we control overfitting? ... or an equivalent procedure

Cost complexity pruning: minimize the following objective over all prunings T of T0:
$$\sum_{R_m \in T} \sum_{x_i \in R_m} (y_i - \bar{y}_{R_m})^2 + \alpha |T|$$

When α = ∞, we select the null tree (the tree with a single leaf node).

When α = 0, we select the full tree.

Fun fact: for each α, the solution is among the trees T1, T2, ..., Tm from weakest link pruning.

Choose the optimal α (the optimal Ti) by cross validation.
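For reference, and not part of the lecture, scikit-learn exposes this α-indexed family of subtrees: cost_complexity_pruning_path returns the sequence of effective α values obtained by weakest-link pruning, and refitting with ccp_alpha gives the corresponding pruned subtree. A minimal sketch on synthetic data (all settings arbitrary):

```python
# Minimal sketch (assuming scikit-learn): the alpha sequence from weakest-link
# pruning, and the pruned subtree obtained for a given ccp_alpha.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
X = rng.uniform(size=(300, 3))
y = np.sin(6 * X[:, 0]) + 0.3 * rng.standard_normal(300)

full_tree = DecisionTreeRegressor(min_samples_leaf=5).fit(X, y)
path = full_tree.cost_complexity_pruning_path(X, y)   # effective alphas from weakest-link pruning

for alpha in path.ccp_alphas[::10]:
    pruned = DecisionTreeRegressor(min_samples_leaf=5, ccp_alpha=alpha).fit(X, y)
    print(f"alpha={alpha:.4f}  leaves={pruned.get_n_leaves()}")   # fewer leaves as alpha grows
```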

Cross validation, the wrong way

1. Construct a sequence of trees T0, ..., Tm for a range of values of α (using all of the training data).
2. Split the training points into 10 folds.
3. For k = 1, ..., 10: for each tree Ti, use every fold except the kth to estimate the averages in each region, and calculate the RSS on the kth fold.
4. For each tree Ti, average the 10 test errors, and select the value of α that minimizes the error.

This is the WRONG way to do cross validation: the trees were grown on all of the data, so the held-out folds are not truly unseen.

Cross validation, the right way

1. Split the training points into 10 folds.
2. For k = 1, ..., 10, using every fold except the kth: construct a sequence of trees T1, ..., Tm for a range of values of α, and find the prediction for each region in each one; then, for each tree Ti, calculate the RSS on the kth fold.
3. Select the parameter α that minimizes the average test error.

Note: we are doing all fitting, including the construction of the trees, using only the training data.
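A minimal sketch of the procedure above (my own code, using scikit-learn on synthetic data): both growing and pruning happen inside each fold, using only the training folds, and the held-out fold is used only to score each candidate α.

```python
# Minimal sketch (not the lecture's code): choose alpha by 10-fold cross validation,
# refitting the tree from scratch inside every fold.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(2)
X = rng.uniform(size=(300, 3))
y = np.sin(6 * X[:, 0]) + 0.3 * rng.standard_normal(300)

alphas = np.logspace(-4, 0, 20)        # candidate values of alpha
cv_error = np.zeros_like(alphas)

for train, test in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
    for i, alpha in enumerate(alphas):
        # grow and prune using only the training folds
        tree = DecisionTreeRegressor(min_samples_leaf=5, ccp_alpha=alpha).fit(X[train], y[train])
        # score on the held-out fold
        cv_error[i] += np.mean((tree.predict(X[test]) - y[test]) ** 2) / 10

best_alpha = alphas[np.argmin(cv_error)]
final_tree = DecisionTreeRegressor(min_samples_leaf=5, ccp_alpha=best_alpha).fit(X, y)
print(best_alpha, final_tree.get_n_leaves())
```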

Example: Predicting baseball salaries

[Figure: left, the unpruned 12-leaf regression tree; right, mean squared error (training, cross-validation, and test) as a function of tree size.]

Example: Predicting baseball salaries

[Figure: the pruned tree selected by cross validation, with splits Years < 4.5 and Hits < 117.5 and predicted log-salaries 5.11, 6.00, and 6.74, shown alongside the same error curves.]

Classification trees

They work much like regression trees. We predict the response by majority vote, i.e. pick the most common class in every region.

Instead of trying to minimize the RSS,
$$\sum_{m=1}^{T} \sum_{x_i \in R_m} (y_i - \bar{y}_{R_m})^2,$$
we minimize a classification loss function.

Classification losses

The 0-1 loss, or misclassification rate:
$$\sum_{m=1}^{T} \sum_{x_i \in R_m} \mathbf{1}(y_i \neq \hat{y}_{R_m})$$

The Gini index:
$$\sum_{m=1}^{T} q_m \sum_{k=1}^{K} \hat{p}_{mk}(1 - \hat{p}_{mk}),$$
where $\hat{p}_{mk}$ is the proportion of class k within R_m, and $q_m$ is the proportion of samples in R_m.

The cross-entropy:
$$-\sum_{m=1}^{T} q_m \sum_{k=1}^{K} \hat{p}_{mk} \log(\hat{p}_{mk})$$
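The sketch below (an illustration, not from the slides) computes the per-region versions of these three measures from a vector of estimated class proportions for a single region R_m; the losses on the slide weight these by q_m and sum over regions.

```python
# Minimal sketch: per-region impurity measures from the class proportions in R_m.
import numpy as np

def misclassification(p):
    """0-1 loss if we predict the most common class in the region."""
    p = np.asarray(p, dtype=float)
    return 1.0 - p.max()

def gini(p):
    p = np.asarray(p, dtype=float)
    return float(np.sum(p * (1.0 - p)))

def cross_entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                      # by convention 0 * log(0) = 0
    return float(-np.sum(p * np.log(p)))

p_pure, p_mixed = [0.9, 0.05, 0.05], [1 / 3, 1 / 3, 1 / 3]
for p in (p_pure, p_mixed):
    print(misclassification(p), gini(p), cross_entropy(p))
# all three are much smaller for the nearly pure region than for the mixed one
```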

Classification losses

The Gini index and cross-entropy are better measures of the purity of a region, i.e. they are low when the region is mostly one category.

Motivation for the Gini index: if, instead of predicting the most likely class, we predict a random sample from the distribution $(\hat{p}_{m1}, \hat{p}_{m2}, \ldots, \hat{p}_{mK})$, the Gini index is the expected misclassification rate.

It is typical to use the Gini index or cross-entropy for growing the tree, while using the misclassification rate when pruning the tree.
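As a brief, hedged aside (not from the lecture), scikit-learn's DecisionTreeClassifier can grow a tree with either the Gini index or cross-entropy as the split criterion. Its built-in ccp_alpha pruning appears to reuse the training impurity rather than switching to the misclassification rate, so it only approximates the convention above; the bundled breast-cancer data here is a stand-in, not the Heart data of the next slide.

```python
# Minimal sketch (assuming scikit-learn): growing a classification tree with the
# Gini index or cross-entropy as the split criterion, with light cost-complexity
# pruning. Note: scikit-learn's pruning is driven by the training impurity, not the
# misclassification rate, so this only approximates the slide's convention.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)        # bundled binary dataset as a stand-in

for criterion in ("gini", "entropy"):
    clf = DecisionTreeClassifier(criterion=criterion, ccp_alpha=0.01, random_state=0).fit(X, y)
    print(criterion, clf.get_n_leaves(), round(clf.score(X, y), 3))
```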

Example: Heart dataset

[Figure: left, a large unpruned classification tree for the Heart data, splitting on variables such as Thal, Ca, ChestPain, MaxHR, Oldpeak, and Slope, with Yes/No predictions at the leaves; center, training, cross-validation, and test error as a function of tree size; right, a smaller pruned tree.]

Some advantages of decision trees

Very easy to interpret!

Closer to human decision-making.

Easy to visualize graphically.

They easily handle qualitative predictors and missing data.