Lecture 19: Decision trees
Reading: Section 8.1
STATS 202: Data mining and analysis
November 10, 2017
Decision trees, 10,000 foot view

1. Find a partition of the space of predictors.
2. Predict a constant in each set of the partition.
3. The partition is defined by splitting the range of one predictor at a time.

The partition can be represented as a decision tree. Not all partitions are possible.

[Figure: a recursive binary partition of the (X1, X2) plane at cut points t1, ..., t4 into regions R1, ..., R5, and the corresponding decision tree.]
Example: Predicting a baseball player's salary

[Figure: a two-split tree (Years < 4.5, then Hits < 117.5) and the corresponding partition of the (Years, Hits) plane into regions R1, R2, R3, with predicted log-salaries 5.11, 6.00, and 6.74.]

The prediction for a point in region R_i is the average of the training points in R_i.
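A minimal sketch of this prediction rule in Python, using a hypothetical two-region split on Years at 4.5 (the toy values below are illustrative, not the Hitters data): the prediction in each region is simply the training average there.

```python
import numpy as np

# Hypothetical toy data: years of experience and log-salary for a few players.
years = np.array([1, 2, 3, 5, 6, 10], dtype=float)
log_salary = np.array([5.0, 5.1, 5.3, 6.0, 6.2, 6.9])

# Split the predictor space at Years < 4.5, as in the figure.
in_R1 = years < 4.5
pred_R1 = log_salary[in_R1].mean()    # constant prediction in region R1
pred_R2 = log_salary[~in_R1].mean()   # constant prediction in region R2

# Predict for a new player by looking up their region's average.
def predict(years_new):
    return pred_R1 if years_new < 4.5 else pred_R2

print(pred_R1, pred_R2, predict(7.0))
```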
How is a decision tree built?

Start with a single region R_1 (the entire input space), and iterate:

1. Select a region R_k, a predictor X_j, and a splitting point s, such that splitting R_k with the criterion X_j < s produces the largest decrease in the RSS:
   $\sum_{m=1}^{|T|} \sum_{x_i \in R_m} (y_i - \bar{y}_{R_m})^2$
2. Redefine the regions with this additional split.

Terminate when there are 5 observations or fewer in each region (or use a different stopping criterion).

This grows the tree from the root towards the leaves (a top-down, greedy approach).
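A sketch of one step of this greedy search, assuming a single numeric predictor for brevity (illustrative code, not a reference implementation): for each candidate threshold, compute the RSS of the resulting two-region split and keep the best.

```python
import numpy as np

def best_split(x, y):
    """Return the threshold s minimizing the RSS of the split x < s (one predictor)."""
    best_s, best_rss = None, np.inf
    for s in np.unique(x):
        left, right = y[x < s], y[x >= s]
        if len(left) == 0 or len(right) == 0:
            continue
        # RSS of the split: squared deviations from each region's mean.
        rss = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if rss < best_rss:
            best_s, best_rss = s, rss
    return best_s, best_rss

# In the full algorithm this search runs over every region R_k and every
# predictor X_j, and the single split with the largest RSS decrease is applied.
x = np.array([1.0, 2.0, 3.0, 5.0, 6.0, 10.0])
y = np.array([5.0, 5.1, 5.3, 6.0, 6.2, 6.9])
print(best_split(x, y))
```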
A decision tree for baseball salaries

[Figure: the full tree grown on the baseball salary data, with 12 leaves and splits on Years, Hits, RBI, Putouts, Walks, and Runs; the leaf values are predicted log-salaries ranging from 4.622 to 7.289.]
How do we control overfitting?

Idea 1: Find the optimal subtree by cross-validation. There are too many possibilities, so we would still overfit.

Idea 2: Stop growing the tree when the RSS doesn't drop by more than a threshold with any new cut. In our greedy algorithm, it is possible to find good cuts after bad ones.
How do we control overfitting?

Solution: Prune a large tree from the leaves to the root.

Weakest link pruning:
- Starting with the initial full tree T_0, replace a subtree with a leaf node to obtain a new tree T_1. Select the subtree to prune by minimizing RSS(T_1) − RSS(T_0).
- Iterate this pruning to obtain a sequence T_0, T_1, T_2, ..., T_m, where T_m is the tree with a single leaf node.
- Select the optimal tree T_i by cross-validation.

[Figure: a full tree T_0 and the pruned tree T_1 obtained by collapsing one subtree into a leaf.]
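A toy sketch of the pruning criterion, assuming a small hand-built tree stored as nested dicts (all names and values are illustrative): collapsing a subtree into a leaf raises the RSS, and weakest link pruning removes the subtree whose collapse raises it the least.

```python
import numpy as np

# A tiny hand-built tree; each leaf stores the training responses in its region.
tree = {
    "split": ("Years", 4.5),
    "left":  {"leaf": np.array([5.0, 5.1, 5.3])},
    "right": {
        "split": ("Hits", 117.5),
        "left":  {"leaf": np.array([6.0, 6.1])},
        "right": {"leaf": np.array([6.7, 6.9, 7.0])},
    },
}

def rss(node):
    """Total RSS of the subtree rooted at node."""
    if "leaf" in node:
        y = node["leaf"]
        return ((y - y.mean()) ** 2).sum()
    return rss(node["left"]) + rss(node["right"])

def leaf_values(node):
    """All training responses under node (what a collapsed leaf would contain)."""
    if "leaf" in node:
        return node["leaf"]
    return np.concatenate([leaf_values(node["left"]), leaf_values(node["right"])])

# Increase in RSS from collapsing the right subtree into a single leaf,
# i.e. RSS(T_1) - RSS(T_0) restricted to that subtree.
y = leaf_values(tree["right"])
increase = ((y - y.mean()) ** 2).sum() - rss(tree["right"])
print(increase)  # weakest link pruning collapses the node with the smallest increase
```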
How do we control overfitting? ... or an equivalent procedure

Cost complexity pruning: Minimize the following objective over all prunings T of T_0:

$\sum_{R_m \in T} \sum_{x_i \in R_m} (y_i - \bar{y}_{R_m})^2 + \alpha |T|$

- When α = ∞, we select the null tree (the tree with one leaf node).
- When α = 0, we select the full tree.
- Fun fact: The solution for each α is among T_1, T_2, ..., T_m from weakest link pruning.
- Choose the optimal α (the optimal T_i) by cross-validation.
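A hedged sketch of this procedure with scikit-learn, whose minimal cost-complexity pruning (the ccp_alpha parameter) implements the same objective; the synthetic data and variable names below are illustrative only.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 2))            # two synthetic predictors
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

# Grow a large tree T_0, then recover the sequence of effective alphas;
# each alpha corresponds to one subtree in the nested pruning sequence.
full_tree = DecisionTreeRegressor(min_samples_leaf=5, random_state=0).fit(X, y)
path = full_tree.cost_complexity_pruning_path(X, y)

for alpha in path.ccp_alphas:
    pruned = DecisionTreeRegressor(ccp_alpha=alpha, random_state=0).fit(X, y)
    print(f"alpha={alpha:.4f}  leaves={pruned.get_n_leaves()}")
```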
Cross validation, the wrong way

1. Construct a sequence of trees T_0, ..., T_m for a range of values of α.
2. Split the training points into 10 folds.
3. For k = 1, ..., 10:
   - For each tree T_i, use every fold except the kth to estimate the averages in each region.
   - For each tree T_i, calculate the RSS in the test fold.
4. For each tree T_i, average the 10 test errors, and select the value of α that minimizes the error.

WRONG WAY TO DO CROSS VALIDATION!
Cross validation, the right way

1. Split the training points into 10 folds.
2. For k = 1, ..., 10, using every fold except the kth:
   - Construct a sequence of trees T_1, ..., T_m for a range of values of α, and find the prediction for each region in each one.
   - For each tree T_i, calculate the RSS on the kth (test) fold.
3. Select the parameter α that minimizes the average test error.

Note: We are doing all fitting, including the construction of the trees, using only the training data.
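One way to carry this out in scikit-learn is to cross-validate over ccp_alpha, so that the trees are regrown from scratch inside each training fold. This is only a sketch under the same synthetic-data assumptions as above; the alpha grid is taken from a tree grown on the full training set purely to define candidate values.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

# Candidate alphas from the pruning path of a tree grown on all training data.
alphas = (
    DecisionTreeRegressor(random_state=0)
    .fit(X, y)
    .cost_complexity_pruning_path(X, y)
    .ccp_alphas
)

# Inside each fold, GridSearchCV regrows and prunes the tree using only the
# 9 training folds, then scores it on the held-out fold.
search = GridSearchCV(
    DecisionTreeRegressor(random_state=0),
    param_grid={"ccp_alpha": alphas},
    cv=10,
    scoring="neg_mean_squared_error",
)
search.fit(X, y)
print(search.best_params_)
```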
Example. Predicting baseball salaries

[Figure: the full 12-leaf tree for the baseball salary data, alongside training, cross-validation, and test mean squared error as a function of tree size.]
Example. Predicting baseball salaries

[Figure: the pruned tree selected by cross-validation, with splits Years < 4.5 and Hits < 117.5 and predicted log-salaries 5.11, 6.00, and 6.74, alongside the training, cross-validation, and test MSE curves.]
Classification trees

They work much like regression trees:
- We predict the response by majority vote, i.e. pick the most common class in every region.
- Instead of trying to minimize the RSS,
  $\sum_{m=1}^{|T|} \sum_{x_i \in R_m} (y_i - \bar{y}_{R_m})^2,$
  we minimize a classification loss function.
Classification losses

The 0-1 loss or misclassification rate:
$\sum_{m=1}^{|T|} \sum_{x_i \in R_m} 1(y_i \neq \hat{y}_{R_m})$

The Gini index:
$\sum_{m=1}^{|T|} q_m \sum_{k=1}^{K} \hat{p}_{mk} (1 - \hat{p}_{mk}),$
where $\hat{p}_{mk}$ is the proportion of class k within R_m, and $q_m$ is the proportion of samples in R_m.

The cross-entropy:
$-\sum_{m=1}^{|T|} q_m \sum_{k=1}^{K} \hat{p}_{mk} \log(\hat{p}_{mk}).$
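A small numeric sketch (with illustrative class proportions only) of how these three measures compare for a single region:

```python
import numpy as np

p = np.array([0.8, 0.15, 0.05])         # hypothetical class proportions in one region R_m

misclassification = 1.0 - p.max()       # per-region error rate when predicting the majority class
gini = np.sum(p * (1.0 - p))            # Gini index of the region
cross_entropy = -np.sum(p * np.log(p))  # cross-entropy (deviance) of the region

print(misclassification, gini, cross_entropy)
# The tree's total loss weights each region's value by q_m, its share of the samples.
```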
Classification losses

- The Gini index and cross-entropy are better measures of the purity of a region, i.e. they are low when the region is mostly one category.
- Motivation for the Gini index: If instead of predicting the most likely class, we predict a random sample from the distribution $(\hat{p}_{m1}, \hat{p}_{m2}, \ldots, \hat{p}_{mK})$, the Gini index is the expected misclassification rate.
- It is typical to use the Gini index or cross-entropy for growing the tree, while using the misclassification rate when pruning the tree.
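A one-line check of the Gini motivation: if, within region $R_m$, the true class is $k$ with probability $\hat{p}_{mk}$ and the randomized prediction equals $k$ with the same probability (independently), then

$$P(\text{misclassification}) = \sum_{k=1}^{K} \hat{p}_{mk}\,(1 - \hat{p}_{mk}),$$

which is exactly the region's Gini index.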
Example. Heart dataset.

[Figure: the full classification tree for the Heart data, with splits on Thal, Ca, ChestPain, MaxHR, Slope, Oldpeak, RestBP, Chol, Sex, Age, and RestECG and Yes/No leaf labels; training, cross-validation, and test error as a function of tree size; and the pruned tree selected by cross-validation.]
Some advantages of decision trees

- Very easy to interpret!
- Closer to human decision-making.
- Easy to visualize graphically.
- They easily handle qualitative predictors and missing data.