
Chapter 9: Classification and Regression Trees
Roger Bohn, April 2017
Notes based on: Data Mining for Business Intelligence, Shmueli, Patel & Bruce


II. Results and Interpretation
The model uses 1,183 auction results as observations. The four most important rules are:
Rule 1: IF (OpenPrice < 1.23) THEN COMPETITIVE, with 91% accuracy
Rule 2: IF (1.23 <= OpenPrice < 3.718) AND (570.5 <= sellerrating < 2572) THEN COMPETITIVE, with 82% accuracy
Rule 3: IF (OpenPrice >= 1.23) AND (sellerrating < 570.5) THEN COMPETITIVE, with 65% accuracy
Rule 4: IF (1.23 <= OpenPrice < 3.718) AND (sellerrating >= 2572) THEN NOT COMPETITIVE, with 33% accuracy


Trees and Rules
Goal: classify or predict an outcome based on a set of predictors. The output is a set of rules, shown as a tree.
Example: classify a record as "will accept credit card offer" or "will not accept." A rule might be: IF (Income > 92.5) AND (Education < 1.5) AND (Family <= 2.5) THEN Class = 0 (non-acceptor).
Also called CART, decision trees, or simply trees. Rules are represented by tree diagrams.


Key Ideas
Recursive partitioning: repeatedly split the records into two parts so as to achieve maximum homogeneity within the new parts.
Pruning the tree: simplify the tree by pruning peripheral branches to avoid overfitting.

Recursive Partitioning

Recursive Partitioning Steps
Pick one of the predictor variables, x_i.
Pick a value of x_i, say s_i, that divides the training data into two (not necessarily equal) portions.
Measure how pure or homogeneous each of the resulting portions is ("pure" = containing records of mostly one class).
The algorithm tries different values of x_i and s_i to maximize purity in the initial split.
After finding the maximum-purity split, repeat the process for a second split, and so on.

Example: Riding Mowers
Goal: classify 24 households as owning or not owning riding mowers.
Predictors: Income, Lot Size.

Income  Lot_Size  Ownership   (Income in $000s; Lot_Size in 000s of sq. ft.)
60.0    18.4      owner
85.5    16.8      owner
64.8    21.6      owner
61.5    20.8      owner
87.0    23.6      owner
110.1   19.2      owner
108.0   17.6      owner
82.8    22.4      owner
69.0    20.0      owner
93.0    20.8      owner
51.0    22.0      owner
81.0    20.0      owner
75.0    19.6      non-owner
52.8    20.8      non-owner
64.8    17.2      non-owner
43.2    20.4      non-owner
84.0    17.6      non-owner
49.2    17.6      non-owner
59.4    16.0      non-owner
66.0    18.4      non-owner
47.4    16.4      non-owner
33.0    18.8      non-owner
51.0    14.0      non-owner
63.0    14.8      non-owner

How to Split
Order the records according to one variable, say Lot Size. Find the midpoints between successive values: e.g., the first midpoint is 14.4 (halfway between 14.0 and 14.8). Divide the records into those with Lot Size > 14.4 and those < 14.4. After evaluating that split, try the next one, 15.4 (halfway between 14.8 and 16.0), and so on.
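To make the split search concrete, here is a minimal R sketch (mine, not from the slides) that enumerates the candidate midpoints for Lot Size in the riding-mower data above and scores each binary split by the weighted Gini impurity of the two resulting groups; the names mowers and gini are illustrative.

# Riding-mower data from the table above (Lot_Size in 000s of sq. ft.)
mowers <- data.frame(
  Lot_Size  = c(18.4, 16.8, 21.6, 20.8, 23.6, 19.2, 17.6, 22.4, 20.0, 20.8, 22.0, 20.0,
                19.6, 20.8, 17.2, 20.4, 17.6, 17.6, 16.0, 18.4, 16.4, 18.8, 14.0, 14.8),
  Ownership = c(rep("owner", 12), rep("non-owner", 12))
)

# Gini impurity of one group of class labels
gini <- function(y) 1 - sum((table(y) / length(y))^2)

# Candidate split points: midpoints between successive sorted values
v <- sort(unique(mowers$Lot_Size))
midpoints <- (head(v, -1) + tail(v, -1)) / 2

# Weighted impurity of each candidate binary split on Lot_Size
split_impurity <- sapply(midpoints, function(s) {
  left  <- mowers$Ownership[mowers$Lot_Size <  s]
  right <- mowers$Ownership[mowers$Lot_Size >= s]
  (length(left) * gini(left) + length(right) * gini(right)) / nrow(mowers)
})

# Best split on this variable (19.0 for these data, matching the first split shown later);
# the algorithm repeats this search over every predictor and picks the overall winner
midpoints[which.min(split_impurity)]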

Note: Categorical Variables
Examine all possible ways in which the categories can be split. E.g., categories A, B, C can be split three ways: {A} and {B, C}; {B} and {A, C}; {C} and {A, B}. In general, a predictor with k categories has 2^(k-1) - 1 possible binary splits, so with many categories the number of splits becomes huge. XLMiner supports only binary categorical variables.

The first split: Lot Size = 19,000

Second split: Income = $84,000

After All Splits

Measuring Impurity

Gini Index
Gini index for rectangle A containing m records:
I(A) = 1 - Σ_k p_k^2
where p_k is the proportion of cases in rectangle A that belong to class k, and the sum runs over all classes. I(A) = 0 when all cases belong to the same class; it reaches its maximum value when all classes are equally represented (= 0.50 in the binary case).
Note: XLMiner uses a variant called the delta splitting rule.

Entropy
Entropy(A) = - Σ_k p_k log2(p_k)
where p_k is the proportion of cases (out of m) in rectangle A that belong to class k. Entropy ranges between 0 (most pure) and log2(m) (equal representation of classes).
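As a quick illustration (mine, not from the slides), these R helpers compute both impurity measures from a vector of class proportions and confirm the extreme values quoted above.

# Impurity measures for a vector of class proportions that sum to 1
gini_index <- function(p) 1 - sum(p^2)
entropy    <- function(p) -sum(ifelse(p > 0, p * log2(p), 0))  # treat 0 * log2(0) as 0

gini_index(c(1, 0))       # 0: perfectly pure
gini_index(c(0.5, 0.5))   # 0.5: maximum in the binary case
entropy(c(1, 0))          # 0: most pure
entropy(c(0.5, 0.5))      # 1 = log2(2): equal representation of two classes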

Impurity and Recursive Partitioning
Obtain an overall impurity measure as the weighted average of the impurities of the individual rectangles. At each successive stage, compare this measure across all possible splits in all variables, and choose the split that reduces impurity the most. The chosen split points become nodes on the tree.

First Split: The Tree

Tree after three splits

Tree Structure
Split points become nodes on the tree (circles with the split value in the center). Rectangles represent leaves (terminal points with no further splits; the classification value is noted). Numbers on the lines between nodes indicate the number of cases. Read down the tree to derive a rule, e.g.: IF Lot Size < 19 AND Income > 84.75 THEN Class = owner.

Determining the Leaf Node Label
Each leaf node's label is determined by a vote of the records within it and by the cutoff value. The records within each leaf node come from the training data. The default cutoff of 0.5 means the leaf node's label is the majority class. A cutoff of 0.75 requires 75% or more of the records in the leaf to be 1s before it is labeled a 1 node.
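A minimal sketch of the cutoff rule, assuming a leaf's training records are stored as a 0/1 vector (leaf_y, label_leaf and the example values are illustrative, not from the slides).

# Hypothetical class labels of the training records falling in one leaf
leaf_y <- c(1, 1, 1, 0, 1, 0)
p1 <- mean(leaf_y)                   # proportion of 1s in the leaf: 0.667

# Label the leaf by comparing the class-1 proportion to the cutoff
label_leaf <- function(p1, cutoff = 0.5) ifelse(p1 >= cutoff, 1, 0)
label_leaf(p1)          # 1: the majority class, with the default cutoff of 0.5
label_leaf(p1, 0.75)    # 0: 66.7% does not reach the 75% cutoff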

Tree after all splits

The Overfitting Problem

Stopping Tree Growth
The natural end of the process is 100% purity in each leaf. This overfits the data: the tree ends up fitting noise in the training data, which leads to low predictive accuracy on new data. Past a certain point, the error rate on the validation data starts to increase.

Full Tree Error Rate

CHAID
CHAID, which is older than CART, uses a chi-square statistical test to limit tree growth: splitting stops when the improvement in purity is not statistically significant.

Pruning
CART lets the tree grow to its full extent, then prunes it back. The idea is to find the point at which the validation error begins to rise. Generate successively smaller trees by pruning leaves; at each pruning stage, multiple trees are possible, so use cost complexity to choose the best tree at that stage.

Cost Complexity
CC(T) = Err(T) + α L(T)
where CC(T) is the cost complexity of a tree, Err(T) is the proportion of misclassified records, L(T) is the number of leaves, and α is a penalty factor attached to tree size (set by the user). Among trees of a given size, choose the one with the lowest CC; do this for each size of tree.
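A toy illustration (mine, not from the slides) of how the penalty trades error against tree size; the candidate error rates and the value of α are made up.

# Hypothetical candidate trees: number of leaves and misclassification error
trees <- data.frame(leaves = c(2, 4, 8, 16),
                    err    = c(0.30, 0.22, 0.17, 0.16))

# Cost complexity CC(T) = Err(T) + alpha * L(T)
alpha <- 0.01
trees$cc <- trees$err + alpha * trees$leaves

trees[which.min(trees$cc), ]   # with alpha = 0.01, the 8-leaf tree has the lowest CC (0.25)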

Using Validation Error to Prune
The pruning process yields a set of trees of different sizes and associated error rates. Two trees are of interest:
Minimum error tree: has the lowest error rate on the validation data.
Best pruned tree: the smallest tree within one standard error of the minimum error. This adds a bonus for simplicity/parsimony.
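In R, the rpart package implements CART-style cost-complexity pruning. The sketch below (mine, not from the slides) grows a large tree on rpart's built-in kyphosis data, inspects the complexity table, and prunes back with the one-standard-error rule; note that rpart uses cross-validation rather than a separate validation set for the error estimates.

library(rpart)

# Grow a deliberately large classification tree
fit <- rpart(Kyphosis ~ Age + Number + Start, data = kyphosis,
             method = "class", control = rpart.control(cp = 0.001, minsplit = 5))

# cptable reports, for each subtree size, the complexity parameter (CP),
# the cross-validated error (xerror) and its standard error (xstd)
printcp(fit)
cp_tab <- fit$cptable

# Minimum-error tree: subtree with the smallest cross-validated error
i_min <- which.min(cp_tab[, "xerror"])

# Best pruned tree: smallest subtree within one std. error of the minimum
threshold <- cp_tab[i_min, "xerror"] + cp_tab[i_min, "xstd"]
i_best    <- which(cp_tab[, "xerror"] <= threshold)[1]

pruned <- prune(fit, cp = cp_tab[i_best, "CP"])
pruned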

Error rates on pruned trees

Regression Trees

Regression Trees for Prediction
Used with a continuous outcome variable. The procedure is similar to that of a classification tree: many splits are attempted, and the one that minimizes impurity is chosen.

Differences from Classification Trees
The prediction is computed as the average of the numerical target variable in the rectangle (in a classification tree it is the majority vote). Impurity is measured by the sum of squared deviations from the leaf mean. Performance is measured by RMSE (root mean squared error).
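A minimal regression-tree sketch in R (mine, not from the slides), using rpart's anova method on the built-in mtcars data and reporting RMSE; here the RMSE is computed on the training data, so it is optimistic.

library(rpart)

# Regression tree: predict fuel economy from weight and horsepower
reg_fit <- rpart(mpg ~ wt + hp, data = mtcars, method = "anova",
                 control = rpart.control(minsplit = 10))

# Leaf predictions are the mean of mpg within each rectangle
pred <- predict(reg_fit, mtcars)

# Performance measured by RMSE (root mean squared error)
sqrt(mean((mtcars$mpg - pred)^2))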

Advantages of Trees
Easy to use and understand. They produce rules that are easy to interpret and implement. Variable selection and reduction are automatic. They do not require the assumptions of statistical models, and they work fine with missing data.

Disadvantages
Trees may not perform well where there is structure in the data that is not well captured by horizontal or vertical splits. They are very simple and don't always give the best fits.

Summary
Classification and regression trees are an easily understandable and transparent method for predicting or classifying new records. A tree is a graphical representation of a set of rules. Trees must be pruned to avoid over-fitting the training data. Because trees make no assumptions about the data structure, they usually require large samples.

Toyota Corolla Prices: Case Analysis
Is the price higher or lower than the median? 1436 records, 38 attributes. Textbook page 237. Clean up and ...
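One way to set this case up in R, as a sketch: it assumes the textbook's ToyotaCorolla.csv has been read in and that it contains the columns named below (Price, Age_08_04, KM, HP, Automatic_airco, Weight follow the textbook's dataset, but treat the file and column names as assumptions).

library(rpart)

# Assumed file and column names from the textbook's Toyota Corolla dataset
corolla <- read.csv("ToyotaCorolla.csv", stringsAsFactors = TRUE)

# Binary target: is the price above the median?
corolla$HighPrice <- factor(ifelse(corolla$Price > median(corolla$Price), "high", "low"))

# 60/40 partition into training and validation sets
set.seed(1)
train_idx <- sample(nrow(corolla), round(0.6 * nrow(corolla)))
train <- corolla[train_idx, ]
valid <- corolla[-train_idx, ]

# Classification tree on a few plausible predictors (assumed column names)
tree_fit <- rpart(HighPrice ~ Age_08_04 + KM + HP + Automatic_airco + Weight,
                  data = train, method = "class")
tree_fit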

Lots of variables: which to use?
Look for unimportant variables: little variation in the data, or zero correlation with the outcome (not always a safe criterion). Look for groups of highly correlated variables. Drop variables that are logically irrelevant to price (a judgment call).
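A rough screening sketch in R (mine, not from the slides), reusing the assumed corolla data frame from the sketch above: flag near-constant variables and highly correlated pairs before growing the tree.

# Numeric columns only (corolla is the assumed data frame from the earlier sketch)
num_vars <- corolla[sapply(corolla, is.numeric)]

# Variables with little variation in the data
sapply(num_vars, function(x) sd(x, na.rm = TRUE))

# Correlations: with the target (not always a safe screen) and with each other
cor_mat <- cor(num_vars, use = "pairwise.complete.obs")
round(cor_mat[, "Price"], 2)
which(abs(cor_mat) > 0.9 & upper.tri(cor_mat), arr.ind = TRUE)  # highly correlated pairs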


ggpairs

# Uncomment these lines and install if necessary:
# install.packages('GGally')
# install.packages('ggplot2')

library(ggplot2)
library(GGally)   # note the capitalisation: the package is GGally
data(diamonds)

# Scatterplot matrix of a 10,000-row sample of the diamonds data
diasamp <- diamonds[sample(1:nrow(diamonds), 10000), ]
ggpairs(diasamp, lower = list(continuous = wrap("points", shape = ".")))

Another ggpairs example: https://tgmstat.wordpress.com/2013/11/13/plot-matrix-with-the-r-package-ggally/

One-variable regression lines
Notice 3 regions; each can have different information.
https://www.r-bloggers.com/multiple-regression-lines-in-ggpairs/

(Figure: Corrgram of the correlations among the variables in the mtcars data frame. Rows and columns have been reordered using principal components analysis.)


Corolla tree: age, km, ...


How to Build in Practice
Start with every plausible variable and let the algorithm decide what belongs in the final model. Do NOT pre-screen based on theory. DO throw out obvious junk, and consider pruning highly correlated variables (at least at first).

Evaluating the Result: Confusion Matrix
Calculate the model using only the training data. Evaluate the model using only the testing (or validation) data.
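Continuing the Corolla sketch (tree_fit, train and valid are the assumed objects created earlier), the confusion matrix is a cross-tabulation of actual versus predicted classes on the validation data.

# The model was fit on the training data only; evaluate it on the validation data only
pred_class <- predict(tree_fit, newdata = valid, type = "class")
conf_mat <- table(Actual = valid$HighPrice, Predicted = pred_class)
conf_mat

# Overall accuracy from the confusion matrix
sum(diag(conf_mat)) / sum(conf_mat)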

Report what you DID
Don't omit so many variables unless you are sure they don't matter.

Typical Results
Age (in months), KM traveled, Air conditioning, Weight.

Air Conditioning
AC: yes/no. Automatic AC: yes/no. The model treats these as two independent variables. Use outside knowledge: convert them into one variable with three levels.
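A sketch of that recoding in R, assuming 0/1 dummy columns named AC and AutomaticAC (illustrative names; in the textbook's file they appear as Airco and Automatic_airco).

# Hypothetical 0/1 dummies for manual and automatic air conditioning
cars <- data.frame(AC = c(1, 1, 0, 1), AutomaticAC = c(0, 1, 0, 0))

# Combine the two dummies into a single 3-level factor:
# automatic AC implies the car has AC, so the levels are none / manual / automatic
cars$AirCond <- factor(ifelse(cars$AutomaticAC == 1, "automatic",
                       ifelse(cars$AC == 1, "manual", "none")),
                       levels = c("none", "manual", "automatic"))
cars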