
1 Notes
To preview the next lecture: check the lecture notes if slides are not available: http://web.cse.ohio-state.edu/~sun.397/courses/au2017/cse5243-new.html
Check the UIUC course on the same topic; all their slides are available: https://wiki.illinois.edu//wiki/display/cs412/2.+course+syllabus+and+schedule
Extra readings beyond the lecture slides are important: lecture notes (or chapters from the textbook, Han et al.)
Textbook: http://www.dataminingbook.info/pmwiki.php/main/bookdownload

CSE 5243 INTRO. TO DATA MINING Classification (Basic Concepts) Huan Sun, CSE@The Ohio State University 09/07/2017 Slides adapted from UIUC CS412, Fall 2017, by Prof. Jiawei Han

3 Classification: Basic Concepts (outline)
Classification: Basic Concepts
Decision Tree Induction
Bayes Classification Methods
Model Evaluation and Selection
Techniques to Improve Classification Accuracy: Ensemble Methods
Summary

4 Supervised vs. Unsupervised Learning Supervised learning (classification) Supervision: The training data (observations, measurements, etc.) are accompanied by labels indicating the class of the observations New data is classified based on the training set

5 Supervised vs. Unsupervised Learning
Supervised learning (classification): Supervision: the training data (observations, measurements, etc.) are accompanied by labels indicating the class of the observations. New data is classified based on the training set.
Unsupervised learning (clustering): The class labels of the training data are unknown. Given a set of measurements, observations, etc., the aim is to establish the existence of classes or clusters in the data.

6 Prediction Problems: Classification vs. Numeric Prediction Classification predicts categorical class labels (discrete or nominal) classifies data (constructs a model) based on the training set and the values (class labels) in a classifying attribute and uses it in classifying new data Numeric Prediction models continuous-valued functions, i.e., predicts unknown or missing values

7 Prediction Problems: Classification vs. Numeric Prediction
Classification predicts categorical class labels (discrete or nominal); it classifies data (constructs a model) based on the training set and the values (class labels) in a classifying attribute, and uses it in classifying new data.
Numeric prediction models continuous-valued functions, i.e., predicts unknown or missing values.
Typical applications: credit/loan approval; medical diagnosis (is a tumor cancerous or benign); fraud detection (is a transaction fraudulent); web page categorization (which category a page belongs to).

Classification A Two-Step Process (1) Model construction: describing a set of predetermined classes Each tuple/sample is assumed to belong to a predefined class, as determined by the class label attribute The set of tuples used for model construction is training set Model: represented as classification rules, decision trees, or mathematical formulae 8

9 Classification A Two-Step Process (1) Model construction: describing a set of predetermined classes Each tuple/sample is assumed to belong to a predefined class, as determined by the class label attribute The set of tuples used for model construction is training set Model: represented as classification rules, decision trees, or mathematical formulae (2) Model usage: for classifying future or unknown objects Estimate accuracy of the model The known label of test sample is compared with the classified result from the model Accuracy: % of test set samples that are correctly classified by the model Test set is independent of training set (otherwise overfitting) If the accuracy is acceptable, use the model to classify new data

10 Classification A Two-Step Process (1) Model construction: describing a set of predetermined classes Each tuple/sample is assumed to belong to a predefined class, as determined by the class label attribute The set of tuples used for model construction is training set Model: represented as classification rules, decision trees, or mathematical formulae (2) Model usage: for classifying future or unknown objects Estimate accuracy of the model The known label of test sample is compared with the classified result from the model Accuracy: % of test set samples that are correctly classified by the model Test set is independent of training set (otherwise overfitting) If the accuracy is acceptable, use the model to classify new data Note: If the test set is used to select/refine models, it is called validation (test) set or development test set
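To make the two-step process concrete, here is a minimal sketch in Python using scikit-learn (not part of the slides); the feature matrix X, label vector y, new data X_new, and the 0.8 acceptance threshold are illustrative assumptions.

# Hypothetical sketch of the two-step classification process with scikit-learn.
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

def two_step_classification(X, y, X_new):
    # Step (1): model construction on the training set.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0)      # test set kept independent of training
    model = DecisionTreeClassifier(random_state=0)
    model.fit(X_train, y_train)

    # Step (2): model usage -- estimate accuracy on the independent test set,
    # then classify new/unseen data only if the accuracy is acceptable.
    accuracy = accuracy_score(y_test, model.predict(X_test))
    if accuracy >= 0.8:                            # acceptance threshold is an assumption
        return model.predict(X_new), accuracy
    return None, accuracy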

11 Step (1): Model Construction
Training data (fed to the classification algorithm to produce a classifier/model):
  NAME  RANK            YEARS  TENURED
  Mike  Assistant Prof  3      no
  Mary  Assistant Prof  7      yes
  Bill  Professor       2      yes
  Jim   Associate Prof  7      yes
  Dave  Assistant Prof  6      no
  Anne  Associate Prof  3      no

12 Step (1): Model Construction
From the training data above, the classification algorithm produces the classifier (model):
IF rank = professor OR years > 6 THEN tenured = yes

13 Step (2): Using the Model in Prediction
The classifier is applied to the testing data:
  NAME     RANK            YEARS  TENURED
  Tom      Assistant Prof  2      no
  Merlisa  Associate Prof  7      no
  George   Professor       5      yes
  Joseph   Assistant Prof  7      yes

14 Step (2): Using the Model in Prediction
The classifier is applied to the testing data (table above) and then to new/unseen data, e.g., (Jeff, Professor, 4): Tenured?
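As a toy illustration of Step (2), the sketch below (not from the slides) hard-codes the rule learned in Step (1) and applies it to the test tuples and to the new tuple (Jeff, Professor, 4).

# Toy sketch: apply the learned rule
#   IF rank = professor OR years > 6 THEN tenured = yes
# to the test set above and to the new, unseen tuple.
def predict_tenured(rank, years):
    return "yes" if rank == "Professor" or years > 6 else "no"

test_set = [
    ("Tom",     "Assistant Prof", 2, "no"),
    ("Merlisa", "Associate Prof", 7, "no"),
    ("George",  "Professor",      5, "yes"),
    ("Joseph",  "Assistant Prof", 7, "yes"),
]

correct = sum(predict_tenured(rank, years) == label
              for _, rank, years, label in test_set)
print("test accuracy:", correct / len(test_set))                  # 0.75 (Merlisa is misclassified)
print("Jeff, Professor, 4 ->", predict_tenured("Professor", 4))   # yes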

15 Classification: Basic Concepts (outline)
Classification: Basic Concepts
Decision Tree Induction
Bayes Classification Methods
Model Evaluation and Selection
Techniques to Improve Classification Accuracy: Ensemble Methods
Summary

16 Decision Tree Induction: An Example
Training data set: Buys_computer. The data set follows an example from Quinlan's ID3 (Playing Tennis).
  age     income  student  credit_rating  buys_computer
  <=30    high    no       fair           no
  <=30    high    no       excellent      no
  31..40  high    no       fair           yes
  >40     medium  no       fair           yes
  >40     low     yes      fair           yes
  >40     low     yes      excellent      no
  31..40  low     yes      excellent      yes
  <=30    medium  no       fair           no
  <=30    low     yes      fair           yes
  >40     medium  yes      fair           yes
  <=30    medium  yes      excellent      yes
  31..40  medium  no       excellent      yes
  31..40  high    yes      fair           yes
  >40     medium  no       excellent      no

17 Decision Tree Induction: An Example
Training data set: Buys_computer (table on the previous slide). The data set follows an example from Quinlan's ID3 (Playing Tennis).
Resulting tree:
  age?
    <=30: student?
      no: no
      yes: yes
    31..40: yes
    >40: credit_rating?
      excellent: no
      fair: yes

18 Algorithm for Decision Tree Induction Basic algorithm (a greedy algorithm) Tree is constructed in a top-down recursive divide-and-conquer manner At start, all the training examples are at the root Attributes are categorical (if continuous-valued, they are discretized in advance) Examples are partitioned recursively based on selected attributes Test attributes are selected on the basis of a heuristic or statistical measure (e.g., information gain)

19 Algorithm for Decision Tree Induction
Basic algorithm (a greedy algorithm):
The tree is constructed in a top-down, recursive, divide-and-conquer manner.
At the start, all the training examples are at the root.
Attributes are categorical (if continuous-valued, they are discretized in advance).
Examples are partitioned recursively based on selected attributes.
Test attributes are selected on the basis of a heuristic or statistical measure (e.g., information gain).
Conditions for stopping partitioning:
All samples for a given node belong to the same class.
There are no remaining attributes for further partitioning (majority voting is employed for classifying the leaf).
There are no samples left.

20 Algorithm Outline
Split(node, {data tuples}):
  A <- the best attribute for splitting the {data tuples}
  Decision attribute for this node <- A
  For each value of A, create a new child node
  For each child node / subset:
    If one of the stopping conditions is satisfied: STOP
    Else: Split(child_node, {subset})
ID3 algorithm, how it works: https://www.youtube.com/watch?v=_xhodslle5c
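A compact Python sketch of this outline (an illustrative sketch, not the exact ID3 implementation): tuples are assumed to be dicts, the target attribute holds the class label, and information gain (defined on the following slides) is used as the selection measure.

# Minimal recursive decision-tree induction in the spirit of the outline above.
import math
from collections import Counter

def entropy(labels):
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in Counter(labels).values())

def info_gain(data, attr, target):
    base = entropy([row[target] for row in data])
    split_info = 0.0
    for value in set(row[attr] for row in data):
        subset = [row[target] for row in data if row[attr] == value]
        split_info += len(subset) / len(data) * entropy(subset)
    return base - split_info

def split(data, attributes, target):
    labels = [row[target] for row in data]
    if len(set(labels)) == 1:                       # stopping condition: pure node
        return labels[0]
    if not attributes:                              # no attributes left: majority vote
        return Counter(labels).most_common(1)[0][0]
    best = max(attributes, key=lambda a: info_gain(data, a, target))
    node = {best: {}}
    for value in set(row[best] for row in data):
        subset = [row for row in data if row[best] == value]
        node[best][value] = split(subset, [a for a in attributes if a != best], target)
    return node

Calling split(data, ["age", "income", "student", "credit_rating"], "buys_computer") on the Buys_computer table from slide 16 should reproduce the tree shown on slide 17.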

22 Brief Review of Entropy
Entropy (information theory): a measure of uncertainty associated with a random variable.
Calculation: for a discrete random variable Y taking m distinct values {y_1, y_2, ..., y_m} with probabilities p_1, ..., p_m: H(Y) = -\sum_{i=1}^{m} p_i \log_2 p_i
Interpretation: higher entropy means higher uncertainty; lower entropy means lower uncertainty.
Conditional entropy: H(Y|X) = \sum_x p(x) H(Y|X = x)
(The slide illustrates the m = 2 case.)
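For intuition, the short sketch below (illustrative, not from the slide) evaluates the entropy of a binary variable (m = 2) at a few probabilities, showing that entropy peaks where the outcome is most uncertain.

# Entropy of a discrete distribution, evaluated for the m = 2 case.
import math

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

for p in (0.01, 0.25, 0.5, 0.75, 0.99):
    print(f"p = {p:.2f}  ->  H = {entropy([p, 1 - p]):.3f} bits")
# H is maximal (1 bit) at p = 0.5 and approaches 0 as the outcome becomes certain.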

23 Attribute Selection Measure: Information Gain (ID3/C4.5)
Select the attribute with the highest information gain.
Let p_i be the probability that an arbitrary tuple in D belongs to class C_i, estimated by |C_{i,D}| / |D|.
Expected information (entropy) needed to classify a tuple in D: Info(D) = -\sum_{i=1}^{m} p_i \log_2(p_i)
Information needed (after using A to split D into v partitions) to classify D: Info_A(D) = \sum_{j=1}^{v} (|D_j| / |D|) \times Info(D_j)
Information gained by branching on attribute A: Gain(A) = Info(D) - Info_A(D)

24 Attribute Selection: Information Gain
Class P: buys_computer = yes; Class N: buys_computer = no
(Training data: the Buys_computer table on slide 16.)
How do we select the first attribute?

25 Attribute Selection: Information Gain
Class P: buys_computer = yes; Class N: buys_computer = no (9 yes, 5 no)
Info(D) = I(9,5) = -(9/14) \log_2(9/14) - (5/14) \log_2(5/14) = 0.940

26 Attribute Selection: Information Gain
Class P: buys_computer = yes; Class N: buys_computer = no
Info(D) = I(9,5) = -(9/14) \log_2(9/14) - (5/14) \log_2(5/14) = 0.940
Look at age:
  age     p_i  n_i  I(p_i, n_i)
  <=30    2    3    0.971
  31..40  4    0    0
  >40     3    2    0.971

27 Attribute Selection: Information Gain
Class P: buys_computer = yes; Class N: buys_computer = no
Info(D) = I(9,5) = 0.940
Look at age:
  age     p_i  n_i  I(p_i, n_i)
  <=30    2    3    0.971
  31..40  4    0    0
  >40     3    2    0.971
Info_age(D) = (5/14) I(2,3) + (4/14) I(4,0) + (5/14) I(3,2) = 0.694

28 Attribute Selection: Information Gain
Class P: buys_computer = yes; Class N: buys_computer = no
Look at age:
  age     p_i  n_i  I(p_i, n_i)
  <=30    2    3    0.971
  31..40  4    0    0
  >40     3    2    0.971
Info_age(D) = (5/14) I(2,3) + (4/14) I(4,0) + (5/14) I(3,2) = 0.694
(5/14) I(2,3) means that age <=30 has 5 out of the 14 samples, with 2 yes's and 3 no's.

29 Attribute Selection: Information Gain
Class P: buys_computer = yes; Class N: buys_computer = no
Info(D) = I(9,5) = -(9/14) \log_2(9/14) - (5/14) \log_2(5/14) = 0.940
Info_age(D) = (5/14) I(2,3) + (4/14) I(4,0) + (5/14) I(3,2) = 0.694
Gain(age) = Info(D) - Info_age(D) = 0.246
Similarly: Gain(income) = 0.029, Gain(student) = 0.151, Gain(credit_rating) = 0.048
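These numbers can be reproduced directly from the Buys_computer table; the sketch below (illustrative, not from the slides) computes Info(D) and the gain of each attribute.

# Reproduce Info(D) = 0.940 and the information gains on this slide
# from the Buys_computer training data (slide 16).
import math
from collections import Counter

rows = [  # (age, income, student, credit_rating, buys_computer)
    ("<=30", "high", "no", "fair", "no"),          ("<=30", "high", "no", "excellent", "no"),
    ("31..40", "high", "no", "fair", "yes"),       (">40", "medium", "no", "fair", "yes"),
    (">40", "low", "yes", "fair", "yes"),          (">40", "low", "yes", "excellent", "no"),
    ("31..40", "low", "yes", "excellent", "yes"),  ("<=30", "medium", "no", "fair", "no"),
    ("<=30", "low", "yes", "fair", "yes"),         (">40", "medium", "yes", "fair", "yes"),
    ("<=30", "medium", "yes", "excellent", "yes"), ("31..40", "medium", "no", "excellent", "yes"),
    ("31..40", "high", "yes", "fair", "yes"),      (">40", "medium", "no", "excellent", "no"),
]
attrs = {"age": 0, "income": 1, "student": 2, "credit_rating": 3}

def info(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

labels = [r[-1] for r in rows]
print(f"Info(D) = {info(labels):.3f}")                     # 0.940
for name, idx in attrs.items():
    info_a = 0.0
    for v in set(r[idx] for r in rows):
        part = [r[-1] for r in rows if r[idx] == v]
        info_a += len(part) / len(rows) * info(part)
    print(f"Gain({name}) = {info(labels) - info_a:.3f}")
# Prints Gain(age)=0.247, Gain(income)=0.029, Gain(student)=0.152, Gain(credit_rating)=0.048;
# the slide rounds Info(D) and Info_A(D) to three decimals first, giving 0.246 and 0.151.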

30 Recursive Procedure
(Training data: the Buys_computer table on slide 16; one branch's tuples are highlighted in red on the slide.)
1. After selecting age at the root node, we create three child nodes.
2. One child node is associated with the red (highlighted) data tuples.
3. How do we continue for this child node?

31 Recursive Procedure
1. After selecting age at the root node, we create three child nodes.
2. One child node is associated with the red (highlighted) data tuples.
3. How do we continue for this child node?
Now make D = {red data tuples} and select the best attribute to further split D: a recursive procedure.

32 Computing Information-Gain for Continuous-Valued Attributes Let attribute A be a continuous-valued attribute

33 Computing Information-Gain for Continuous-Valued Attributes
Let attribute A be a continuous-valued attribute.
Must determine the best split point for A (or discretize): D1 is the set of tuples in D satisfying A <= split-point, and D2 is the set of tuples in D satisfying A > split-point.

34 Computing Information-Gain for Continuous-Valued Attributes
Let attribute A be a continuous-valued attribute.
Must determine the best split point for A:
Sort the values of A in increasing order.
Typically, the midpoint between each pair of adjacent values is considered as a possible split point: (a_i + a_{i+1})/2 is the midpoint between the values a_i and a_{i+1}.
The point with the minimum expected information requirement for A (i.e., Info_A(D)) is selected as the split point for A.

35 Computing Information-Gain for Continuous-Valued Attributes
Let attribute A be a continuous-valued attribute.
Must determine the best split point for A:
Sort the values of A in increasing order.
Typically, the midpoint between each pair of adjacent values is considered as a possible split point: (a_i + a_{i+1})/2 is the midpoint between the values a_i and a_{i+1}.
The point with the minimum expected information requirement for A (i.e., Info_A(D)) is selected as the split point for A.
Split: D1 is the set of tuples in D satisfying A <= split-point, and D2 is the set of tuples in D satisfying A > split-point.
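A small sketch of this procedure (illustrative; it assumes the attribute is given as a list of (value, class label) pairs): sort the values, evaluate Info_A(D) at each midpoint of adjacent values, and keep the minimum.

# Choose the best split point for a continuous attribute A.
import math
from collections import Counter

def info(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_split_point(pairs):                    # pairs: list of (value, class_label)
    pairs = sorted(pairs)
    best = (float("inf"), None)
    for (a_i, _), (a_next, _) in zip(pairs, pairs[1:]):
        if a_i == a_next:
            continue                            # no split between identical values
        mid = (a_i + a_next) / 2
        d1 = [label for v, label in pairs if v <= mid]
        d2 = [label for v, label in pairs if v > mid]
        info_a = (len(d1) * info(d1) + len(d2) * info(d2)) / len(pairs)
        best = min(best, (info_a, mid))
    return best                                 # (Info_A(D), split point)

# Taxable-income example from the later slides (values in K, Cheat labels):
data = [(60, "No"), (70, "No"), (75, "No"), (85, "Yes"), (90, "Yes"),
        (95, "Yes"), (100, "No"), (120, "No"), (125, "No"), (220, "No")]
print(best_split_point(data))   # (0.6, 97.5): essentially the same threshold the Gini scan on slide 61 selects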

How to Select Test Attribute? Depends on attribute types Nominal Ordinal Continuous Depends on number of ways to split 2-way split Multi-way split 36

37 Splitting Based on Nominal Attributes
Multi-way split: use as many partitions as distinct values, e.g., CarType -> {Family}, {Sports}, {Luxury}
Binary split: divides values into two subsets; need to find the optimal partitioning, e.g., CarType -> {Sports, Luxury} vs. {Family}, or CarType -> {Family, Luxury} vs. {Sports}

38 Splitting Based on Continuous Attributes
(i) Binary split: e.g., Taxable Income > 80K? -> Yes / No
(ii) Multi-way split: e.g., Taxable Income? -> < 10K, [10K, 25K), [25K, 50K), [50K, 80K), >= 80K

39 How to Determine the Best Split
Greedy approach: nodes with a homogeneous class distribution are preferred; ideally, the data tuples at a node all belong to the same class.
Example: a node with class counts C0: 5, C1: 5 is non-homogeneous (high degree of impurity); a node with C0: 9, C1: 1 is homogeneous (low degree of impurity).

40 Rethink about Decision Tree Classification
Greedy approach: nodes with a homogeneous class distribution are preferred.
Need a measure of node impurity: e.g., C0: 5, C1: 5 is non-homogeneous (high degree of impurity), while C0: 9, C1: 1 is homogeneous (low degree of impurity).

41 Measures of Node Impurity
Entropy: higher entropy means higher uncertainty and higher node impurity (this is why entropy is used in information gain)
Gini index
Misclassification error

42 Potential Problems with Information Gain Information gain measure is biased towards attributes with a large number of values (or, a large number of small partitions)

Potential Problems with Information Gain Information gain measure is biased towards attributes with a large number of values (or, a large number of small partitions) Consider an attribute that acts as a unique identifier, such as student_id 43

Potential Problems with Information Gain Information gain measure is biased towards attributes with a large number of values (or, a large number of small partitions) Consider an attribute that acts as a unique identifier, such as student_id A split on Student_ID will result in many partitions, each one containing just one tuple. 44

45 Potential Problems with Information Gain
The information gain measure is biased towards attributes with a large number of values (or, a large number of small partitions).
Consider an attribute that acts as a unique identifier, such as student_id.
A split on student_id will result in many partitions, each one containing just one tuple.
Information required to classify data set D after this partitioning: Info_A(D) = \sum_{j=1}^{v} (|D_j| / |D|) \times Info(D_j) = ? (each single-tuple partition is pure, so Info_A(D) = 0 and the gain is maximal, even though the split is useless)

46 Gain Ratio for Attribute Selection (C4.5) Information gain measure is biased towards attributes with a large number of values C4.5 (a successor of ID3) uses gain ratio to overcome the problem (normalization to information gain)

47 Gain Ratio for Attribute Selection (C4.5)
The information gain measure is biased towards attributes with a large number of values.
C4.5 (a successor of ID3) uses gain ratio to overcome the problem (normalization of information gain):
SplitInfo_A(D) = -\sum_{j=1}^{v} (|D_j| / |D|) \log_2(|D_j| / |D|)
This is the entropy of the partitioning, or the potential information generated by splitting D into v partitions.

48 Gain Ratio for Attribute Selection (C4.5)
The information gain measure is biased towards attributes with a large number of values.
C4.5 (a successor of ID3) uses gain ratio to overcome the problem (normalization of information gain):
SplitInfo_A(D) = -\sum_{j=1}^{v} (|D_j| / |D|) \log_2(|D_j| / |D|)
This is the entropy of the partitioning, or the potential information generated by splitting D into v partitions.
GainRatio(A) = Gain(A) / SplitInfo_A(D) (normalizing information gain)

49 Gain Ratio for Attribute Selection (C4.5)
C4.5 (a successor of ID3) uses gain ratio to overcome the problem (normalization of information gain):
GainRatio(A) = Gain(A) / SplitInfo_A(D)
Ex.: Gain(income) = 0.029 (from last class, slide 27)
SplitInfo_income(D) = -(4/14) \log_2(4/14) - (6/14) \log_2(6/14) - (4/14) \log_2(4/14) = 1.557
GainRatio(income) = 0.029 / 1.557 = 0.019
The attribute with the maximum gain ratio is selected as the splitting attribute.
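A short check of these numbers (illustrative): the 14 tuples split on income into partitions of size 4 (high), 6 (medium), and 4 (low).

# Verify SplitInfo_income(D) = 1.557 and GainRatio(income) = 0.029 / 1.557 = 0.019.
import math

def split_info(sizes):
    total = sum(sizes)
    return -sum((s / total) * math.log2(s / total) for s in sizes)

si = split_info([4, 6, 4])                        # income partitions: high, medium, low
print(f"SplitInfo_income(D) = {si:.3f}")          # 1.557
print(f"GainRatio(income)   = {0.029 / si:.3f}")  # 0.019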

50 Another Attribute Selection Measure: Gini Index (CART, IBM IntelligentMiner)
If a data set D contains examples from n classes, the gini index gini(D) is defined as gini(D) = 1 - \sum_{j=1}^{n} p_j^2, where p_j is the relative frequency of class j in D.
The gini index measures the impurity of D: the larger, the more impure.

51 Another Attribute Selection Measure: Gini Index (CART, IBM IntelligentMiner)
If a data set D contains examples from n classes, the gini index gini(D) is defined as gini(D) = 1 - \sum_{j=1}^{n} p_j^2, where p_j is the relative frequency of class j in D.
Ex.: D has 9 tuples with buys_computer = yes and 5 with no: gini(D) = 1 - (9/14)^2 - (5/14)^2 = 0.459

52 Gini Index (CART, IBM IntelligentMiner)
If a data set D contains examples from n classes, the gini index gini(D) is defined as gini(D) = 1 - \sum_{j=1}^{n} p_j^2, where p_j is the relative frequency of class j in D.
If a data set D is split on A into two subsets D_1 and D_2, the gini index after the split is defined as gini_A(D) = (|D_1|/|D|) gini(D_1) + (|D_2|/|D|) gini(D_2)

53 Gini Index (CART, IBM IntelligentMiner)
If a data set D contains examples from n classes, the gini index gini(D) is defined as gini(D) = 1 - \sum_{j=1}^{n} p_j^2, where p_j is the relative frequency of class j in D.
If a data set D is split on A into two subsets D_1 and D_2, the gini index after the split is defined as gini_A(D) = (|D_1|/|D|) gini(D_1) + (|D_2|/|D|) gini(D_2)
Reduction in impurity: \Delta gini(A) = gini(D) - gini_A(D)

54 Gini Index (CART, IBM IntelligentMiner)
If a data set D contains examples from n classes, the gini index gini(D) is defined as gini(D) = 1 - \sum_{j=1}^{n} p_j^2, where p_j is the relative frequency of class j in D.
If a data set D is split on A into two subsets D_1 and D_2, the gini index after the split is defined as gini_A(D) = (|D_1|/|D|) gini(D_1) + (|D_2|/|D|) gini(D_2)
Reduction in impurity: \Delta gini(A) = gini(D) - gini_A(D)
The attribute that provides the smallest gini_A(D) (or, the largest reduction in impurity) is chosen to split the node.
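A minimal sketch of these definitions (illustrative), checked against the buys_computer example (9 yes, 5 no); the binary split used below is the student = yes / student = no partition of that same table.

# Gini index of a node and of a split, as defined on these slides.
def gini(counts):
    total = sum(counts)
    return 1.0 - sum((c / total) ** 2 for c in counts)

def gini_split(partitions):              # partitions: list of class-count lists
    total = sum(sum(p) for p in partitions)
    return sum(sum(p) / total * gini(p) for p in partitions)

print(f"gini(D) for 9 yes / 5 no: {gini([9, 5]):.3f}")        # 0.459
d1, d2 = [6, 1], [3, 4]                  # class counts under student = yes / student = no
print(f"delta gini for the student split = {gini([9, 5]) - gini_split([d1, d2]):.3f}")   # 0.092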

56 Binary Attributes: Computing Gini Index
Splits into two partitions. Effect of weighting partitions: larger and purer partitions are sought.
gini(D) = 1 - \sum_{j=1}^{n} p_j^2
Example: split on attribute B (Yes -> node N1, No -> node N2). Parent: C1 = 6, C2 = 6, Gini = ?

57 Binary Attributes: Computing Gini Index
Splits into two partitions. Effect of weighting partitions: larger and purer partitions are sought.
Parent: C1 = 6, C2 = 6, Gini = 1 - (6/12)^2 - (6/12)^2 = 0.500
Split on B:   N1  N2
        C1    5   1
        C2    2   4
Gini(N1) = 1 - (5/7)^2 - (2/7)^2 = 0.408
Gini(N2) = 1 - (1/5)^2 - (4/5)^2 = 0.320
Gini of the split = ?

58 Binary Attributes: Computing Gini Index
Splits into two partitions. Effect of weighting partitions: larger and purer partitions are sought.
Parent: C1 = 6, C2 = 6, Gini = 0.500
Split on B:   N1  N2
        C1    5   1
        C2    2   4
Gini(N1) = 1 - (5/7)^2 - (2/7)^2 = 0.408
Gini(N2) = 1 - (1/5)^2 - (4/5)^2 = 0.320
Gini(Children) = (7/12) x 0.408 + (5/12) x 0.320 = 0.371 (weighted by partition size)
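The weighted computation above can be verified in a few lines (illustrative sketch).

# Verify the weighted Gini of the split on B: N1 = (C1=5, C2=2), N2 = (C1=1, C2=4).
def gini(counts):
    total = sum(counts)
    return 1.0 - sum((c / total) ** 2 for c in counts)

n1, n2 = [5, 2], [1, 4]
children = (sum(n1) * gini(n1) + sum(n2) * gini(n2)) / (sum(n1) + sum(n2))
print(round(gini(n1), 3), round(gini(n2), 3), round(children, 3))   # 0.408 0.32 0.371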

59 Categorical Attributes: Computing Gini Index
For each distinct value, gather counts for each class in the dataset; use the count matrix to make decisions.
Multi-way split:
  CarType  Family  Sports  Luxury
  C1       1       2       1
  C2       4       1       1
  Gini = 0.393
Two-way split (find the best partition of values):
  CarType  {Sports, Luxury}  {Family}
  C1       3                 1
  C2       2                 4
  Gini = 0.400
  CarType  {Sports}  {Family, Luxury}
  C1       2         2
  C2       1         5
  Gini = 0.419

60 Continuous Attributes: Computing Gini Index
Use binary decisions based on one value, e.g., Taxable Income > 80K? Yes / No.
Several choices for the splitting value: number of possible splitting values = number of distinct values - 1. Typically, the midpoint between each pair of adjacent values is considered as a possible split point: (a_i + a_{i+1})/2 is the midpoint between the values a_i and a_{i+1}.
Each splitting value v has a count matrix associated with it: class counts in each of the partitions, A < v and A >= v.
Simple method to choose the best v: for each v, scan the database to gather the count matrix and compute its Gini index. Computationally inefficient! Repetition of work.
Example data:
  Tid  Refund  Marital Status  Taxable Income  Cheat
  1    Yes     Single          125K            No
  2    No      Married         100K            No
  3    No      Single          70K             No
  4    Yes     Married         120K            No
  5    No      Divorced        95K             Yes
  6    No      Married         60K             No
  7    Yes     Divorced        220K            No
  8    No      Single          85K             Yes
  9    No      Married         75K             No
  10   No      Single          90K             Yes

61 Continuous Attributes: Computing Gini Index...
For efficient computation, for each attribute:
Sort the attribute on values.
Linearly scan these values, each time updating the count matrix and computing the Gini index.
Choose the split position that has the least Gini index.
Sorted values (Taxable Income / Cheat): 60/No, 70/No, 75/No, 85/Yes, 90/Yes, 95/Yes, 100/No, 120/No, 125/No, 220/No
Candidate split positions with class counts in the (<=, >) partitions:
  Split  Yes(<=, >)  No(<=, >)  Gini
  55     0, 3        0, 7       0.420
  65     0, 3        1, 6       0.400
  72     0, 3        2, 5       0.375
  80     0, 3        3, 4       0.343
  87     1, 2        3, 4       0.417
  92     2, 1        3, 4       0.400
  97     3, 0        3, 4       0.300
  110    3, 0        4, 3       0.343
  122    3, 0        5, 2       0.375
  172    3, 0        6, 1       0.400
  230    3, 0        7, 0       0.420
The best split position is 97, with Gini = 0.300.
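A sketch of the linear scan (illustrative): sort by value once, then sweep through the sorted list, updating the two class-count sides incrementally instead of rescanning the whole data set at every candidate split.

# Efficient Gini-based split search for a continuous attribute.
from collections import Counter

def gini(counts):
    total = sum(counts.values())
    return 1.0 - sum((c / total) ** 2 for c in counts.values()) if total else 0.0

def best_gini_split(pairs):                      # pairs: list of (value, class_label)
    pairs = sorted(pairs)
    left, right = Counter(), Counter(label for _, label in pairs)
    n, best = len(pairs), (float("inf"), None)
    for i, (value, label) in enumerate(pairs[:-1]):
        left[label] += 1                         # move one record from right to left
        right[label] -= 1
        if value == pairs[i + 1][0]:
            continue                             # no valid split between equal values
        split = (value + pairs[i + 1][0]) / 2
        weighted = ((i + 1) * gini(left) + (n - i - 1) * gini(right)) / n
        best = min(best, (weighted, split))
    return best                                  # (weighted Gini, split point)

data = [(60, "No"), (70, "No"), (75, "No"), (85, "Yes"), (90, "Yes"),
        (95, "Yes"), (100, "No"), (120, "No"), (125, "No"), (220, "No")]
print(best_gini_split(data))                     # (0.3, 97.5): matches the table's best Gini of 0.300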

62 Attribute Selection: Complexity Note
Class P: buys_computer = yes; Class N: buys_computer = no
(Training data: the Buys_computer table on slide 16.)
Look at age:
  age     p_i  n_i  I(p_i, n_i)
  <=30    2    3    0.971
  31..40  4    0    0
  >40     3    2    0.971
When selecting an attribute, we scan the data set at each node to gather the count matrix.

63 Another Impurity Measure: Misclassification Error
Classification error at a node t: Error(t) = 1 - \max_i P(i|t), where P(i|t) is the relative frequency of class i at node t.
Measures the misclassification error made by a node.
Maximum (1 - 1/n_c) when records are equally distributed among all classes, implying the most impurity.
Minimum (0.0) when all records belong to one class, implying the least impurity.

64 Examples for Misclassification Error
Error(t) = 1 - \max_i P(i|t)
Node with C1 = 0, C2 = 6: P(C1) = 0/6 = 0, P(C2) = 6/6 = 1; Error = 1 - max(0, 1) = 1 - 1 = 0
Node with C1 = 1, C2 = 5: P(C1) = 1/6, P(C2) = 5/6; Error = 1 - max(1/6, 5/6) = 1 - 5/6 = 1/6
Node with C1 = 2, C2 = 4: P(C1) = 2/6, P(C2) = 4/6; Error = 1 - max(2/6, 4/6) = 1 - 4/6 = 1/3
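The three impurity measures can be compared on the same example nodes (illustrative sketch), anticipating the comparison on the next slide.

# Entropy, Gini index, and misclassification error for the example nodes above.
import math

def entropy(counts):
    n = sum(counts)
    return -sum((c / n) * math.log2(c / n) for c in counts if c)

def gini(counts):
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

def error(counts):
    return 1.0 - max(counts) / sum(counts)

for counts in ([0, 6], [1, 5], [2, 4]):
    print(counts, round(entropy(counts), 3), round(gini(counts), 3), round(error(counts), 3))
# [0, 6]: 0.0    0.0    0.0
# [1, 5]: 0.65   0.278  0.167
# [2, 4]: 0.918  0.444  0.333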

65 Comparison among Impurity Measures
For a 2-class problem: (figure comparing the impurity measures as a function of the class probability; not reproduced in this transcription)

66 Comparing Attribute Selection Measures
The three measures, in general, return good results, but:
Information gain: biased towards multivalued attributes.
Gain ratio: tends to prefer unbalanced splits in which one partition is much smaller than the others.
Gini index: biased towards multivalued attributes; has difficulty when the number of classes is large; tends to favor tests that result in equal-sized partitions with purity in both partitions.

67 Other Attribute Selection Measures
CHAID: a popular decision tree algorithm; measure based on the χ² test for independence.
C-SEP: performs better than information gain and the Gini index in certain cases.
G-statistic: has a close approximation to the χ² distribution.
MDL (Minimal Description Length) principle (i.e., the simplest solution is preferred): the best tree is the one that requires the fewest bits to both (1) encode the tree and (2) encode the exceptions to the tree.
Multivariate splits (partition based on multiple variable combinations): CART finds multivariate splits based on a linear combination of attributes.
Which attribute selection measure is the best? Most give good results; none is significantly superior to the others.

Decision Tree Based Classification Advantages: Inexpensive to construct Extremely fast at classifying unknown records Easy to interpret for small-sized trees Accuracy is comparable to other classification techniques for many simple data sets 68

69 Example: C4.5
Simple depth-first construction. Uses information gain. Sorts continuous attributes at each node. Needs the entire data set to fit in memory; unsuitable for large datasets (would need out-of-core sorting).
You can download the software online, e.g., http://www2.cs.uregina.ca/~dbd/cs831/notes/ml/dtrees/c4.5/tutorial.html

70 Practical Issues of Classification Underfitting and Overfitting Missing Values Costs of Classification

71 Underfitting and Overfitting
(Figure: training and test error versus model complexity, illustrating overfitting.)
Underfitting: when the model is too simple, both training and test errors are large.

72 Overfitting due to Noise Decision boundary is distorted by noise point

73 Overfitting due to Insufficient Examples
Lack of data points in the lower half of the diagram makes it difficult to correctly predict the class labels of that region.
An insufficient number of training records in the region causes the decision tree to predict the test examples using other training records that are irrelevant to the classification task.

74 Notes on Overfitting Overfitting results in decision trees that are more complex than necessary Training error no longer provides a good estimate of how well the tree will perform on previously unseen records Need new ways for estimating errors