CSE4334/5334 DATA MINING Lecture 4: Classification (1) CSE4334/5334 Data Mining, Fall 2014 Department of Computer Science and Engineering, University of Texas at Arlington Chengkai Li (Slides courtesy of Vipin Kumar)

Classification: Definition Given a collection of records (training set), where each record contains a set of attributes, one of which is the class. Find a model for the class attribute as a function of the values of the other attributes. Goal: previously unseen records should be assigned a class as accurately as possible. A test set is used to determine the accuracy of the model. Usually, the given data set is divided into training and test sets: the training set is used to build the model and the test set to validate it. Lecture 4: Classification (I) 2
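As a minimal sketch of this workflow, the following Python fragment learns a model from a labeled training set and measures its accuracy on a held-out test set. The record format, attribute names, and the deliberately trivial majority-class "model" are illustrative assumptions, not the lecture's method:

```python
# Sketch: learn a model from labeled training records, then compute its
# accuracy on a held-out test set. The "model" is the simplest possible one
# (always predict the most common training class).

from collections import Counter

def learn_majority_model(training_set):
    """Return a model that always predicts the majority class of the training set."""
    majority = Counter(cls for _, cls in training_set).most_common(1)[0][0]
    return lambda record: majority

def accuracy(model, test_set):
    """Fraction of test records whose predicted class matches the true class."""
    correct = sum(1 for record, cls in test_set if model(record) == cls)
    return correct / len(test_set)

# Toy records: (attribute dict, class label). Attribute names are illustrative.
train = [({"Refund": "Yes"}, "No"), ({"Refund": "No"}, "No"), ({"Refund": "No"}, "Yes")]
test = [({"Refund": "No"}, "No"), ({"Refund": "Yes"}, "Yes")]

model = learn_majority_model(train)
print(accuracy(model, test))  # majority class is "No": 1 of 2 test records match -> 0.5
```

Real classifiers replace `learn_majority_model` with an actual learning algorithm (decision trees, nearest neighbors, etc.), but the train/validate split is the same.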

Illustrating Classification Task

Training Set:
Tid  Refund  Marital Status  Taxable Income  Cheat
1    Yes     Single          125K            No
2    No      Married         100K            No
3    No      Single          70K             No
4    Yes     Married         120K            No
5    No      Divorced        95K             Yes
6    No      Married         60K             No
7    Yes     Divorced        220K            No
8    No      Single          85K             Yes
9    No      Married         75K             No
10   No      Single          90K             Yes

A learning algorithm performs induction on the Training Set to learn a model; the model is then applied (deduction) to a Test Set of records whose class is unknown (Tid 11-15, Cheat = ?). Lecture 4: Classification (I) 3

Examples of Classification Task Predicting tumor cells as benign or malignant Classifying credit card transactions as legitimate or fraudulent Categorizing news stories as finance, weather, entertainment, sports, etc. Give me more Examples. Lecture 4: Classification (I) 4

Classification vs. Prediction Classification predicts categorical class labels. Most suited for nominal attributes (e.g., gender, color); less effective for ordinal attributes (e.g., temperature, rank). Prediction models continuous-valued functions or ordinal attributes, i.e., predicts unknown or missing values (e.g., linear regression). Lecture 4: Classification (I) 5

Supervised vs. Unsupervised Learning Supervised learning (classification): Supervision means the training data (observations, measurements, etc.) are accompanied by labels indicating the class of the observations. New data is classified based on the training set. Unsupervised learning (clustering): The class labels of the training data are unknown. Given a set of measurements, observations, etc., the aim is to establish the existence of classes or clusters in the data. Lecture 4: Classification (I) 6

Classification Techniques Decision Tree based Methods Rule-based Methods Nearest-Neighbor Classifiers Naïve Bayes Classifiers and Bayesian Belief Networks Neural Networks Support Vector Machines Lecture 4: Classification (I) 7

Example of a Decision Tree Training data: the 10-record Refund / Marital Status / Taxable Income / Cheat table above. Model: a decision tree with splitting attributes Refund, MarSt, TaxInc:

Refund = Yes -> NO
Refund = No:
    MarSt = Married -> NO
    MarSt = Single or Divorced:
        TaxInc < 80K -> NO
        TaxInc >= 80K -> YES

Lecture 4: Classification (I) 8

Another Example of a Decision Tree Built from the same training data, but splitting on MarSt first:

MarSt = Married -> NO
MarSt = Single or Divorced:
    Refund = Yes -> NO
    Refund = No:
        TaxInc < 80K -> NO
        TaxInc >= 80K -> YES

There could be more than one tree that fits the same data! Lecture 4: Classification (I) 9

Decision Tree Classification Task Same setup as the classification-task illustration above, with the learning algorithm instantiated as a Tree Induction algorithm: induction on the Training Set learns a Decision Tree model, which is then applied to the Test Set (deduction). Lecture 4: Classification (I) 10

Apply Model to Test Data Start from the root of the tree. Test record: Refund = No, Marital Status = Married, Taxable Income = 80K, Cheat = ?

Step 1: The root tests Refund. The record has Refund = No, so follow the No branch (Refund = Yes would lead directly to leaf NO).
Step 2: The No branch tests MarSt. The record has Marital Status = Married, so follow the Married branch (Single or Divorced would lead on to the TaxInc test, with < 80K -> NO and >= 80K -> YES).
Step 3: The Married branch ends in a leaf labeled NO, so assign Cheat = "No" to the record. Lecture 4: Classification (I) 11-16
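The walkthrough above can be hand-coded as a small Python sketch. The attribute names and the 80K threshold come from the slides; the dict-based record format is an assumption for illustration:

```python
# The decision tree from the slides, written as nested tests, applied to the
# test record (Refund = No, Marital Status = Married, Taxable Income = 80K).

def classify(record):
    if record["Refund"] == "Yes":
        return "No"                        # Refund = Yes -> leaf NO
    if record["Marital Status"] == "Married":
        return "No"                        # Married -> leaf NO
    if record["Taxable Income"] < 80:      # Single or Divorced: test income (in K)
        return "No"
    return "Yes"

print(classify({"Refund": "No", "Marital Status": "Married", "Taxable Income": 80}))
# -> "No": the record reaches the Married leaf, so Cheat is assigned "No"
```

A learned tree is usually stored as a data structure and traversed generically; hard-coding it as above just makes the slide's root-to-leaf path explicit.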

Decision Tree Classification Task Tree Induction algorithm: Learn Model from the Training Set (induction); Apply Model (the Decision Tree) to the Test Set (deduction). Lecture 4: Classification (I) 17

Decision Tree Induction Large search space: the number of possible trees is exponential in the number of attributes, so finding the optimal decision tree is computationally infeasible. Instead, efficient algorithms find accurate but suboptimal decision trees. Greedy strategy: grow the tree by making locally optimal decisions when selecting the splitting attributes. Lecture 4: Classification (I) 18

Decision Tree Induction Many algorithms: Hunt's Algorithm (one of the earliest, basis of the others), CART, ID3, C4.5, SLIQ, SPRINT. Lecture 4: Classification (I) 19

General Structure of Hunt's Algorithm Let D_t be the set of training records that reach a node t. General procedure: If D_t contains only records that belong to the same class y_t, then t is a leaf node labeled y_t. If D_t is an empty set, then t is a leaf node labeled by the default class, y_d. If D_t contains records that belong to more than one class, use an attribute test to split the data into smaller subsets, and recursively apply the procedure to each subset. (Applied to the 10-record Refund / Marital Status / Taxable Income / Cheat training set above, D_t at the root is all 10 records.) Lecture 4: Classification (I) 20
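A minimal recursive sketch of this procedure follows. The record representation and the first-unused-attribute test-selection rule are placeholder assumptions; real induction chooses the attribute test using an impurity measure, which the next slides cover:

```python
# Recursive sketch of Hunt's algorithm. Records are (attribute_dict, class)
# pairs. Leaves are class labels; internal nodes are (attribute, children).

from collections import Counter

def hunt(records, attributes, default_class):
    if not records:
        return default_class                 # empty D_t: leaf with default class y_d
    classes = {cls for _, cls in records}
    if len(classes) == 1:
        return classes.pop()                 # pure D_t: leaf labeled y_t
    if not attributes:                       # no tests left: fall back to majority class
        return Counter(cls for _, cls in records).most_common(1)[0][0]
    attr = attributes[0]                     # placeholder test-selection rule
    majority = Counter(cls for _, cls in records).most_common(1)[0][0]
    children = {}
    for value in {rec[attr] for rec, _ in records}:
        subset = [(rec, cls) for rec, cls in records if rec[attr] == value]
        children[value] = hunt(subset, attributes[1:], majority)
    return (attr, children)                  # internal node: one subtree per value

# Splitting a two-record toy set on Refund yields two pure leaves:
print(hunt([({"Refund": "Yes"}, "No"), ({"Refund": "No"}, "Yes")], ["Refund"], "No"))
```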

Hunt's Algorithm (on the training data above)
Step 1: A single leaf labeled Don't Cheat (the majority class).
Step 2: Split on Refund. Refund = Yes -> Don't Cheat; Refund = No -> Don't Cheat (still impure).
Step 3: Under Refund = No, split on Marital Status. Single, Divorced -> Cheat (still impure); Married -> Don't Cheat.
Step 4: Under Single/Divorced, split on Taxable Income. < 80K -> Don't Cheat; >= 80K -> Cheat. Lecture 4: Classification (I) 21

Tree Induction Greedy strategy: split the records based on an attribute test that optimizes a certain criterion. Issues: Determine how to split the records (How to specify the attribute test condition? How to determine the best split?); Determine when to stop splitting. Lecture 4: Classification (I) 22


How to Specify the Test Condition? Depends on attribute type: Nominal, Ordinal, Continuous. Depends on the number of ways to split: 2-way split, Multi-way split. Lecture 4: Classification (I) 24

Splitting Based on Nominal Attributes Multi-way split: use as many partitions as distinct values. Example: CarType -> Family | Sports | Luxury. Binary split: divide the values into two subsets; need to find the optimal partitioning. Example: CarType -> {Sports, Luxury} | {Family}, or {Family, Luxury} | {Sports}. Lecture 4: Classification (I) 25

Splitting Based on Ordinal Attributes Multi-way split: use as many partitions as distinct values. Example: Size -> Small | Medium | Large. Binary split: divide the values into two subsets; need to find the optimal partitioning. Example: Size -> {Small, Medium} | {Large}, or {Medium, Large} | {Small}. What about the split {Small, Large} | {Medium}? (It violates the order of the ordinal attribute.) Lecture 4: Classification (I) 26

Splitting Based on Continuous Attributes Different ways of handling: Discretization to form an ordinal categorical attribute (Static: discretize once at the beginning; Dynamic: ranges can be found by equal-interval bucketing, equal-frequency bucketing (percentiles), or clustering). Binary decision: (A < v) or (A >= v); consider all possible splits and find the best cut; can be more computationally intensive. Lecture 4: Classification (I) 27

Splitting Based on Continuous Attributes (i) Binary split: Taxable Income > 80K? -> Yes | No. (ii) Multi-way split: Taxable Income? -> < 10K | [10K, 25K) | [25K, 50K) | [50K, 80K) | > 80K. Lecture 4: Classification (I) 28

Tree Induction Greedy strategy: split the records based on an attribute test that optimizes a certain criterion. Issues: Determine how to split the records (How to specify the attribute test condition? How to determine the best split?); Determine when to stop splitting. Lecture 4: Classification (I) 29

How to Determine the Best Split Before splitting: 10 records of class 0 (C0) and 10 records of class 1 (C1). Three candidate test conditions:
Own Car? Yes: C0 = 6, C1 = 4; No: C0 = 4, C1 = 6
Car Type? Family: C0 = 1, C1 = 3; Sports: C0 = 8, C1 = 0; Luxury: C0 = 1, C1 = 7
Student ID? c_1 ... c_20: each child contains a single record (C0 = 1, C1 = 0 or C0 = 0, C1 = 1)
Which test condition is the best? Lecture 4: Classification (I) 30

How to Determine the Best Split Greedy approach: nodes with a homogeneous class distribution are preferred. Need a measure of node impurity: C0 = 5, C1 = 5 is non-homogeneous (high degree of impurity); C0 = 9, C1 = 1 is homogeneous (low degree of impurity). Lecture 4: Classification (I) 31

Measures of Node Impurity Gini Index; Entropy; Misclassification error. Lecture 4: Classification (I) 32

How to Find the Best Split Before splitting, the node has class counts C0 = N00, C1 = N01, with impurity measure M0. Candidate split A? (Yes/No) produces nodes N1 (counts N10, N11) and N2 (counts N20, N21), whose impurities M1 and M2 combine (weighted) into M12. Candidate split B? (Yes/No) produces nodes N3 (counts N30, N31) and N4 (counts N40, N41), whose impurities combine into M34. Gain = M0 - M12 vs. M0 - M34 (called Information Gain when Entropy is used as M); choose the split with the larger gain. Lecture 4: Classification (I) 33

Measure of Impurity: GINI Gini index for a given node t:
    GINI(t) = 1 - sum_j [p(j|t)]^2
(NOTE: p(j|t) is the relative frequency of class j at node t.) Maximum (1 - 1/n_c, where n_c is the number of classes) when records are equally distributed among all classes, implying the least interesting information. Minimum (0.0) when all records belong to one class, implying the most interesting information.
C1 = 0, C2 = 6: Gini = 0.000
C1 = 1, C2 = 5: Gini = 0.278
C1 = 2, C2 = 4: Gini = 0.444
C1 = 3, C2 = 3: Gini = 0.500 Lecture 4: Classification (I) 34

Examples of Computing GINI GINI(t) = 1 - sum_j [p(j|t)]^2
C1 = 0, C2 = 6: P(C1) = 0/6 = 0, P(C2) = 6/6 = 1. Gini = 1 - P(C1)^2 - P(C2)^2 = 1 - 0 - 1 = 0
C1 = 1, C2 = 5: P(C1) = 1/6, P(C2) = 5/6. Gini = 1 - (1/6)^2 - (5/6)^2 = 0.278
C1 = 2, C2 = 4: P(C1) = 2/6, P(C2) = 4/6. Gini = 1 - (2/6)^2 - (4/6)^2 = 0.444 Lecture 4: Classification (I) 35
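The worked examples can be checked with a few lines of Python (a sketch: `gini` takes a list of per-class record counts at a node):

```python
# Gini index of a node from its class counts: GINI(t) = 1 - sum_j p(j|t)^2.

def gini(counts):
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

print(gini([0, 6]))           # 0.0   -- pure node: most interesting
print(round(gini([1, 5]), 3)) # 0.278
print(round(gini([2, 4]), 3)) # 0.444
print(round(gini([3, 3]), 3)) # 0.5   -- uniform node: maximum for two classes
```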

Splitting Based on GINI Used in CART, SLIQ, SPRINT. When a node p is split into k partitions (children), the quality of the split is computed as
    GINI_split = sum_{i=1..k} (n_i / n) * GINI(i)
where n_i = number of records at child i, and n = number of records at node p.

How to Find the Best Split As in the best-split illustration above (candidate splits A? and B? with children N1/N2 and N3/N4), each candidate is scored with GINI_split = sum_{i=1..k} (n_i / n) GINI(i), and the gains M0 - M12 vs. M0 - M34 are compared (Information Gain, if Entropy is used as M).

Binary Attributes: Computing the GINI Index Splits into two partitions. Effect of weighing partitions: larger and purer partitions are sought. Parent: C1 = 6, C2 = 6, Gini = 0.500. Split B? -> Node N1 (C1 = 5, C2 = 2) and Node N2 (C1 = 1, C2 = 4):
Gini(N1) = 1 - (5/7)^2 - (2/7)^2 = 0.408
Gini(N2) = 1 - (1/5)^2 - (4/5)^2 = 0.320
Gini(Children) = 7/12 * 0.408 + 5/12 * 0.320 = 0.371
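The weighted-children computation above can be reproduced directly (a sketch; each child is given as its list of per-class counts):

```python
# Quality of a split: GINI_split = sum_i (n_i / n) * GINI(i), the
# size-weighted Gini of the children. Reproduces the B? split example:
# children N1 = (5, 2) and N2 = (1, 4), parent (6, 6).

def gini(counts):
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

def gini_split(children):
    n = sum(sum(child) for child in children)
    return sum(sum(child) / n * gini(child) for child in children)

print(round(gini([6, 6]), 3))                  # 0.5   -- parent impurity
print(round(gini_split([[5, 2], [1, 4]]), 3))  # 0.371 -- weighted children
```

The split is worthwhile because the weighted child impurity (0.371) is lower than the parent's (0.500).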

Categorical Attributes: Computing the Gini Index For each distinct value, gather counts for each class in the dataset; use the count matrix to make decisions.
Multi-way split (CarType):
        Family  Sports  Luxury
C1      1       8       1
C2      3       0       7
Gini = 0.163
Two-way split (find the best partition of values):
        {Sports, Luxury}  {Family}
C1      9                 1
C2      7                 3
Gini = 0.468
        {Sports}  {Family, Luxury}
C1      8         2
C2      0         10
Gini = 0.167 Lecture 4: Classification (I) 39

Continuous Attributes: Computing the Gini Index Use binary decisions based on one value v (e.g., Taxable Income > 80K?). Several choices for the splitting value: the number of possible splitting values = the number of distinct values. Each splitting value v has a count matrix associated with it: the class counts in each of the partitions, A < v and A >= v. Simple method to choose the best v: for each v, scan the database to gather the count matrix and compute its Gini index. Computationally inefficient! Repetition of work.

Continuous Attributes: Computing the Gini Index... For efficient computation, for each attribute: sort the attribute on its values; linearly scan these values, each time updating the count matrix and computing the Gini index; choose the split position that has the least Gini index.
Sorted Taxable Income values (Cheat = Yes marked): 60, 70, 75, 85 (Yes), 90 (Yes), 95 (Yes), 100, 120, 125, 220.
Candidate split positions and their Gini indices:
55 -> 0.420, 65 -> 0.400, 72 -> 0.375, 80 -> 0.343, 87 -> 0.417, 92 -> 0.400, 97 -> 0.300, 110 -> 0.343, 122 -> 0.375, 172 -> 0.400, 230 -> 0.420
Best split: 97, with Gini = 0.300. Lecture 4: Classification (I) 41
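A simple version of this search can be sketched in Python. It uses the Taxable Income / Cheat data from the training table; note that it takes exact midpoints between consecutive sorted values, so the best position comes out as 97.5 rather than the slide's rounded 97 (the Gini value, 0.300, matches):

```python
# Search for the best binary split on a continuous attribute: sort the
# records on the attribute, then evaluate the weighted Gini of every
# candidate midpoint between consecutive values.

def gini(counts):
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

def class_counts(labels):
    return [labels.count("Yes"), labels.count("No")]

def best_split(values, labels):
    pairs = sorted(zip(values, labels))
    n = len(pairs)
    best_v, best_g = None, float("inf")
    for i in range(1, n):
        v = (pairs[i - 1][0] + pairs[i][0]) / 2   # candidate split position
        left = class_counts([lab for _, lab in pairs[:i]])
        right = class_counts([lab for _, lab in pairs[i:]])
        g = i / n * gini(left) + (n - i) / n * gini(right)
        if g < best_g:
            best_v, best_g = v, g
    return best_v, best_g

# Taxable Income and Cheat label for training records 1..10:
incomes = [125, 100, 70, 120, 95, 60, 220, 85, 75, 90]
cheat = ["No", "No", "No", "No", "Yes", "No", "No", "Yes", "No", "Yes"]
print(best_split(incomes, cheat))  # best position 97.5, Gini 0.3
```

This sketch recounts the labels at every position for clarity; the slide's trick is to update one running count matrix during the scan instead, which makes the whole search a single linear pass after sorting.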

Alternative Splitting Criteria Based on INFO Entropy at a given node t:
    Entropy(t) = - sum_j p(j|t) log2 p(j|t)
(NOTE: p(j|t) is the relative frequency of class j at node t.) Measures the homogeneity of a node. Maximum (log n_c) when records are equally distributed among all classes, implying the least information. Minimum (0.0) when all records belong to one class, implying the most information. Entropy-based computations are similar to the GINI index computations. Lecture 4: Classification (I) 42

Examples of Computing Entropy Entropy(t) = - sum_j p(j|t) log2 p(j|t)
C1 = 0, C2 = 6: P(C1) = 0/6 = 0, P(C2) = 6/6 = 1. Entropy = -0 log 0 - 1 log 1 = 0 - 0 = 0
C1 = 1, C2 = 5: P(C1) = 1/6, P(C2) = 5/6. Entropy = -(1/6) log2 (1/6) - (5/6) log2 (5/6) = 0.65
C1 = 2, C2 = 4: P(C1) = 2/6, P(C2) = 4/6. Entropy = -(2/6) log2 (2/6) - (4/6) log2 (4/6) = 0.92 Lecture 4: Classification (I) 43
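As with Gini, the worked examples are easy to verify in Python (a sketch; per-class counts in, entropy in bits out, with the 0 log 0 = 0 convention):

```python
import math

# Entropy of a node from class counts: Entropy(t) = -sum_j p(j|t) log2 p(j|t),
# taking 0 * log(0) as 0 by skipping empty classes.

def entropy(counts):
    n = sum(counts)
    return sum(-(c / n) * math.log2(c / n) for c in counts if c > 0)

print(entropy([0, 6]))           # 0.0  -- pure node
print(round(entropy([1, 5]), 2)) # 0.65
print(round(entropy([2, 4]), 2)) # 0.92
print(entropy([3, 3]))           # 1.0  -- maximum (log2 of 2 classes)
```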

Splitting Based on INFO... Information Gain:
    GAIN_split = Entropy(p) - sum_{i=1..k} (n_i / n) Entropy(i)
where the parent node p is split into k partitions, and n_i is the number of records in partition i. Measures the reduction in entropy achieved because of the split. Choose the split that achieves the most reduction (maximizes GAIN). Used in ID3 and C4.5. Disadvantage: tends to prefer splits that result in a large number of partitions, each being small but pure.

Splitting Based on INFO... Gain Ratio:
    GainRATIO_split = GAIN_split / SplitINFO
    SplitINFO = - sum_{i=1..k} (n_i / n) log2 (n_i / n)
where the parent node p is split into k partitions, and n_i is the number of records in partition i. Adjusts the information gain by the entropy of the partitioning (SplitINFO): higher-entropy partitioning (a large number of small partitions) is penalized! Used in C4.5; designed to overcome the disadvantage of information gain.
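Both formulas can be sketched together in Python. The example numbers reuse the CarType multi-way count matrix (Family, Sports, Luxury) from the categorical-attribute slide, with a balanced parent of 10 + 10 records; the rounded results are computed from these counts, not taken from the slides:

```python
import math

# Information gain and gain ratio for a candidate split, following the
# formulas above. Counts are per-class record counts: parent before the
# split, and one count list per child partition.

def entropy(counts):
    n = sum(counts)
    return sum(-(c / n) * math.log2(c / n) for c in counts if c > 0)

def info_gain(parent, children):
    n = sum(parent)
    return entropy(parent) - sum(sum(ch) / n * entropy(ch) for ch in children)

def gain_ratio(parent, children):
    n = sum(parent)
    split_info = -sum(sum(ch) / n * math.log2(sum(ch) / n) for ch in children)
    return info_gain(parent, children) / split_info

parent = [10, 10]                             # C1, C2 counts before the split
children = [[1, 3], [8, 0], [1, 7]]           # Family, Sports, Luxury
print(round(info_gain(parent, children), 3))  # 0.62
print(round(gain_ratio(parent, children), 3)) # 0.408 -- SplitINFO penalizes 3 partitions
```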

Splitting Criteria Based on Classification Error Classification error at a node t:
    Error(t) = 1 - max_i P(i|t)
Measures the misclassification error made by a node. Maximum (1 - 1/n_c) when records are equally distributed among all classes, implying the least interesting information. Minimum (0.0) when all records belong to one class, implying the most interesting information. Lecture 4: Classification (I) 46

Examples of Computing Error Error(t) = 1 - max_i P(i|t)
C1 = 0, C2 = 6: P(C1) = 0, P(C2) = 1. Error = 1 - max(0, 1) = 1 - 1 = 0
C1 = 1, C2 = 5: P(C1) = 1/6, P(C2) = 5/6. Error = 1 - max(1/6, 5/6) = 1 - 5/6 = 1/6
C1 = 2, C2 = 4: P(C1) = 2/6, P(C2) = 4/6. Error = 1 - max(2/6, 4/6) = 1 - 4/6 = 1/3 Lecture 4: Classification (I) 47
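The same examples in Python (a sketch; exact fractions are used so the output matches the slide's 1/6 and 1/3):

```python
from fractions import Fraction

# Classification error of a node: Error(t) = 1 - max_i P(i|t).

def classification_error(counts):
    n = sum(counts)
    return 1 - Fraction(max(counts), n)

print(classification_error([0, 6]))  # 0
print(classification_error([1, 5]))  # 1/6
print(classification_error([2, 4]))  # 1/3
```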

Comparison Among Splitting Criteria For a 2-class problem (p is the fraction of records belonging to one of the two classes): all three measures reach their maximum at p = 0.5 and their minimum (0) at p = 0 and p = 1; for a given p, Entropy >= Gini >= Misclassification error. Lecture 4: Classification (I) 48

Tree Induction Greedy strategy: split the records based on an attribute test that optimizes a certain criterion. Issues: Determine how to split the records (How to specify the attribute test condition? How to determine the best split?); Determine when to stop splitting. Lecture 4: Classification (I) 49

Stopping Criteria for Tree Induction Stop expanding a node when all of its records belong to the same class. Stop expanding a node when all of its records have the same (or similar) attribute values; what to do with the remaining class mix? Label the leaf by majority voting. Early termination (to be discussed later). Lecture 4: Classification (I) 50

Decision Tree Based Classification Advantages: inexpensive to construct; extremely fast at classifying unknown records; easy to interpret for small-sized trees; accuracy comparable to other classification techniques for many simple data sets. Lecture 4: Classification (I) 51

Example: C4.5 Simple depth-first construction. Uses information gain. Sorts continuous attributes at each node. Needs the entire data set to fit in memory; unsuitable for large datasets (would need out-of-core sorting). You can download the software from: http://www.cse.unsw.edu.au/~quinlan/c4.5r8.tar.gz Lecture 4: Classification (I) 52