COMP 465: Data Mining Classification Basics

Supervised vs. Unsupervised Learning
(Slides adapted from: Jiawei Han, Micheline Kamber & Jian Pei, Data Mining: Concepts and Techniques, 3rd ed.)

- Supervised learning (classification):
  - Supervision: the training data (observations, measurements, etc.) are accompanied by labels indicating the class of the observations.
  - New data is classified based on the training set.
- Unsupervised learning (clustering):
  - The class labels of the training data are unknown.
  - Given a set of measurements, observations, etc., the aim is to establish the existence of classes or clusters in the data.

Prediction Problems: Classification vs. Numeric Prediction

- Classification: predicts categorical class labels (discrete or nominal); classifies data, i.e., constructs a model based on the training set and the values (class labels) of a classifying attribute, and uses it to classify new data.
- Numeric prediction: models continuous-valued functions, i.e., predicts unknown or missing values.
- Typical applications:
  - Credit/loan approval
  - Medical diagnosis: is a tumor cancerous or benign?
  - Fraud detection: is a transaction fraudulent?
  - Web page categorization: which category does a page belong to?

Classification: A Two-Step Process

- Model construction: describing a set of predetermined classes.
  - Each tuple/sample is assumed to belong to a predefined class, as determined by the class label attribute.
  - The set of tuples used for model construction is the training set.
  - The model is represented as classification rules, decision trees, or mathematical formulae.
- Model usage: classifying future or unknown objects.
  - Estimate the accuracy of the model: the known label of each test sample is compared with the model's prediction; the accuracy rate is the percentage of test-set samples correctly classified by the model.
  - The test set is independent of the training set (otherwise overfitting results).
  - If the accuracy is acceptable, use the model to classify new data.
- Note: if the test set is used to select models, it is called a validation (test) set.
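The two-step process maps directly onto the fit/predict pattern of most machine-learning libraries. Below is a minimal sketch using scikit-learn; the library choice, the toy data, and all variable names are illustrative assumptions, not part of the lecture:

```python
# Minimal sketch of the two-step classification process (illustrative).
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Invented toy feature matrix (e.g., encoded rank/years) and class labels.
X = [[0, 3], [0, 7], [2, 2], [1, 7], [0, 6], [1, 3]]
y = ["no", "yes", "yes", "yes", "no", "no"]

# The test set must be held out from training (otherwise overfitting).
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=0)

model = DecisionTreeClassifier()    # Step 1: model construction
model.fit(X_train, y_train)

y_pred = model.predict(X_test)      # Step 2: model usage
print("Accuracy:", accuracy_score(y_test, y_pred))  # fraction correctly classified
```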

Process (1): Model Construction

Training data are fed to a classification algorithm, which produces a classifier (model).

NAME   RANK            YEARS   TENURED
Mike   Assistant Prof  3       no
Mary   Assistant Prof  7       yes
Bill   Professor       2       yes
Jim    Associate Prof  7       yes
Dave   Assistant Prof  6       no
Anne   Associate Prof  3       no

Learned classifier (model):
IF rank = 'professor' OR years > 6 THEN tenured = 'yes'

Process (2): Using the Model in Prediction

Testing data:

NAME     RANK            YEARS   TENURED
Tom      Assistant Prof  2       no
Merlisa  Associate Prof  7       no
George   Professor       5       yes
Joseph   Assistant Prof  7       yes

Unseen data: (Jeff, Professor, 4) -- Tenured?

Decision Tree Induction: An Example

- Training data set: buys_computer; the data set follows an example from Quinlan's ID3 (Play Tennis).
- Resulting tree:

  age?
    <=30   -> student?
                no  -> no
                yes -> yes
    31..40 -> yes
    >40    -> credit_rating?
                excellent -> no
                fair      -> yes

age     income  student  credit_rating  buys_computer
<=30    high    no       fair           no
<=30    high    no       excellent      no
31..40  high    no       fair           yes
>40     medium  no       fair           yes
>40     low     yes      fair           yes
>40     low     yes      excellent      no
31..40  low     yes      excellent      yes
<=30    medium  no       fair           no
<=30    low     yes      fair           yes
>40     medium  yes      fair           yes
<=30    medium  yes      excellent      yes
31..40  medium  no       excellent      yes
31..40  high    yes      fair           yes
>40     medium  no       excellent      no

Algorithm for Decision Tree Induction

- Basic algorithm (a greedy algorithm; see the sketch after this list):
  - The tree is constructed in a top-down, recursive, divide-and-conquer manner.
  - At the start, all the training examples are at the root.
  - Attributes are categorical (continuous-valued attributes are discretized in advance).
  - Examples are partitioned recursively based on selected attributes.
  - Test attributes are selected on the basis of a heuristic or statistical measure (e.g., information gain).
- Conditions for stopping partitioning:
  - All samples for a given node belong to the same class.
  - There are no remaining attributes for further partitioning (majority voting is employed to classify the leaf).
  - There are no samples left.
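A compact way to see the greedy top-down recursion is a small ID3-style sketch in Python. This is my own illustration, not the lecture's code; the attribute-selection heuristic is left as a pluggable scoring function (an information-gain scorer appears in the sketch after the next slides):

```python
# ID3-style top-down decision tree induction (illustrative sketch).
from collections import Counter

def majority_class(rows, target):
    """Majority vote over the class labels of `rows`."""
    return Counter(row[target] for row in rows).most_common(1)[0][0]

def build_tree(rows, attributes, target, select_attribute):
    classes = {row[target] for row in rows}
    if len(classes) == 1:            # all samples in one class -> leaf
        return classes.pop()
    if not attributes:               # no attributes left -> majority vote
        return majority_class(rows, target)
    best = select_attribute(rows, attributes, target)
    node = {best: {}}
    # Partition on each observed value of the chosen attribute; the full
    # algorithm also labels empty branches with the parent's majority class.
    for value in {row[best] for row in rows}:
        subset = [row for row in rows if row[best] == value]
        remaining = [a for a in attributes if a != best]
        node[best][value] = build_tree(subset, remaining, target, select_attribute)
    return node

# Tiny usage example with a trivial scorer that just picks the first attribute.
rows = [{"age": "<=30", "student": "no", "buys": "no"},
        {"age": "<=30", "student": "yes", "buys": "yes"},
        {"age": ">40", "student": "no", "buys": "yes"}]
print(build_tree(rows, ["age", "student"], "buys", lambda r, a, t: a[0]))
```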

Attribute Selection Measure: Information Gain (ID3/C4.5)

- Select the attribute with the highest information gain.
- Let p_i be the probability that an arbitrary tuple in D belongs to class C_i, estimated by |C_{i,D}| / |D|.
- Expected information (entropy) needed to classify a tuple in D:

  Info(D) = -\sum_{i=1}^{m} p_i \log_2(p_i)

- Information needed (after using A to split D into v partitions) to classify D:

  Info_A(D) = \sum_{j=1}^{v} \frac{|D_j|}{|D|} \times Info(D_j)

- Information gained by branching on attribute A:

  Gain(A) = Info(D) - Info_A(D)

Attribute Selection: Information Gain

- Class P: buys_computer = "yes" (9 tuples); class N: buys_computer = "no" (5 tuples). For the buys_computer training data above:

  Info(D) = I(9,5) = -\frac{9}{14}\log_2\frac{9}{14} - \frac{5}{14}\log_2\frac{5}{14} = 0.940

- Splitting on age:

  age     p_i  n_i  I(p_i, n_i)
  <=30    2    3    0.971
  31..40  4    0    0
  >40     3    2    0.971

  Info_age(D) = \frac{5}{14} I(2,3) + \frac{4}{14} I(4,0) + \frac{5}{14} I(3,2) = 0.694

  Here \frac{5}{14} I(2,3) means that age <= 30 covers 5 of the 14 samples, with 2 "yes" and 3 "no".

- Hence Gain(age) = Info(D) - Info_age(D) = 0.246.
- Similarly: Gain(income) = 0.029, Gain(student) = 0.151, Gain(credit_rating) = 0.048. (These values are checked in the sketch below.)

Computing Information Gain for Continuous-Valued Attributes

- Let attribute A be a continuous-valued attribute; we must determine the best split point for A.
- Sort the values of A in increasing order.
- Typically, the midpoint between each pair of adjacent values is considered as a possible split point: (a_i + a_{i+1})/2 is the midpoint between the values of a_i and a_{i+1}.
- The point with the minimum expected information requirement for A is selected as the split point for A.
- Split: D_1 is the set of tuples in D satisfying A <= split-point, and D_2 is the set of tuples in D satisfying A > split-point.
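These numbers are easy to check in a few lines of Python. This is my own verification, not from the slides; `data` simply re-encodes the buys_computer table above:

```python
# Verify Info(D) and the information gains for the buys_computer data.
from collections import Counter
from math import log2

data = [  # (age, income, student, credit_rating, buys_computer)
    ("<=30", "high", "no", "fair", "no"), ("<=30", "high", "no", "excellent", "no"),
    ("31..40", "high", "no", "fair", "yes"), (">40", "medium", "no", "fair", "yes"),
    (">40", "low", "yes", "fair", "yes"), (">40", "low", "yes", "excellent", "no"),
    ("31..40", "low", "yes", "excellent", "yes"), ("<=30", "medium", "no", "fair", "no"),
    ("<=30", "low", "yes", "fair", "yes"), (">40", "medium", "yes", "fair", "yes"),
    ("<=30", "medium", "yes", "excellent", "yes"), ("31..40", "medium", "no", "excellent", "yes"),
    ("31..40", "high", "yes", "fair", "yes"), (">40", "medium", "no", "excellent", "no"),
]
ATTRS = {"age": 0, "income": 1, "student": 2, "credit_rating": 3}

def entropy(rows):
    """Info(D) = -sum p_i log2 p_i over the class column (last field)."""
    counts = Counter(r[-1] for r in rows)
    n = len(rows)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def gain(rows, attr):
    """Gain(A) = Info(D) - Info_A(D) for a categorical attribute."""
    i = ATTRS[attr]
    n = len(rows)
    info_a = sum(len(part) / n * entropy(part)
                 for v in {r[i] for r in rows}
                 for part in [[r for r in rows if r[i] == v]])
    return entropy(rows) - info_a

print(round(entropy(data), 3))         # 0.94 -> Info(D) = 0.940
for a in ATTRS:
    print(a, round(gain(data, a), 3))
# age 0.247, income 0.029, student 0.152, credit_rating 0.048 -- matching the
# slide's 0.246 / 0.029 / 0.151 / 0.048 up to intermediate rounding.
```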

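For the continuous-valued case, the candidate split points are just the midpoints of adjacent sorted values. A short sketch (my own illustration; it reuses `entropy` from the previous snippet and represents each tuple as a (value, class) pair):

```python
# Best binary split point for a continuous attribute (illustrative).
def best_split_point(rows):
    """rows: list of (value, class) pairs; returns (split point, Info_A(D))."""
    values = sorted({v for v, _ in rows})
    best = None
    for a, b in zip(values, values[1:]):
        mid = (a + b) / 2                       # midpoint (a_i + a_{i+1}) / 2
        d1 = [r for r in rows if r[0] <= mid]   # A <= split-point
        d2 = [r for r in rows if r[0] > mid]    # A >  split-point
        n = len(rows)
        info = len(d1) / n * entropy(d1) + len(d2) / n * entropy(d2)
        if best is None or info < best[1]:      # minimum expected information
            best = (mid, info)
    return best

print(best_split_point([(25, "no"), (32, "yes"), (41, "yes"), (50, "no")]))
```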
Gain Ratio for Attribute Selection (C4.5)

- The information gain measure is biased towards attributes with a large number of values.
- C4.5 (a successor of ID3) uses gain ratio to overcome the problem (a normalization of information gain):

  SplitInfo_A(D) = -\sum_{j=1}^{v} \frac{|D_j|}{|D|} \log_2\left(\frac{|D_j|}{|D|}\right)

  GainRatio(A) = Gain(A) / SplitInfo_A(D)

- Ex.: SplitInfo_income(D) = 1.557, so gain_ratio(income) = 0.029 / 1.557 = 0.019.
- The attribute with the maximum gain ratio is selected as the splitting attribute.

Gini Index (CART, IBM IntelligentMiner)

- If a data set D contains examples from n classes, the gini index gini(D) is defined as

  gini(D) = 1 - \sum_{j=1}^{n} p_j^2

  where p_j is the relative frequency of class j in D.
- If data set D is split on A into two subsets D_1 and D_2, the gini index of the split is defined as

  gini_A(D) = \frac{|D_1|}{|D|} gini(D_1) + \frac{|D_2|}{|D|} gini(D_2)

- Reduction in impurity:

  \Delta gini(A) = gini(D) - gini_A(D)

- The attribute that provides the smallest gini_split(D) (or, equivalently, the largest reduction in impurity) is chosen to split the node. (This requires enumerating all the possible splitting points for each attribute A.)

Computation of Gini Index

- Example: D has 9 tuples with buys_computer = "yes" and 5 with "no":

  gini(D) = 1 - \left(\frac{9}{14}\right)^2 - \left(\frac{5}{14}\right)^2 = 0.459

- Suppose the attribute income partitions D into D_1 = {low, medium} with 10 tuples and D_2 = {high} with 4 tuples:

  gini_{income \in \{low, medium\}}(D) = \frac{10}{14} gini(D_1) + \frac{4}{14} gini(D_2) = 0.443

- Gini_{low,high} is 0.458 and Gini_{medium,high} is 0.450. Thus the split on {low, medium} (and {high}) is chosen, since it has the lowest gini index. (The gain-ratio and gini values above are checked in the sketches following the next slide.)
- All attributes are assumed continuous-valued; other tools (e.g., clustering) may be needed to get the possible split values. The measure can be modified for categorical attributes.

Comparing Attribute Selection Measures

The three measures, in general, return good results, but:
- Information gain: biased towards multivalued attributes.
- Gain ratio: tends to prefer unbalanced splits in which one partition is much smaller than the others.
- Gini index: biased towards multivalued attributes; has difficulty when the number of classes is large; tends to favor tests that result in equal-sized partitions with purity in both partitions.
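A quick check of the gain-ratio numbers (my own verification; it builds on `data`, `ATTRS`, `gain`, `Counter`, and `log2` from the information-gain sketch above):

```python
# Gain ratio = Gain(A) / SplitInfo_A(D), reusing data/ATTRS/gain from above.
def split_info(rows, attr):
    """SplitInfo_A(D) = -sum (|Dj|/|D|) log2(|Dj|/|D|) over A's partitions."""
    i = ATTRS[attr]
    n = len(rows)
    sizes = Counter(r[i] for r in rows).values()
    return -sum((s / n) * log2(s / n) for s in sizes)

def gain_ratio(rows, attr):
    return gain(rows, attr) / split_info(rows, attr)

print(round(split_info(data, "income"), 3))  # 1.557
print(round(gain_ratio(data, "income"), 3))  # 0.019
```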

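The gini computations can be checked the same way (again my own illustration, reusing `data`, `ATTRS`, and `Counter` from the earlier sketch):

```python
# Gini index for D and for binary income splits of the buys_computer data.
def gini(rows):
    """gini(D) = 1 - sum p_j^2 over the class column (last field)."""
    counts = Counter(r[-1] for r in rows)
    n = len(rows)
    return 1 - sum((c / n) ** 2 for c in counts.values())

def gini_split(rows, attr, left_values):
    """Weighted gini of the binary split A in left_values vs. the rest."""
    i = ATTRS[attr]
    d1 = [r for r in rows if r[i] in left_values]
    d2 = [r for r in rows if r[i] not in left_values]
    n = len(rows)
    return len(d1) / n * gini(d1) + len(d2) / n * gini(d2)

print(round(gini(data), 3))                                     # 0.459
print(round(gini_split(data, "income", {"low", "medium"}), 3))  # 0.443
print(round(gini_split(data, "income", {"low", "high"}), 3))    # 0.458
print(round(gini_split(data, "income", {"medium", "high"}), 3)) # 0.45 (slide: 0.450)
```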
Other Attribute Selection Measures

- CHAID: a popular decision tree algorithm; its measure is based on the χ² test for independence.
- C-SEP: performs better than information gain and the gini index in certain cases.
- G-statistic: has a close approximation to the χ² distribution.
- MDL (Minimal Description Length) principle (i.e., the simplest solution is preferred): the best tree is the one that requires the fewest bits to both (1) encode the tree and (2) encode the exceptions to the tree.
- Multivariate splits (partitions based on combinations of multiple variables): CART finds multivariate splits based on a linear combination of attributes.
- Which attribute selection measure is the best? Most give good results; none is significantly superior to the others.

Overfitting and Tree Pruning

- Overfitting: an induced tree may overfit the training data:
  - Too many branches, some of which may reflect anomalies due to noise or outliers.
  - Poor accuracy for unseen samples.
- Two approaches to avoid overfitting (a postpruning sketch follows these slides):
  - Prepruning: halt tree construction early; do not split a node if doing so would cause the goodness measure to fall below a threshold. It is difficult to choose an appropriate threshold.
  - Postpruning: remove branches from a "fully grown" tree to obtain a sequence of progressively pruned trees; use a set of data different from the training data to decide which is the best pruned tree.

Enhancements to Basic Decision Tree Induction

- Allow for continuous-valued attributes: dynamically define new discrete-valued attributes that partition the continuous attribute values into a discrete set of intervals.
- Handle missing attribute values: assign the most common value of the attribute, or assign a probability to each of the possible values.
- Attribute construction: create new attributes based on existing ones that are sparsely represented. This reduces fragmentation, repetition, and replication.

Classification in Large Databases

- Classification is a classical problem extensively studied by statisticians and machine learning researchers.
- Scalability: classifying data sets with millions of examples and hundreds of attributes at reasonable speed.
- Why is decision tree induction popular?
  - Relatively fast learning speed (compared with other classification methods).
  - Convertible to simple and easy-to-understand classification rules.
  - Can use SQL queries for accessing databases.
  - Comparable classification accuracy with other methods.
- RainForest (VLDB'98, Gehrke, Ramakrishnan & Ganti): builds an AVC-list (attribute, value, class label).
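The postpruning idea on the slides -- grow a full tree, generate a sequence of progressively pruned trees, and pick the winner on non-training data -- can be illustrated with scikit-learn's cost-complexity pruning. Note this is a CART-style pruning mechanism of my choosing; the library, dataset, and parameters are assumptions, not the lecture's method:

```python
# Postpruning sketch: grow a full tree, derive a pruning sequence,
# and choose the tree that does best on data held out from training.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

# Each alpha on the cost-complexity path yields a progressively smaller tree.
path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(
    X_train, y_train)

best_alpha, best_score = 0.0, 0.0
for alpha in path.ccp_alphas:
    tree = DecisionTreeClassifier(random_state=0, ccp_alpha=alpha)
    tree.fit(X_train, y_train)
    score = tree.score(X_val, y_val)   # evaluated on non-training data
    if score >= best_score:            # ties go to the more heavily pruned tree
        best_alpha, best_score = alpha, score

print("best alpha:", best_alpha, "validation accuracy:", round(best_score, 3))
```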

Summary

- Classification is a form of data analysis that extracts models describing important data classes.
- Effective and scalable methods have been developed for decision tree induction.
- Coming up: more classification algorithms; more evaluation metrics, including accuracy, sensitivity, specificity, precision, recall, the F measure, and the F_β measure; stratified k-fold cross-validation; bagging and boosting.