CS570: Introduction to Data Mining


CS570: Introduction to Data Mining
Classification (Advanced). Reading: Chapter 8.4 & 8.5 (Han), Chapters 4.5 & 4.6 (Tan)
Anca Doloc-Mihu, Ph.D.
Slides courtesy of Li Xiong, Ph.D., and of 2011 Han, Kamber & Pei, Data Mining, Morgan Kaufmann, and 2006 Tan, Steinbach & Kumar, Introduction to Data Mining, Pearson Addison Wesley.
September 26, 2013

Classification and Prediction
- Last lecture: overview; decision tree induction; Bayesian classification
- Today: training (learning) Bayesian networks; kNN classification and collaborative filtering; Support Vector Machines (SVM); Neural Networks; regression; model evaluation; rule-based methods
- Upcoming lectures: ensemble methods (Bagging, Random Forests, AdaBoost)

Model Evaluation
- Metrics for Performance Evaluation of a Classifier
- Methods for Model Comparison (selecting the best classifier)
- Methods for Performance Evaluation of a Classifier

Metrics for Performance Evaluation
- Accuracy (recognition rate)
- Error rate (misclassification rate)
- Sensitivity (recall)
- Specificity
- Precision
- F1 score (F-measure or F-score)

Metrics for Performance Evaluation
- Focus on the predictive capability of a model.
- Accuracy of a classifier: percentage of test set tuples that are correctly classified by the model. Limitations?
- Binary classification: Accuracy = (TP + TN) / (TP + TN + FP + FN)
- Error rate (misclassification rate) = 1 - accuracy
- Confusion matrix: given m classes, CM_ij indicates the # of tuples in class i that are labeled by the classifier as class j.
- Binary classification confusion matrix:

                     PREDICTED positive     PREDICTED negative
  ACTUAL positive    TP (true positive)     FN (false negative)
  ACTUAL negative    FP (false positive)    TN (true negative)
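As a minimal sketch (not from the slides; function and variable names are illustrative), the binary confusion-matrix counts and the accuracy/error-rate formulas above can be computed from paired label lists:

```python
def confusion_counts(y_true, y_pred, positive="yes"):
    """Count TP, FN, FP, TN for a binary classification problem."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    return tp, fn, fp, tn

def accuracy(tp, fn, fp, tn):
    return (tp + tn) / (tp + tn + fp + fn)

tp, fn, fp, tn = confusion_counts(["yes", "yes", "no", "no"], ["yes", "no", "no", "yes"])
print(accuracy(tp, fn, fp, tn), 1 - accuracy(tp, fn, fp, tn))  # accuracy and error rate
```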

Limitation of Accuracy
- Consider a 2-class problem: number of Class 0 examples = 9990, number of Class 1 examples = 10.
- If the model predicts everything to be Class 0, accuracy is 9990/10000 = 99.9%.
- Accuracy is misleading because the model does not detect any Class 1 example.
- Accuracy is most effective when the class distribution is relatively balanced.

Cost-Sensitive Measures (using the binary confusion matrix above)
- False Positive Rate (FPR) = FP / (FP + TN)
- False Negative Rate (FNR) = FN / (TP + FN)
- Sensitivity (recall, true positive rate) = TP / (TP + FN)
- Specificity (true negative rate) = TN / (FP + TN)
- Precision = TP / (TP + FP)
- Recall = TP / (TP + FN)
- F-measure = 2 * Precision * Recall / (Precision + Recall)
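A small sketch (illustrative names, not from the slides) that turns the four counts into the cost-sensitive measures listed above:

```python
def cost_sensitive_measures(tp, fn, fp, tn):
    """Derive the rate-based measures from the binary confusion matrix counts."""
    sensitivity = tp / (tp + fn)          # recall / true positive rate
    specificity = tn / (fp + tn)          # true negative rate
    fpr = fp / (fp + tn)                  # false positive rate = 1 - specificity
    fnr = fn / (tp + fn)                  # false negative rate = 1 - sensitivity
    precision = tp / (tp + fp)
    f_measure = 2 * precision * sensitivity / (precision + sensitivity)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "FPR": fpr, "FNR": fnr, "precision": precision, "F": f_measure}

print(cost_sensitive_measures(tp=70, fn=30, fp=20, tn=80))
```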

Predictor Error Measures
- Measure predictor accuracy: how far off the predicted value is from the actual known value.
- Loss function: measures the error between y_i and the predicted value y_i'.
  - Absolute error: |y_i - y_i'|
  - Squared error: (y_i - y_i')^2
- Test error (generalization error): the average loss over the test set.
  - Mean absolute error: MAE = (1/d) * sum_{i=1..d} |y_i - y_i'|
  - Mean squared error: MSE = (1/d) * sum_{i=1..d} (y_i - y_i')^2
  - Relative absolute error: RAE = sum_{i=1..d} |y_i - y_i'| / sum_{i=1..d} |y_i - ybar|
  - Relative squared error: RSE = sum_{i=1..d} (y_i - y_i')^2 / sum_{i=1..d} (y_i - ybar)^2
- The mean squared error exaggerates the presence of outliers.
- Popularly used: the (square) root mean squared error and, similarly, the root relative squared error.
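A short sketch (illustrative names) computing the error measures above for a numeric predictor:

```python
import math

def error_measures(y_true, y_pred):
    """MAE, MSE, RMSE, RAE, RSE for a numeric predictor."""
    d = len(y_true)
    y_bar = sum(y_true) / d
    abs_err = [abs(t - p) for t, p in zip(y_true, y_pred)]
    sq_err = [(t - p) ** 2 for t, p in zip(y_true, y_pred)]
    mae = sum(abs_err) / d
    mse = sum(sq_err) / d
    rae = sum(abs_err) / sum(abs(t - y_bar) for t in y_true)
    rse = sum(sq_err) / sum((t - y_bar) ** 2 for t in y_true)
    return {"MAE": mae, "MSE": mse, "RMSE": math.sqrt(mse), "RAE": rae, "RSE": rse}

print(error_measures([3.0, 5.0, 7.0, 9.0], [2.5, 5.5, 6.0, 9.5]))
```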

Classifier Accuracy Measures

            Predicted C1       Predicted C2
  Actual C1 True positive      False negative
  Actual C2 False positive     True negative

  classes              buy_computer = yes   buy_computer = no   total    recognition (%)
  buy_computer = yes   6954                 46                  7000     99.34
  buy_computer = no    412                  2588                3000     86.27
  total                7366                 2634                10000    95.42

- Accuracy of a classifier M, acc(M): percentage of test set tuples that are correctly classified by the model M.
- Error rate (misclassification rate) of M = 1 - acc(M).
- Confusion matrix: given m classes, CM_ij indicates the # of tuples in class i that are labeled by the classifier as class j.
- Alternative accuracy measures (e.g., for cancer diagnosis):
  - sensitivity = t_pos / pos  (true positive recognition rate)
  - specificity = t_neg / neg  (true negative recognition rate)
  - precision = t_pos / (t_pos + f_pos)
  - accuracy = sensitivity * pos / (pos + neg) + specificity * neg / (pos + neg)
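To illustrate the decomposition accuracy = sensitivity * pos/(pos+neg) + specificity * neg/(pos+neg), here is a quick check (a sketch, not from the slides) using the buy_computer counts from the table above, treating "yes" as the positive class:

```python
t_pos, f_neg = 6954, 46     # actual yes: predicted yes / predicted no
f_pos, t_neg = 412, 2588    # actual no:  predicted yes / predicted no
pos, neg = t_pos + f_neg, f_pos + t_neg

sensitivity = t_pos / pos                              # about 0.9934
specificity = t_neg / neg                              # about 0.8627
accuracy = (t_pos + t_neg) / (pos + neg)
weighted = sensitivity * pos / (pos + neg) + specificity * neg / (pos + neg)
print(round(accuracy, 4), round(weighted, 4))          # both 0.9542
```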

Model Evaluation
- Metrics for Performance Evaluation
- Methods for Model Comparison
- Methods for Performance Evaluation

Model Comparison: ROC (Receiver Operating Characteristic)
- From signal detection theory.
- Plots the true positive rate vs. the false positive rate, i.e., sensitivity vs. (1 - specificity).
- Each prediction result represents one point (varying threshold, sample distribution, etc.).
- In the ROC plot, the top-left corner corresponds to perfect classification; the diagonal is the line of no discrimination.

How to Construct an ROC Curve

  Instance   P(+|A)   True Class
  1          0.95     +
  2          0.93     +
  3          0.87     -
  4          0.85     -
  5          0.85     -
  6          0.85     +
  7          0.76     -
  8          0.53     +
  9          0.43     -
  10         0.25     +

- Sort instances according to the posterior probability P(+|A) in decreasing order.
- Apply a threshold at each unique value of P(+|A).
- Compute and plot TPR and FPR at each threshold.

How to Construct an ROC Curve (continued)

  Class          +      -      +      -      -      -      +      -      +      +
  Threshold >=   0.25   0.43   0.53   0.76   0.85   0.85   0.85   0.87   0.93   0.95   1.00
  TP             5      4      4      3      3      3      3      2      2      1      0
  FP             5      5      4      4      3      2      1      1      0      0      0
  TN             0      0      1      1      2      3      4      4      5      5      5
  FN             0      1      1      2      2      2      2      3      3      4      5
  TPR            1      0.8    0.8    0.6    0.6    0.6    0.6    0.4    0.4    0.2    0
  FPR            1      1      0.8    0.8    0.6    0.4    0.2    0.2    0      0      0

The ROC curve is obtained by plotting the (FPR, TPR) pairs.
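A small pure-Python sketch (illustrative names) that reproduces the (FPR, TPR) pairs in the table above from the ten scored instances:

```python
def roc_points(scores, labels, positive="+"):
    """Sweep a threshold over the scores and return (FPR, TPR) pairs."""
    pos = sum(1 for y in labels if y == positive)
    neg = len(labels) - pos
    points = []
    for thr in sorted(set(scores)) + [1.0]:          # threshold >= each unique score, plus 1.0
        tp = sum(1 for s, y in zip(scores, labels) if s >= thr and y == positive)
        fp = sum(1 for s, y in zip(scores, labels) if s >= thr and y != positive)
        points.append((fp / neg, tp / pos))          # (FPR, TPR)
    return points

scores = [0.95, 0.93, 0.87, 0.85, 0.85, 0.85, 0.76, 0.53, 0.43, 0.25]
labels = ["+", "+", "-", "-", "-", "+", "-", "+", "-", "+"]
print(roc_points(scores, labels))
```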

Using ROC for Model Comparison
- Area Under the ROC Curve (AUC)
- Ideal classification: area = 1
- Diagonal: area = 0.5
- M1 vs. M2?

Test of Significance
- Given two models:
  - Model M1: accuracy = 85%, tested on 30 instances
  - Model M2: accuracy = 75%, tested on 5000 instances
- Can we say M1 is better than M2?
- How much confidence can we place on the accuracies of M1 and M2?
- Can the difference in performance be explained as a result of random fluctuations in the test set?

Confidence Interval for Accuracy
- A prediction can be regarded as a Bernoulli trial: it has 2 possible outcomes, correct or wrong.
- A collection of Bernoulli trials has a binomial distribution.
- Given x (# of correct predictions), or equivalently acc = x/N, and N (# of test instances), can we predict p (the true accuracy of the model)?

Confidence Interval for Accuracy
- For large test sets (N large), acc has a normal distribution with mean p and variance p(1-p)/N:

  P( Z_{α/2} <= (acc - p) / sqrt( p(1-p)/N ) <= Z_{1-α/2} ) = 1 - α

  (the area under the standard normal curve between Z_{α/2} and Z_{1-α/2} is 1 - α)

- Confidence interval for p:

  p = ( 2*N*acc + Z_{α/2}^2 ± Z_{α/2} * sqrt( Z_{α/2}^2 + 4*N*acc - 4*N*acc^2 ) ) / ( 2*(N + Z_{α/2}^2) )

Confidence Interval for Accuracy: Example
- Consider a model that produces an accuracy of 80% when evaluated on 100 test instances: N = 100, acc = 0.8.
- Let 1 - α = 0.95 (95% confidence). From the standard normal table, Z_{α/2} = 1.96.

  1 - α:   0.99   0.98   0.95   0.90
  Z:       2.58   2.33   1.96   1.65

- The interval tightens as the number of test instances N grows:

  N          50      100     500     1000    5000
  p(lower)   0.670   0.711   0.763   0.774   0.789
  p(upper)   0.888   0.866   0.833   0.824   0.811
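As a sketch of the interval formula above (the function name is illustrative), the N = 100 row of the table can be reproduced as follows:

```python
import math

def accuracy_confidence_interval(acc, n, z=1.96):
    """Confidence interval for the true accuracy p, given observed acc on n test instances."""
    center = 2 * n * acc + z ** 2
    spread = z * math.sqrt(z ** 2 + 4 * n * acc - 4 * n * acc ** 2)
    denom = 2 * (n + z ** 2)
    return (center - spread) / denom, (center + spread) / denom

print(accuracy_confidence_interval(0.8, 100))   # about (0.71, 0.87); compare the N = 100 row above
```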

Comparing Performance of 2 Models
- Given two models, say M1 and M2, which is better?
  - M1 is tested on D1 (size = n1), with observed error rate e1.
  - M2 is tested on D2 (size = n2), with observed error rate e2.
- Assume D1 and D2 are independent. Is the observed difference between e1 and e2 statistically significant?
- If n1 and n2 are sufficiently large, we can approximate each error rate by a normal distribution:

  e_i ~ N(mu_i, sigma_i),  with variance estimated by  sigma_i^2 ≈ e_i (1 - e_i) / n_i

Comparing Performance of 2 Models
- To test if the performance difference is statistically significant, consider d = e1 - e2.
- d ~ N(d_t, sigma_t), where d_t is the true difference.
- Since D1 and D2 are independent, their variances add up:

  sigma_t^2 = sigma_1^2 + sigma_2^2 ≈ e1(1 - e1)/n1 + e2(1 - e2)/n2

- At the (1 - α) confidence level:

  d_t = d ± Z_{α/2} * sigma_t

An Illustrative Example
- Given: M1: n1 = 30, e1 = 0.15; M2: n2 = 5000, e2 = 0.25.
- d = e2 - e1 = 0.1 (2-sided test).
- Estimated variance of d:

  sigma_d^2 = 0.15(1 - 0.15)/30 + 0.25(1 - 0.25)/5000 ≈ 0.0043

- At the 95% confidence level, Z_{α/2} = 1.96:

  d_t = 0.1 ± 1.96 * sqrt(0.0043) = 0.1 ± 0.128

- The interval contains 0, so the difference may not be statistically significant.
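A short sketch of this two-model test (the function name is illustrative), reproducing the 0.1 ± 0.128 interval:

```python
import math

def difference_interval(e1, n1, e2, n2, z=1.96):
    """Confidence interval for the true difference in error rates of two models."""
    d = abs(e2 - e1)
    var_d = e1 * (1 - e1) / n1 + e2 * (1 - e2) / n2
    margin = z * math.sqrt(var_d)
    return d - margin, d + margin

lo, hi = difference_interval(e1=0.15, n1=30, e2=0.25, n2=5000)
print(round(lo, 3), round(hi, 3))   # about -0.028 and 0.228; the interval contains 0
```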

Comparing Performance of 2 Algorithms
- Each learning algorithm may produce k models:
  - L1 may produce M11, M12, ..., M1k
  - L2 may produce M21, M22, ..., M2k
- If the models are generated on the same test sets D1, D2, ..., Dk (e.g., via cross-validation):
  - For each set, compute d_j = e_1j - e_2j.
  - d_j has mean d_t and variance sigma_t^2, estimated by

    sigma_t^2 ≈ sum_{j=1..k} (d_j - dbar)^2 / ( k(k - 1) )

  - At the (1 - α) confidence level:

    d_t = dbar ± t_{1-α, k-1} * sigma_t
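A sketch of this paired comparison over k folds (assuming scipy is available for the t critical value; the fold error rates below are hypothetical, and a two-sided critical value is used where the slide writes t_{1-α, k-1}):

```python
import math
from scipy import stats

def paired_difference_interval(errors_1, errors_2, alpha=0.05):
    """Confidence interval for the true difference d_t between two algorithms
    evaluated on the same k folds (paired comparison)."""
    k = len(errors_1)
    diffs = [e1 - e2 for e1, e2 in zip(errors_1, errors_2)]
    d_bar = sum(diffs) / k
    var_t = sum((dj - d_bar) ** 2 for dj in diffs) / (k * (k - 1))
    t_crit = stats.t.ppf(1 - alpha / 2, df=k - 1)   # two-sided critical value
    margin = t_crit * math.sqrt(var_t)
    return d_bar - margin, d_bar + margin

# Hypothetical 5-fold error rates for two algorithms:
print(paired_difference_interval([0.20, 0.18, 0.22, 0.19, 0.21],
                                 [0.24, 0.22, 0.23, 0.25, 0.26]))
```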

Model Evaluation
- Metrics for Performance Evaluation
- Methods for Model Comparison
- Methods for Performance Evaluation

Methods for Performance Evaluation
- How to obtain a reliable estimate of performance?
- Performance of a model may depend on other factors besides the learning algorithm:
  - Class distribution
  - Cost of misclassification
  - Size of training and test sets

Methods of Evaluation
- Holdout method
  - The given data is randomly partitioned into two independent sets: a training set (e.g., 2/3) for model construction and a test set (e.g., 1/3) for accuracy estimation.
  - Random sampling: a variation of holdout. Repeat the holdout k times; accuracy = average of the accuracies obtained.
- Cross-validation (k-fold, where k = 10 is most popular); a minimal sketch follows below.
  - Randomly partition the data into k mutually exclusive subsets, each of approximately equal size.
  - At the i-th iteration, use the other k-1 subsets as the training set and the remaining one as the test set.
  - Leave-one-out: k folds where k = # of tuples; for small-sized data.
  - Stratified cross-validation: folds are stratified so that the class distribution in each fold is approximately the same as that in the initial data.
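A minimal sketch of k-fold cross-validation in pure Python; the classifier interface (train/predict) is an assumption for illustration:

```python
import random

def k_fold_cross_validation(records, labels, train, predict, k=10, seed=0):
    """Estimate accuracy by k-fold cross-validation.
    `train(X, y)` returns a model; `predict(model, x)` returns a label (assumed interface)."""
    idx = list(range(len(records)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]            # k mutually exclusive, roughly equal subsets
    accuracies = []
    for i in range(k):
        test_idx = set(folds[i])
        train_X = [records[j] for j in idx if j not in test_idx]
        train_y = [labels[j] for j in idx if j not in test_idx]
        model = train(train_X, train_y)
        correct = sum(1 for j in folds[i] if predict(model, records[j]) == labels[j])
        accuracies.append(correct / len(folds[i]))
    return sum(accuracies) / k
```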

Evaluating the Accuracy of a Classifier or Predictor (II)
- Bootstrap: sampling with replacement; works well with small data sets.
  - Samples the given training tuples uniformly with replacement, i.e., each time a tuple is selected, it is equally likely to be selected again and re-added to the training set.
- There are several bootstrap methods; a common one is the .632 bootstrap:
  - Suppose we are given a data set of d tuples. The data set is sampled d times, with replacement, resulting in a training set of d samples. The data tuples that did not make it into the training set form the test set.
  - About 63.2% of the original data will end up in the bootstrap sample, and the remaining 36.8% will form the test set (since (1 - 1/d)^d ≈ e^-1 = 0.368).
  - Repeat the sampling procedure k times; the overall accuracy of the model is

    acc(M) = sum_{i=1..k} ( 0.632 * acc(M_i)_test_set + 0.368 * acc(M_i)_train_set )
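A sketch of the .632 bootstrap estimate in pure Python; the train/predict interface is assumed, as in the cross-validation sketch above:

```python
import random

def bootstrap_632(records, labels, train, predict, k=10, seed=0):
    """.632 bootstrap accuracy estimate: sample d tuples with replacement, k times."""
    rng = random.Random(seed)
    d = len(records)
    total = 0.0
    for _ in range(k):
        sample_idx = [rng.randrange(d) for _ in range(d)]        # sampling with replacement
        test_idx = [j for j in range(d) if j not in set(sample_idx)]
        model = train([records[j] for j in sample_idx], [labels[j] for j in sample_idx])
        acc_test = sum(predict(model, records[j]) == labels[j] for j in test_idx) / len(test_idx)
        acc_train = sum(predict(model, records[j]) == labels[j] for j in sample_idx) / d
        total += 0.632 * acc_test + 0.368 * acc_train
    return total  # or total / k, if an average over the k repetitions is wanted
```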

Classification and Prediction
- Last lecture: overview; decision tree induction; Bayesian classification
- Today: training (learning) Bayesian networks; kNN classification and collaborative filtering; Support Vector Machines (SVM); Neural Networks; regression; model evaluation; rule-based methods
- Upcoming lectures: ensemble methods (Bagging, Random Forests, AdaBoost)

Rule-Based Classifier
- Classify records by a collection of IF-THEN rules.
- Basic concepts:
  - IF (Condition) THEN y, written (Condition) -> y
  - LHS: rule antecedent or condition
  - RHS: rule consequent
  - E.g., IF age = youth AND student = yes THEN buys_computer = yes
- Using the rules
- Learning the rules

Rule-Based Classifier: Example

  Name            Blood Type   Give Birth   Can Fly   Live in Water   Class
  human           warm         yes          no        no              mammals
  python          cold         no           no        no              reptiles
  salmon          cold         no           no        yes             fishes
  whale           warm         yes          no        yes             mammals
  frog            cold         no           no        sometimes       amphibians
  komodo          cold         no           no        no              reptiles
  bat             warm         yes          yes       no              mammals
  pigeon          warm         no           yes       no              birds
  cat             warm         yes          no        no              mammals
  leopard shark   cold         yes          no        yes             fishes
  turtle          cold         no           no        sometimes       reptiles
  penguin         warm         no           no        sometimes       birds
  porcupine       warm         yes          no        no              mammals
  eel             cold         no           no        yes             fishes
  salamander      cold         no           no        sometimes       amphibians
  gila monster    cold         no           no        no              reptiles
  platypus        warm         no           no        no              mammals
  owl             warm         no           yes       no              birds
  dolphin         warm         yes          no        yes             mammals
  eagle           warm         no           yes       no              birds

  R1: (Give Birth = no) AND (Can Fly = yes) -> ?
  R2: (Give Birth = no) AND (Live in Water = yes) -> ?
  R3: (Give Birth = yes) AND (Blood Type = warm) -> ?

Assessment of a Rule
- Coverage of a rule R: fraction of records that satisfy the antecedent of the rule.
  coverage(R) = n_covers / |D|, where n_covers = # of tuples covered by R and D is the training data set.
- Accuracy of a rule R: fraction of the records covered by R that satisfy both the antecedent and the consequent (i.e., are correctly classified).
  accuracy(R) = n_correct / n_covers, where n_correct = # of tuples correctly classified by R.
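A tiny sketch of the two measures above; the rule representation (a dict of attribute tests plus a predicted class) is an illustrative assumption:

```python
def rule_coverage_accuracy(rule_conditions, rule_class, records, labels):
    """coverage(R) = n_covers / |D|, accuracy(R) = n_correct / n_covers."""
    covers = [(rec, lab) for rec, lab in zip(records, labels)
              if all(rec.get(attr) == val for attr, val in rule_conditions.items())]
    n_covers = len(covers)
    n_correct = sum(1 for _, lab in covers if lab == rule_class)
    coverage = n_covers / len(records)
    accuracy = n_correct / n_covers if n_covers else 0.0
    return coverage, accuracy

records = [{"Give Birth": "no", "Can Fly": "yes"}, {"Give Birth": "yes", "Can Fly": "no"}]
labels = ["birds", "mammals"]
print(rule_coverage_accuracy({"Give Birth": "no", "Can Fly": "yes"}, "birds", records, labels))
```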

Characteristics of Rule-Based Classifiers
- Mutually exclusive rules: the classifier contains mutually exclusive rules if the rules are independent of each other; every record is covered by at most one rule.
- Exhaustive rules: the classifier has exhaustive coverage if it accounts for every possible combination of attribute values; each record is covered by at least one rule.

Using the Rules
- Rules that are mutually exclusive and exhaustive.
- Rules that are not mutually exclusive: a record may trigger more than one rule. Solution: conflict resolution.
  - Ordered rules (decision list): applied in decreasing order of their priority.
  - Unordered rule set: use voting schemes.
- Rules that are not exhaustive: a record may not trigger any rule. Solution: use a default class (default rule).

Rule-Based Ordering
- Rule-based ordering: individual rules are ranked based on their quality; the rule set is known as a decision list.
- Class-based ordering: classes are sorted in order of decreasing importance, and rules are sorted by class.

  R1: (Give Birth = no) AND (Can Fly = yes) -> Birds
  R2: (Give Birth = no) AND (Live in Water = yes) -> Fishes
  R3: (Give Birth = yes) AND (Blood Type = warm) -> Mammals
  R4: (Give Birth = no) AND (Can Fly = no) -> Reptiles
  R5: (Live in Water = sometimes) -> Amphibians

  Name     Blood Type   Give Birth   Can Fly   Live in Water   Class
  turtle   cold         no           no        sometimes       ?
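A minimal sketch (not from the slides) of classifying with an ordered rule set (decision list) plus a default class, using the rules above and the turtle record:

```python
def classify_with_decision_list(record, ordered_rules, default_class):
    """Return the class of the first rule whose conditions all match; otherwise the default."""
    for conditions, predicted_class in ordered_rules:
        if all(record.get(attr) == val for attr, val in conditions.items()):
            return predicted_class
    return default_class

rules = [({"Give Birth": "no", "Can Fly": "yes"}, "birds"),
         ({"Give Birth": "no", "Live in Water": "yes"}, "fishes"),
         ({"Give Birth": "yes", "Blood Type": "warm"}, "mammals"),
         ({"Give Birth": "no", "Can Fly": "no"}, "reptiles"),
         ({"Live in Water": "sometimes"}, "amphibians")]
turtle = {"Blood Type": "cold", "Give Birth": "no", "Can Fly": "no", "Live in Water": "sometimes"}
print(classify_with_decision_list(turtle, rules, default_class="unknown"))  # R4 fires first -> reptiles
```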

Building Classification Rules
- Indirect method: extract rules from other classification models, e.g., decision trees (C4.5rules).
- Direct method: extract rules directly from data.
  - Sequential covering, e.g., CN2, RIPPER
  - Associative classification

Rule Extraction from a Decision Tree
- One rule is created for each path from the root to a leaf: each attribute-value pair along the path forms a conjunct, and the leaf holds the class prediction.
- The resulting rules are mutually exclusive and exhaustive.
- Pruning (C4.5): class-based ordering.
- Decision tree (buys_computer example): the root splits on age (<=30, 31..40, >40); the <=30 branch splits on student (no/yes), and the >40 branch splits on credit_rating (excellent/fair).
- Example: rule extraction from our buys_computer decision tree
  IF age = young AND student = no THEN buys_computer = no
  IF age = young AND student = yes THEN buys_computer = yes
  IF age = mid-age THEN buys_computer = yes
  IF age = old AND credit_rating = excellent THEN buys_computer = no
  IF age = old AND credit_rating = fair THEN buys_computer = yes

Direct Method: Sequential Covering
1. Start from an empty rule.
2. Grow a rule using the Learn-One-Rule function.
3. Remove training records covered by the rule.
4. Repeat steps (2) and (3) until a stopping criterion is met.
A skeleton of this loop is sketched below.
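A skeleton of the sequential-covering loop; the learn_one_rule helper and the minimum-coverage stopping criterion are assumptions, and only the loop structure comes from the slide:

```python
def sequential_covering(records, labels, target_class, learn_one_rule, min_coverage=1):
    """Grow rules for target_class one at a time, removing the records each rule covers."""
    remaining = list(zip(records, labels))
    rule_set = []
    while any(lab == target_class for _, lab in remaining):
        rule = learn_one_rule(remaining, target_class)        # step 2 (assumed helper, returns a dict of conditions)
        covered = [(r, l) for r, l in remaining
                   if all(r.get(a) == v for a, v in rule.items())]
        if len(covered) < min_coverage:                       # stopping criterion (assumption)
            break
        rule_set.append((rule, target_class))
        remaining = [(r, l) for r, l in remaining             # step 3: drop covered records
                     if not all(r.get(a) == v for a, v in rule.items())]
    return rule_set
```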

Example of Sequential Covering
Figure: (i) original data; (ii) step 1: a first rule R1 is grown; (iii) step 2: records covered by R1 are removed; (iv) step 3: a second rule R2 is grown, giving the rule set {R1, R2}.

Rule Growing: Two Common Strategies
- (a) General-to-specific: start from the empty rule { } (covering Yes: 3, No: 4 in the figure) and greedily add conjuncts such as Refund = No, Status = Single, Status = Divorced, Status = Married, or Income > 80K, keeping the candidate with the best class counts, e.g., (Refund = No, Status = Single) -> Class = Yes.
- (b) Specific-to-general: start from a specific record, e.g., (Refund = No, Status = Single, Income = 85K) -> Class = Yes, and progressively generalize by dropping conjuncts.

Learn-One-Rule
- Start with the most general rule possible: condition = empty.
- Add new conjuncts using a greedy depth-first strategy: pick the one that most improves rule quality.
- Rule-quality measures consider both coverage and accuracy.
- FOIL gain (in FOIL and RIPPER) assesses the information gained by extending the condition:

  FOIL_Gain = pos' * ( log2( pos' / (pos' + neg') ) - log2( pos / (pos + neg) ) )

  where pos/neg are the # of positive/negative tuples covered by the rule before extension and pos'/neg' after. It favors rules that have high accuracy and cover many positive tuples.
- Rule pruning is based on an independent set of test tuples:

  FOIL_Prune(R) = (pos - neg) / (pos + neg)

  where pos/neg are the # of positive/negative tuples covered by R. If FOIL_Prune is higher for the pruned version of R, then prune R.
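A small sketch of these two measures, a direct transcription of the formulas above into Python (the example counts are illustrative):

```python
import math

def foil_gain(pos, neg, pos_prime, neg_prime):
    """Information gained by extending a rule covering (pos, neg) tuples
    into one covering (pos_prime, neg_prime) tuples."""
    return pos_prime * (math.log2(pos_prime / (pos_prime + neg_prime))
                        - math.log2(pos / (pos + neg)))

def foil_prune(pos, neg):
    """Pruning criterion: prune R if the pruned version of R scores higher."""
    return (pos - neg) / (pos + neg)

# Extending a rule from covering 100+/60- tuples to 80+/10- tuples:
print(round(foil_gain(100, 60, 80, 10), 3))   # positive gain: higher accuracy, many positives kept
print(foil_prune(80, 10))
```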

Direct Method: Multi-Class
- For a 2-class problem, choose one of the classes as the positive class and the other as the negative class; learn rules for the positive class, and let the negative class be the default class.
- For a multi-class problem:
  - Order the classes according to increasing class prevalence (fraction of instances that belong to a particular class).
  - Learn the rule set for the smallest class first, treating the rest as the negative class.
  - Repeat with the next smallest class as the positive class.

Associative Classification
- Search for strong associations between frequent patterns (conjunctions of attribute-value pairs) and class labels.
- Classification: based on evaluating a set of rules of the form

  p_1 AND p_2 AND ... AND p_l -> "A_class = C"  (conf, sup)

- Why effective? It explores highly confident associations among multiple attributes and may overcome some constraints introduced by decision-tree induction, which considers only one attribute at a time.
- In many studies, associative classification has been found to be more accurate than some traditional classification methods, such as C4.5.

Typical Associative Classification Methods
- CBA (Classification By Association; Liu, Hsu & Ma, KDD'98)
  - Mine possible association rules of the form: cond-set (a set of attribute-value pairs) -> class label.
  - Build the classifier: organize rules according to decreasing precedence based on confidence and then support.
- CMAR (Classification based on Multiple Association Rules; Li, Han & Pei, ICDM'01)
  - Classification: statistical analysis on multiple rules.
- CPAR (Classification based on Predictive Association Rules; Yin & Han, SDM'03)
  - Generation of predictive rules (FOIL-like analysis); high efficiency, accuracy similar to CMAR.
- RCBT (Mining top-k covering rule groups for gene expression data; Cong et al., SIGMOD'05)
  - Explores high-dimensional classification using top-k rule groups; achieves high classification accuracy and high run-time efficiency.

Rule-Based Classifiers: Comments
- As highly expressive as decision trees
- Easy to interpret
- Easy to generate
- Can classify new instances rapidly
- Performance comparable to decision trees

Classification and Prediction
- Last lecture: overview; decision tree induction; Bayesian classification
- Today: training (learning) Bayesian networks; kNN classification and collaborative filtering; Support Vector Machines (SVM); Neural Networks; regression; model evaluation; rule-based methods
- Upcoming lectures: ensemble methods (Bagging, Random Forests, AdaBoost)