Classification and Prediction

Classification and Prediction
Lecture 5/DMBI/IKI83403T/MTI/UI
Yudho Giri Sucahyo, Ph.D, CISA (yudho@cs.ui.ac.id)
Faculty of Computer Science

Objectives
- Introduction
- What is Classification?
- Classification vs Prediction
- Supervised and Unsupervised Learning
- Data Preparation
- Classification Accuracy
- ID3 Algorithm
- Information Gain
- Bayesian Classification
- Predictive Modelling

Introduction
Databases are rich with hidden information that can be used for making intelligent business decisions. Classification and prediction can be used to extract models describing important data classes or to predict future data trends.
- Classification predicts categorical labels. Example: categorize bank loan applications as safe or risky.
- Prediction models continuous-valued functions. Example: predict the expenditures of potential customers on computer equipment given their income and occupation.
Typical applications: credit approval, target marketing, medical diagnosis, treatment effectiveness analysis.

What is Classification? A Two-Step Process (1)
Step 1: A model is constructed from the training data.
- Each tuple is assumed to belong to a predefined class, as determined by one of the attributes, called the class label.
- Data tuples are also referred to as samples, examples, or objects.
- The set of tuples used for model construction is called the training set.
- Since the class label of each training sample is provided, this is supervised learning. In clustering (unsupervised learning), the class label of each training sample is not known, and the number or set of classes to be learned may not be known in advance.
- The model is represented as classification rules (IF-THEN statements), a decision tree, or mathematical formulae.

What is Classification? A Two-Step Process (2)
Step 2: The model is used for classifying future or unknown objects.
- First, the predictive accuracy of the model is estimated: the known label of each test sample is compared with the classified result from the model. The accuracy rate is the percentage of test-set samples that are correctly classified by the model.
- The test set must be independent of the training set; otherwise over-fitting occurs (the model may have incorporated particular anomalies of the training data that are not present in the overall sample population).
- If the accuracy of the model is considered acceptable, the model can be used to classify future objects for which the class label is not known (unknown, previously unseen data).

Classification Process (1): Model Construction
Training data:

    NAME     RANK            YEARS  TENURED
    Mike     Assistant Prof  3      no
    Mary     Assistant Prof  7      yes
    Bill     Professor       2      yes
    Jim      Associate Prof  7      yes
    Dave     Assistant Prof  6      no
    Anne     Associate Prof  3      no

The classification algorithm produces a classifier (model), here:
IF rank = professor OR years > 6 THEN tenured = yes

Classification Process (2): Using the Model
Testing data:

    NAME     RANK            YEARS  TENURED
    Tom      Assistant Prof  2      no
    Merlisa  Associate Prof  7      no
    George   Professor       5      yes
    Joseph   Assistant Prof  7      yes

Unseen data: (Jeff, Professor, 4). Tenured?

What is Prediction?
Prediction is similar to classification: first construct a model, then use the model to predict future or unknown objects. The major method for prediction is regression, including linear and multiple regression and non-linear regression. Prediction differs from classification: classification predicts a categorical class label, while prediction predicts a continuous value.
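The two-step process can be run end to end on the tenure example. Below is a minimal sketch, assuming scikit-learn is available; the data come from the slides above, while the integer encoding of rank is an illustrative choice, not part of the original example.

    from sklearn.tree import DecisionTreeClassifier

    # Step 1: model construction from the labeled training set.
    RANKS = {"Assistant Prof": 0, "Associate Prof": 1, "Professor": 2}
    train = [("Mike", "Assistant Prof", 3, "no"), ("Mary", "Assistant Prof", 7, "yes"),
             ("Bill", "Professor", 2, "yes"), ("Jim", "Associate Prof", 7, "yes"),
             ("Dave", "Assistant Prof", 6, "no"), ("Anne", "Associate Prof", 3, "no")]
    X_train = [[RANKS[rank], years] for _, rank, years, _ in train]
    y_train = [label for *_, label in train]
    model = DecisionTreeClassifier().fit(X_train, y_train)

    # Step 2: estimate accuracy on the independent test set, then classify unseen data.
    test = [("Tom", "Assistant Prof", 2, "no"), ("Merlisa", "Associate Prof", 7, "no"),
            ("George", "Professor", 5, "yes"), ("Joseph", "Assistant Prof", 7, "yes")]
    X_test = [[RANKS[rank], years] for _, rank, years, _ in test]
    y_test = [label for *_, label in test]
    print("accuracy:", model.score(X_test, y_test))
    print("Jeff:", model.predict([[RANKS["Professor"], 4]])[0])  # the unseen sample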

Classification vs Prediction
- Sending out promotional literature to every new customer in the database can be quite costly. A more cost-efficient method would be to target only those new customers who are likely to purchase a new computer: classification.
- Predicting the number of major purchases that a customer will make during a fiscal year: prediction.

Supervised vs Unsupervised Learning
- Supervised learning (classification): the training data (observations, measurements, etc.) are accompanied by labels indicating the class of the observations; new data are classified based on the training set.
- Unsupervised learning (clustering): we are given a set of measurements, observations, etc., with the aim of establishing the existence of classes or clusters in the data. There are no training data, or the training data are not accompanied by class labels.

Issues: Data Preparation
Data preprocessing can be used to help improve the accuracy, efficiency, and scalability of the classification or prediction process.
- Data cleaning: remove or reduce noise and treat missing values.
- Relevance analysis: many of the attributes in the data may be irrelevant to the classification or prediction task. Example: data recording the day of the week on which a bank loan application was filed is unlikely to be relevant to the success of the application. Other attributes may be redundant. This step is known as feature selection.
- Data transformation: data can be generalized to higher-level concepts, which is useful for continuous-valued attributes. Example: income can be generalized to low, medium, high; street to city. Generalization compresses the original training data, so fewer input/output operations may be involved during learning. When using neural networks (or other methods involving distance measurements), data may also be normalized (a sketch follows below).
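As flagged in the data-transformation bullet, here is a minimal sketch of generalizing a continuous attribute and of min-max normalizing it for distance-based methods; the income values and cutoffs are invented for illustration.

    incomes = [28_000, 54_000, 91_000, 37_000, 120_000]

    # Generalization to higher-level concepts (hypothetical cutoffs).
    def income_level(x):
        if x < 40_000:
            return "low"
        return "medium" if x < 80_000 else "high"

    # Min-max normalization to [0, 1], useful before distance-based learners.
    lo, hi = min(incomes), max(incomes)
    normalized = [(x - lo) / (hi - lo) for x in incomes]

    print([income_level(x) for x in incomes])   # ['low', 'medium', 'high', 'low', 'high']
    print([round(v, 2) for v in normalized])    # [0.0, 0.28, 0.68, 0.1, 1.0]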

Comparing Classification Methods
- Predictive accuracy.
- Speed and scalability: time to construct the model; time to use the model.
- Robustness: handling noise and missing values.
- Scalability: efficiency in large databases (data not memory resident).
- Interpretability: the level of understanding and insight provided by the model.
- Goodness of rules: decision tree size; the compactness of classification rules.

Classification Accuracy: Estimating Error Rates
- Partition (training-and-testing): use two independent data sets, e.g., training set (2/3) and test set (1/3); used for data sets with a large number of samples.
- Cross-validation: divide the data set into k subsamples; use k-1 subsamples as training data and one subsample as test data (k-fold cross-validation); used for data sets of moderate size (a sketch appears at the end of this section).
- Bootstrapping (leave-one-out): for small data sets.

What is a Decision Tree?
A decision tree is a flow-chart-like tree structure.
- An internal node denotes a test on an attribute.
- A branch represents an outcome of the test; all tuples in a branch have the same value for the tested attribute.
- A leaf node represents a class label or class label distribution.
To classify an unknown sample, the attribute values of the sample are tested against the decision tree: a path is traced from the root to a leaf node that holds the class prediction for that sample. Decision trees can easily be converted to classification rules.

Training Dataset: An Example from Quinlan's ID3

    Outlook   Temperature  Humidity  Windy  Class
    sunny     hot          high      false  N
    sunny     hot          high      true   N
    overcast  hot          high      false  P
    rain      mild         high      false  P
    rain      cool         normal    false  P
    rain      cool         normal    true   N
    overcast  cool         normal    true   P
    sunny     mild         high      false  N
    sunny     cool         normal    false  P
    rain      mild         normal    false  P
    sunny     mild         normal    true   P
    overcast  mild         high      true   P
    overcast  hot          normal    false  P
    rain      mild         high      true   N
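The cross-validation estimate referenced above can be written in a few lines of standard-library Python. A minimal sketch; `train_fn` and `evaluate_fn` are hypothetical callables standing in for whatever learner and error measure are being assessed.

    import random

    def k_fold_error(data, k, train_fn, evaluate_fn):
        """Average error over k folds; each fold serves once as the test set."""
        data = data[:]                      # shuffle a copy, keep the caller's list intact
        random.shuffle(data)
        folds = [data[i::k] for i in range(k)]
        errors = []
        for i in range(k):
            test = folds[i]
            training = [row for j, fold in enumerate(folds) if j != i for row in fold]
            model = train_fn(training)
            errors.append(evaluate_fn(model, test))
        return sum(errors) / k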

A Sample Decision Tree

    Outlook
      sunny    -> Humidity
                    high   -> N
                    normal -> P
      overcast -> P
      rain     -> Windy
                    false -> P
                    true  -> N

Decision-Tree Classification Methods
The basic top-down decision tree generation approach usually consists of two phases:
- Tree construction: at the start, all the training examples are at the root; examples are then partitioned recursively based on selected attributes.
- Tree pruning: aims at removing tree branches that may lead to errors when classifying test data (training data may contain noise, outliers, etc.).

ID3 Algorithm
All attributes are categorical. The algorithm proceeds as follows (a runnable sketch appears below, after the split criterion is stated):
- Create a node N.
- If the samples are all of the same class C, return N as a leaf node labeled with C.
- If the attribute list is empty, return N as a leaf node labeled with the most common class.
- Otherwise, select the split attribute with the highest information gain and label N with it.
- For each value A_i of the split attribute, grow a branch from node N, and let S_i be the set of tuples having value A_i for the split attribute.
- If S_i is empty, attach a leaf labeled with the most common class; otherwise, recursively run the algorithm on S_i, until all branches reach leaf nodes.

Choosing the Split Attribute: Information Gain (ID3/C4.5) (1)
- All attributes are assumed to be categorical (discrete-valued); continuous-valued attributes must be discretized.
- Information gain is used to select the test attribute at each node in the tree; it is also called a measure of the goodness of split.
- The attribute with the highest information gain is chosen as the test attribute for the current node.
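Here is a minimal, self-contained sketch of the ID3 recursion above in plain Python, using a list of dicts for the weather tuples; the helper names are my own, and ties and empty-branch edge cases are handled simplistically.

    import math
    from collections import Counter

    def entropy(rows):
        """I(p, n), generalized to any number of classes."""
        counts = Counter(r["Class"] for r in rows)
        total = len(rows)
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    def gain(rows, attr):
        """Information gain of splitting `rows` on attribute `attr`."""
        remainder = 0.0
        for value in {r[attr] for r in rows}:
            subset = [r for r in rows if r[attr] == value]
            remainder += len(subset) / len(rows) * entropy(subset)
        return entropy(rows) - remainder

    def most_common_class(rows):
        return Counter(r["Class"] for r in rows).most_common(1)[0][0]

    def id3(rows, attrs):
        classes = {r["Class"] for r in rows}
        if len(classes) == 1:                 # all samples in the same class
            return classes.pop()
        if not attrs:                         # attribute list is empty
            return most_common_class(rows)
        best = max(attrs, key=lambda a: gain(rows, a))
        tree = {best: {}}
        for value in {r[best] for r in rows}:
            subset = [r for r in rows if r[best] == value]
            tree[best][value] = id3(subset, [a for a in attrs if a != best])
        return tree

    # rows is the weather table above, one dict per tuple, e.g.:
    # {"Outlook": "sunny", "Temperature": "hot", "Humidity": "high",
    #  "Windy": "false", "Class": "N"}
    # id3(rows, ["Outlook", "Temperature", "Humidity", "Windy"]) splits on
    # Outlook at the root (highest gain, 0.246 per the slides).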

Information Gain (ID3/C4.5) (2)
Assume that there are two classes, P and N, and let the set of examples S contain p elements of class P and n elements of class N. The amount of information needed to decide whether an arbitrary example in S belongs to P or N is defined as

    I(p, n) = -(p/(p+n)) log2(p/(p+n)) - (n/(p+n)) log2(n/(p+n))

Assume that using attribute A as the root of the tree will partition S into sets {S_1, S_2, ..., S_v}. If S_i contains p_i examples of P and n_i examples of N, the expected information needed to classify objects in all subtrees S_i is

    E(A) = Σ_{i=1..v} ((p_i + n_i)/(p + n)) · I(p_i, n_i)

Information Gain (ID3/C4.5) (3)
The attribute A is selected such that the information gain

    gain(A) = I(p, n) - E(A)

is maximal; equivalently, E(A) is minimal, since I(p, n) is the same for all attributes at a node. In the given sample data, the attribute outlook is chosen to split at the root:
gain(outlook) = 0.246, gain(temperature) = 0.029, gain(humidity) = 0.151, gain(windy) = 0.048.

Information Gain (ID3/C4.5) (4): Example
See Table 7.1 [of the textbook]. The class label is buys_computer, with two values, YES and NO, so m = 2; C_1 corresponds to yes and C_2 to no. There are 9 samples of class yes and 5 samples of class no. Compute the expected information needed to classify a given sample:

    I(s_1, s_2) = I(9, 5) = -(9/14) log2(9/14) - (5/14) log2(5/14) = 0.940

Information Gain (ID3/C4.5) (5): Example (cont'd)
Next, compute the entropy of each attribute, starting with age:
- For age <= 30: s_11 = 2, s_21 = 3, I(s_11, s_21) = 0.971
- For age 31..40: s_12 = 4, s_22 = 0, I(s_12, s_22) = 0
- For age > 40: s_13 = 3, s_23 = 2, I(s_13, s_23) = 0.971
Using equation (7.2), the expected information needed to classify a given sample if the samples are partitioned by age is

    E(age) = (5/14) I(s_11, s_21) + (4/14) I(s_12, s_22) + (5/14) I(s_13, s_23) = 0.694

Hence the gain in information from such a partitioning is

    Gain(age) = I(s_1, s_2) - E(age) = 0.246

Similarly, we can compute Gain(income) = 0.029, Gain(student) = 0.151, and Gain(credit_rating) = 0.048.
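The arithmetic above is easy to check; a few lines of standard-library Python reproduce I(9, 5) and E(age).

    import math

    def info(p, n):
        """I(p, n): expected information for a two-class set; 0 * log 0 taken as 0."""
        total = p + n
        return -sum(x / total * math.log2(x / total) for x in (p, n) if x)

    i_root = info(9, 5)
    e_age = (5/14) * info(2, 3) + (4/14) * info(4, 0) + (5/14) * info(3, 2)
    print(f"I(9,5) = {i_root:.3f}  E(age) = {e_age:.3f}  gain = {i_root - e_age:.3f}")
    # -> I(9,5) = 0.940  E(age) = 0.694  gain = 0.247
    # (the slides' 0.246 comes from subtracting the already-rounded 0.940 - 0.694)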

How to Use a Tree?
- Directly: test the attribute values of the unknown sample against the tree; a path is traced from the root to a leaf, which holds the label.
- Indirectly: the decision tree is converted to classification rules, with one rule created for each path from the root to a leaf. IF-THEN rules are easier for humans to understand. Example: IF age = "<=30" AND student = "no" THEN buys_computer = "no". (A sketch of this conversion appears at the end of this section.)

Tree Pruning
A decision tree constructed from the training data may have too many branches and leaf nodes, caused by noise or overfitting, and may yield poor accuracy on unseen samples. Pruning merges a subtree into a leaf node, using a data set different from the training data: at a tree node, if the accuracy without splitting is higher than the accuracy with splitting, replace the subtree with a leaf node labeled with the majority class. Pruning criteria:
- Pessimistic pruning: C4.5
- MDL: SLIQ and SPRINT
- Cost-complexity pruning: CART

Classification and Databases
Classification is a classical problem extensively studied by statisticians and by AI researchers, especially in machine learning. Database researchers re-examined the problem in the context of large databases: most previous studies used small data sets, and most algorithms are memory resident. Recent data mining research contributes to scalability, generalization-based classification, and parallel and distributed processing.

Classifying Large Datasets
Decision trees seem to be a good choice:
- relatively faster learning speed than other classification methods;
- can be converted into simple and easy-to-understand classification rules;
- can be used to generate SQL queries for accessing databases;
- comparable classification accuracy with other methods.
The goal is to classify data sets with millions of examples and a few hundred, even thousands of, attributes with reasonable speed.
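Here is a minimal sketch of the indirect route, converting a nested-dict tree (the representation used in the ID3 sketch above) into one IF-THEN rule per root-to-leaf path; the function name is my own.

    def tree_to_rules(tree, conditions=()):
        """Emit one IF-THEN rule per root-to-leaf path of a nested-dict tree."""
        if not isinstance(tree, dict):              # leaf: close the rule
            body = " AND ".join(f'{attr} = "{val}"' for attr, val in conditions)
            return [f"IF {body} THEN class = {tree}"]
        (attr, branches), = tree.items()
        rules = []
        for value, subtree in branches.items():
            rules.extend(tree_to_rules(subtree, conditions + ((attr, value),)))
        return rules

    # Applied to the sample tree from the earlier slide:
    tree = {"Outlook": {"sunny": {"Humidity": {"high": "N", "normal": "P"}},
                        "overcast": "P",
                        "rain": {"Windy": {"false": "P", "true": "N"}}}}
    for rule in tree_to_rules(tree):
        print(rule)   # e.g. IF Outlook = "sunny" AND Humidity = "high" THEN class = N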

Scalable Decision Tree Methods
Most algorithms assume the data can fit in memory. Data mining research contributes to the scalability issue, especially for decision trees. Successful examples:
- SLIQ (EDBT'96, Mehta et al. 1996)
- SPRINT (VLDB'96, Shafer et al. 1996)
- PUBLIC (VLDB'98, Rastogi & Shim 1998)
- RainForest (VLDB'98, Gehrke et al. 1998)

Previous Efforts on Scalability
- Incremental tree construction (Quinlan 86): uses partial data to build a tree; the remaining examples are tested, and the misclassified ones are used to rebuild the tree interactively.
- Data reduction (Catlett 91): reduces data size by sampling and discretization; still a main-memory algorithm.
- Data partition and merge (Chan and Stolfo 91): partitions the data, builds a tree for each partition, and merges the multiple trees into a combined tree; experimental results indicated reduced classification accuracy.

Presentation of Classification Rules

Other Classification Methods
- Bayesian classification
- Neural networks
- Genetic algorithms
- Rough set approach
- k-nearest neighbor classifier
- Case-based reasoning (CBR)
- Fuzzy logic
- Support vector machines (SVM)

Bayesian Classification
Bayesian classifiers are statistical classifiers: they can predict class membership probabilities, such as the probability that a given sample belongs to a particular class. Bayesian classification is based on Bayes' theorem. The naive Bayesian classifier is comparable in performance with decision tree and neural network classifiers. Bayesian classifiers also exhibit high accuracy and speed when applied to large databases.

Bayes' Theorem (1)
Let X be a data sample whose class label is unknown, and let H be some hypothesis, such as "the data sample X belongs to a specified class C". We want to determine P(H|X), the probability that the hypothesis H holds given the observed data sample X. P(H|X) is the posterior probability (a posteriori probability) of H conditioned on X. Suppose the world of data samples consists of fruits, described by their color and shape; suppose that X is red and round, and that H is the hypothesis that X is an apple. Then P(H|X) reflects our confidence that X is an apple given that we have seen that X is red and round.

Bayes' Theorem (2)
P(H) is the prior probability (a priori probability) of H: the probability that any given data sample is an apple, regardless of how the data sample looks. The posterior probability is based on more information (such as background knowledge) than the prior probability, which is independent of X. Bayes' theorem is

    P(H|X) = P(X|H) P(H) / P(X)

See Example 7.4 [of the textbook] for an example of naive Bayesian classification.

Predictive Modeling in Databases
What if we would like to predict a continuous value, rather than a categorical label? Prediction of continuous values can be modeled by the statistical technique of regression. Examples: a model to predict the salary of college graduates with 10 years of work experience; the potential sales of a new product given its price. Many problems can be solved by linear regression. Software packages for solving regression problems include SAS, SPSS, and S-Plus.
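Below is a minimal sketch of a naive Bayesian classifier over categorical attributes, applying Bayes' theorem with the usual conditional-independence assumption; P(X) is dropped because it is the same for every candidate class, Laplace smoothing is omitted for brevity, and the function names and toy data are my own.

    from collections import Counter

    def naive_bayes(train, query):
        """Return the label maximizing P(label) * prod_i P(attr_i = v_i | label)."""
        labels = Counter(label for _, label in train)
        best_label, best_score = None, -1.0
        for label, count in labels.items():
            rows = [attrs for attrs, l in train if l == label]
            score = count / len(train)                 # prior P(label)
            for attr, value in query.items():          # naive independence
                score *= sum(a.get(attr) == value for a in rows) / len(rows)
            if score > best_score:
                best_label, best_score = label, score
        return best_label

    # Toy fruit data (invented) echoing the red-and-round apple discussion:
    train = [({"color": "red", "shape": "round"}, "apple"),
             ({"color": "red", "shape": "round"}, "apple"),
             ({"color": "yellow", "shape": "long"}, "banana"),
             ({"color": "red", "shape": "long"}, "banana")]
    print(naive_bayes(train, {"color": "red", "shape": "round"}))  # -> apple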

Linear Regression
- Data are modeled using a straight line; this is the simplest form of regression.
- Bivariate linear regression models a random variable Y (called a response variable) as a linear function of another random variable X (called a predictor variable):

    Y = α + βX

- See Example 7.6 [of the textbook] for an example of linear regression.
- Other regression models: multiple regression, log-linear models.

Prediction: Numerical Data (figure slide)

Prediction: Categorical Data (figure slide)

Conclusion
- Classification is an extensively studied problem (mainly in statistics, machine learning, and neural networks).
- Classification is probably one of the most widely used data mining techniques, with a lot of applications.
- Scalability is still an important issue for database applications.
- Combining classification with database techniques should be a promising research topic.
- Research direction: classification of non-relational data, e.g., text, spatial, multimedia, etc.
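A minimal sketch of fitting Y = α + βX by ordinary least squares, using the standard closed-form estimates β = Σ(x - x̄)(y - ȳ) / Σ(x - x̄)² and α = ȳ - βx̄; the sample data (years of work experience vs. salary) are illustrative, in the spirit of the textbook's Example 7.6.

    def fit_line(xs, ys):
        """Least-squares estimates for Y = alpha + beta * X."""
        n = len(xs)
        x_bar, y_bar = sum(xs) / n, sum(ys) / n
        beta = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
                / sum((x - x_bar) ** 2 for x in xs))
        alpha = y_bar - beta * x_bar
        return alpha, beta

    years = [3, 8, 9, 13, 3, 6, 11, 21, 1, 16]
    salary = [30, 57, 64, 72, 36, 43, 59, 90, 20, 83]   # in $1000s
    alpha, beta = fit_line(years, salary)
    print(f"Y = {alpha:.1f} + {beta:.1f} X")            # -> Y = 23.2 + 3.5 X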

References
C. Apte and S. Weiss. Data mining with decision trees and decision rules. Future Generation Computer Systems, 13, 1997.
L. Breiman, J. Friedman, R. Olshen, and C. Stone. Classification and Regression Trees. Wadsworth International Group, 1984.
P. K. Chan and S. J. Stolfo. Learning arbiter and combiner trees from partitioned data for scaling machine learning. In Proc. 1st Int. Conf. Knowledge Discovery and Data Mining (KDD'95), pages 39-44, Montreal, Canada, August 1995.
U. M. Fayyad. Branching on attribute values in decision tree generation. In Proc. 1994 AAAI Conf., pages 601-606, AAAI Press, 1994.
J. Gehrke, R. Ramakrishnan, and V. Ganti. RainForest: A framework for fast decision tree construction of large datasets. In Proc. 1998 Int. Conf. Very Large Data Bases (VLDB'98), pages 416-427, New York, NY, August 1998.
M. Kamber, L. Winstone, W. Gong, S. Cheng, and J. Han. Generalization and decision tree induction: Efficient classification in data mining. In Proc. 1997 Int. Workshop on Research Issues on Data Engineering (RIDE'97), pages 111-120, Birmingham, England, April 1997.
J. Magidson. The CHAID approach to segmentation modeling: Chi-squared automatic interaction detection. In R. P. Bagozzi, editor, Advanced Methods of Marketing Research, pages 118-159. Blackwell Business, Cambridge, Massachusetts, 1994.
M. Mehta, R. Agrawal, and J. Rissanen. SLIQ: A fast scalable classifier for data mining. In Proc. 1996 Int. Conf. Extending Database Technology (EDBT'96), Avignon, France, March 1996.
S. K. Murthy. Automatic construction of decision trees from data: A multi-disciplinary survey. Data Mining and Knowledge Discovery, 2(4):345-389, 1998.
J. R. Quinlan. Bagging, boosting, and C4.5. In Proc. 13th Natl. Conf. on Artificial Intelligence (AAAI'96), pages 725-730, Portland, OR, August 1996.
R. Rastogi and K. Shim. PUBLIC: A decision tree classifier that integrates building and pruning. In Proc. 1998 Int. Conf. Very Large Data Bases (VLDB'98), pages 404-415, New York, NY, August 1998.
J. Shafer, R. Agrawal, and M. Mehta. SPRINT: A scalable parallel classifier for data mining. In Proc. 1996 Int. Conf. Very Large Data Bases (VLDB'96), pages 544-555, Bombay, India, September 1996.
S. M. Weiss and C. A. Kulikowski. Computer Systems that Learn: Classification and Prediction Methods from Statistics, Neural Nets, Machine Learning, and Expert Systems. Morgan Kaufmann, 1991.