Ensemble Learning: An Introduction. Adapted from Slides by Tan, Steinbach, Kumar


General Idea (figure). Starting from the original training data D:
Step 1: Create multiple data sets D1, D2, ..., D(t-1), Dt.
Step 2: Build one classifier Ci on each data set Di, giving C1, C2, ..., Ct.
Step 3: Combine the classifiers into a single ensemble classifier C*.

Why does it work? Suppose there are 25 base classifiers, each with error rate ε = 0.35, and assume the classifiers are independent. The majority vote is wrong only when at least 13 of the 25 classifiers are wrong, so the probability that the ensemble classifier makes a wrong prediction is

Σ_{i=13}^{25} C(25, i) ε^i (1 − ε)^(25−i) ≈ 0.06
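This tail probability can be checked directly; a minimal sketch, with the majority threshold computed for an odd number of voters:

```python
from math import comb

def ensemble_error(n_classifiers, eps):
    """Probability that a majority vote of independent base classifiers,
    each with error rate eps, is wrong."""
    k = n_classifiers // 2 + 1  # smallest number of wrong votes that flips the majority
    return sum(comb(n_classifiers, i) * eps ** i * (1 - eps) ** (n_classifiers - i)
               for i in range(k, n_classifiers + 1))

print(round(ensemble_error(25, 0.35), 2))  # → 0.06
print(round(ensemble_error(25, 0.50), 2))  # → 0.5: no gain when base error is 0.5
```

Note that the independence assumption is doing the work here: the ensemble only beats its members when the base error is below 0.5 and the errors are not perfectly correlated.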

Examples of Ensemble Methods. How do we generate an ensemble of classifiers? Two common approaches: bagging and boosting.

Bagging: sampling with replacement.

Data ID:            1  2  3  4  5  6  7  8  9 10  (original training data)
Bagging (Round 1):  7  8 10  8  2  5 10 10  5  9
Bagging (Round 2):  1  4  9  1  2  3  2  7  3  2
Bagging (Round 3):  1  8  5 10  5  5  9  6  3  7

Build a classifier on each bootstrap sample. Each instance has probability (1 − 1/n)^n of never being selected (and so can serve as test data); the bootstrap training sample therefore contains about 1 − (1 − 1/n)^n of the original instances.
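One bagging round (sampling with replacement) can be sketched as follows; the seed and round count are illustrative, not taken from the table above:

```python
import random

def bootstrap_sample(data, rng):
    """Draw len(data) instances with replacement -- one bagging round."""
    return [rng.choice(data) for _ in data]

rng = random.Random(0)
ids = list(range(1, 11))
for r in range(3):
    print(f"Bagging (Round {r + 1}):", bootstrap_sample(ids, rng))
```

As in the table, each round typically repeats some IDs and omits others entirely.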

The 0.632 bootstrap. This method is also called the 0.632 bootstrap. A particular training instance has probability 1 − 1/n of not being picked in a single draw, so its probability of ending up in the test data (never selected in n draws) is

(1 − 1/n)^n ≈ e^(−1) ≈ 0.368

This means the bootstrap training data will contain approximately 63.2% of the distinct original instances.
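Both numbers on this slide can be verified numerically; a quick sketch with arbitrary sample sizes:

```python
import math
import random

# (1 - 1/n)^n approaches e^-1 ≈ 0.368 as n grows
for n in (10, 100, 1000):
    print(n, round((1 - 1 / n) ** n, 4))
print("e^-1 =", round(math.exp(-1), 4))

# Empirical check: fraction of distinct instances in one bootstrap sample
rng = random.Random(42)
n = 100_000
distinct = {rng.randrange(n) for _ in range(n)}
print(len(distinct) / n)  # close to 1 - e^-1 ≈ 0.632
```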

Example of Bagging. Assume the training data is one-dimensional along x:
x ≤ 0.3: class +1
0.4 ≤ x ≤ 0.7: class −1
x ≥ 0.8: class +1
Goal: find a collection of 10 simple thresholding classifiers that collectively can classify the data correctly. Each simple (or weak) classifier has the form: if x ≤ k then one class, else the other, where the class assignment is whichever yields the lowest error, and the threshold k is determined by entropy minimization.
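As an illustration, a single weak thresholding classifier can be fit by exhaustive search over candidate thresholds. The ten points below are hypothetical, chosen to match the band structure on the slide, and a plain misclassification count stands in for the entropy criterion:

```python
# Hypothetical 1-D training set: +1 for small x, -1 in the middle band, +1 for large x
data = [(0.1, +1), (0.2, +1), (0.3, +1),
        (0.4, -1), (0.5, -1), (0.6, -1), (0.7, -1),
        (0.8, +1), (0.9, +1), (1.0, +1)]

def best_stump(data):
    """Return (errors, k, left_label) for the threshold classifier
    'x <= k -> left_label, else -left_label' with fewest training errors."""
    xs = sorted(x for x, _ in data)
    thresholds = [0.0] + [(a + b) / 2 for a, b in zip(xs, xs[1:])] + [xs[-1] + 0.1]
    best = None
    for k in thresholds:
        for left in (+1, -1):
            errors = sum(1 for x, y in data
                         if (left if x <= k else -left) != y)
            if best is None or errors < best[0]:
                best = (errors, k, left)
    return best

print(best_stump(data))  # no single stump is perfect on this data
```

The best single stump still misclassifies three of the ten points, which is exactly why the slide combines ten of them by voting.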


Bagging applied to the training data: the accuracy of the ensemble classifier is 100%.

Bagging: Summary. Bagging works well if the base classifiers are unstable, since they complement each other. It increases accuracy because it reduces the variance of the individual classifiers. It does not focus on any particular instance of the training data, and is therefore less susceptible to overfitting when applied to noisy data. What if we do want to focus on particular instances of the training data?

In general:
- Bias shows up as training error; a complex model has low bias.
- Variance shows up as error on future (unseen) data; a complex model has high variance.
- Bagging reduces the variance of the base classifiers.
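The variance-reduction claim can be illustrated with a toy simulation: averaging k independent noisy estimates divides the variance by k, which is the mechanism bagging exploits. A sketch with made-up numbers, not from the slides:

```python
import random
import statistics

rng = random.Random(0)

def noisy_estimate():
    # true value 1.0 plus unit-variance Gaussian noise
    return 1.0 + rng.gauss(0, 1)

def averaged(k):
    # average of k independent noisy estimates
    return statistics.fmean(noisy_estimate() for _ in range(k))

single = [noisy_estimate() for _ in range(5000)]
avg10 = [averaged(10) for _ in range(5000)]
print(statistics.variance(single))  # roughly 1.0
print(statistics.variance(avg10))   # roughly 0.1
```

Real bagged classifiers are not independent (the bootstrap samples overlap), so the reduction is smaller in practice, but the direction is the same.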


Boosting: an iterative procedure that adaptively changes the distribution of the training data, focusing more on previously misclassified records. Initially, all N records are assigned equal weights. Unlike bagging, the weights may change at the end of each boosting round.

Boosting. Records that are wrongly classified have their weights increased; records that are classified correctly have their weights decreased.

Data ID:             1  2  3  4  5  6  7  8  9 10  (original data)
Boosting (Round 1):  7  3  2  8  7  9  4 10  6  3
Boosting (Round 2):  5  4  9  4  2  5  1  7  4  2
Boosting (Round 3):  4  4  8 10  4  5  4  6  3  4

Example 4 is hard to classify: its weight is increased, so it is more likely to be chosen again in subsequent rounds.

Boosting. Equal weights (1/N in round 1) are assigned to every training instance at first. After a classifier C_i is learned, the weights are adjusted so that the subsequent classifier C_(i+1) pays more attention to the data misclassified by C_i. The final boosted classifier C* combines the votes of the individual classifiers, where the weight of each classifier's vote is a function of its accuracy. AdaBoost is a popular boosting algorithm.

AdaBoost (Adaptive Boosting). Input: a training set D containing N instances, a number of rounds T, and a classification learning scheme. Output: a composite model.

AdaBoost: Training Phase. The training data D contains N labeled instances (X_1, y_1), (X_2, y_2), ..., (X_N, y_N). Initially each instance is assigned equal weight 1/N. To generate T base classifiers, we run T rounds. In round i, data from D are sampled with replacement to form D_i (of size N). Each instance's chance of being selected in the next round depends on its weight: each new sample is drawn directly from D with sampling probabilities proportional to the current weights, and these weights are never zero.

AdaBoost: Training Phase. Base classifier C_i is built from the training data D_i. The error of C_i is measured on D. The weights of the training data are then adjusted according to how they were classified: correctly classified instances have their weight decreased; incorrectly classified instances have their weight increased. An instance's weight thus indicates how hard it is to classify (the harder, the higher).

AdaBoost: Testing Phase. The lower a classifier's error rate ε_i, the more accurate it is, and the higher its voting weight should be. The weight of classifier C_i's vote is

α_i = (1/2) ln((1 − ε_i) / ε_i)

Testing: for each class y, sum the weights α_i of every classifier C_i that assigned class y to the unseen instance x_test. The class with the highest sum is the winner:

C*(x_test) = argmax_y Σ_{i=1}^{T} α_i δ(C_i(x_test) = y)

Example: AdaBoost. Base classifiers: C_1, C_2, ..., C_T. Error rate of classifier i (with j indexing the instances):

ε_i = (1/N) Σ_{j=1}^{N} w_j δ(C_i(x_j) ≠ y_j)

Importance of a classifier:

α_i = (1/2) ln((1 − ε_i) / ε_i)

Example: AdaBoost. Assume N training instances (x_j, y_j) in D and T rounds; C_i and α_i are the classifier and weight of the i-th round. Weight update on all training data in D:

w_j^(i+1) = (w_j^(i) / Z_i) × exp(−α_i)   if C_i(x_j) = y_j
w_j^(i+1) = (w_j^(i) / Z_i) × exp(+α_i)   if C_i(x_j) ≠ y_j

where Z_i is the normalization factor that makes the weights sum to 1. Final classifier:

C*(x_test) = argmax_y Σ_{i=1}^{T} α_i δ(C_i(x_test) = y)
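One round of this weight update on a tiny hypothetical data set (five instances, one misclassified) gives conveniently round numbers, since a weighted error of 0.2 makes exp(±α) equal to 1/2 and 2. Here the error is taken as the total weight of the misclassified instances (the weights sum to 1):

```python
import math

y_true = [+1, +1, -1, -1, +1]
y_pred = [+1, -1, -1, -1, +1]        # the base classifier misses instance 2
w = [1 / len(y_true)] * len(y_true)  # initial weights 1/N

# weighted error rate and classifier importance
eps = sum(wj for wj, t, p in zip(w, y_true, y_pred) if t != p)
alpha = 0.5 * math.log((1 - eps) / eps)

# multiply by exp(-alpha) if correct, exp(+alpha) if wrong, then normalize by Z
w = [wj * math.exp(-alpha if t == p else alpha)
     for wj, t, p in zip(w, y_true, y_pred)]
Z = sum(w)
w = [wj / Z for wj in w]

print(round(eps, 3), [round(wj, 3) for wj in w])
# eps = 0.2, alpha = 0.5*ln(4) ≈ 0.693, weights [0.125, 0.5, 0.125, 0.125, 0.125]
```

After the update the misclassified instance carries half of the total weight, so the next sampled round will focus on it.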

Illustrating AdaBoost (figure). The training points start with equal weights 0.1 each. Round 1 (B1, α = 1.9459): correctly classified points drop to weight 0.0094 while the misclassified ones rise to 0.4623. Round 2 (B2, α = 2.9323) and Round 3 (B3, α = 3.8744) reweight the data again (weights such as 0.3037, 0.0009, 0.0422 and then 0.0276, 0.1819, 0.0038). The overall weighted combination classifies every point correctly.

Random Forests: an ensemble method specifically designed for decision tree classifiers. A random forest grows many trees: an ensemble of unpruned decision trees. Each base classifier classifies a new vector of attributes from the original data, and the final classification of a new instance is decided by voting: the forest chooses the class with the most votes over all the trees in the forest.

Random Forests introduce two sources of randomness: bagging and random input vectors. Bagging method: each tree is grown using a bootstrap sample of the training data. Random vector method: at each node, the best split is chosen from a random sample of m attributes instead of all attributes.


Methods for Growing the Trees. Fix m ≤ M (where M is the total number of attributes). At each node:
Method 1: choose m attributes at random, compute their information gains, and split on the attribute with the largest gain.
Method 2 (when M is not very large): select L attributes at random and compute a linear combination of them, with weights drawn at random from [−1, +1]: newA = Σ_{i=1}^{L} W_i · A_i.
Method 3: compute the information gain of all M attributes, select the top m by information gain, and randomly pick one of those m as the splitting attribute.
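Method 1's per-node attribute sampling can be sketched as follows. `choose_split` and its purity score are hypothetical stand-ins for a real information-gain computation, applied to made-up binary data:

```python
import random

def choose_split(rows, labels, n_attrs, m, rng):
    """Pick the best of m randomly chosen attributes (Method 1 sketch)."""
    candidates = rng.sample(range(n_attrs), m)  # m of the M attributes

    def score(a):
        # weighted purity of a binary split on attribute a (stand-in for info gain)
        left = [l for r, l in zip(rows, labels) if r[a] == 0]
        right = [l for r, l in zip(rows, labels) if r[a] == 1]

        def purity(side):
            return max(side.count(+1), side.count(-1)) / len(side) if side else 1.0

        return (len(left) * purity(left) + len(right) * purity(right)) / len(rows)

    return max(candidates, key=score)

rng = random.Random(1)
rows = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [+1, +1, -1, -1]  # attribute 0 fully determines the label
print(choose_split(rows, labels, n_attrs=2, m=2, rng=rng))  # → 0
```

With m < M, the informative attribute is sometimes not among the candidates, which is exactly the decorrelation between trees that the forest relies on.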

Random Forest algorithm (Method 1 on the previous slide). With M input features in the training data, a number m ≪ M is specified such that at each node, m features are selected at random out of the M, and the best split on these m features is used to split the node. (In the weather data, M = 4, and m is between 1 and 4.) m is held constant while the forest is grown. Each tree is grown to the largest extent possible (a deep tree, which would overfit easily on its own), and there is no pruning.

Generalization Error of Random Forests (page 291 of the Tan book). It can be shown that the generalization error ≤ r(1 − s²)/s², where r is the average correlation among the trees and s is the strength of the tree classifiers. Strength is defined as how certain the classification results are on the training data on average, where "how certain" is measured as P(C1 | X) − P(C2 | X), with C1 and C2 the two classes of highest probability (in decreasing order) for input instance X. Thus higher diversity (lower correlation r) and higher accuracy (strength s) are both good for performance.
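A quick numeric reading of the bound; the correlation and strength values below are made up for illustration:

```python
def rf_error_bound(r, s):
    """Upper bound on random forest generalization error: r * (1 - s^2) / s^2."""
    return r * (1 - s ** 2) / s ** 2

# lower correlation r or higher strength s shrinks the bound
print(rf_error_bound(0.3, 0.8))  # baseline
print(rf_error_bound(0.1, 0.8))  # more diverse trees -> smaller bound
print(rf_error_bound(0.3, 0.9))  # stronger trees -> smaller bound
```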