CS 559: Machine Learning Fundamentals and Applications, 10th Set of Notes


CS 559: Machine Learning Fundamentals and Applications, 10th Set of Notes. Instructor: Philippos Mordohai. Webpage: www.cs.stevens.edu/~mordohai. E-mail: Philippos.Mordohai@stevens.edu. Office: Lieb 215

Ensemble Methods. Bagging (Breiman 1994), Random forests (Breiman 2001), Boosting (Freund and Schapire 1995; Friedman et al. 1998). Predict the class label for unseen data by aggregating a set of predictions (classifiers learned from the training data).

General Idea. From the training data S, create multiple data sets S1, S2, ..., Sn; learn a classifier C1, C2, ..., Cn on each; and combine them into a single classifier H.

Ensemble Classifiers Basic idea: Build different experts and let them vote Advantages: Improve predictive performance Different types of classifiers can be directly included Easy to implement Not too much parameter tuning Disadvantages: The combined classifier is not transparent (black box) Not a compact representation 4

Why do they work? Suppose there are 25 base classifiers, each with error rate $\varepsilon = 0.35$, and assume independence among classifiers. The majority vote is wrong only when at least 13 of the 25 classifiers are wrong, so the probability that the ensemble classifier makes a wrong prediction is $\sum_{i=13}^{25} \binom{25}{i} \varepsilon^{i} (1-\varepsilon)^{25-i} \approx 0.06$.
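To sanity-check the 0.06 figure, the sum can be evaluated directly (a minimal Python sketch, not part of the original notes; the function name is illustrative):

```python
from math import comb

def ensemble_error(n_classifiers=25, eps=0.35):
    # Majority vote errs when at least ceil(n/2) base classifiers err (13 of 25 here).
    k = n_classifiers // 2 + 1
    return sum(comb(n_classifiers, i) * eps**i * (1 - eps)**(n_classifiers - i)
               for i in range(k, n_classifiers + 1))

print(ensemble_error())  # ~0.06, far below the 0.35 error of each base classifier
```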

Bagging. Training: o Given a dataset S, at each iteration i, a training set S_i is sampled with replacement from S (i.e. bootstrapping) o A classifier C_i is learned for each S_i. Classification: given an unseen sample X, o Each classifier C_i returns its class prediction o The bagged classifier H counts the votes and assigns the class with the most votes to X
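A minimal sketch of this train/vote procedure (assuming NumPy arrays and scikit-learn decision trees as the base classifiers; both are conveniences outside the original notes, and any learner with fit/predict would do):

```python
import numpy as np
from collections import Counter
from sklearn.tree import DecisionTreeClassifier

def bagging_fit(X, y, n_estimators=50, seed=0):
    rng = np.random.default_rng(seed)
    models = []
    n = len(X)
    for _ in range(n_estimators):
        idx = rng.integers(0, n, size=n)                 # bootstrap: n indices drawn with replacement
        models.append(DecisionTreeClassifier().fit(X[idx], y[idx]))
    return models

def bagging_predict(models, X):
    votes = np.array([m.predict(X) for m in models])     # shape (n_estimators, n_samples)
    # majority vote per sample
    return np.array([Counter(col).most_common(1)[0][0] for col in votes.T])
```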

Bagging. Bagging works because it reduces variance by voting/averaging o In some pathological hypothetical situations the overall error might increase o Usually, the more classifiers the better. Problem: we only have one dataset. Solution: generate new ones of size n by bootstrapping, i.e. sampling with replacement. Can help a lot if the data is noisy.

Aggregating Classifiers: Bagging and Boosting. Breiman (1996) found gains in accuracy by aggregating predictors built from reweighted versions of the learning set.

Bagging. Bagging = Bootstrap Aggregating. Reweighting of the learning sets is done by drawing at random with replacement from the learning sets. Predictors are aggregated by plurality voting.

The Bagging Algorithm. From B bootstrap samples we derive B classifiers $c^1, c^2, c^3, \ldots, c^B \in \{-1, 1\}$ and B estimated probabilities $p^1, p^2, p^3, \ldots, p^B \in [0, 1]$. The aggregate classifier becomes $c_{bag}(x) = \mathrm{sign}\!\left(\frac{1}{B}\sum_{b=1}^{B} c^b(x)\right)$ or $p_{bag}(x) = \frac{1}{B}\sum_{b=1}^{B} p^b(x)$.

Weighting. Starting from the initial set, each round draws a new sample with replacement (drawing 1, drawing 2, ...) and trains a classifier on it (Classifier 1, Classifier 2, ...).

Aggregation. The final rule is the sign of the combination of Classifier 1, Classifier 2, Classifier 3, ..., Classifier T applied to the initial set.

Random Forests Breiman L. Random forests. Machine Learning 2001;45(1):5-32 http://stat-www.berkeley.edu/users/breiman/randomforests/ 13

Decision Forests for computer vision and medical image analysis A. Criminisi, J. Shotton and E. Konukoglu http://research.microsoft.com/projects/decisionforests 14

Decision Trees and Decision Forests. A general tree structure has a root node (node 0), internal (split) nodes, and terminal (leaf) nodes. In the example decision tree, the root asks "Is the top part blue?" and its children ask "Is the bottom part green?" and "Is the bottom part blue?". A forest is an ensemble of trees; the trees are all slightly different from one another.

Decision Tree Testing (Runtime). Given input data in feature space and an input test point, the point is routed by the split at each node until it reaches a leaf, where the prediction is made.

Decision Tree Training (Offline). Given input training data in feature space: how to split the data at each node? What tree structure: binary or ternary? How big a tree?

Decision Tree Training (Offline) How many trees? How different? How to fuse their outputs? 18

Decision Forest Model: Basic Notation
o Input data point: a collection of feature responses (d = ?)
o Output/label space: categorical or continuous?
o Feature response selector: features can be e.g. wavelets, pixel intensities, context
o Node test parameters: parameters related to each split node: i) which features, ii) what geometric primitive, iii) thresholds
o Node objective function (training): the energy to be minimized when training the j-th split node
o Node weak learner: the test function for splitting data at node j
o Leaf predictor model: point estimate or full distribution?
o Randomness model (training): e.g. 1. bagging, 2. randomized node optimization; how is randomness injected during training, and how much?
o Stopping criteria (training): when to stop growing a tree during training, e.g. max tree depth
o Forest size: total number of trees in the forest
o The ensemble model: how to compute the forest output from that of the individual trees?

Randomness Model 1: Bagging (randomizing the training set). Each tree t is trained on a randomly sampled subset of the full training set, which decorrelates the trees and also makes forest training efficient.

Randomness Model 2: Randomized node optimization (RNO). Instead of the full set of all possible node test parameters, each node optimizes its weak learner over only a randomly sampled subset of them. The size of this subset is the randomness control parameter: using the full parameter set gives no randomness and maximum tree correlation, while sampling a single candidate gives maximum randomness and minimum tree correlation. In general, a small value gives little tree correlation and a large value gives large tree correlation.
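A sketch of what randomized node optimization looks like at a single node, under the assumption that splits are axis-aligned thresholds; here `rho` plays the role of the randomness control parameter and `score_fn` is any split-quality measure such as the information gain discussed just below (names are illustrative, not from the notes):

```python
import numpy as np

def best_random_split(X, y, rho, score_fn, rng):
    """Randomized node optimization: evaluate only `rho` randomly chosen
    features at this node instead of all of them."""
    n, d = X.shape
    candidate_features = rng.choice(d, size=min(rho, d), replace=False)
    best = (-np.inf, None, None)                       # (score, feature index, threshold)
    for j in candidate_features:
        for thr in np.unique(X[:, j]):
            left, right = y[X[:, j] < thr], y[X[:, j] >= thr]
            if len(left) == 0 or len(right) == 0:
                continue
            s = score_fn(y, left, right)               # e.g. information gain
            if s > best[0]:
                best = (s, j, thr)
    return best
```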

The Ensemble Model: an example forest to predict continuous variables.

Training and Information Gain. At each node, candidate splits (e.g. Split 1 vs. Split 2) are compared by their information gain: the Shannon entropy of the label distribution before the split minus the weighted entropy of the two children after the split.
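A small sketch of these two quantities for a binary split (discrete class labels assumed; helper names are illustrative, not from the notes):

```python
import numpy as np

def entropy(labels):
    # Shannon entropy of the empirical class distribution
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(parent_labels, left_labels, right_labels):
    n, nl, nr = len(parent_labels), len(left_labels), len(right_labels)
    children = (nl / n) * entropy(left_labels) + (nr / n) * entropy(right_labels)
    return entropy(parent_labels) - children
```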

Overfitting and Underfitting 24

Classification Forest. Model specialization for classification: the input data point is a vector of feature responses and the output is categorical (a discrete set of class labels). The objective function for node j is the information gain, computed from the entropy of the discrete class distribution; the node weak learner splits the training data arriving at node j; and the leaf predictor model stores the class posterior.

Weak Learners. The node weak learner, with its node test parameters, splits the data at node j. Examples of weak learners (shown as feature responses for a 2D example): axis-aligned splits, oriented lines (a generic line in homogeneous coordinates), and conic sections (a matrix representing a conic). In general the weak learner may select only a very small subset of the features.

Prediction Model. What do we do at the leaf? The prediction model is probabilistic: each leaf stores a distribution over the labels rather than a single point estimate.

Classification Forest: Ensemble Model. The forest output probability is obtained by combining (averaging) the posteriors of the individual trees (t = 1, 2, 3, ...).

Effect of Tree Depth

Effect of Weak Learner Model and Randomness: testing posteriors for the axis-aligned, oriented-line, and conic-section weak learners at tree depths D = 13 and D = 5 (parameters: T = 400 trees, probabilistic predictor model).

Effect of Weak Learner Model and Randomness: testing posteriors at tree depths D = 13 and D = 5 for the axis-aligned weak learner (parameters: T = 400 trees, probabilistic predictor model).

Body tracking in Microsoft Kinect for Xbox 360. A classification forest is trained on labelled depth images, where the labels are categorical body parts (e.g. left hand, right shoulder, neck, right foot). The input data point is a pixel of the input depth image, the feature responses are visual (depth) features, and the predictor model, objective function, node parameters, node training and weak learner are as defined above.

Body tracking in Microsoft Kinect for Xbox 360. Input: depth image (background removed). Output: inferred body-part posterior.

Advantages of Random Forests. Very high accuracy, not easily surpassed by other algorithms. Efficient on large datasets. Can handle thousands of input variables without variable deletion. Effective method for estimating missing data; also maintains accuracy when a large proportion of the data are missing. Can handle categorical variables. Robust to label noise. Can be used in clustering, locating outliers, and semi-supervised learning. (From L. Breiman's web page)
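For experimentation, a forest with these ingredients can be tried in a few lines with scikit-learn (an external library; its hyperparameter names, not Breiman's original code, are used below as an illustrative sketch):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

forest = RandomForestClassifier(
    n_estimators=400,      # number of trees T
    max_features="sqrt",   # randomized node optimization: features sampled per split
    random_state=0,
)
forest.fit(X_tr, y_tr)
print("test accuracy:", forest.score(X_te, y_te))
print("feature importances:", forest.feature_importances_)
```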

Boosting 35

Boosting Resources Slides based on: Tutorial by Rob Schapire Tutorial by Yoav Freund Slides by Carlos Guestrin Tutorial by Paul Viola Tutorial by Ron Meir Slides by Aurélie Lemmens Slides by Zhuowen Tu 36

Code. Antonio Torralba (Object detection): http://people.csail.mit.edu/torralba/shortcourserloc/boosting/boosting.html GML AdaBoost: http://graphics.cs.msu.ru/ru/science/research/machinelearning/adaboosttoolbox

Boosting Invented independently by Schapire (1989) and Freund (1990) Later joined forces Main idea: train a strong classifier by combining weak classifiers Practically useful Theoretically interesting 38

Boosting Given a set of weak learners, run them multiple times on (reweighted) training data, then let learned classifiers vote At each iteration t: Weight each training example by how incorrectly it was classified Learn a hypothesis h t The one with the smallest error Choose a strength for this hypothesis α t Final classifier: weighted combination of weak learners 39

Learning from Weighted Data. Sometimes not all data points are equal: some data points are more equal than others. Consider a weighted dataset where D(i) is the weight of the i-th training example (x_i, y_i). Interpretations: the i-th training example counts as D(i) examples; if I were to resample the data, I would get more samples of heavier data points. Now, in all calculations the i-th training example counts as D(i) examples.

Definition of Boosting. Given a training set (x_1, y_1), ..., (x_m, y_m), where y_i ∈ {-1, +1} is the correct label of instance x_i ∈ X. For t = 1, ..., T: construct a distribution D_t on {1, ..., m}; find a weak hypothesis h_t : X → {-1, +1} with small error ε_t on D_t. Output the final hypothesis H_final.

AdaBoost: Constructing D_t. Start with D_1(i) = 1/m. Given D_t and h_t, update $D_{t+1}(i) = \frac{D_t(i)\,\exp(-\alpha_t\, y_i\, h_t(x_i))}{Z_t}$, where Z_t is a normalization constant. Final hypothesis: $H_{final}(x) = \mathrm{sign}\!\left(\sum_t \alpha_t h_t(x)\right)$.
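Putting the pieces together, a compact sketch of AdaBoost with decision stumps as the weak learner (scikit-learn stumps are an assumed convenience, not part of the notes; labels must be NumPy arrays with values in {-1, +1}):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_fit(X, y, T=50):
    """X: (n, d) array, y: labels in {-1, +1}."""
    n = len(X)
    D = np.full(n, 1.0 / n)                      # D_1(i) = 1/m
    hypotheses, alphas = [], []
    for t in range(T):
        h = DecisionTreeClassifier(max_depth=1)  # decision stump weak learner
        h.fit(X, y, sample_weight=D)
        pred = h.predict(X)
        eps = D[pred != y].sum()                 # weighted error on D_t
        eps = min(max(eps, 1e-12), 1.0)          # guard against a perfect weak learner
        if eps >= 0.5:                           # no better than chance: stop
            break
        alpha = 0.5 * np.log((1 - eps) / eps)    # strength of this hypothesis
        D = D * np.exp(-alpha * y * pred)        # up-weight mistakes, down-weight hits
        D = D / D.sum()                          # divide by Z_t (normalization)
        hypotheses.append(h)
        alphas.append(alpha)
    return hypotheses, alphas

def adaboost_predict(hypotheses, alphas, X):
    agg = sum(a * h.predict(X) for h, a in zip(hypotheses, alphas))
    return np.sign(agg)                          # H_final(x) = sign(sum_t alpha_t h_t(x))
```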

The AdaBoost Algorithm 43

The AdaBoost Algorithm (choose the weak hypothesis h_t with minimum weighted error ε_t)

The AdaBoost Algorithm 45

The AdaBoost Algorithm 46

Toy Example. Initial distribution D_1 (uniform).

Toy Example: Round 1. ε_1 = 0.30, α_1 = 0.42; reweighted distribution D_2.

Toy Example: Round 2. ε_2 = 0.21, α_2 = 0.65; reweighted distribution D_3.

Toy Example: Round 3. ε_3 = 0.14, α_3 = 0.92.

Toy Example: Final Hypothesis H_final.

How to Choose Weights. The training error of the final classifier is bounded by the exponential loss: $\frac{1}{m}\sum_i \mathbf{1}[H_{final}(x_i) \neq y_i] \le \frac{1}{m}\sum_i \exp(-y_i f(x_i)) = \prod_t Z_t$, where $f(x) = \sum_t \alpha_t h_t(x)$ and $H_{final}(x) = \mathrm{sign}(f(x))$. Notice that $\exp(-y_i f(x_i)) \ge 1$ whenever $H_{final}(x_i) \neq y_i$, which is what makes the exponential an upper bound on the 0/1 error.

How to Choose Weights. The training error of the final classifier is bounded by $\prod_t Z_t$, where $Z_t = \sum_i D_t(i)\exp(-\alpha_t y_i h_t(x_i))$. In the final round, unravelling the recursive definition of $D_{t+1}$ gives $D_{T+1}(i) = \frac{\exp(-y_i f(x_i))}{m \prod_t Z_t}$, from which the bound follows.

How to Choose Weights. If we minimize $\prod_t Z_t$, we minimize our training error. We can tighten this bound greedily by choosing $\alpha_t$ and $h_t$ in each iteration to minimize $Z_t$. For a boolean target function, this is accomplished by choosing [Freund & Schapire 97]: $\alpha_t = \frac{1}{2}\ln\frac{1-\varepsilon_t}{\varepsilon_t}$.
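For completeness, the two-line derivation of this choice (standard algebra, not spelled out on the slide): split $Z_t$ into correctly and incorrectly classified examples and set the derivative to zero.

```latex
Z_t(\alpha) = \sum_i D_t(i)\,e^{-\alpha y_i h_t(x_i)}
            = (1-\varepsilon_t)\,e^{-\alpha} + \varepsilon_t\,e^{\alpha},
\qquad
\frac{dZ_t}{d\alpha} = -(1-\varepsilon_t)\,e^{-\alpha} + \varepsilon_t\,e^{\alpha} = 0
\;\Rightarrow\;
\alpha_t = \tfrac{1}{2}\ln\frac{1-\varepsilon_t}{\varepsilon_t},
\qquad
Z_t(\alpha_t) = 2\sqrt{\varepsilon_t(1-\varepsilon_t)}.
```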

Weak and Strong Classifiers. If each classifier is (at least slightly) better than random, i.e. ε_t < 0.5, AdaBoost will achieve zero training error exponentially fast. Is it hard to achieve better-than-random training error?

Important Aspects of Boosting Exponential loss function Choice of weak learners Generalization and overfitting Multi-class boosting 56

Exponential Loss Function The exponential loss function is an upper bound of the 0-1 loss function (classification error) AdaBoost provably minimizes exponential loss Therefore, it also minimizes the upper bound of classification error 57

Exponential Loss Function. AdaBoost attempts to minimize $\sum_i \exp\big(-y_i \sum_t \alpha_t h_t(x_i)\big)$ (*). This is really a coordinate descent procedure: at each round, it adds $\alpha_t h_t$ to the sum so as to minimize (*). Why this loss function? It is an upper bound on the training (classification) error, it is easy to work with, and it has a connection to logistic regression.

Coordinate Descent Explanation 59

Stumps: Weak Learners. Decision stumps: a single axis-parallel partition of the space (e.g. predict Y = 1 if X < 5 and Y = -1 if X ≥ 5). Decision trees: a hierarchical partition of the space. Multi-layer perceptrons: general nonlinear function approximators.
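A weighted decision stump of this kind can be fit by exhaustive search over features and thresholds (a sketch with illustrative names; this is the weak learner typically plugged into the boosting loop above):

```python
import numpy as np

def fit_stump(X, y, weights):
    """Find the (feature, threshold, polarity) minimizing weighted 0/1 error.
    y in {-1, +1}; returns a predictor function."""
    n, d = X.shape
    best = (np.inf, None)
    for j in range(d):
        for thr in np.unique(X[:, j]):
            for polarity in (+1, -1):
                pred = np.where(polarity * (X[:, j] - thr) < 0, 1, -1)
                err = weights[pred != y].sum()
                if err < best[0]:
                    best = (err, (j, thr, polarity))
    j, thr, polarity = best[1]
    return lambda Xnew: np.where(polarity * (Xnew[:, j] - thr) < 0, 1, -1)
```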

Decision Trees Hierarchical and recursive partitioning of the feature space A simple model (e.g. constant) is fit in each region Often, splits are parallel to axes 61

Decision Trees Nominal Features 62

Decision Trees - Instability 63

Boosting: Analysis of Training Error. The training error of the final classifier is bounded by $\frac{1}{m}\sum_i \mathbf{1}[H_{final}(x_i) \neq y_i] \le \prod_t Z_t$. For binary classifiers with the choice of $\alpha_t$ as before, the error is bounded by $\prod_t 2\sqrt{\varepsilon_t(1-\varepsilon_t)}$.

Analysis of Training Error. If each base classifier is slightly better than random, so that there exists $\gamma > 0$ with $\gamma_t \ge \gamma$ for all t (where $\varepsilon_t = \frac{1}{2} - \gamma_t$), then the training error drops exponentially fast in T: $\prod_t 2\sqrt{\varepsilon_t(1-\varepsilon_t)} = \prod_t \sqrt{1 - 4\gamma_t^2} \le \exp\big(-2\sum_t \gamma_t^2\big) \le \exp(-2\gamma^2 T)$. AdaBoost is indeed a boosting algorithm in the sense that it can efficiently convert a true weak learning algorithm into a strong learning algorithm. Weak learning algorithm: can always generate a classifier with a weak edge for any distribution. Strong learning algorithm: can generate a classifier with an arbitrarily low error rate, given sufficient data.

Generalization Error. The generalization error is bounded by the training error plus a term of order $\tilde{O}\big(\sqrt{Td/m}\big)$, where T is the number of boosting rounds, d is the VC dimension of the weak learner (the Vapnik-Chervonenkis dimension is a standard measure of the complexity of a space of binary functions), and m is the number of training examples.

Overfitting This bound suggests that boosting will overfit if run for too many rounds Several authors observed empirically that boosting often does not overfit, even when run for thousands of rounds Moreover, it was observed that AdaBoost would sometimes continue to drive down the generalization error long after the training error had reached zero, clearly contradicting the bound above 67

Analysis of Margins. An alternative analysis can be made in terms of the margins of the training examples. The margin of example (x, y) is $\mathrm{margin}(x, y) = \frac{y \sum_t \alpha_t h_t(x)}{\sum_t \alpha_t}$. It is a number in [-1, 1] and it is positive when the example is correctly classified. Larger margins on the training set translate into a superior upper bound on the generalization error.

Analysis of Margins. It can be shown that, for any $\theta > 0$, the generalization error is at most $\Pr[\mathrm{margin}(x, y) \le \theta] + \tilde{O}\big(\sqrt{d/(m\theta^2)}\big)$, independent of T. Boosting is particularly aggressive at increasing the margin, since it concentrates on the examples with the smallest (positive or negative) margins.

Error Rates and Margins 70

Margin Analysis Margin theory gives a qualitative explanation of the effectiveness of boosting Quantitatively, the bounds are rather weak One classifier can have a margin distribution that is better than that of another classifier, and yet be inferior in test accuracy Margin theory points to a strong connection between boosting and the support-vector machines 71

Advantages of Boosting. Simple and easy to implement. Flexible: can be combined with any learning algorithm. No requirement that the data lie in a metric space: data features don't need to be normalized, as in k-NN and SVMs (this has been a central problem in machine learning). Feature selection and fusion are naturally combined, with the same goal of minimizing an objective error function.

Advantages of Boosting (cont.). It can be shown that if a gap exists between positive and negative points, the generalization error converges to zero. No parameters to tune (except perhaps T). No prior knowledge needed about the weak learner. Provably effective. Versatile: can be applied to a wide variety of problems. Non-parametric.

Disadvantages of Boosting. The performance of AdaBoost depends on the data and the weak learner. Consistent with theory, AdaBoost can fail if the weak classifier is too complex (overfitting) or too weak (underfitting). Empirically, AdaBoost seems especially susceptible to uniform noise. Decision boundaries are often rugged.

Multi-class AdaBoost. Assume y ∈ {1, ..., k}. Direct approach (AdaBoost.M1): one can prove the same bound on the error if ε_t ≤ 1/2 at every round; otherwise, abort.

Limitation of AdaBoost.M1. Achieving ε_t ≤ 1/2 may be hard if k (the number of classes) is large. [Mukherjee and Schapire, 2010]: weak learners that perform slightly better than random chance can be used in a multi-class boosting framework. Out of scope for now.

Reducing to Binary Problems Say possible labels are {a,b,c,d,e} Each training example replaced by five {-1,+1} labeled examples 77

AdaBoost.MH. Formally, the weak hypotheses used to map X to {-1, +1}; in AdaBoost.MH they map X × Y to {-1, +1}, where Y is the set of labels. One can prove a corresponding bound on the training error.

Random Forests vs. Boosting RF Pros: More robust Faster to train (no reweighting, each split is on a small subset of data and features) Can handle missing/partial data Easier to extend to online version RF Cons: Feature selection process is not explicit Weaker performance on small size training data Weaker theoretical foundations 79

Applications of Boosting Real time face detection using a classifier cascade [Viola and Jones, 2001 and 2004] 80

The Classical Face Detection Process. A detection window is scanned across the image at every location and scale, from the smallest scale up to larger scales, giving on the order of 50,000 locations/scales per image.
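A sketch of the scanning loop that generates these locations/scales (the base window size, scale step, and stride below are illustrative values, not the ones used by Viola and Jones):

```python
def sliding_windows(img_h, img_w, base=24, scale_step=1.25, stride=2):
    """Yield (top, left, size) for every location/scale scanned by the detector."""
    size = base
    while size <= min(img_h, img_w):
        step = max(1, int(stride * size / base))   # stride grows with the scale
        for top in range(0, img_h - size + 1, step):
            for left in range(0, img_w - size + 1, step):
                yield top, left, size
        size = int(size * scale_step)
```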

Classifier is Trained on Labeled Data. Training data: 5,000 faces (all frontal) and 10^8 non-faces. Faces are normalized for scale and translation. Many variations: across individuals, illumination, and pose (rotation both in plane and out of plane).

Key Properties of Face Detection. Each image contains 10,000 to 50,000 locations/scales. Faces are rare: 0-50 per image, so there are roughly 1,000 times as many non-faces as faces. Goal: an extremely small rate of false negatives: 10^-6.

Support Vectors Challenging negative examples are extremely important 84

Classifier Cascade (Viola-Jones) For real problems results are only as good as the features used... This is the main piece of ad-hoc (or domain) knowledge Rather than the pixels, use a very large set of simple functions Sensitive to edges and other critical features of the image Computed at multiple scales Introduce a threshold to yield binary features Binary features seem to work better in practice In general, convert continuous features to binary by quantizing 85

Boosted Face Detection: Image Features. Rectangle filters, similar to Haar wavelets. Each weak classifier thresholds one filter response: $h_t(x_i) = \alpha_t$ if $f_t(x_i) > \theta_t$ and $\beta_t$ otherwise; the strong classifier is $C(x) = 1$ if $\sum_t h_t(x) > b$ and 0 otherwise. With roughly 60,000 filters and 100 thresholds each, there are about 6,000,000 unique binary features.
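Rectangle-filter responses are cheap because any rectangle sum can be read off an integral image with four lookups; a minimal sketch (NumPy assumed, not part of the original slides; a two-rectangle Haar-like feature is shown as an example):

```python
import numpy as np

def integral_image(img):
    # ii(i, j) = sum of img[0..i, 0..j]; computed in one pass via cumulative sums
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, height, width):
    # Sum of pixels in the rectangle using 4 lookups on the integral image.
    bottom, right = top + height - 1, left + width - 1
    total = ii[bottom, right]
    if top > 0:
        total -= ii[top - 1, right]
    if left > 0:
        total -= ii[bottom, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

def two_rect_feature(ii, top, left, height, width):
    # Haar-like feature: difference between left and right halves of a rectangle.
    half = width // 2
    return (rect_sum(ii, top, left, height, half)
            - rect_sum(ii, top, left + half, height, half))
```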

Feature Selection For each round of boosting: Evaluate each rectangle filter on each example Sort examples by filter values Select best threshold for each filter Select best filter/threshold (= Feature) Reweight examples 87

Example Classifier for Face Detection A classifier with 200 rectangle features was learned using AdaBoost 95% correct detection on test set with 1 in 14084 false positives. ROC curve for 200 feature classifier 88

Building Fast Classifiers In general, simple classifiers are more efficient, but they are also weaker We could define a computational risk hierarchy A nested set of classifier classes The training process is reminiscent of boosting Previous classifiers reweight the examples used to train subsequent classifiers The goal of the training process is different Minimize errors, but also minimize false positives 89

Cascaded Classifier. Each image sub-window passes through a chain of increasingly complex classifiers (1 feature, then 5 features, then 20 features, with roughly 50%, 20%, and 2% of sub-windows surviving the successive stages); a rejection at any stage labels the sub-window NON-FACE, and only sub-windows accepted by every stage are labelled FACE. A 1-feature classifier achieves 100% detection rate and about 50% false positive rate. A 5-feature classifier achieves 100% detection rate and 40% false positive rate using data from the previous stage. A 20-feature classifier achieves 100% detection rate with 10% false positive rate.
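At run time the cascade is just an early-exit loop over stages (a sketch; representing each stage as a (score function, threshold) pair is an assumption for illustration):

```python
def cascade_classify(sub_window, stages):
    """stages: list of (stage_score, threshold) pairs, cheapest first.
    Returns True only if every stage accepts the sub-window."""
    for stage_score, threshold in stages:
        if stage_score(sub_window) < threshold:
            return False          # rejected early: most non-faces stop here
    return True                   # survived all stages: report a face
```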

Output of Face Detector on Test Images 91

Solving other Face Tasks Facial Feature Localization Profile Detection Demographic Analysis 92

Feature Localization Surprising properties of Viola-Jones framework The cost of detection is not a function of image size Just the number of features Learning automatically focuses attention on key regions Conclusion: the feature detector can include a large contextual region around the feature Sub-windows rejected at final stages 93

Feature Localization Learned features reflect the task 94

Profile Detection 95

Profile Features 96