What is Learning? CS 343: Artificial Intelligence, Machine Learning. Raymond J. Mooney, University of Texas at Austin.


What is Learning?
Herbert Simon: "Learning is any process by which a system improves performance from experience."
What is the task? Classification, or problem solving / planning / control.

Classification
Assign an object or event to one of a given finite set of categories. Examples: medical diagnosis; credit card applications or transactions; fraud detection in e-commerce; worm detection in network packets; spam filtering in email; recommended articles in a newspaper; recommended books, movies, music, or jokes; financial investments; DNA sequences; spoken words; handwritten letters; astronomical images.

Problem Solving / Planning / Control
Performing actions in an environment in order to achieve a goal. Examples: solving calculus problems; playing checkers, chess, or backgammon; balancing a pole; driving a car or a jeep; flying a plane, helicopter, or rocket; controlling an elevator; controlling a character in a video game; controlling a mobile robot.

Sample Category Learning Problem
Instance language: <size, color, shape>, with size in {small, medium, large}, color in {red, blue, green}, and shape in {square, circle, triangle}. Categories: C = {positive, negative}. Training data D includes, e.g., <small, red, circle>: positive.

Hypothesis Selection
Many hypotheses are usually consistent with the training data, for example:
    red & circle
    (small & circle) or (large & red)
    (small & red & circle) or (large & red & circle)
    not [ (red & triangle) or (blue & circle) ]
    not [ (small & red & triangle) or (large & blue & circle) ]
Bias: any criterion other than consistency with the training data that is used to select a hypothesis.
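To make "consistent with the training data" concrete, here is a small Python sketch (not from the slides); the dictionary encoding of instances and the single listed training example are illustrative, with the hypothesis "red & circle" taken from the list above.

    # A conjunctive hypothesis is encoded as a dict of required feature values;
    # features it does not mention are unconstrained (the "?" wild card).

    def matches(hypothesis, instance):
        # True if the instance satisfies every constraint in the hypothesis.
        return all(instance[f] == v for f, v in hypothesis.items())

    def consistent(hypothesis, data):
        # True if the hypothesis classifies every training example correctly.
        return all(matches(hypothesis, x) == label for x, label in data)

    # Training data D from the slide (only one example is listed there).
    D = [({"size": "small", "color": "red", "shape": "circle"}, True)]

    red_circle = {"color": "red", "shape": "circle"}
    print(consistent(red_circle, D))   # True: "red & circle" fits D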

Generalization
Hypotheses must generalize to correctly classify instances not in the training data. Simply memorizing training examples is a consistent hypothesis that does not generalize. Occam's razor: finding a simple hypothesis helps ensure generalization.

Hypothesis Space
Restrict learned functions a priori to a given hypothesis space, H, of functions h(x) that can be considered as definitions of c(x). For learning concepts on instances described by n discrete-valued features, consider the space of conjunctive hypotheses represented by a vector of n constraints <c_1, c_2, ..., c_n>, where each c_i is either ?, a wild card indicating no constraint on the i-th feature; a specific value from the domain of the i-th feature; or Ø, indicating no value is acceptable. Sample conjunctive hypotheses are <big, red, ?>, <?, ?, ?> (the most general hypothesis), and <Ø, Ø, Ø> (the most specific hypothesis).

Inductive Learning Hypothesis
Any function that is found to approximate the target concept well on a sufficiently large set of training examples will also approximate the target function well on unobserved examples. This assumes that the training and test examples are drawn independently from the same underlying distribution. It is a fundamentally unprovable hypothesis unless additional assumptions are made about the target concept and the notion of approximating the target function well on unobserved examples is defined appropriately (cf. computational learning theory).

Evaluation of Classification Learning
Classification accuracy (% of instances classified correctly), measured on an independent test set.
Training time (efficiency of the training algorithm).
Testing time (efficiency of subsequent classification).

Category Learning as Search
Category learning can be viewed as searching the hypothesis space for one (or more) hypotheses that are consistent with the training data. Consider an instance space consisting of n binary features, which therefore has 2^n instances. For conjunctive hypotheses there are 4 choices for each feature (Ø, T, F, ?), so there are 4^n syntactically distinct hypotheses. However, all hypotheses with one or more Øs are equivalent, so there are 3^n + 1 semantically distinct hypotheses. The target binary categorization function could in principle be any of the 2^(2^n) possible functions on n input bits. Therefore, conjunctive hypotheses are a small subset of the space of possible functions, but both are intractably large. All reasonable hypothesis spaces are intractably large or even infinite.

Learning by Enumeration
For any finite or countably infinite hypothesis space, one can simply enumerate and test hypotheses one at a time until a consistent one is found:

    For each h in H do:
        If h is consistent with the training data D, then terminate and return h.

This algorithm is guaranteed to terminate with a consistent hypothesis if one exists; however, it is obviously computationally intractable for almost any practical problem.
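A minimal Python sketch of learning by enumeration over the conjunctive hypothesis space described above (the feature domains come from the earlier sample problem; the toy training data and the decision to skip the all-rejecting Ø hypotheses are my own simplifications):

    from itertools import product

    # Enumerate-and-test over conjunctive hypotheses <c_1, ..., c_n>, where each
    # c_i is a specific value or "?" (no constraint). Hypotheses containing Ø are
    # skipped: they reject everything and can never cover a positive example.

    DOMAINS = {"size": ["small", "medium", "large"],
               "color": ["red", "blue", "green"],
               "shape": ["square", "circle", "triangle"]}

    def covers(h, x):
        return all(c == "?" or x[f] == c for f, c in zip(DOMAINS, h))

    def consistent(h, data):
        return all(covers(h, x) == label for x, label in data)

    def enumerate_and_test(data):
        choices = [dom + ["?"] for dom in DOMAINS.values()]
        for h in product(*choices):          # 4^n candidates for n = 3 features
            if consistent(h, data):
                return h                     # first consistent hypothesis found
        return None                          # no conjunctive hypothesis fits

    # Toy data (labels invented for illustration).
    data = [({"size": "small", "color": "red", "shape": "circle"}, True),
            ({"size": "large", "color": "blue", "shape": "circle"}, False)]
    print(enumerate_and_test(data))          # -> ('small', 'red', 'circle')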

Efficient Learning
Is there a way to learn conjunctive concepts without enumerating them? How do human subjects learn conjunctive concepts? Is there a way to efficiently find an unconstrained Boolean function consistent with a set of discrete-valued training instances? If so, is it a useful or practical algorithm?

Conjunctive Rule Learning
Conjunctive descriptions are easily learned by finding all commonalities shared by all positive examples. For example, given training data whose positive examples include <small, red, circle>, the learned rule is: red & circle -> positive. One must then check consistency with the negative examples; if the rule is inconsistent with them, no conjunctive rule exists.

Limitations of Conjunctive Rules
If a concept does not have a single set of necessary and sufficient conditions, conjunctive learning fails. For example, with <small, red, circle>: positive and example #5, <medium, red, circle>: negative, the learned rule red & circle -> positive is inconsistent with negative example #5, so no consistent conjunctive rule exists.
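A minimal Python sketch of the conjunctive rule learner just described (the dictionary encoding and the particular examples are illustrative; the positives are chosen so that the learned rule is the slide's "red & circle"):

    def learn_conjunctive_rule(positives, negatives):
        # Keep exactly the feature values shared by ALL positive examples.
        rule = dict(positives[0])
        for x in positives[1:]:
            rule = {f: v for f, v in rule.items() if x.get(f) == v}
        # A consistent conjunctive rule must not cover any negative example.
        for x in negatives:
            if all(x.get(f) == v for f, v in rule.items()):
                return None   # inconsistent: no conjunctive rule exists
        return rule

    positives = [{"size": "small", "color": "red", "shape": "circle"},
                 {"size": "big",   "color": "red", "shape": "circle"}]
    negatives = [{"size": "big",   "color": "blue", "shape": "circle"}]
    print(learn_conjunctive_rule(positives, negatives))
    # -> {'color': 'red', 'shape': 'circle'}, i.e. "red & circle -> positive"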

Decision Trees
Tree-based classifiers for instances represented as feature vectors. Internal nodes test features, there is one branch for each value of the feature, and leaves specify the category. (The slide shows two example trees over the color and shape features, one with leaves pos/neg and one with leaves A, B, C.) Decision trees can represent arbitrary conjunctions and disjunctions, and can represent any classification function over discrete feature vectors. A tree can be rewritten as a set of rules, i.e., disjunctive normal form (DNF). For example:
    red & circle -> pos
    red & circle -> A; blue -> B; red & square -> B; green -> C; red & triangle -> C

Properties of Decision Tree Learning
Continuous (real-valued) features can be handled by allowing nodes to split a real-valued feature into two ranges based on a threshold (e.g., length < 3 and length >= 3). Classification trees have discrete class labels at the leaves; regression trees allow real-valued outputs at the leaves. Algorithms for finding consistent trees are efficient for processing large amounts of training data for data mining tasks. Methods have been developed for handling noisy training data (both class and feature noise) and for handling missing feature values.

Top-Down Decision Tree Induction
Recursively build a tree top-down by divide and conquer. Given the training examples <big, red, circle>: +, <small, red, circle>: +, <small, red, square>: -, <big, blue, circle>: -, the root splits on color: the red branch receives <big, red, circle>: +, <small, red, circle>: +, and <small, red, square>: -, the blue branch receives <big, blue, circle>: - and becomes a negative leaf, and the empty green branch is labeled negative. The red branch is then split on shape: circle -> positive (<big, red, circle>: +, <small, red, circle>: +), square -> negative (<small, red, square>: -), and the empty triangle branch is labeled positive.

Decision Tree Induction Pseudocode

    DTree(examples, features) returns a tree:
        If all examples are in one category, return a leaf node with that category label.
        Else if the set of features is empty, return a leaf node with the category label that is most common in examples.
        Else pick a feature F and create a node R for it.
            For each possible value v_i of F:
                Let examples_i be the subset of examples that have value v_i for F.
                Add an outgoing edge E to node R labeled with the value v_i.
                If examples_i is empty
                    then attach a leaf node to edge E labeled with the category that is most common in examples
                    else call DTree(examples_i, features - {F}) and attach the resulting tree as the subtree under edge E.
        Return the subtree rooted at R.

Picking a Good Split Feature
The goal is to have the resulting tree be as small as possible, per Occam's razor. However, finding a minimal decision tree (in nodes, leaves, or depth) is an NP-hard optimization problem. The top-down divide-and-conquer method does a greedy search for a simple tree but does not guarantee finding the smallest. (General lesson in ML: greed is good.) We want to pick a feature that creates subsets of examples that are relatively pure in a single class, so they are closer to being leaf nodes. There are a variety of heuristics for picking a good test; a popular one is based on information gain, which originated with the ID3 system of Quinlan (1979).

Entropy
The entropy (disorder, impurity) of a set of examples S relative to a binary classification is

    Entropy(S) = - p_1 log_2(p_1) - p_0 log_2(p_0)

where p_1 is the fraction of positive examples in S and p_0 is the fraction of negatives. If all examples are in one category, entropy is zero (we define 0 log(0) = 0). If the examples are equally mixed (p_1 = p_0 = 0.5), entropy is at its maximum of 1. Entropy can be viewed as the number of bits required on average to encode the class of an example in S when data compression (e.g., Huffman coding) is used to give shorter codes to more likely cases. For multi-class problems with c categories, entropy generalizes to

    Entropy(S) = - Σ_{i=1..c} p_i log_2(p_i)

Entropy Plot for Binary Classification
(The slide shows the plot of binary entropy as a function of the fraction of positive examples, peaking at 1 when p_1 = 0.5.)

Information Gain
The information gain of a feature F is the expected reduction in entropy resulting from splitting on this feature:

    Gain(S, F) = Entropy(S) - Σ_{v in Values(F)} (|S_v| / |S|) Entropy(S_v)

where S_v is the subset of S having value v for feature F; that is, the entropy of each resulting subset is weighted by its relative size. Example, with training data <big, red, circle>: +, <small, red, circle>: +, <small, red, square>: -, <big, blue, circle>: - (2+, 2-, so E = 1):

    size:  big -> 1+, 1- (E = 1); small -> 1+, 1- (E = 1);        Gain = 1 - (0.5*1 + 0.5*1) = 0
    color: red -> 2+, 1- (E = 0.918); blue -> 0+, 1- (E = 0);     Gain = 1 - (0.75*0.918 + 0.25*0) = 0.311
    shape: circle -> 2+, 1- (E = 0.918); square -> 0+, 1- (E = 0); Gain = 1 - (0.75*0.918 + 0.25*0) = 0.311
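The entropy and information-gain computations above can be checked with a short Python sketch; the four training examples are the ones in the slide's example, and the feature names (size, color, shape) are assumed from the earlier instance language:

    from math import log2

    def entropy(examples):
        # Binary entropy of a list of (instance, label) pairs.
        n = len(examples)
        p1 = sum(1 for _, label in examples if label) / n
        p0 = 1.0 - p1
        return sum(-p * log2(p) for p in (p1, p0) if p > 0)

    def gain(examples, feature):
        # Expected reduction in entropy from splitting on `feature`.
        n = len(examples)
        remainder = 0.0
        for v in {x[feature] for x, _ in examples}:
            subset = [(x, y) for x, y in examples if x[feature] == v]
            remainder += len(subset) / n * entropy(subset)
        return entropy(examples) - remainder

    D = [({"size": "big",   "color": "red",  "shape": "circle"}, True),
         ({"size": "small", "color": "red",  "shape": "circle"}, True),
         ({"size": "small", "color": "red",  "shape": "square"}, False),
         ({"size": "big",   "color": "blue", "shape": "circle"}, False)]

    for f in ("size", "color", "shape"):
        print(f, round(gain(D, f), 3))
    # size 0.0, color 0.311, shape 0.311 -- matching the slide's example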

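And a runnable Python rendering of the DTree pseudocode, using information gain as the split heuristic (a sketch under the same assumptions as the previous snippet; the dict-based tree representation and the tie-breaking behavior are implementation choices, not part of the slides):

    from collections import Counter
    from math import log2

    def entropy(examples):
        counts = Counter(label for _, label in examples)
        n = len(examples)
        return -sum(c / n * log2(c / n) for c in counts.values())

    def gain(examples, feature):
        n = len(examples)
        remainder = 0.0
        for v in {x[feature] for x, _ in examples}:
            subset = [(x, y) for x, y in examples if x[feature] == v]
            remainder += len(subset) / n * entropy(subset)
        return entropy(examples) - remainder

    def most_common_label(examples):
        return Counter(label for _, label in examples).most_common(1)[0][0]

    def dtree(examples, features, values):
        # DTree(examples, features) from the pseudocode above; `values[f]` lists
        # every possible value of feature f so empty branches can be labeled
        # with the most common category of the parent's examples.
        labels = {label for _, label in examples}
        if len(labels) == 1:                        # all examples in one category
            return labels.pop()
        if not features:                            # no features left to split on
            return most_common_label(examples)
        best = max(features, key=lambda g: gain(examples, g))   # greedy choice
        node = {"feature": best, "branches": {}}
        for v in values[best]:
            subset = [(x, y) for x, y in examples if x[best] == v]
            if not subset:                          # empty branch -> majority class
                node["branches"][v] = most_common_label(examples)
            else:
                rest = [g for g in features if g != best]
                node["branches"][v] = dtree(subset, rest, values)
        return node

    values = {"size": ["big", "small"],
              "color": ["red", "blue", "green"],
              "shape": ["circle", "square", "triangle"]}
    D = [({"size": "big",   "color": "red",  "shape": "circle"}, "+"),
         ({"size": "small", "color": "red",  "shape": "circle"}, "+"),
         ({"size": "small", "color": "red",  "shape": "square"}, "-"),
         ({"size": "big",   "color": "blue", "shape": "circle"}, "-")]
    print(dtree(D, ["size", "color", "shape"], values))
    # -> a tree with color at the root and a shape test under the red branch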
Hypothesis Space Search
Top-down decision tree induction performs batch learning that processes all training instances at once, rather than incremental learning that updates a hypothesis after each example. It performs hill-climbing (greedy search) that may only find a locally optimal solution. It is guaranteed to find a tree consistent with any conflict-free training set (i.e., one in which identical feature vectors are always assigned the same class), but not necessarily the simplest such tree. It finds a single discrete hypothesis, so there is no way to provide confidences or create useful queries.

Another Red-Circle Data Set
Training examples include <small, red, circle>: positive and (example 3) <small, red, square>: negative.

Weka J48 Trace 1

    data> java weka.classifiers.trees.J48 -t figure.arff -T figure.arff -U -M 1

    Options: -U -M 1

    J48 unpruned tree
    ------------------
    color = blue: negative (1.0)
    color = red
    |   shape = circle: positive (2.0)
    |   shape = square: negative (1.0)
    |   shape = triangle: positive (0.0)
    color = green: positive (0.0)

    Number of Leaves  : 5
    Size of the tree  : 7

    Time taken to build model: 0.03 seconds
    Time taken to test model on training data: 0 seconds

A Data Set Requiring Disjunction
Training examples include <small, red, circle>: positive and (example 5) <small, green, circle>: positive.

Weka J48 Trace 2

    data> java weka.classifiers.trees.J48 -t figure3.arff -T figure3.arff -U -M 1

    Options: -U -M 1

    J48 unpruned tree
    ------------------
    shape = circle
    |   color = blue: negative (1.0)
    |   color = red: positive (2.0)
    |   color = green: positive (1.0)
    shape = square: positive (0.0)
    shape = triangle: negative (1.0)

    Number of Leaves  : 5
    Size of the tree  : 7

    Time taken to build model: 0.02 seconds
    Time taken to test model on training data: 0 seconds

Weka J48 Trace 3

    data> java weka.classifiers.trees.J48 -t contact-lenses.arff

    J48 pruned tree
    ------------------
    tear-prod-rate = reduced: none (12.0)
    tear-prod-rate = normal
    |   astigmatism = no: soft (6.0/1.0)
    |   astigmatism = yes
    |   |   spectacle-prescrip = myope: hard (3.0)
    |   |   spectacle-prescrip = hypermetrope: none (3.0/1.0)

    Number of Leaves  : 4
    Size of the tree  : 7

    Time taken to build model: 0.03 seconds
    Time taken to test model on training data: 0 seconds

    === Error on training data ===

    Correctly Classified Instances        22       91.6667 %
    Incorrectly Classified Instances       2        8.3333 %
    Kappa statistic                        0.8447
    Mean absolute error                    0.0833
    Root mean squared error                0.204
    Relative absolute error               22.6257 %
    Root relative squared error           48.223  %
    Total Number of Instances             24

    === Confusion Matrix ===

      a  b  c   <-- classified as
      5  0  0 |  a = soft
      0  3  1 |  b = hard
      1  0 14 |  c = none

    === Stratified cross-validation ===

    Correctly Classified Instances        20       83.3333 %
    Incorrectly Classified Instances       4       16.6667 %
    Kappa statistic                        0.71
    Mean absolute error                    0.15
    Root mean squared error                0.3249
    Relative absolute error               39.7059 %
    Root relative squared error           74.3898 %
    Total Number of Instances             24

    === Confusion Matrix ===

      a  b  c   <-- classified as
      5  0  0 |  a = soft
      0  3  1 |  b = hard
      1  2 12 |  c = none
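The accuracy and kappa statistic in the trace can be recomputed from the training-data confusion matrix; a short Python sketch (the matrix values are the ones shown above):

    # Rows are the actual classes (soft, hard, none), columns the predictions.
    cm = [[5, 0, 0],
          [0, 3, 1],
          [1, 0, 14]]

    n = sum(sum(row) for row in cm)
    observed = sum(cm[i][i] for i in range(3)) / n          # accuracy
    # Chance agreement: product of matching row and column marginals.
    expected = sum(sum(cm[i]) * sum(row[i] for row in cm) for i in range(3)) / n ** 2
    kappa = (observed - expected) / (1 - expected)

    print(round(observed * 100, 4))   # 91.6667 % correctly classified
    print(round(kappa, 4))            # 0.8447, as reported by Weka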

Evaluating Inductive Hypotheses
The accuracy of a hypothesis on the training data is obviously biased, since the hypothesis was constructed to fit that data. Accuracy must be evaluated on an independent (usually disjoint) test set. Average over multiple train/test splits to get an accurate measure of accuracy. K-fold cross validation averages over K trials, using each example exactly once as a test case.

K-Fold Cross Validation

    Randomly partition the data D into k disjoint equal-sized subsets P_1, ..., P_k.
    For i from 1 to k do:
        Use P_i for the test set and the remaining data for training: S_i = (D - P_i)
        h_A = L_A(S_i);  h_B = L_B(S_i)
        δ_i = error_{P_i}(h_A) - error_{P_i}(h_B)
    Return the average difference in error: δ = (1/k) Σ_{i=1..k} δ_i

K-Fold Cross Validation Comments
Every example gets used as a test example once and as a training example k - 1 times. All test sets are independent; however, the training sets overlap significantly. This measures the accuracy of hypotheses generated from ((k - 1)/k)·|D| training examples. The standard method is 10-fold. If k is low, there is not a sufficient number of train/test trials; if k is high, the test set is small, so test variance is high, and run time is increased. If k = |D|, the method is called leave-one-out cross validation.

Learning Curves
A learning curve plots accuracy vs. the size of the training set. Has the maximum accuracy (Bayes optimal) nearly been reached, or will more examples help? Is one system better when training data is limited? Most learners eventually converge to Bayes optimal given sufficient training examples.
(The slide shows test accuracy, up to 100%, plotted against the number of training examples, with reference lines for Bayes optimal and random guessing.)

Cross Validation Learning Curves

    Split the data into k equal partitions.
    For trial i = 1 to k do:
        Use partition i for testing and the union of all other partitions for training.
        For each desired point p on the learning curve do:
            For each learning system L:
                Train L on the first p examples of the training set and record
                training time, training accuracy, and learned concept complexity.
                Test L on the test set, recording testing time and test accuracy.
    Compute the average for each performance statistic across the k trials.
    Plot curves for any desired performance statistic versus training set size.
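A minimal Python sketch of the k-fold comparison procedure above; the learner and error-function interfaces (learn_a, learn_b, error) are assumptions made for illustration, not part of the slides:

    import random

    def k_fold_compare(data, k, learn_a, learn_b, error):
        # Compare two learning systems L_A and L_B with k-fold cross validation.
        # `learn_a`/`learn_b` take a training set and return a hypothesis;
        # `error(h, test)` returns the error rate of h on the test partition.
        data = list(data)
        random.shuffle(data)                      # random partition of D
        folds = [data[i::k] for i in range(k)]    # k disjoint, near-equal subsets P_1..P_k
        deltas = []
        for i in range(k):
            test = folds[i]                       # P_i is the test set
            train = [x for j, fold in enumerate(folds) if j != i for x in fold]
            h_a, h_b = learn_a(train), learn_b(train)
            deltas.append(error(h_a, test) - error(h_b, test))   # delta_i
        return sum(deltas) / k                    # average difference in error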