CS 188: Artificial Intelligence, Fall 2008. Lecture 24: Perceptrons II (Dan Klein, UC Berkeley)


Lecture 24: Perceptrons II
11/24/2008
Dan Klein, UC Berkeley

Feature Extractors

A feature extractor maps inputs to feature vectors. For example, the message "Dear Sir. First, I must solicit your confidence in this transaction, this is by virture of its nature as being utterly confidencial and top secret." might map to the feature vector:

W=dear : 1
W=sir : 1
W=this : 2
W=wish : 0
MISSPELLED : 2
NAMELESS : 1
ALL_CAPS : 0
NUM_URLS : 0

Many classifiers take feature vectors as inputs. Feature vectors are usually very sparse, so use sparse encodings (i.e. only represent the non-zero keys).

Some (Vague) Biology

Very loose inspiration: human neurons.

The Binary Perceptron

Inputs are feature values, and each feature has a weight. The sum of the weighted features is the activation. If the activation is positive, output 1; if negative, output 0. (Figure: inputs f1, f2, f3 are multiplied by weights w1, w2, w3, summed by Σ, and passed through a >0 threshold.)

Example: Spam

Imagine three features:
free (number of occurrences of "free")
money (occurrences of "money")
BIAS (always has value 1)

For the input "free money", the feature vector is
BIAS : 1
free : 1
money : 1
and the weight vector is
BIAS : -3
free : 4
money : 2

Binary Decision Rule

In the space of feature vectors, any weight vector is a hyperplane: one side will be class +1, the other will be class -1. (Figure: with the weights above, the plane with axes "free" and "money" is split by the decision boundary into a +1 = SPAM region and a -1 = HAM region.)
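To make the sparse-encoding point concrete, here is a minimal sketch (not from the lecture) of a bag-of-words feature extractor that stores only non-zero keys in a Python dict; any feature names beyond the W=word, ALL_CAPS, NUM_URLS, and BIAS ones shown above would be assumptions for illustration.

    from collections import defaultdict

    def extract_features(text):
        """Map an input message to a sparse feature vector (only non-zero keys are stored)."""
        features = defaultdict(float)
        words = text.split()
        for word in words:
            features["W=" + word.lower()] += 1                      # word-count features
        features["ALL_CAPS"] = sum(1 for w in words if w.isupper() and len(w) > 1)
        features["NUM_URLS"] = sum(1 for w in words if w.lower().startswith("http"))
        features["BIAS"] = 1                                        # intercept feature, always 1
        return dict(features)

    print(extract_features("Dear Sir, send FREE MONEY to http://example.com"))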

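And a hedged sketch of the binary decision rule itself, plugging in the spam weights from the slide; the dot helper and the printed output are illustrative, while the weight values are the slide's.

    def dot(weights, features):
        """Sparse dot product: iterate only over the non-zero feature keys."""
        return sum(weights.get(f, 0.0) * value for f, value in features.items())

    weights = {"BIAS": -3, "free": 4, "money": 2}    # weight vector from the slide
    features = {"BIAS": 1, "free": 1, "money": 1}    # feature vector for "free money"

    activation = dot(weights, features)              # -3*1 + 4*1 + 2*1 = 3
    print(activation, "SPAM (+1)" if activation > 0 else "HAM (-1)")   # 3 SPAM (+1)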
Multiclass Decision Rule

If we have more than two classes: have a weight vector for each class, calculate an activation for each class, and the highest activation wins.

Example

For the input "win the vote", the feature vector is
BIAS : 1
win : 1
game : 0
vote : 1
the : 1
and the three class weight vectors are
BIAS : -2, win : 4, game : 4, vote : 0
BIAS : 1, win : 2, game : 0, vote : 4
BIAS : 2, win : 0, game : 2, vote : 0

The Perceptron Update Rule

Start with zero weights. Pick up training instances one by one and try to classify them. If correct, no change! If wrong: lower the score of the wrong answer, raise the score of the right answer.

Example

The slide steps through the instances "win the vote", "win the election", and "win the game", filling in the (initially zero) per-class weights for BIAS, win, game, vote, and the after each mistake.

Examples: Perceptron Separable Case

(Two demo slides showing the perceptron run on a linearly separable dataset.)
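As a quick illustration (not from the slides), here is the highest-activation-wins rule in Python, using the feature vector and the three class weight vectors above; the class names c0, c1, c2 are placeholders, since the slide does not name the classes.

    def dot(weights, features):
        return sum(weights.get(f, 0.0) * value for f, value in features.items())

    # The three per-class weight vectors from the slide (class names are placeholders).
    class_weights = {
        "c0": {"BIAS": -2, "win": 4, "game": 4, "vote": 0},
        "c1": {"BIAS": 1, "win": 2, "game": 0, "vote": 4},
        "c2": {"BIAS": 2, "win": 0, "game": 2, "vote": 0},
    }

    features = {"BIAS": 1, "win": 1, "game": 0, "vote": 1, "the": 1}   # "win the vote"

    # Multiclass decision rule: one activation per class, highest wins.
    activations = {c: dot(w, features) for c, w in class_weights.items()}
    print(activations, "->", max(activations, key=activations.get))    # {'c0': 2, 'c1': 7, 'c2': 2} -> c1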

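A minimal sketch of the multiclass perceptron update rule, assuming training data given as (sparse feature dict, label) pairs; on a mistake it subtracts the features from the predicted class's weights and adds them to the true class's weights, exactly the lower-wrong / raise-right rule above.

    from collections import defaultdict

    def dot(w, f):
        return sum(w.get(k, 0.0) * v for k, v in f.items())

    def train_perceptron(data, classes, passes=5):
        """data: list of (features_dict, true_label). Returns per-class weight dicts."""
        weights = {c: defaultdict(float) for c in classes}
        for _ in range(passes):
            for features, label in data:
                guess = max(classes, key=lambda c: dot(weights[c], features))
                if guess != label:                    # mistake-driven: update only on errors
                    for k, v in features.items():
                        weights[guess][k] -= v        # lower the score of the wrong answer
                        weights[label][k] += v        # raise the score of the right answer
        return weights

Feeding this the output of a feature extractor like the earlier sketch, over several passes, produces the per-class weights used by the decision rule.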
Mistake-Driven Classification

In naïve Bayes, the parameters come from data statistics, have a causal interpretation, and are estimated in one pass through the data. For the perceptron, the parameters come from reactions to mistakes, have a discriminative interpretation, and we go through the data until held-out accuracy maxes out. (Figure: the data are split into Training Data, Held-Out Data, and Test Data.)

Properties of Perceptrons

Separability: some parameters get the training set perfectly correct.
Convergence: if the training set is separable, the perceptron will eventually converge (binary case).
Mistake Bound: the maximum number of mistakes (binary case) is related to the margin, or degree of separability.
(Figures: a separable and a non-separable point set.)

Examples: Perceptron Non-Separable Case

(Two demo slides showing the perceptron run on a non-separable dataset.)

Issues with Perceptrons

Overtraining: test / held-out accuracy usually rises, then falls. Overtraining isn't quite as bad as overfitting, but it is similar.
Regularization: if the data isn't separable, the weights might thrash around; averaging weight vectors over time can help (the averaged perceptron).
Mediocre generalization: the perceptron finds a barely separating solution.

Fixing the Perceptron

The main problem with the perceptron is that the update size τ is uncontrolled: sometimes it updates way too much, sometimes way too little. Solution: choose an update size that fixes the current mistake (by 1)... but choose the minimum such change.
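The "averaging weight vectors over time" fix is easy to sketch; this is an illustrative averaged perceptron, not code from the course: keep a running sum of the weight vectors after every training instance and return the average, which damps the thrashing on non-separable data.

    from collections import defaultdict

    def dot(w, f):
        return sum(w.get(k, 0.0) * v for k, v in f.items())

    def train_averaged_perceptron(data, classes, passes=5):
        weights = {c: defaultdict(float) for c in classes}
        totals = {c: defaultdict(float) for c in classes}   # running sums of the weights
        steps = 0
        for _ in range(passes):
            for features, label in data:
                guess = max(classes, key=lambda c: dot(weights[c], features))
                if guess != label:
                    for k, v in features.items():
                        weights[guess][k] -= v
                        weights[label][k] += v
                for c in classes:                            # accumulate after every instance,
                    for k, v in weights[c].items():          # not just after mistakes
                        totals[c][k] += v
                steps += 1
        # return the time-averaged weight vectors
        return {c: {k: v / steps for k, v in totals[c].items()} for c in classes}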

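The mistake bound mentioned above has a standard precise form (Novikoff's theorem); the slide states it only informally, so treat the statement below as background rather than as the slide's own equation. If every training point satisfies ‖x_i‖ ≤ R and some unit-length weight vector separates the data with margin γ > 0, then the number of mistakes M made by the binary perceptron satisfies

    M \;\le\; \left(\frac{R}{\gamma}\right)^{2}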
Minimum Correcting Update

We want the minimum-sized update that corrects the current mistake. The minimum is not at τ = 0 (otherwise no error would have been made in the first place), so the minimum is attained where the correcting constraint holds with equality.

MIRA

In practice, it's bad to make updates that are too large: the example may be labeled incorrectly. Solution: cap the maximum possible value of τ. This gives an algorithm called MIRA, which usually converges faster than the perceptron and usually performs better, especially on noisy data.

Linear Separators

Which of these linear separators is optimal?

Support Vector Machines

Maximizing the margin is good according to both intuition and theory. Only the support vectors matter; other training examples are ignorable. Support vector machines (SVMs) find the separator with the maximum margin. Basically, SVMs are MIRA where you optimize over all examples at once. (The slide contrasts the MIRA objective with the SVM objective.)

Summary

Naïve Bayes: build classifiers using a model of the training data; smoothing the estimates is important in real systems; classifier confidences are useful, when you can get them.
Perceptrons / MIRA: make fewer assumptions about the data; mistake-driven learning; multiple passes through the data.

Similarity Functions

Similarity functions are very important in machine learning. Topic for the next class: kernels, similarity functions with special properties that are the basis for a lot of advanced machine learning (e.g. SVMs).
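A hedged sketch of one capped MIRA step. The transcript does not carry the slide's formula, so the step size below assumes the form used in the CS 188 classification project, τ = min(C, ((w_guess − w_label) · f + 1) / (2 f · f)); the constant C and the helper names are illustrative.

    def dot(a, b):
        return sum(v * b.get(k, 0.0) for k, v in a.items())

    def mira_update(weights, features, label, guess, C=0.005):
        """One capped MIRA step on sparse dict weights; assumes the tau formula in the lead-in."""
        if guess == label:
            return
        f_dot_f = sum(v * v for v in features.values())
        if f_dot_f == 0:
            return
        tau = (dot(weights[guess], features) - dot(weights[label], features) + 1.0) / (2.0 * f_dot_f)
        tau = min(C, tau)
        for k, v in features.items():
            weights[guess][k] = weights[guess].get(k, 0.0) - tau * v   # shrink the wrong class
            weights[label][k] = weights[label].get(k, 0.0) + tau * v   # boost the right class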

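The MIRA-versus-SVM contrast can be written out explicitly; the objectives on the slide are images, so the binary-case forms below are a reconstruction of the standard ones rather than a transcription. MIRA's single step stays as close as possible to the old weights while fixing the current example, whereas the hard-margin SVM optimizes over all examples at once:

    \text{MIRA (one example):}\quad \min_{w}\ \tfrac{1}{2}\,\lVert w - w_{\text{old}}\rVert^{2}
    \quad \text{s.t.}\quad y_n\,(w \cdot x_n) \ge 1

    \text{SVM:}\quad \min_{w}\ \tfrac{1}{2}\,\lVert w\rVert^{2}
    \quad \text{s.t.}\quad y_i\,(w \cdot x_i) \ge 1 \ \ \text{for all examples } i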
Case-Based Reasoning

Similarity for classification: case-based reasoning predicts an instance's label using similar instances. Nearest-neighbor classification: 1-NN copies the label of the most similar data point; k-NN lets the k nearest neighbors vote (you have to devise a weighting scheme). The key issue is how to define similarity. Trade-off: small k gives relevant neighbors, large k gives smoother functions. Sound familiar?

Parametric / Non-parametric

Parametric models have a fixed set of parameters; more data means better settings. In non-parametric models, the complexity of the classifier increases with the data; they are better in the limit, often worse in the non-limit. (k)-NN is non-parametric. (Figure: the true function and k-NN fits with 2, 10, 100, and 10000 examples.)
[DEMO] http://www.cs.cmu.edu/~zhuxj/courseproject/knndemo/knn.html

Collaborative Filtering

Ever wonder how online merchants decide what products to recommend to you? The simplest idea is to recommend the most popular items to everyone. Not entirely crazy! (Why?) You can do better if you know something about the customer (e.g. what they've bought). Better idea: recommend items that similar customers bought. A popular technique: collaborative filtering. Define a similarity function over customers (how?), then look at purchases made by people with high similarity. Trade-off: relevance of the comparison set vs. confidence in the predictions. How can this go wrong? (Figure: a recommendation page, "You are here.")

Nearest-Neighbor Classification

Nearest neighbor for digits: take a new image, compare it to all training images, and assign the label based on the closest example. Encoding: the image is a vector of pixel intensities. What's the similarity function? The dot product of two image vectors? Usually we normalize the vectors so that ‖x‖ = 1; then min = 0 (when?) and max = 1 (when?).

Basic Similarity

Many similarities are based on feature dot products; if the features are just the pixels, the similarity is the dot product of the pixel vectors. Note: not all similarities are of this form.

Invariant Metrics

Better distances use knowledge about vision. Invariant metrics: similarities that are invariant under certain transformations, such as rotation, scaling, translation, and stroke thickness. E.g. a 16 x 16 image is 256 pixels, i.e. a point in 256-dimensional space; similar digits can still have small similarity in R^256 (why?). How do we incorporate invariance into similarities?

This and the next few slides are adapted from Xiao Hu, UIUC.
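A minimal k-NN sketch under the assumptions above: examples are intensity vectors, normalized to unit length, with the dot product as the similarity. The tiny 4-pixel "images" and their labels are made up for illustration.

    import math

    def normalize(v):
        """Scale a vector to unit length so dot products of non-negative vectors lie in [0, 1]."""
        n = math.sqrt(sum(x * x for x in v))
        return [x / n for x in v] if n > 0 else v

    def knn_classify(query, examples, k=3):
        """examples: list of (vector, label). Let the k most similar training points vote."""
        q = normalize(query)
        sims = [(sum(a * b for a, b in zip(q, normalize(vec))), label) for vec, label in examples]
        votes = {}
        for _, label in sorted(sims, reverse=True)[:k]:
            votes[label] = votes.get(label, 0) + 1
        return max(votes, key=votes.get)

    train = [([1, 1, 0, 0], "A"), ([1, 0.8, 0, 0.1], "A"),
             ([0, 0, 1, 1], "B"), ([0.1, 0, 0.9, 1], "B")]
    print(knn_classify([0.9, 1, 0.1, 0], train, k=3))   # -> A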

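The "recommend items that similar customers bought" idea in a few lines; this is an illustrative user-to-user scheme with made-up purchase sets, not an algorithm specified in the lecture. Customer similarity here is the overlap of purchase sets, and candidate items are weighted by how similar their buyers are to the target customer.

    def overlap(a, b):
        """One possible similarity over customers: Jaccard overlap of their purchase sets."""
        return len(a & b) / len(a | b) if (a | b) else 0.0

    def recommend(target, purchases, top_k=2):
        """purchases: dict customer -> set of items. Recommend items bought by similar customers."""
        scores = {}
        for other, items in purchases.items():
            if other == target:
                continue
            sim = overlap(purchases[target], items)
            if sim <= 0:
                continue
            for item in items - purchases[target]:          # only items the target hasn't bought
                scores[item] = scores.get(item, 0.0) + sim   # weight by customer similarity
        return sorted(scores, key=scores.get, reverse=True)[:top_k]

    purchases = {"you": {"book", "lamp"},
                 "alice": {"book", "lamp", "kettle"},
                 "bob": {"skates", "helmet"}}
    print(recommend("you", purchases))   # ['kettle'], bought by the most similar customer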
Rotation Invariant Metrics

As an image is rotated, each example traces out a curve in R^256. Rotation invariant similarity: s'(A, B) = max over rotations r of s(r(A), r(B)), e.g. the highest similarity between the two images' rotation lines.

Tangent Families

Problems with s': it is hard to compute, and it allows large transformations (a 6 can rotate into a 9). Tangent distance: a first-order approximation at the original points. Easy to compute; models small rotations.

Template Deformation

Deformable templates: an ideal version of each category is best-fit to the image using minimum variance, with a cost for high distortion of the template and a cost for image points being far from the distorted template. Used in many commercial digit recognizers. Examples from [Hastie 94].

A Tale of Two Approaches

Nearest neighbor-like approaches can use fancy kernels (similarity functions), but don't actually get to do explicit learning. Perceptron-like approaches do explicit training to reduce empirical error, but can't use fancy kernels (why not?). Or can you? Let's find out!

The Perceptron, Again

Start with zero weights. Pick up training instances one by one and try to classify them. If correct, no change! If wrong: lower the score of the wrong answer, raise the score of the right answer.

Perceptron Weights

What is the final value of a weight vector w_c? Can it be any real vector? No! It's built by adding up inputs, so we can reconstruct the weight vectors (the primal representation) from the update counts (the dual representation).
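Reconstructing the primal weights from update counts can be spelled out; a hedged sketch, assuming the usual convention that alpha[c][i] is the net number of times example i has been added into class c's weight vector, so that w_c = Σ_i alpha[c][i] · f(x_i):

    from collections import defaultdict

    def reconstruct_weights(alphas, training_features):
        """alphas: dict class -> {example index: net update count}.
        training_features: list of sparse feature dicts f(x_i).
        Rebuilds the primal weights w_c = sum_i alpha[c][i] * f(x_i)."""
        weights = {}
        for c, counts in alphas.items():
            w = defaultdict(float)
            for i, a in counts.items():
                for k, v in training_features[i].items():
                    w[k] += a * v
            weights[c] = dict(w)
        return weights

    feats = [{"BIAS": 1, "win": 1, "vote": 1}, {"BIAS": 1, "win": 1, "game": 1}]
    alphas = {"c1": {0: 2, 1: -1}}     # example 0 added twice, example 1 subtracted once
    print(reconstruct_weights(alphas, feats))
    # {'c1': {'BIAS': 1.0, 'win': 1.0, 'vote': 2.0, 'game': -1.0}}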

Dual Perceptron

How do we classify a new example x? Start with zero counts (the alphas). Pick up training instances x_n one by one and try to classify them. If correct, no change! If wrong: lower the count of the wrong class (for this instance), raise the count of the right class (for this instance). If someone tells us the value of K for each pair of examples, we never need to build the weight vectors!

Kernelized Perceptron

If we had a black box (a kernel) which told us the dot product of two examples x and y, we could work entirely with the dual representation and never need to take dot products explicitly (the "kernel trick"). Like nearest neighbor, we work with black-box similarities. Downside: slow if many examples get a nonzero alpha. (The following slide shows the kernelized perceptron structure.)

Kernels: Who Cares?

So far, this is a very strange way of doing a very simple calculation. But with the kernel trick we can substitute any* similarity function in place of the dot product, which lets us learn new kinds of hypotheses.

* Fine print: if your kernel doesn't satisfy certain technical requirements, lots of proofs break, e.g. convergence and mistake bounds. In practice, illegal kernels sometimes work (but not always).

Properties of Perceptrons (recap)

Separability: some parameters get the training set perfectly correct.
Convergence: if the training set is separable, the perceptron will eventually converge (binary case).
Mistake Bound: the maximum number of mistakes (binary case) is related to the margin, or degree of separability.
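A hedged sketch of the kernelized (dual) perceptron, assuming the standard dual form in which each class keeps per-example counts and the activation for class c on input x is Σ_n alpha[c][n] · K(x_n, x). The kernel passed in here is just the sparse dot product, but any black-box similarity with the right properties could be plugged in.

    from collections import defaultdict

    def sparse_dot_kernel(x, y):
        """Black-box similarity; here simply the sparse dot product."""
        return sum(v * y.get(k, 0.0) for k, v in x.items())

    def dual_activation(alphas_c, train_x, x, kernel):
        return sum(a * kernel(train_x[n], x) for n, a in alphas_c.items())

    def train_kernel_perceptron(train_x, train_y, classes, kernel, passes=5):
        """Dual perceptron: learn per-class counts over training examples instead of weights."""
        alphas = {c: defaultdict(float) for c in classes}
        for _ in range(passes):
            for n, (x, y) in enumerate(zip(train_x, train_y)):
                guess = max(classes, key=lambda c: dual_activation(alphas[c], train_x, x, kernel))
                if guess != y:
                    alphas[guess][n] -= 1   # lower the count of the wrong class for this instance
                    alphas[y][n] += 1       # raise the count of the right class for this instance
        return alphas

Classification of a new example uses the same dual_activation with the learned alphas, so only kernel values between the new example and the training examples are ever needed.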

Non-Linear Separators

Data that is linearly separable (with some noise) works out great. But what are we going to do if the dataset is just too hard? How about mapping the data to a higher-dimensional space? (Figure: 1-D data on an x axis that is not separable becomes separable after mapping each point x to (x, x^2).)

General idea: the original feature space can always be mapped to some higher-dimensional feature space where the training set is separable: Φ: x → φ(x).

This and the next few slides are adapted from Ray Mooney, UT Austin.

Some Kernels

Kernels implicitly map the original vectors to higher-dimensional spaces, take the dot product there, and hand the result back.
Linear kernel
Quadratic kernel
RBF: infinite-dimensional representation
Discrete kernels: e.g. string kernels
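The kernels listed above can be written down concretely; the slide's own equations are images, so the formulas here are the standard ones and the constants (the +1 in the quadratic kernel, the sigma in the RBF) are assumptions: linear K(x, y) = x · y, quadratic K(x, y) = (x · y + 1)^2, RBF K(x, y) = exp(-‖x − y‖^2 / (2 σ^2)).

    import math

    def dot(x, y):
        return sum(a * b for a, b in zip(x, y))

    def linear_kernel(x, y):
        return dot(x, y)

    def quadratic_kernel(x, y, c=1.0):
        # (x.y + c)^2 corresponds to an implicit map onto all monomials of degree <= 2
        return (dot(x, y) + c) ** 2

    def rbf_kernel(x, y, sigma=1.0):
        # exp(-||x - y||^2 / (2 sigma^2)); the implicit feature space is infinite-dimensional
        sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
        return math.exp(-sq_dist / (2.0 * sigma ** 2))

    x, y = [1.0, 2.0], [0.5, -1.0]
    print(linear_kernel(x, y), quadratic_kernel(x, y), rbf_kernel(x, y))

Any of these could be passed as the kernel argument to the kernelized perceptron sketch above (the sparse-dict version would need its inputs converted from lists accordingly).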