CS 188: Artificial Intelligence, Spring 2011. Lecture 21: Perceptrons (Generative vs. Discriminative, Classification: Feature Vectors, Project 4 due Friday).

CS 188: Artificial Intelligence, Spring 2011
Lecture 21: Perceptrons (4/13/2011)
Pieter Abbeel, UC Berkeley. Many slides adapted from Dan Klein.

Announcements
Project 4: due Friday.
Final Contest: up and running! Project 5 out!
Saturday, 10am-noon, 3rd floor Sutardja Dai Hall.
Survey.

Outline
Generative vs. Discriminative
Perceptron

Classification: Feature Vectors
A spam example ("Hello, Do you want free printr cartriges? Why pay more when you can get them ABSOLUTELY FREE! Just ...") becomes the feature vector # free : 2, YOUR_NAME : 0, MISSPELLED : 2, FROM_FRIEND : 0, ... and is labeled SPAM (+).
A digit image becomes the feature vector PIXEL-7,12 : 1, PIXEL-7,13 : 0, NUM_LOOPS : 1, ... and is labeled "2".

Generative vs. Discriminative
Generative classifiers (e.g. naïve Bayes): a causal model with evidence variables; query the model for causes given evidence.
Discriminative classifiers: no causal model, no Bayes rule, often no probabilities at all! Try to predict the label Y directly from X. Robust, accurate with varied features. Loosely: mistake driven rather than model driven.
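To make the feature-vector picture concrete, here is a minimal sketch (not from the lecture) of a spam-style feature extractor that produces the counts shown above; the word list, argument names, and function name are hypothetical.

```python
import re
from collections import Counter

MISSPELLINGS = {"printr", "cartriges"}   # hypothetical list of known misspellings

def extract_features(email_text, sender_is_friend=False, your_name="alice"):
    """Map raw email text to a dictionary of feature counts."""
    words = Counter(re.findall(r"[a-z]+", email_text.lower()))
    return {
        "# free": words["free"],
        "YOUR_NAME": words[your_name],
        "MISSPELLED": sum(words[w] for w in MISSPELLINGS),
        "FROM_FRIEND": 1 if sender_is_friend else 0,
    }

features = extract_features("Hello, Do you want free printr cartriges? "
                            "Why pay more when you can get them ABSOLUTELY FREE!")
print(features)  # {'# free': 2, 'YOUR_NAME': 0, 'MISSPELLED': 2, 'FROM_FRIEND': 0}
```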

Some (Simplified) Biology
Very loose inspiration: human neurons.

Linear Classifiers
Inputs are feature values; each feature has a weight; the sum is the activation: activation_w(x) = sum_i w_i * f_i(x) = w · f(x).
If the activation is positive, output +1; if negative, output -1.
(Diagram: inputs f1, f2, f3 with weights w1, w2, w3 feeding a sum Σ and a ">0?" threshold.)

Classification: Weights
Binary case: compare features to a weight vector. Learning: figure out the weight vector from examples.
Example weight vector: # free : 4, YOUR_NAME : -1, MISSPELLED : 1, FROM_FRIEND : -3. A positive dot product means the positive class.
Example feature vectors to score against it: (# free : 2, YOUR_NAME : 0, MISSPELLED : 2, FROM_FRIEND : 0) and (# free : 0, YOUR_NAME : 1, MISSPELLED : 1, FROM_FRIEND : 1).

Binary Decision Rule
In the space of feature vectors, examples are points and any weight vector is a hyperplane: one side corresponds to Y = +1, the other to Y = -1.
Example weights: BIAS : -3, free : 4, money : 2. (Plot: the decision boundary in the (free, money) plane, with +1 = SPAM on one side and -1 = HAM on the other.)

Outline
Naïve Bayes recap
Smoothing
Generative vs. Discriminative
Perceptron

Binary Perceptron Update
Start with zero weights. For each training instance: classify with current weights; if correct (i.e., y = y*), no change! If wrong: adjust the weight vector by adding or subtracting the feature vector (subtract if y* is -1). [demo]
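The binary perceptron update above fits in a few lines. A minimal sketch, assuming features and weights are plain Python dictionaries (the helper names `dot` and `train_binary_perceptron` are ours, not the course code):

```python
def dot(w, f):
    """Dot product of a weight dict and a feature dict."""
    return sum(w.get(k, 0.0) * v for k, v in f.items())

def train_binary_perceptron(data, num_passes=10):
    """data: list of (feature_dict, y_star) pairs with y_star in {+1, -1}."""
    w = {}                                    # start with zero weights
    for _ in range(num_passes):
        for f, y_star in data:
            y = 1 if dot(w, f) >= 0 else -1   # classify with current weights
            if y != y_star:                   # if wrong: add or subtract the feature vector
                for k, v in f.items():        # (subtract when y_star is -1)
                    w[k] = w.get(k, 0.0) + y_star * v
    return w
```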

Multiclass Decision Rule
If we have multiple classes: keep a weight vector w_y for each class; the score (activation) of a class y is w_y · f(x); the prediction is the class with the highest score. Binary classification is just the multiclass case where the negative class has weight zero.
Example: "win the vote" has features BIAS : 1, win : 1, game : 0, vote : 1, the : 1, scored against three class weight vectors:
(BIAS : -2, win : 4, game : 4, vote : 0, the : 0), (BIAS : 1, win : 2, game : 0, vote : 4, the : 0), (BIAS : 2, win : 0, game : 2, vote : 0, the : 0).

Learning: Multiclass Perceptron
Start with zero weights. Pick up training instances one by one and classify with current weights. If correct, no change! If wrong: lower the score of the wrong answer, raise the score of the right answer.
Example instances: "win the vote", "win the election", "win the game" (each class keeps its own weights over BIAS, win, game, vote, the).

Examples: Perceptron (Separable Case)
(Figure: a separable training set and the separator the perceptron finds.)

Properties of Perceptrons
Separability: some parameters get the training set perfectly correct.
Convergence: if the training set is separable, the perceptron will eventually converge (binary case).
Mistake Bound: the maximum number of mistakes (binary case) is related to the margin or degree of separability.
(Figures: a separable and a non-separable data set.)
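The multiclass version follows the same pattern, with one weight dictionary per class; this is a sketch under the same dictionary-based assumptions as the binary example (the one-line `dot` helper is repeated so the block stands alone):

```python
def dot(w, f):
    return sum(w.get(k, 0.0) * v for k, v in f.items())

def predict(weights, f):
    """Return the class whose weight vector gives the highest score w_y . f."""
    return max(weights, key=lambda y: dot(weights[y], f))

def train_multiclass_perceptron(data, classes, num_passes=10):
    """data: list of (feature_dict, correct_class) pairs."""
    weights = {y: {} for y in classes}        # one zero weight vector per class
    for _ in range(num_passes):
        for f, y_star in data:
            y = predict(weights, f)           # classify with current weights
            if y != y_star:                   # if wrong:
                for k, v in f.items():
                    weights[y_star][k] = weights[y_star].get(k, 0.0) + v  # raise right answer
                    weights[y][k] = weights[y].get(k, 0.0) - v            # lower wrong answer
    return weights
```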

Examples: Perceptron (Non-Separable Case)
(Figure: a non-separable training set.)

Problems with the Perceptron
Noise: if the data isn't separable, the weights might thrash; averaging weight vectors over time can help (averaged perceptron).
Mediocre generalization: finds a barely separating solution.
Overtraining: test / held-out accuracy usually rises, then falls. Overtraining is a kind of overfitting.

Fixing the Perceptron
Idea: adjust the weight update to mitigate these effects.
MIRA* (*Margin Infused Relaxed Algorithm): choose an update size τ that fixes the current mistake but minimizes the change to w. The +1 helps to generalize.

Minimum Correcting Update
The minimum is not at τ = 0 (or we would not have made an error), so the minimum will be where the constraint holds with equality.

Maximum Step Size
In practice it's also bad to make updates that are too large: the example may be labeled incorrectly, or you may not have enough features. Solution: cap the maximum possible value of τ with some constant C. This corresponds to an optimization that assumes non-separable data; it usually converges faster than the perceptron and is usually better, especially on noisy data.

Linear Separators
Which of these linear separators is optimal?
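The MIRA equations were shown as images on the slides and are not in this transcript; the sketch below uses the standard closed form for the step size, tau = ((w_wrong - w_right) . f + 1) / (2 f . f), capped at C, and should be read as an illustration of the idea rather than the course's exact formulation.

```python
def dot(w, f):
    return sum(w.get(k, 0.0) * v for k, v in f.items())

def mira_update(weights, f, y_star, y_pred, C=0.01):
    """One MIRA-style update: smallest step that fixes the mistake by margin 1, capped at C."""
    if y_pred == y_star:
        return                                       # correct: no change
    w_right, w_wrong = weights[y_star], weights[y_pred]
    f_dot_f = sum(v * v for v in f.values())
    tau = (dot(w_wrong, f) - dot(w_right, f) + 1.0) / (2.0 * f_dot_f)
    tau = min(C, tau)                                # cap the step size (Maximum Step Size slide)
    for k, v in f.items():
        w_right[k] = w_right.get(k, 0.0) + tau * v   # raise score of the right answer
        w_wrong[k] = w_wrong.get(k, 0.0) - tau * v   # lower score of the wrong answer
```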

Support Vector Machines
Maximizing the margin: good according to intuition, theory, and practice. Only the support vectors matter; other training examples are ignorable. Support vector machines (SVMs) find the separator with maximum margin. Basically, SVMs are MIRA where you optimize over all examples at once.

Classification: Comparison
Naïve Bayes: builds a model of the training data; gives prediction probabilities; strong assumptions about feature independence; one pass through the data (counting).
Perceptrons / MIRA: make fewer assumptions about the data; mistake-driven learning; multiple passes through the data (prediction); often more accurate.

Extension: Web Search
Information retrieval: given information needs, produce information. Includes, e.g., web search, question answering, and classic IR.
Web search: not exactly classification, but rather ranking.

Feature-Based Ranking
(Example: the query x = "Apple Computers" with several candidate result pages, each paired with the query to form a feature vector.)

Perceptron for Ranking
Inputs x; candidates y; many feature vectors f(x, y); one weight vector w.
Prediction: the highest-scoring candidate under w.
Update (if wrong): add the correct candidate's feature vector to w and subtract the predicted candidate's.

Pacman Apprenticeship!
Examples are states s; candidates are pairs (s, a); "correct" actions are those taken by the expert. Features are defined over (s, a) pairs: f(s, a). The score of a q-state (s, a) is given by w · f(s, a), and learning should make the expert's action a* score highest. How is this VERY different from reinforcement learning?
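To make the ranking update concrete, here is a minimal sketch (interface and names are our assumptions, not the project code): each input comes with a list of candidate feature dictionaries, a single weight vector scores them all, and a mistake moves w toward the correct candidate and away from the predicted one.

```python
def dot(w, f):
    return sum(w.get(k, 0.0) * v for k, v in f.items())

def rank_predict(w, candidate_features):
    """Index of the candidate whose feature vector scores highest under w."""
    return max(range(len(candidate_features)), key=lambda i: dot(w, candidate_features[i]))

def rank_update(w, candidate_features, correct_index):
    """If the top-ranked candidate is wrong, update w toward the correct candidate's features."""
    pred = rank_predict(w, candidate_features)
    if pred != correct_index:
        for k, v in candidate_features[correct_index].items():
            w[k] = w.get(k, 0.0) + v      # raise the correct candidate
        for k, v in candidate_features[pred].items():
            w[k] = w.get(k, 0.0) - v      # lower the predicted (wrong) candidate
```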

Case-Based Reasoning
Similarity for classification. Case-based reasoning: predict an instance's label using similar instances.
Nearest-neighbor classification: 1-NN copies the label of the most similar data point; k-NN lets the k nearest neighbors vote (you have to devise a weighting scheme). Key issue: how to define similarity. Trade-off: small k gives relevant neighbors, large k gives smoother functions. Sound familiar?
[Demo] http://www.cs.cmu.edu/~zhuxj/courseproject/knndemo/knn.html

Parametric / Non-parametric
Parametric models: a fixed set of parameters; more data means better settings.
Non-parametric models: the complexity of the classifier increases with the data; better in the limit, often worse in the non-limit. (k)-NN is non-parametric.
(Figure: the true function vs. the nearest-neighbor fit with 2, 10, 100, and 10000 examples.)

Nearest-Neighbor Classification
Nearest neighbor for digits: take a new image, compare it to all training images, and assign the label of the closest example. Encoding: an image is a vector of intensities.
What's the similarity function? The dot product of two image vectors? Usually the vectors are normalized so that ||x|| = 1; then min = 0 (when?) and max = 1 (when?).

Basic Similarity
Many similarities are based on feature dot products: sim(x, x') = f(x) · f(x'). If the features are just the pixels: sim(x, x') = x · x'. Note: not all similarities are of this form.

Invariant Metrics
Better distances use knowledge about vision. Invariant metrics: similarities that are invariant under certain transformations (rotation, scaling, translation, stroke-thickness). E.g., a 16 x 16 digit image has 256 pixels, a point in 256-dimensional space; similarity in R^256 can be small (why?). How to incorporate invariance into similarities?
(This and the next few slides adapted from Xiao Hu, UIUC.)
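A small sketch of 1-NN / k-NN with the normalized dot-product similarity discussed above; the dictionary-based representation and function names are our own, not the course's.

```python
import math
from collections import Counter

def cosine_similarity(x1, x2):
    """Dot product of two feature dicts after normalizing each so that ||x|| = 1."""
    num = sum(v * x2.get(k, 0.0) for k, v in x1.items())
    n1 = math.sqrt(sum(v * v for v in x1.values()))
    n2 = math.sqrt(sum(v * v for v in x2.values()))
    return num / (n1 * n2) if n1 and n2 else 0.0

def knn_classify(train, query, k=1):
    """train: list of (feature_dict, label); let the k most similar examples vote."""
    neighbors = sorted(train, key=lambda ex: cosine_similarity(ex[0], query), reverse=True)[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]
```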

Template Deformation
Deformable templates: an ideal version of each category, best-fit to the image using min variance. There is a cost for high distortion of the template and a cost for image points being far from the distorted template. Used in many commercial digit recognizers. (Examples from [Hastie 94].)

A Tale of Two Approaches
Nearest-neighbor-like approaches can use fancy similarity functions, but don't actually get to do explicit learning. Perceptron-like approaches do explicit training to reduce empirical error, but can't use fancy similarity, only linear. Or can they? Let's find out!

Recap: Nearest-Neighbor
Nearest neighbor: classify a test example based on the closest training example. Requires a similarity function (kernel). Eager learning: extract a classifier from the data. Lazy learning: keep the data around and predict from it at test time.
(Figure: the true function vs. the nearest-neighbor fit with 2, 10, 100, and 10000 examples.)
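As a toy illustration of the deformable-template idea only (real recognizers use much richer deformation models): allow just translation and scaling of a template made of 2D points, charge a cost for distorting the template and a cost for image points that end up far from it, and grid-search for the best fit. All names and the specific cost terms here are assumptions.

```python
import math

def deformed(template, dx, dy, s):
    """Translate by (dx, dy) and scale by s a template given as a list of (x, y) points."""
    return [(s * x + dx, s * y + dy) for x, y in template]

def fit_cost(template, image_points, dx, dy, s, alpha=1.0):
    """Cost = distortion of the template + distance of image points to the deformed template."""
    distortion = dx * dx + dy * dy + (s - 1.0) ** 2
    pts = deformed(template, dx, dy, s)
    mismatch = sum(min(math.dist(p, q) for q in pts) for p in image_points)
    return alpha * distortion + mismatch

def best_fit(template, image_points, steps=5):
    """Grid-search the deformation with the lowest cost; returns (cost, (dx, dy, s))."""
    grid = [i / steps for i in range(-steps, steps + 1)]
    return min(
        (fit_cost(template, image_points, dx, dy, 1.0 + ds), (dx, dy, 1.0 + ds))
        for dx in grid for dy in grid for ds in grid
    )
```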