Support Vector Machines

Support Vector Machines
Lecture: Algorithmic Learning (VL Algorithmisches Lernen), Part 3a
Norman Hendrich & Jianwei Zhang
University of Hamburg, Dept. of Informatics, Vogt-Kölln-Str. 30, D-22527 Hamburg
hendrich@informatik.uni-hamburg.de
12/05/2010

Outline: Introduction; Review of the linear classifier; Maximum margin classification; Soft-margin classification; Kernels and feature maps

Introduction: Support Vector Machines, a.k.a. maximum margin classifiers, are a family of related supervised learning methods for classification and regression. They try to minimize the classification error while maximizing the geometric margin.

Introduction, Hype: SVMs are very popular today. They often give the best solutions on classification benchmarks, can handle large data sets, and are an active research area. But don't believe the hype (at least, not all of it): good performance is not guaranteed, the selection of feature maps is critical, and it requires prior knowledge, experiments, and fine-tuning of parameters.

Introduction, Overall concept and architecture: select a feature space H and a mapping function Φ : x ↦ Φ(x); select a classification (output) function σ, so that y(x) = σ( Σ_i ϑ_i ⟨Φ(x), Φ(x_i)⟩ ). During training, find the support vectors x_1 … x_n and weights ϑ_i which minimize the classification error. To classify, map the test input x to Φ(x), calculate the dot products ⟨Φ(x), Φ(x_i)⟩, feed a linear combination of the dot products into σ, and get the classification result.
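As a rough illustration of this architecture (not code from the slides), the decision function can be written directly in terms of kernel evaluations; the names below (svm_decision, weights for the ϑ_i, an explicit kernel function) are assumptions for the sketch.

import numpy as np

# Minimal sketch of y(x) = sigma( sum_i theta_i <Phi(x), Phi(x_i)> ),
# with sigma = sign; a bias term b is usually added as well in practice.
def svm_decision(x, support_vectors, weights, kernel):
    scores = np.array([kernel(x, x_i) for x_i in support_vectors])
    return np.sign(np.dot(weights, scores))

# Example with a linear kernel, i.e. Phi = identity:
linear_kernel = lambda x, z: float(np.dot(x, z))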

Introduction: block diagram of handwritten digit recognition (figure).

Introduction, Example: learning a checkers board (figure).

Introduction, History: three revolutions in machine learning (Shawe-Taylor & Cristianini 2004). 1960s: efficient algorithms for (linear) pattern detection, e.g. the Perceptron (Rosenblatt 1957); efficient training algorithms and good generalization, but insufficient for nonlinear data. 1980s: multi-layer networks and backpropagation; they can deal with nonlinear data, but require high modeling effort, long training times, and carry a risk of overfitting. 1990s: SVMs and related kernel methods; an all-in-one solution with considerable success on practical applications, based on principled statistical theory.

Introduction, History: seminal work by Vladimir Vapnik.
B. E. Boser, I. M. Guyon, and V. N. Vapnik, "A training algorithm for optimal margin classifiers", 5th Annual ACM Workshop on COLT, pages 144-152, Pittsburgh, 1992.
C. Cortes and V. Vapnik, "Support-Vector Networks", Machine Learning, 20, 1995. http://www.springerlink.com/content/k238jx04hm87j80g/
H. Drucker, C. J. C. Burges, L. Kaufman, A. Smola, and V. Vapnik, "Support Vector Regression Machines", Advances in Neural Information Processing Systems 9, NIPS 1996, 155-161.
"The bible": V. Vapnik, The Nature of Statistical Learning Theory, Springer, 1995.

Introduction, References:
V. Vapnik, The Nature of Statistical Learning Theory, Springer, 1995.
N. Cristianini, J. Shawe-Taylor, An Introduction to Support Vector Machines and other kernel-based learning methods, Cambridge University Press, 2000.
J. Shawe-Taylor, N. Cristianini, Kernel Methods for Pattern Analysis, Cambridge University Press, 2004.
B. Schölkopf, A. J. Smola, Learning with Kernels, MIT Press, 2002.
L. Bottou, O. Chapelle, D. DeCoste, J. Weston (Eds.), Large-Scale Kernel Machines, MIT Press, 2007.

Introduction, References (web resources):
www.kernel-machines.org/
A. W. Moore, Support Vector Machines, www.cs.cmu.edu/~awm, 2003.
S. Bloehdorn, Maschinelles Lernen (Machine Learning), http://www.aifb.uni-karlsruhe.de/wbs/pci/ml/s.pdf
C.-C. Chang & C.-J. Lin, libsvm, http://www.csie.ntu.edu.tw/~cjlin/libsvm/
W. H. Press, S. A. Teukolsky, W. T. Vetterling, B. P. Flannery, Numerical Recipes: The Art of Scientific Computing, Cambridge University Press, 2007 (all algorithms on CD-ROM).

Review of the linear classifier, Binary classification. Task: classify input test patterns x based on previously learned training patterns. The simplest case is binary classification with two classes, y(x) ∈ {+1, −1}. A first example algorithm: classify based on the distance to the center of mass of the training pattern clusters; the result can be written as y = sgn( Σ_i w_i x_i + b ).

Review of the linear classifier: simple classification example (figure).

Review of the linear classifier, Simple classification example (cont'd): two classes of data points ("o" and "+"). Calculate the mean of each cluster (center of mass) and assign a test pattern x to the nearest cluster. This can be written as y = sgn( Σ_{i=1}^{m} α_i ⟨x, x_i⟩ + b ) with constant weights α_i = 1/m_+ for positive patterns and α_i = −1/m_− for negative patterns.

Review of the linear classifier, Simple classification example (cont'd):
centers of mass: c_+ = (1/m_+) Σ_{i: y_i = +1} x_i and c_− = (1/m_−) Σ_{i: y_i = −1} x_i
boundary point c: c = (c_+ + c_−)/2
classification: y = sgn( ⟨(x − c), w⟩ ) with w = c_+ − c_−
norm: ‖x‖ := sqrt(⟨x, x⟩)
rewritten: y = sgn( ⟨x, c_+⟩ − ⟨x, c_−⟩ + b ) with b = ( ‖c_−‖² − ‖c_+‖² ) / 2
all together: y = sgn( (1/m_+) Σ_{i: y_i = +1} ⟨x, x_i⟩ − (1/m_−) Σ_{i: y_i = −1} ⟨x, x_i⟩ + b )
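A minimal sketch of this center-of-mass classifier, assuming X is an (n, d) array of training patterns and y holds the labels ±1 (the variable names are my own, not from the slides):

import numpy as np

def center_of_mass_classifier(X, y):
    c_plus = X[y == +1].mean(axis=0)
    c_minus = X[y == -1].mean(axis=0)
    # b = (||c_-||^2 - ||c_+||^2) / 2, as derived on the slide
    b = (np.dot(c_minus, c_minus) - np.dot(c_plus, c_plus)) / 2.0
    return lambda x: np.sign(np.dot(x, c_plus) - np.dot(x, c_minus) + b)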

Maximum margin classification, Linear classification (figure: data points labeled +1 and −1): find w and b so that y(x, w, b) = sgn(w·x + b).

Maximum margin classification, Linear classification (figure): one possible decision boundary.

Maximum margin classification, Linear classification (figure): and another one.

Maximum margin classification, Linear classification (figure): which boundary is best?

Maximum margin classification, Remember the Perceptron: we can use the Perceptron learning algorithm to find a valid decision boundary. Convergence is guaranteed iff the data is separable, and the algorithm stops as soon as a solution is found, but we don't know which boundary will be chosen.

Maximum margin classification: the Perceptron training algorithm (figure).
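A minimal sketch of the Perceptron update rule referred to above (assumed conventions: X is an (n, d) array, y contains ±1 labels; this is not the exact pseudo-code shown on the slide):

import numpy as np

def perceptron(X, y, epochs=100, lr=1.0):
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        errors = 0
        for x_i, y_i in zip(X, y):
            if y_i * (np.dot(w, x_i) + b) <= 0:   # misclassified pattern
                w += lr * y_i * x_i               # move the boundary towards it
                b += lr * y_i
                errors += 1
        if errors == 0:       # stops at the first valid boundary it finds
            break
    return w, b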

Maximum margin classification, The classifier margin (figure): which boundary is best? Check the "margin": define the margin as the width by which the boundary could be increased before hitting a data point.

Maximum margin classification, The classifier margin (figure): a second example, where the margin is not symmetrical.

Maximum margin classification, Maximum margin classifier (figure): the classifier with the largest margin; this is the simplest kind of SVM (called the linear SVM).

Maximum margin classification, Support vectors (figure): the data points that limit the margin are called the support vectors.

Maximum margin classification, Why maximum margin? Intuitively, it feels safest: there is the least chance of misclassification if the decision boundary is not exactly correct. Statistical theory ("VC dimension") indicates that maximum margin is good, and empirically it works very well. Note: there are usually far fewer support vectors than data points (unless the model is overfitted). Note: the model is immune to the removal of all non-support-vector data points.

Maximum margin classification: the geometric interpretation (figure).

Maximum margin classification, Step by step: calculating the margin width (figure: "plus" plane, classifier decision boundary, "minus" plane, margin width M, "predict class = +1" and "predict class = -1" zones). How do we represent the boundary (hyperplane) and the margin width M in m input dimensions?

Maximum margin classification Calculating the margin width "predict class = +1 zone" "plus" plane classifier decision boundary "minus" plane M "predict class = -1 zone" plus-plane: {x : w x + b = +1} minus-plane: {x : w x + b = 1} classify pattern as +1 if w x + b +1 and 1 if w x + b 1 29

Maximum margin classification Calculating the margin width X+ "predict class = +1 zone" wx+b = +1 wx + b = 0 wx+b = -1 M X- "predict class = -1 zone" w is perpendicular to the decision boundary and the plus-plane and minus-plane proof: consider two points u and v on the plus-plane and calculate w (u v) 30

Maximum margin classification Calculating the margin width X+ "predict class = +1 zone" wx+b = +1 wx + b = 0 wx+b = -1 M X- "predict class = -1 zone" select point X + on the plus plane and nearest point X on the minus plane of course, margin width M = X + X and X + = X + λw for some λ 31

Maximum margin classification Calculating the margin width X+ "predict class = +1 zone" wx+b = +1 wx + b = 0 wx+b = -1 M X- "predict class = -1 zone" w (X + λw) + b = 1 w X + b + λw w = 1 1 + λw w = 1 λ = 2 w w 32

Maximum margin classification, Calculating the margin width (figure):
λ = 2 / (w·w)
M = ‖X+ − X−‖ = ‖λw‖ = λ ‖w‖
M = λ sqrt(w·w) = 2 / sqrt(w·w)

Maximum margin classification, Training the maximum margin classifier. Given a guess of w and b, we can compute whether all data points are in the correct half-planes and compute the width of the margin. So: write a program to search the space of w and b to find the widest margin that still correctly classifies all training data points. But how? Gradient descent? Simulated annealing? ... Usually, Quadratic Programming.

Maximum margin classification, Learning via Quadratic Programming. QP is a well-studied class of optimization algorithms: maximize a quadratic function of real-valued variables subject to linear constraints. One could use standard QP libraries, e.g. MINOS (http://www.sbsi-sol-optimize.com/asp/sol_products_minos.htm) or LOQO (http://www.princeton.edu/~rvdb/loqo), or algorithms streamlined for SVMs (e.g. for large data sets).

Maximum margin classification, Quadratic Programming. General problem: find
arg max_u ( c + dᵀu + (1/2) uᵀRu )
subject to n linear inequality constraints
a_11 u_1 + a_12 u_2 + … + a_1m u_m ≤ b_1
a_21 u_1 + a_22 u_2 + … + a_2m u_m ≤ b_2
…
a_n1 u_1 + a_n2 u_2 + … + a_nm u_m ≤ b_n
and subject to e additional linear equality constraints
a_(n+1)1 u_1 + a_(n+1)2 u_2 + … + a_(n+1)m u_m = b_(n+1)
…
a_(n+e)1 u_1 + a_(n+e)2 u_2 + … + a_(n+e)m u_m = b_(n+e)

Maximum margin classification, QP for the maximum margin classifier. Setup of the Quadratic Programming problem for the SVM: M = λ sqrt(w·w) = 2 / sqrt(w·w), so for the largest M we want to minimize w·w. Assuming R data points (x_k, y_k) with y_k = ±1, there are R constraints: w·x_k + b ≥ +1 if y_k = +1, and w·x_k + b ≤ −1 if y_k = −1.
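A hedged sketch of this primal QP with a generic solver (cvxopt is my choice here, not the slides'): the variable is u = (w, b), the objective is (1/2) w·w, and the R constraints are written as −y_k(w·x_k + b) ≤ −1.

import numpy as np
from cvxopt import matrix, solvers

def hard_margin_svm(X, y):
    R, d = X.shape
    P = np.zeros((d + 1, d + 1))
    P[:d, :d] = np.eye(d)        # (1/2) u^T P u = (1/2) w.w
    P[d, d] = 1e-8               # tiny ridge so P stays numerically positive definite
    q = np.zeros(d + 1)
    G = -y[:, None] * np.hstack([X, np.ones((R, 1))])   # -y_k (w.x_k + b) <= -1
    h = -np.ones(R)
    sol = solvers.qp(matrix(P), matrix(q), matrix(G), matrix(h))
    u = np.array(sol['x']).ravel()
    return u[:d], u[d]           # w, b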

Maximum margin classification, QP for the maximum margin classifier. Solving this QP problem directly is possible but difficult because of the complex constraints. Instead, switch to the dual representation: use the Lagrange multiplier trick and introduce new dummy variables α_i, which allows the problem to be rewritten with simple inequalities α_i ≥ 0. Solve the optimization problem to find the α_i; from the α_i, find the separating hyperplane (w); from the hyperplane, find b.

Maximum margin classification: the dual optimization problem (figure).

Maximum margin classification: dual representation (figure).
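For reference, the standard dual of the hard-margin problem is: maximize Σ_i α_i − (1/2) Σ_i Σ_j α_i α_j y_i y_j ⟨x_i, x_j⟩ subject to α_i ≥ 0 and Σ_i α_i y_i = 0, with w = Σ_i α_i y_i x_i afterwards. The sketch below solves that dual with cvxopt (my own formulation, not code from the slides):

import numpy as np
from cvxopt import matrix, solvers

def dual_hard_margin_svm(X, y):
    R = X.shape[0]
    Yx = y[:, None] * X
    Q = Yx @ Yx.T                              # Q_ij = y_i y_j <x_i, x_j>
    P = matrix(Q + 1e-8 * np.eye(R))           # tiny ridge for numerical stability
    q = matrix(-np.ones(R))                    # maximize sum(alpha) -> minimize -sum(alpha)
    G, h = matrix(-np.eye(R)), matrix(np.zeros(R))            # alpha_i >= 0
    A, b = matrix(y.astype(float).reshape(1, R)), matrix(0.0) # sum_i alpha_i y_i = 0
    alpha = np.array(solvers.qp(P, q, G, h, A, b)['x']).ravel()
    sv = alpha > 1e-6                          # the support vectors
    w = (alpha * y) @ X
    b_val = np.mean(y[sv] - X[sv] @ w)         # b recovered from the support vectors
    return w, b_val, alpha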

Maximum margin classification: dual representation of Perceptron learning (figure).

Maximum margin classification, Summary: the linear SVM is based on the classical linear classifier plus the maximum margin concept; the limiting data points are called support vectors; the solution is found via Quadratic Programming, and the dual formulation is (usually) easier to solve.

Soft-margin classification: classification of noisy input data? Real-world training data contains noise, usually several outlier patterns, for example mis-classified training data. At best this reduces the error margins; at worst, the training set is not linearly separable. Complicated decision boundaries and complex kernels can handle this (see below), but that is not always the best idea: there is a risk of overfitting. Instead, allow some patterns to violate the margin constraints.

Soft-margin classification, The example data set, modified (figure): not linearly separable! What should we do? Trust every data point?

Soft-margin classification, Example data set and one example classifier (figure): three points are misclassified, two with a small margin and one with a large margin.

Soft-margin classification, Noisy input data? Another toy example (figure, LWK page 10): allow errors, or trust every data point?

Soft-margin classification (Cortes and Vapnik, 1995): allow some patterns to violate the margin constraints and find a compromise between large margins and the number of violations. Idea: introduce slack variables ξ = (ξ_1, …, ξ_n), ξ_i ≥ 0, which measure the margin violation (or classification error) on pattern x_i: y_i (w·Φ(x_i) + b) ≥ 1 − ξ_i. Introduce one global parameter C which controls the compromise between large margins and the number of violations.

Soft-margin classification: introduce slack variables ξ_i and a global control parameter C, and solve
min_{w,b,ξ} P(w, b, ξ) = (1/2) ‖w‖² + C Σ_{i=1}^{n} ξ_i
subject to: ∀i: y_i (w·Φ(x_i) + b) ≥ 1 − ξ_i and ∀i: ξ_i ≥ 0.
The problem is now very similar to the hard-margin case; again, the dual representation is often easier to solve.
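In practice this soft-margin problem is rarely solved by hand; libraries such as libsvm (wrapped by scikit-learn's SVC) solve the corresponding dual with the box constraint 0 ≤ α_i ≤ C. A small hedged example with made-up data, where C plays exactly the role of the compromise parameter above:

import numpy as np
from sklearn.svm import SVC

X = np.array([[0., 0.], [1., 0.], [0., 1.], [2., 2.], [3., 2.], [2., 3.]])
y = np.array([-1, -1, -1, +1, +1, +1])

clf = SVC(kernel='linear', C=1.0)   # larger C -> fewer margin violations allowed
clf.fit(X, y)
print(clf.support_vectors_)         # the patterns with alpha_i > 0
print(clf.predict([[1.5, 1.5]]))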

Soft-margin classification: slack parameters ξ_i and control parameter C (figure, LSKM chapter 1).

Soft-margin classification: Lagrange formulation of the soft-margin SVM (figure).

Soft-margin classification: dual formulation of the soft-margin SVM (figure).

Soft-margin classification: the optimization problem (figure).

Soft-margin classification, How to select the control parameter? Of course, the optimization result depends on the specified control parameter C. How to select the value of C? It depends on the application and training data. Numerical Recipes recommends the following: start with C = 1, then try to increase or decrease it by powers of 10 until you find a broad plateau where the exact value of C doesn't matter much. A good solution should classify most patterns correctly, with many α_i = 0 and many α_i = C, but only a few in between.
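A hedged sketch of that heuristic using cross-validation (scikit-learn is my choice of tooling; the slides do not prescribe one):

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def sweep_C(X, y, exponents=range(-3, 4)):
    # Try C = 10^e around the starting point C = 1 and look for a broad
    # plateau where the cross-validated accuracy barely changes.
    results = {}
    for e in exponents:
        C = 10.0 ** e
        results[C] = cross_val_score(SVC(kernel='rbf', C=C), X, y, cv=5).mean()
    return results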

Soft-margin classification, Summary: the soft-margin SVM uses the same concept as the linear SVM and tries to maximize the decision margin, but allows some patterns to violate the margin constraints, as a compromise between a large margin and the number of violations. It introduces a control parameter C and new slack variables ξ_i. Again, the problem can be written as a QP, and again the dual formulation is easier to solve.

Kernels and feature maps, Nonlinearity through feature maps. General idea: introduce a function Φ which maps the input data into a higher-dimensional feature space, Φ : x ∈ X ↦ Φ(x) ∈ H, similar to the hidden layers of multi-layer ANNs. Explicit mappings can be expensive in terms of CPU and/or memory (especially in high dimensions); kernel functions achieve this mapping implicitly, often with very good performance.

Kernels and feature maps, Example 1-dimensional data set (figure: points labeled +1 and −1 on a line, x = 0 marked): what would the linear SVM do with these patterns?

Kernels and feature maps, Example 1-dimensional data set (figure: classification boundary and margin M): what would the linear SVM do with these patterns? Not a big surprise: the maximum margin solution.

Kernels and feature maps, Harder 1-dimensional data set (figure): and now? These don't look like outliers, so a soft margin won't help a lot.

Kernels and feature maps, Harder 1-dimensional data set (figure): permit non-linear basis functions, z_k = (x_k, x_k²).

Kernels and feature maps, Harder 1-dimensional data set (figure): with z_k = (x_k, x_k²) the data is now linearly separable!
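A tiny illustration of this trick with made-up data values (the slides only show the figure): points near the origin labeled −1 and points further out labeled +1 are not separable on the line, but become linearly separable after mapping x ↦ (x, x²).

import numpy as np
from sklearn.svm import SVC

x = np.array([-3., -2., -1., 0., 1., 2., 3.])
y = np.array([+1, +1, -1, -1, -1, +1, +1])

Z = np.column_stack([x, x ** 2])        # z_k = (x_k, x_k^2)
clf = SVC(kernel='linear').fit(Z, y)
print(clf.score(Z, y))                  # all training points are separated correctly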

Kernels and feature maps, Similar for a 2-dimensional data set (figure: points labeled +1 and −1): clearly not linearly separable in 2D; introduce z_k = (x_k, y_k, √2·x_k y_k).

Kernels and feature maps, Common feature maps: basis functions z_k = (polynomial terms of x_k of degree 1 to q), z_k = (radial basis functions of x_k), z_k = (sigmoid functions of x_k), ... and combinations of the above. Note: the feature map Φ is only used in inner products, so for training, information on the pairwise inner products is sufficient.

Kernels and feature maps, Kernel: definition. Definition 1 (Kernel): a kernel is a function K such that for all x, z ∈ X: K(x, z) = ⟨Φ(x), Φ(z)⟩, where Φ is a mapping from X to an (inner product) feature space F.

Kernels and feature maps, Example: polynomial kernel. Consider the mapping Φ(x) = (x_1², √2·x_1 x_2, x_2²) ∈ IR³. Evaluation of dot products:
⟨Φ(x), Φ(z)⟩ = ⟨(x_1², √2·x_1 x_2, x_2²), (z_1², √2·z_1 z_2, z_2²)⟩
= x_1² z_1² + 2 x_1 x_2 z_1 z_2 + x_2² z_2²
= (x_1 z_1 + x_2 z_2)²
= ⟨x, z⟩² = κ(x, z)
The kernel does not uniquely determine the feature space: Φ'(x) = (x_1², x_2², x_1 x_2, x_2 x_1) ∈ IR⁴ also fits k(x, z) = ⟨x, z⟩².
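A quick numerical check of this identity (my own sketch, for 2-D inputs): the explicit map Φ(x) = (x_1², √2 x_1 x_2, x_2²) and the implicit kernel ⟨x, z⟩² give the same value.

import numpy as np

def phi(x):
    return np.array([x[0] ** 2, np.sqrt(2) * x[0] * x[1], x[1] ** 2])

rng = np.random.default_rng(0)
x, z = rng.normal(size=2), rng.normal(size=2)
print(np.dot(phi(x), phi(z)))   # explicit feature map
print(np.dot(x, z) ** 2)        # implicit kernel evaluation, same value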

Kernels and feature maps, Example: quadratic kernel in m dimensions. For x = (x_1, …, x_m),
Φ(x) = ( 1, √2·x_1, √2·x_2, …, √2·x_m, x_1², x_2², …, x_m², √2·x_1 x_2, √2·x_1 x_3, …, √2·x_{m−1} x_m )
with constant, linear, pure quadratic, and cross quadratic terms; in total (m + 2)(m + 1)/2 terms (roughly m²/2). So the complexity of evaluating Φ(x) is O(m²); for example, m = 100 implies about 5000 terms.

Kernels and feature maps, Example: quadratic kernel, scalar product.
Φ(x)·Φ(y) = ⟨(1, √2 x_1, √2 x_2, …, x_1², x_2², …, √2 x_1 x_2, √2 x_1 x_3, …, √2 x_{m−1} x_m), (1, √2 y_1, √2 y_2, …, y_1², y_2², …, √2 y_1 y_2, √2 y_1 y_3, …, √2 y_{m−1} y_m)⟩
= 1 + Σ_{i=1}^{m} 2 x_i y_i + Σ_{i=1}^{m} x_i² y_i² + Σ_{i=1}^{m} Σ_{j=i+1}^{m} 2 x_i x_j y_i y_j

Kernels and feature maps, Example: scalar product. Calculating ⟨Φ(x), Φ(y)⟩ directly is O(m²). For comparison, calculate (x·y + 1)²:
(x·y + 1)² = ( (Σ_{i=1}^{m} x_i y_i) + 1 )²
= (Σ_{i=1}^{m} x_i y_i)² + 2 (Σ_{i=1}^{m} x_i y_i) + 1
= Σ_{i=1}^{m} Σ_{j=1}^{m} x_i y_i x_j y_j + 2 Σ_{i=1}^{m} x_i y_i + 1
= Σ_{i=1}^{m} (x_i y_i)² + 2 Σ_{i=1}^{m} Σ_{j=i+1}^{m} x_i y_i x_j y_j + 2 Σ_{i=1}^{m} x_i y_i + 1
= Φ(x)·Φ(y)
So we can replace ⟨Φ(x), Φ(y)⟩ with (x·y + 1)², which is O(m).

Kernels and feature maps, Polynomial kernels. The learning algorithm only needs ⟨Φ(x), Φ(y)⟩; for the quadratic polynomial, we can replace this by (⟨x, y⟩ + 1)², or optionally with scale factors (a⟨x, y⟩ + b)². Calculating one scalar product then drops from O(m²) to O(m), and the overall training algorithm is O(mR²). The same trick also works for cubic and higher degrees: the cubic polynomial kernel (a⟨x, y⟩ + b)³ includes all ≈ m³/6 terms up to degree 3, the quartic polynomial kernel (a⟨x, y⟩ + b)⁴ includes all ≈ m⁴/24 terms up to degree 4, etc.

Kernels and feature maps, Polynomial kernels. For a polynomial kernel of degree d, we use (⟨x, y⟩ + 1)^d; calculating the scalar product drops from O(m^d) to O(m). The algorithm implicitly uses an enormous number of terms, with a high theoretical risk of overfitting, but it often works well in practice. Note: the same trick is used to evaluate a test input: y(x_t) = Σ_{k=1}^{R} α_k y_k (⟨x_k, x_t⟩ + 1)^d. Note: α_k = 0 for non-support vectors, so evaluation is overall O(mS), with S the number of support vectors.

Kernels and feature maps, Kernel design. How to come up with a useful kernel function? Derive it directly from explicit feature mappings, or design a similarity function for your input data and then check whether it is a valid kernel function. Use the application domain to guess useful values of any kernel parameters (scale factors); for example, for polynomial kernels make (a⟨x_i, x_j⟩ + b) lie between ±1 for all i and j.

Kernels and feature maps, Kernel composition. Given kernels K_1 and K_2 over X × X, the following functions are also kernels:
K(x, z) = α K_1(x, z), α ∈ IR⁺;
K(x, z) = K_1(x, z) + c, c ∈ IR⁺;
K(x, z) = K_1(x, z) + K_2(x, z);
K(x, z) = K_1(x, z) · K_2(x, z);
K(x, z) = xᵀBz, X ⊆ IRⁿ, B positive semi-definite.

Kernels and feature maps, Gaussian kernel: K(x, z) = exp( −‖x − z‖² / (2σ²) ) with bandwidth parameter σ. The kernel evaluation depends on the distance between x and z (local neighborhood classification). Initialize σ to a characteristic distance between nearby patterns in feature space; a large distance implies (nearly) orthogonal patterns.
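A minimal sketch of this kernel, checked against scikit-learn's rbf_kernel (which uses the parameterization gamma = 1/(2σ²)); the input values are arbitrary:

import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def gaussian_kernel(x, z, sigma=1.0):
    return np.exp(-np.sum((x - z) ** 2) / (2.0 * sigma ** 2))

x, z = np.array([1.0, 2.0]), np.array([2.0, 0.0])
sigma = 1.5
print(gaussian_kernel(x, z, sigma))
print(rbf_kernel(x.reshape(1, -1), z.reshape(1, -1), gamma=1.0 / (2 * sigma ** 2)))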

Kernels and feature maps, The kernel trick: rewrite the learning algorithm such that any reference to the input data happens only inside inner products; replace every such inner product by the kernel function; then work with the (linear) algorithm as usual. Many well-known algorithms can be rewritten using the kernel approach.

Kernels and feature maps, Summary: kernels. Non-linearity enters (only) through the kernel, but the training algorithm remains linear. There is a free choice of the kernel (and feature map) based on the application; polynomial or Gaussian kernels often work well. Some examples of fancy kernels next week.

Kernels and feature maps, Summary: Support Vector Machine. Based on the linear classifier, with four new main concepts: maximum margin classification; soft-margin classification for noisy data; non-linearity via feature maps; and the kernel trick, the implicit calculation of feature maps. Use Quadratic Programming for training; polynomial or Gaussian kernels often work well.