UVA CS 6316 Fall 2015 Graduate: Machine Learning
10/27/15
Dr. Yanjun Qi
University of Virginia, Department of Computer Science

Announcements: Rough Plan
- HW3: due on Nov. 8th, midnight
- Mid-phase Project Report: due on Nov. 4th
- Late Midterm: open note / open lecture
  - Nov. 18th? Conflicts with many students' conference trips
  - Nov. 23rd? Conflicts?
- HW4: 20 sample questions for exam preparation; due date depends on the late-midterm date (if the 23rd, due on the 20th)
Where are we?
Five major sections of this course:
- Regression (supervised)
- Classification (supervised)
- Unsupervised models
- Learning theory
- Graphical models

Where are we?
Three major sections for classification. We can divide the large variety of classification approaches into roughly three major types:
1. Discriminative: directly estimate a decision rule/boundary; e.g., logistic regression, support vector machine, decision tree
2. Generative: build a generative statistical model; e.g., naïve Bayes classifier, Bayesian networks
3. Instance-based classifiers: use observations directly (no models); e.g., k-nearest neighbors
Today:
- K-nearest neighbor
- Model Selection / Bias-Variance Tradeoff

Nearest neighbor classifiers
Basic idea: if it walks like a duck and quacks like a duck, then it's probably a duck.
[Figure: compute the distance from the test sample to the training samples, then choose the k nearest samples.]
Nearest neighbor classifiers
[Figure: an unknown record among labeled training samples.]
Requires three inputs:
1. The set of stored training samples
2. A distance metric to compute the distance between samples
3. The value of k, i.e., the number of nearest neighbors to retrieve

Nearest neighbor classifiers
To classify an unknown sample:
1. Compute its distance to the other training records
2. Identify the k nearest neighbors
3. Use the class labels of the nearest neighbors to determine the class label of the unknown record (e.g., by taking a majority vote)
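The three inputs and the three classification steps map directly onto code. A minimal sketch in NumPy (the function name and toy data are mine, for illustration only):

```python
import numpy as np

def knn_classify(x_test, X_train, y_train, k=3):
    """Classify one sample by majority vote among its k nearest neighbors."""
    # 1. Compute the (Euclidean) distance to every stored training sample.
    dists = np.sqrt(((X_train - x_test) ** 2).sum(axis=1))
    # 2. Identify the k nearest neighbors.
    nearest = np.argsort(dists)[:k]
    # 3. Majority vote over their class labels.
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]

# Toy example: two 2-D classes.
X_train = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y_train = np.array([0, 0, 1, 1])
print(knn_classify(np.array([0.2, 0.1]), X_train, y_train, k=3))  # -> 0
```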
Definition of nearest neighbor
[Figure: (a) 1-nearest neighbor, (b) 2-nearest neighbor, (c) 3-nearest neighbor.]
The k nearest neighbors of a sample x are the data points that have the k smallest distances to x.

1-nearest neighbor
Voronoi diagram: a partitioning of a plane into regions based on distance to points in a specific subset of the plane.
Nearest neighbor classification
Compute the distance between two points; for instance, the Euclidean distance
$$d(x, y) = \sqrt{\sum_i (x_i - y_i)^2}$$
Options for determining the class from the nearest neighbor list:
- Take a majority vote of the class labels among the k nearest neighbors
- Weight the votes according to distance; example: weight factor $w = 1/d^2$
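A sketch of the distance-weighted variant, assuming the $w = 1/d^2$ factor from the slide (the small `eps` guard is my addition, to avoid division by zero when a neighbor coincides with the test point):

```python
import numpy as np

def knn_weighted_vote(x_test, X_train, y_train, k=3, eps=1e-12):
    """k-NN vote where each neighbor's vote is weighted by w = 1 / d^2."""
    dists = np.sqrt(((X_train - x_test) ** 2).sum(axis=1))
    nearest = np.argsort(dists)[:k]
    weights = 1.0 / (dists[nearest] ** 2 + eps)  # closer neighbors count more
    # Sum the weights per class instead of counting votes.
    scores = {}
    for label, w in zip(y_train[nearest], weights):
        scores[label] = scores.get(label, 0.0) + w
    return max(scores, key=scores.get)
```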
Nearest neighbor classification
Choosing the value of k:
- If k is too small, the classifier is sensitive to noise points
- If k is too large, the neighborhood may include points from other classes

Nearest neighbor classification
Scaling issues: attributes may have to be scaled to prevent the distance measure from being dominated by one of the attributes. Example:
- height of a person may vary from 1.5 m to 1.8 m
- weight of a person may vary from 90 lb to 300 lb
- income of a person may vary from $10K to $1M

Nearest neighbor classification
Problem with the Euclidean measure: in high-dimensional data, the curse of dimensionality can produce counter-intuitive results.
[Figure: two pairs of binary vectors with the same Euclidean distance despite very different overlap.]
One solution: normalize the vectors to unit length.
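A common way to handle the attribute-scaling issue is to standardize each attribute before computing distances; z-scoring below is one standard choice, not something the slide prescribes (the slide's own suggestion for the binary-vector problem is unit-length normalization):

```python
import numpy as np

def standardize(X_train, X_test):
    """Scale each attribute to zero mean and unit variance,
    using statistics from the training set only."""
    mu = X_train.mean(axis=0)
    sigma = X_train.std(axis=0)
    sigma[sigma == 0] = 1.0  # avoid division by zero for constant columns
    return (X_train - mu) / sigma, (X_test - mu) / sigma

# Without scaling, income (~1e5) would dominate height (~1.7) in any distance.
X_train = np.array([[1.5, 90.0, 10_000.0], [1.8, 300.0, 1_000_000.0]])
X_test = np.array([[1.7, 150.0, 50_000.0]])
X_train_s, X_test_s = standardize(X_train, X_test)
```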
Nearest neighbor classification
The k-nearest neighbor classifier is a lazy learner:
- It does not build a model explicitly
- Classifying unknown samples is relatively expensive
The k-nearest neighbor classifier is a local model, vs. the global model of linear classifiers.

K-Nearest Neighbor
- Task: classification
- Representation: local smoothness
- Score function: NA
- Search/Optimization: NA
- Models, parameters: the training samples
Decision boundaries in global vs. local models
- Linear regression: global; stable; can be inaccurate
- 15-nearest neighbor, 1-nearest neighbor: local; accurate; unstable
What ultimately matters: GENERALIZATION

K-Nearest Neighbours for Classification (Extra)
k acts as a smoother.
For $N \to \infty$, the error rate of the 1-nearest-neighbour classifier is never more than twice the optimal error (obtained from the true conditional class distributions).
KNN METHODS IN HIGH DIMENSIONS (Extra)
In high dimensions, all sample points are close to the edge of the sample. For N data points uniformly distributed in a p-dimensional unit ball centered at the origin, the median distance from the origin to the closest point is
$$d(p, N) = \left(1 - \left(\tfrac{1}{2}\right)^{1/N}\right)^{1/p}$$
For example, $d(10, 500) \approx 0.52$: more than half the way to the boundary (the unit ball's boundary is at distance 1 from the origin).
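The $d(10, 500) \approx 0.52$ figure can be checked numerically, both from the closed form and by Monte Carlo (the sampling helper is my own sketch):

```python
import numpy as np

def median_closest_distance(p, N):
    """Closed form: median distance from the origin to the closest of
    N points uniform in the p-dimensional unit ball."""
    return (1 - 0.5 ** (1 / N)) ** (1 / p)

print(median_closest_distance(10, 500))  # ~0.52

# Monte Carlo check: sample uniformly in the unit ball, take the nearest point.
rng = np.random.default_rng(0)

def sample_ball(n, p):
    x = rng.standard_normal((n, p))
    x /= np.linalg.norm(x, axis=1, keepdims=True)  # uniform on the sphere
    r = rng.random(n) ** (1 / p)                   # radius for a uniform ball
    return x * r[:, None]

closest = [np.linalg.norm(sample_ball(500, 10), axis=1).min() for _ in range(200)]
print(np.median(closest))  # close to the closed-form value
```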
Vs. locally weighted regression
Also known as locally weighted regression, locally linear regression, LOESS. A linear function of x could represent only the neighborhood of $x_0$; use an RBF kernel $K_\lambda(x_i, x_0)$ to pick out and emphasize the neighborhood of $x_0$.

Vs. locally weighted regression
Separate weighted least squares at each target point $x_0$:
$$\min_{\alpha(x_0),\,\beta(x_0)} \sum_{i=1}^{N} K_\lambda(x_i, x_0)\,[y_i - \alpha(x_0) - \beta(x_0)\,x_i]^2$$
$$\hat{f}(x_0) = \hat{\alpha}(x_0) + \hat{\beta}(x_0)\,x_0$$
$$K_\tau(x_i, x_0) = \exp\!\left(-\frac{(x_i - x_0)^2}{2\tau^2}\right)$$
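Combining the weighted least-squares objective with the Gaussian kernel $K_\tau$ gives a compact implementation. A sketch for one-dimensional inputs (the bandwidth and the test data are arbitrary choices of mine):

```python
import numpy as np

def loess_predict(x0, x, y, tau=0.5):
    """Fit a weighted least-squares line around x0 and evaluate it at x0.
    Weights come from the Gaussian (RBF) kernel K_tau on the slide."""
    w = np.exp(-((x - x0) ** 2) / (2 * tau ** 2))  # K_tau(x_i, x0)
    A = np.stack([np.ones_like(x), x], axis=1)     # design matrix [1, x]
    # Solve the weighted normal equations for alpha(x0), beta(x0).
    W = np.diag(w)
    alpha, beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
    return alpha + beta * x0                       # f_hat(x0)

rng = np.random.default_rng(1)
x = np.linspace(0, 2 * np.pi, 100)
y = np.sin(x) + 0.1 * rng.standard_normal(100)
print(loess_predict(np.pi / 2, x, y))  # close to sin(pi/2) = 1
```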
Today:
- K-nearest neighbor
- Model Selection / Bias-Variance Tradeoff
  - EPE
  - Decomposition of MSE
  - Bias-Variance tradeoff
  - High bias? High variance? How to respond?

e.g. Training Error from KNN, Lesson Learned
When k = 1, there are no misclassifications on the training data: overtraining.
[Figure: 1-nearest-neighbor averaging; decision regions where $\hat{Y} > 0.5$, $\hat{Y} = 0.5$, $\hat{Y} < 0.5$ over the $(X_1, X_2)$ plane.]
Minimizing training error is not always good (e.g., 1-NN).

Statistical Decision Theory
- Random input vector: X
- Random output variable: Y
- Joint distribution: Pr(X, Y)
- Loss function: L(Y, f(X))
Expected prediction error (EPE):
$$EPE(f) = E(L(Y, f(X))) = \int L(y, f(x))\,\Pr(dx, dy)$$
e.g., with squared error loss (also called "L2 loss"):
$$EPE(f) = \int (y - f(x))^2\,\Pr(dx, dy)$$
Consider the population distribution.
Expected prediction error (EPE)
$$EPE(f) = E(L(Y, f(X))) = \int L(y, f(x))\,\Pr(dx, dy)$$
Consider the joint distribution.
For L2 loss, e.g. $\int (y - f(x))^2 \Pr(dx, dy)$: under L2 loss, the best estimator for EPE (theoretically) is the conditional mean
$$\hat{f}(x) = E(Y \mid X = x)$$
e.g., KNN methods are a direct implementation (approximation) of it.
For 0-1 loss, $L(k, \ell) = 1 - \delta_{k\ell}$: the Bayes classifier
$$\hat{f}(X) = C_k \;\text{ if }\; \Pr(C_k \mid X = x) = \max_{g \in C} \Pr(g \mid X = x)$$

LR VS. KNN FOR MINIMIZING EPE
We know that under L2 loss the best estimator for EPE (theoretically) is the conditional mean $f(x) = E(Y \mid X = x)$.
Two simple approaches use different approximations:
- Least squares assumes that f(x) is well approximated by a globally linear function.
- Nearest neighbors assumes that f(x) is well approximated by a locally constant function.
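The two approximations are easy to compare on simulated data where the true conditional mean is known (the setup below is entirely my construction, for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
x = rng.uniform(-3, 3, n)
y = np.sin(x) + 0.3 * rng.standard_normal(n)  # true E(Y|X=x) = sin(x)

x0 = 1.0
# Least squares: globally linear approximation of f.
X = np.stack([np.ones(n), x], axis=1)
beta = np.linalg.lstsq(X, y, rcond=None)[0]
print("linear:", beta[0] + beta[1] * x0)

# k-NN: locally constant approximation -- average y over the k nearest x's.
k = 25
nearest = np.argsort(np.abs(x - x0))[:k]
print("knn:   ", y[nearest].mean())
print("truth: ", np.sin(x0))  # ~0.841
```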
Review: WHEN EPE USES DIFFERENT LOSS
Loss function → estimator:
- L2 → conditional mean, $E(Y \mid X = x)$
- L1 → conditional median
- 0-1 → Bayes classifier (MAP)

Today:
- K-nearest neighbor
- Model Selection / Bias-Variance Tradeoff
  - EPE
  - Decomposition of MSE
  - Bias-Variance tradeoff
  - High bias? High variance? How to respond?
Decomposition of EPE
Assume the additive error model $Y = f(X) + \varepsilon$, with $E(\varepsilon) = 0$.
Notation:
- Output random variable: Y
- Prediction function: f
- Prediction estimator: $\hat{f}$
The EPE splits into the irreducible (Bayes) error and the MSE component of $\hat{f}$ in estimating f.

Bias-Variance Trade-off for EPE:
$$EPE(x_0) = \text{noise}^2 + \text{bias}^2 + \text{variance}$$
- noise²: unavoidable error
- bias²: error due to incorrect assumptions
- variance: error due to the variance of the training samples
Slide credit: D. Hoiem
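The three-term decomposition can be verified by simulation: draw many training sets, fit the estimator on each, and measure the squared error at one fixed $x_0$. A sketch with a deliberately simple, locally constant (hence biased) estimator of my choosing:

```python
import numpy as np

rng = np.random.default_rng(3)
f = np.sin              # true function
sigma = 0.3             # noise std -> noise^2 is the Bayes error
x0, n_train, n_reps = 1.0, 20, 2000

# Estimator: predict the mean y of a fresh training set drawn near x0
# (a biased, locally constant model).
preds = np.empty(n_reps)
for r in range(n_reps):
    x = rng.uniform(x0 - 1, x0 + 1, n_train)
    y = f(x) + sigma * rng.standard_normal(n_train)
    preds[r] = y.mean()

bias2 = (preds.mean() - f(x0)) ** 2
var = preds.var()
# Simulated EPE at x0: squared error against fresh noisy observations of Y.
epe = ((f(x0) + sigma * rng.standard_normal(n_reps) - preds) ** 2).mean()
print(f"noise^2={sigma**2:.3f}  bias^2={bias2:.3f}  var={var:.3f}")
print(f"sum={sigma**2 + bias2 + var:.3f}  vs  simulated EPE={epe:.3f}")
```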
BIAS AND VARIANCE TRADE-OFF for MSE (a more general setting):
- Bias measures the accuracy or quality of the estimator: low bias implies that, on average, we will accurately estimate the true parameter or function from the training data.
- Variance measures the precision or specificity of the estimator: low variance implies that the estimator does not change much as the training set varies.

BIAS AND VARIANCE TRADE-OFF for MSE of parameter estimation:
[Figure: MSE decomposed into error due to the variance of training samples and error due to incorrect assumptions.]
e.g., BIAS AND VARIANCE IN KNN (Extra)
Under the data model $Y = f(X) + \varepsilon$:
- Prediction: $\hat{f}(x_0) = \frac{1}{k} \sum_{\ell \in N_k(x_0)} y_\ell$, the average over the k nearest neighbors of $x_0$
- Bias: $f(x_0) - \frac{1}{k} \sum_{\ell \in N_k(x_0)} f(x_\ell)$
- Variance: $\sigma^2 / k$

Today:
- K-nearest neighbor
- Model Selection / Bias-Variance Tradeoff
  - EPE
  - Decomposition of MSE
  - Bias-Variance tradeoff
  - High bias? High variance? How to respond?
Bias-Variance Tradeoff / Model Selection
[Figure: error vs. model complexity, with an underfit region and an overfit region.]

Which is the best?
- Highest bias, lowest variance → model complexity = low
- Medium bias, medium variance → model complexity = medium
- Smallest bias, highest variance → model complexity = high
Why not choose the method with the best fit to the data? Because what matters is how well you are going to predict future data.
Bias-Variance Trade-off
- Models with too few parameters are inaccurate because of a large bias (not enough flexibility).
- Models with too many parameters are inaccurate because of a large variance (too much sensitivity to the sample randomness).
Slide credit: D. Hoiem

Training vs Test Error
Training error can always be reduced by increasing model complexity, but this risks over-fitting and generalizing poorly.
[Figure: expected test error and expected training error vs. model complexity.]
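These curves can be reproduced with k-NN itself, since 1/k plays the role of model complexity (k = 1 is the most flexible fit). A sketch on a 1-D toy problem of my construction:

```python
import numpy as np

rng = np.random.default_rng(4)

def knn_error(X_fit, y_fit, X_eval, y_eval, k):
    """Misclassification rate of k-NN fit on (X_fit, y_fit)."""
    errs = 0
    for x, y in zip(X_eval, y_eval):
        d = np.abs(X_fit - x)
        vote = y_fit[np.argsort(d)[:k]].mean() > 0.5
        errs += int(vote != y)
    return errs / len(y_eval)

# 1-D toy problem: class 1 is more likely when x > 0 (with label noise).
def make_data(n):
    x = rng.uniform(-1, 1, n)
    y = (x + 0.5 * rng.standard_normal(n) > 0).astype(int)
    return x, y

X_tr, y_tr = make_data(200)
X_te, y_te = make_data(1000)
for k in [1, 5, 25, 101]:
    # k = 1 gives zero training error (each point is its own nearest
    # neighbor), echoing the earlier overtraining slide.
    print(k, knn_error(X_tr, y_tr, X_tr, y_tr, k),   # training error
             knn_error(X_tr, y_tr, X_te, y_te, k))   # test error
```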
Regression: Complexity versus Goodness of Fit
[Figure: the same training data fit by three models: too simple (low variance / high bias), about right, and too complex (low bias / high variance).]
What ultimately matters: GENERALIZATION

Classification: decision boundaries in global vs. local models
- Linear regression: low variance / high bias; global, stable, can be inaccurate
- 15-nearest neighbor: low variance / high bias (KNN: local, accurate, unstable)
- 1-nearest neighbor: low bias / high variance
What ultimately matters: GENERALIZATION
Model bias & model variance
[Figure: the middle red curve is the TRUE function; blue points are fits from different training sets.]
- Error due to bias: how far off, in general, the fits are from the middle red curve
- Error due to variance: how wildly the blue points spread
We need to make assumptions that are able to generalize.

Components of generalization error
- Bias: how much the average model over all training sets differs from the true model; error due to inaccurate assumptions/simplifications made by the model
- Variance: how much models estimated from different training sets differ from each other
- Underfitting: the model is too simple to represent all the relevant class characteristics; high bias and low variance; high training error and high test error
- Overfitting: the model is too complex and fits irrelevant characteristics (noise) in the data; low bias and high variance; low training error and high test error
Slide credit: L. Lazebnik
MODEL COMPLEXITY CONTROL, EXAMPLES (Extra)
- Regularization (Bayesian)
- Kernel methods and local regression
- Basis functions

Today:
- K-nearest neighbor
- Model Selection / Bias-Variance Tradeoff
  - EPE
  - Decomposition of MSE
  - Bias-Variance tradeoff
  - High bias? High variance? How to respond?
High variance
[Figure: training and test error vs. training-set size m; cross-validation error could also be used.]
- Test error is still decreasing as m increases: suggests a larger training set will help
- Large gap between training and test error
- Low training error and high test error
Slide credit: A. Ng

How to reduce variance?
- Choose a simpler classifier
- Regularize the parameters
- Get more training data
- Try a smaller set of features
Slide credit: D. Hoiem
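A sketch of how such learning curves are computed, using an intentionally high-variance model (a degree-9 polynomial on noisy data; the whole setup is mine, for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)

def mse(coef, x, y):
    return np.mean((np.polyval(coef, x) - y) ** 2)

# High-variance setup: a flexible polynomial fit to noisy sine data.
x_test = np.linspace(0, 1, 500)
y_test = np.sin(2 * np.pi * x_test) + 0.2 * rng.standard_normal(500)
x_all = rng.uniform(0, 1, 200)
y_all = np.sin(2 * np.pi * x_all) + 0.2 * rng.standard_normal(200)

for m in [15, 30, 60, 120, 200]:  # growing training-set size
    c = np.polyfit(x_all[:m], y_all[:m], deg=9)
    print(m, mse(c, x_all[:m], y_all[:m]), mse(c, x_test, y_test))
# Test error keeps falling as m grows while the train/test gap stays large:
# the high-variance signature, where more data helps.
```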
High bias
[Figure: training and test error vs. training-set size m; cross-validation error could also be used.]
- Even the training error is unacceptably high
- Small gap between training and test error
- High training error and high test error
Slide credit: A. Ng

How to reduce bias? E.g.:
- Get additional features
- Try adding basis expansions, e.g., polynomial
- Try a more complex learner
For instance, when trying to solve spam detection using L2-regularized logistic regression (Extra): if performance is not as desired, diagnose whether the problem is high bias or high variance before choosing a fix.
Slide credit: A. Ng

Model Selection and Assessment
- Model Selection: estimating the performance of different models in order to choose the best one
- Model Assessment: having chosen a model, estimating the prediction error on new data
Model Selection and Assessment (Extra)
Data-rich scenario: split the dataset into Train / Validation / Test; use the validation split for model selection and the test split for model assessment.
If there is insufficient data to split into 3 parts:
- Approximate the validation step analytically: AIC, BIC, MDL, SRM
- Efficient reuse of samples: cross-validation, bootstrap
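A sketch of the data-rich protocol applied to k-NN, where the validation split picks k and the test split is touched once at the end (the split fractions and data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.standard_normal((1000, 2))
y = (X[:, 0] + X[:, 1] + 0.5 * rng.standard_normal(1000) > 0).astype(int)

# 50% train / 25% validation / 25% test.
idx = rng.permutation(1000)
tr, va, te = idx[:500], idx[500:750], idx[750:]

def knn_err(fit_idx, eval_idx, k):
    """Misclassification rate of k-NN trained on fit_idx."""
    errs = 0
    for i in eval_idx:
        d = np.linalg.norm(X[fit_idx] - X[i], axis=1)
        pred = y[fit_idx][np.argsort(d)[:k]].mean() > 0.5
        errs += int(pred != y[i])
    return errs / len(eval_idx)

# Model selection: pick k on the validation set...
best_k = min([1, 3, 5, 11, 21, 51], key=lambda k: knn_err(tr, va, k))
# ...model assessment: report the error once, on the held-out test set.
print(best_k, knn_err(tr, te, best_k))
```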
Today Recap:
- K-nearest neighbor
- Model Selection / Bias-Variance Tradeoff
  - EPE
  - Decomposition of MSE
  - Bias-Variance tradeoff
  - High bias? High variance? How to respond?

References
- Prof. Tan, Steinbach, Kumar's "Introduction to Data Mining" slides
- Prof. Andrew Moore's slides
- Prof. Eric Xing's slides
- Hastie, Trevor, et al. The Elements of Statistical Learning. Vol. 2. No. 1. New York: Springer, 2009.