Practical 6: Linear discriminant analysis and logistic classifiers

This practical looks at two different methods of fitting linear classifiers. Linear discriminant analysis is implemented in the MASS package and the logistic classifier is implemented in the nnet package.

Iris data

The iris dataset is available in the MASS package. This is the classical dataset Fisher used in his original 1936 paper on linear discriminant analysis. The dataset contains 150 observations of iris flowers. Each observation consists of measurements of the length and width of both the flower's sepals and petals, followed by the species of the flower. Read in the data and perform some exploratory analysis.

library(MASS)
data(iris)          # load data
head(iris)
pairs(iris[,1:4])

LDA. Now, perform linear discriminant analysis using the lda function from the MASS package and interpret the output.

iris.lda <- lda(Species~., data=iris)
iris.lda
# Call:
# lda(Species ~ ., data = iris)
#
# Prior probabilities of groups:
#     setosa versicolor  virginica
#  0.3333333  0.3333333  0.3333333
#
# Group means:
#            Sepal.Length Sepal.Width Petal.Length Petal.Width
# setosa            5.006       3.428        1.462       0.246
# versicolor        5.936       2.770        4.260       1.326
# virginica         6.588       2.974        5.552       2.026
#
# Coefficients of linear discriminants:
#                      LD1         LD2
# Sepal.Length   0.8293776  0.02410215
# Sepal.Width    1.5344731  2.16452123

# Petal.Length  -2.2012117 -0.93192121
# Petal.Width   -2.8104603  2.83918785
#
# Proportion of trace:
#    LD1    LD2
# 0.9912 0.0088

The prior probabilities are the estimated group priors $\hat\pi_l$, $l = 1, \dots, 3$. The group means are the class centroids $\hat\mu_l$, $l = 1, \dots, 3$. As mentioned in the lecture, LDA for $L$ classes can be viewed as a nearest class centroid (adjusted by class priors) classification method after projecting the data onto an (at most) $(L-1)$-dimensional space. The matrix $A$ of coefficients of linear discriminants is precisely that projection matrix. (Though not covered in the course, the two columns LD1 and LD2 in this case correspond to the first and second canonical direction vectors. The proportions of trace are the corresponding Rayleigh quotients, i.e. ratios of between-class variance to within-class variance, along the two canonical directions. We see that most of the signal is captured in a single canonical direction.)

We can visualise the LDA output using the following plot command. See ?plot.lda for more details.

plot(iris.lda)

Why is the plot two-dimensional? How are the x- and y-coordinates of the two-dimensional plot computed? We can manually obtain the same plot (and colour code the classes) as follows.

A <- iris.lda$scaling
X <- as.matrix(iris[,1:4])
plot(X%*%A, pch=20, col=iris$Species)
pred <- predict(iris.lda, newdata=iris)   # another way to obtain projected points
plot(pred$x, pch=20, col=iris$Species)    # predict also centres the data

What is the training misclassification error of LDA? What is the leave-one-out cross-validated misclassification error?
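One way to compute both (a sketch; with CV=TRUE, lda performs leave-one-out cross-validation and returns the held-out class predictions):

mean(predict(iris.lda)$class != iris$Species)   # training error
iris.cv <- lda(Species~., data=iris, CV=TRUE)   # leave-one-out CV fit
mean(iris.cv$class != iris$Species)             # LOO-CV error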

Logistic classifier. Now, let's apply logistic regression classification to the same dataset. This can be done either using the multinom function in the nnet package (i.e. treating the logistic regression classifier as a neural network with no hidden layers!) or the mlogit function in the mlogit package. (Of course, if it is a two-class classification problem, we can also use the good old glm function.) We will use the former in this practical.

library(nnet)
iris.logit <- multinom(Species~., data=iris, maxit=200)
coef <- t(coef(iris.logit))
coef
#              versicolor virginica
# (Intercept)
# Sepal.Length
# Sepal.Width
# Petal.Length
# Petal.Width

The maxit argument is set to 200 for convergence. For which covariate values will a flower be classified as virginica?

The misclassification training error of the logistic classifier is 1.3%.

sum(predict(iris.logit, newdata=iris) != iris$Species)/150
# 0.01333333
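As a hint for the question above: multinom uses the first factor level (setosa) as the baseline class, so for a covariate vector $x$ (including the intercept) the fitted class probabilities are proportional to $1$, $e^{x^T \beta_{\text{versicolor}}}$ and $e^{x^T \beta_{\text{virginica}}}$, where the $\beta$'s are the two columns of coef above. A flower is therefore classified as virginica precisely when its virginica score is the largest, i.e. when

$x^T \beta_{\text{virginica}} > 0 \quad \text{and} \quad x^T \beta_{\text{virginica}} > x^T \beta_{\text{versicolor}}.$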

The multinom function numerically solves for the coefficients using a quasi-Newton method (similar to the Newton-Raphson method, but with an approximate inverse Hessian matrix that is computationally cheaper to obtain). Neither the nnet nor the mlogit package implements stochastic gradient descent, so we write our own code to achieve this. Do you understand what is happening in the innermost for loop?

X <- model.matrix(Species~., data=iris)
y <- model.matrix(~Species, data=iris)[,-1]    # indicator vectors for labels
labels <- levels(iris$Species)
n <- dim(X)[1]; p <- dim(X)[2]                 # num of obs and num of covariates
expit <- function(v) exp(v)/(sum(exp(v))+1)    # generalised expit for multinomial
logl <- function(beta, X, y){                  # log-likelihood of the coefficients
  sum(log(exp(rowSums(y*(X%*%beta)))/(rowSums(exp(X%*%beta))+1)))
}
beta <- matrix(0, p, 2)                        # initialise matrix of linear coefficients
nepoch <- 200
set.seed(1122); shuffle <- sample(150)         # randomly shuffle indices
for (epoch in 1:nepoch){                       # stochastic gradient descent updates
  alpha <- 1/(1+epoch/10)                      # step size (learning rate)
  for (i in shuffle){
    xi <- X[i,,drop=TRUE]; yi <- y[i,,drop=TRUE]
    prob <- expit(t(beta)%*%xi)                # predicted probs of i-th obs with current betas
    beta <- beta + alpha * xi %*% t(yi - prob) # SGD update
  }
  cat('epoch = ', epoch, 'loglik = ', logl(beta, X, y), '\n')
}

The step size for the stochastic gradient update is chosen to be $(1 + e/10)^{-1}$, where $e$ is the current epoch (number of passes through the entire data). Optimisation theory in this area says that any choice of learning rates $\alpha_e$ satisfying $\alpha_e \to 0$, $\sum_{e=1}^\infty \alpha_e = \infty$ and $\sum_{e=1}^\infty \alpha_e^2 < \infty$ will ensure eventual convergence to a local maximum (which is a global maximum in this case). However, different choices of learning rate can have a huge influence on the speed of convergence.

The above code prints out the log-likelihood of the estimated coefficients after every epoch. We see that the log-likelihood has not converged by the end of 200 epochs. On the other hand, the multinom function has reached numerical convergence. Compare the coefficients beta estimated by stochastic gradient descent to the coefficients obtained by the multinom function.

beta
coef
logl(beta, X, y)
logl(coef, X, y)

They are not the same (actually, they are rather different), and the log-likelihood of the coefficients estimated through the quasi-Newton method is much higher than that of the stochastic gradient descent estimate.
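For reference, here is the reasoning behind the innermost for loop. Writing $p_i = \text{expit}(\beta^T x_i)$ for the vector of fitted non-baseline class probabilities of observation $i$, the log-likelihood contribution of that observation is

$\ell_i(\beta) = y_i^T \beta^T x_i - \log\Big(1 + \sum_l e^{x_i^T \beta_l}\Big),$

and its gradient with respect to the coefficient matrix $\beta$ is $\nabla_\beta \ell_i(\beta) = x_i (y_i - p_i)^T$. The line beta <- beta + alpha * xi %*% t(yi - prob) is thus a single gradient ascent step on the $i$-th log-likelihood contribution, i.e. exactly one stochastic gradient update.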

MNIST data

The MNIST (Modified National Institute of Standards and Technology) data is a database of handwritten digits. We will be using a subset of the database; the full database can be found online. Load the data into R and explore a bit.

filepath <- "..."   # replace with the URL or path of the folder containing the data
filename <- "mnist.csv"
mnist <- read.csv(paste0(filepath, filename), header=TRUE)
mnist[1:10,1:10]
mnist$digit <- as.factor(mnist$digit)
visualise <- function(vec, ...){   # function for graphically displaying a digit
  image(matrix(as.numeric(vec), nrow=28)[,28:1], col=gray((255:0)/255), ...)
}
old_par <- par(mfrow=c(2,2))
for (i in 1:4) visualise(mnist[i,-1])
par(old_par)

Each handwritten digit is stored as a 28 × 28 pixel grayscale image. The grayscale values (0 to 255) for the 784 pixels are stored as row vectors in the dataset. We define a visualise function to graphically display a digit based on its vector of grayscale values.

We use the first 2/3 of the data as training data and the remaining 1/3 as test data. Moreover, many margin pixels are constantly white throughout the dataset; we exclude them from our analysis.

train <- mnist[1:4000,]
identical <- apply(train, 2, function(v) all(v==v[1]))
train <- train[,!identical]
test <- mnist[4001:6000,!identical]

LDA. We first fit an LDA classifier to the data. The test error is 16.9%.

mnist.lda <- lda(digit~., data=train)
pred <- predict(mnist.lda, test)
sum(pred$class != test$digit)/2000

We can visualise the outcome by plotting along the first two canonical directions.

plot(pred$x, pch=20, col=as.numeric(test$digit)+1, xlim=c(-10,10), ylim=c(-10,10))
legend('topright', lty=1, col=1:10, legend=0:9)
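To see which digits are confused with which, we can also tabulate the predicted classes against the truth (a quick check using base R):

table(truth=test$digit, predicted=pred$class)   # rows: true digit, columns: predicted digit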

The first two canonical directions already show good separation of the 10 classes. The dim argument in predict.lda can be used to explicitly fit a reduced-rank LDA classifier. We plot how the training error and test error change as more dimensions are included.

train.err <- test.err <- rep(0, 20)
for (r in 1:20){
  train.err[r] <- sum(predict(mnist.lda, train, dim=r)$class != train$digit)/4000
  test.err[r] <- sum(predict(mnist.lda, test, dim=r)$class != test$digit)/2000
}
plot(train.err, type='l', col='orange', ylim=range(c(train.err,test.err)), ylab='error')
points(test.err, type='l', col='blue')
legend('topright', c('train err', 'test err'), col=c('orange', 'blue'), lty=1)

We see that all the useful information is essentially captured in the first ten canonical directions.

Logistic classifier. Next, we fit a logistic classifier to the MNIST data. We again start with the multinom function. The MaxNWts argument controls the maximum number of coefficients allowed in the multinomial logistic model. The test error is 22.2%.

mnist.logit <- multinom(digit~., data=train, MaxNWts=100000)
sum(predict(mnist.logit, newdata=test) != test$digit)/2000

We also implement stochastic gradient descent for this dataset.

X <- cbind(1, as.matrix(train[,-1])/255)
X.test <- cbind(1, as.matrix(test[,-1])/255)
y <- model.matrix(~digit-1, data=train)[,-1]   # use digit 0 as baseline
nepoch <- 20
beta <- matrix(0, 655, 9)
train.err <- test.err <- rep(0, nepoch)
set.seed(1122); shuffle <- sample(4000)        # randomly shuffle indices
for (epoch in 1:nepoch){
  alpha <- 1/(1+epoch/10)
  for (i in shuffle){
    xi <- X[i,,drop=TRUE]; yi <- y[i,,drop=TRUE]
    prob <- expit(t(beta)%*%xi)
    beta <- beta + alpha * xi%*%t(yi - prob)
  }
  train.err[epoch] <- sum((0:9)[max.col(cbind(0,X%*%beta))] != train[,1])/4000
  test.err[epoch] <- sum((0:9)[max.col(cbind(0,X.test%*%beta))] != test[,1])/2000
}
test.err[nepoch]

The final test error is 12.3%. We kept track of the training and test errors after each epoch. Here is how they evolve.

plot(train.err, type='l', col='orange', ylim=range(c(train.err,test.err)),
     xlab='epoch', ylab='error')
points(test.err, type='l', col='blue')
legend('topright', c('train err', 'test err'), col=c('orange', 'blue'), lty=1)

Let's have a look at some of the misclassified images.

pred <- (0:9)[max.col(cbind(0,X.test%*%beta))]
err_ind <- (1:2000)[pred != test[,1]]
old_par <- par(mfrow=c(3,3))
for (i in 1:9){
  visualise(mnist[4000+err_ind[i],-1],
            main=paste0('true=', test[err_ind[i],1], ', pred=', pred[err_ind[i]]))
}
par(old_par)
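Which digits are hardest to classify? A quick per-digit breakdown of the test error, reusing the objects above:

tapply(pred != test[,1], test[,1], mean)   # test error rate for each true digit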

If we compare the log-likelihoods of the estimators from the quasi-Newton method and stochastic gradient descent, we find that the former still produces a higher log-likelihood. However, it is the latter that gives the better test error. Early stopping in stochastic gradient descent is acting as a form of regularisation to prevent over-fitting in this case.
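A sketch of this comparison, reusing the logl function from the iris section (note that multinom was fitted on the raw pixel values, so its coefficients should be evaluated against the unscaled design matrix):

logl(beta, X, y)                           # SGD estimate; pixels scaled to [0,1]
X.raw <- cbind(1, as.matrix(train[,-1]))   # unscaled design matrix matching multinom's fit
logl(t(coef(mnist.logit)), X.raw, y)       # quasi-Newton estimate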
