Aravind Baskar A ASSIGNMENT - 3


EE5904R/ME5404 Neural Networks - Homework #3

Q1. Function Approximation with RBFN (20 Marks)

Solution (a): RBFN - Exact Interpolation Method

In the exact interpolation method, the number of hidden units equals the number of training data points. Several training runs are made because the noise added to the target function is random, so each run gives a slightly different solution. The mean squared error on the test set was recorded for each of the five trials (Trial 1 - Trial 5) together with their average.

MATLAB CODE:

clear all;
x_train = -1:0.05:1;
x_test  = -1:0.01:1;
N_train = length(x_train);
N_test  = length(x_test);
for trial = 1:5
    noise   = randn(N_train,1);
    y_train = 1.2*sin(pi*x_train) - cos(2.4*pi*x_train) + 0.3*noise';
    y_test  = 1.2*sin(pi*x_test)  - cos(2.4*pi*x_test);
    sigma   = 0.1;
    % interpolation matrix: one Gaussian unit centred on every training point
    W_train = ones(N_train, N_train);
    for j = 1:N_train
        for i = 1:N_train
            W_train(j,i) = exp(-((x_train(j)-x_train(i))^2)/(2*sigma^2));
        end
    end
    w_train = inv(W_train)*y_train';          % exact-interpolation weights
    % evaluate the network on the test points
    W_test = zeros(N_test, N_train);
    for j = 1:N_test
        for i = 1:N_train
            W_test(j,i) = exp(-((x_test(j)-x_train(i))^2)/(2*sigma^2));
        end
    end
    net_output = W_test*w_train;
    error = y_test' - net_output;
    disp(mse(error));
    figure(trial);
    plot(x_train, y_train, 'b^'); hold on;
    plot(x_test,  y_test,  'go'); hold on;
    plot(x_test,  net_output, 'r'); hold off;
    legend('Training data','Test data','RBFN output','Location','NorthWest');
    title(['RBFN - Exact interpolation method; trial-', int2str(trial)]);
    print(figure(trial), '-djpeg100', strcat('q1_a_', int2str(trial)));
end

Fig. 1. RBFN - Exact Interpolation Method (panels (a)-(e): trials 1-5)
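For reference, the exact-interpolation training used in the code above can be summarised in equation form (this is the standard exact-interpolation result, stated here for clarity rather than quoted from the original report):

\[
\Phi_{ji} = \exp\!\left(-\frac{(x_j - x_i)^2}{2\sigma^2}\right), \qquad
\Phi\,\mathbf{w} = \mathbf{d} \;\Rightarrow\; \mathbf{w} = \Phi^{-1}\mathbf{d},
\]

where \(\mathbf{d}\) is the vector of noisy training targets and \(\Phi\) contains one Gaussian unit centred on every training point, which is why the network interpolates the noise exactly.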

The exact-interpolation RBFN overfits: because noise was added to the training targets, forcing the network to pass exactly through every training point produces an inaccurate approximation of the underlying function and high test error.

b) Follow the strategy of Fixed Centers Selected at Random (as described on page 37 of the slides of lecture five): randomly select 15 centers among the sampling points, determine the weights of the RBFN, evaluate the approximation performance of the resulting RBFN using the test set, and compare it to the result of part a).

Solution (b): Fixed Centers Selected at Random

In the fixed-centers method, 15 center points are randomly selected among the sampling points, i.e. the network has 15 hidden units.
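The width of the Gaussian units in the fixed-centers code below follows the standard "fixed centers selected at random" formula; it is stated here for clarity (not quoted from the report) and explains the factor M/d_max^2 appearing in the code:

\[
\sigma = \frac{d_{\max}}{\sqrt{2M}}
\;\Rightarrow\;
\varphi_j(x) = \exp\!\left(-\frac{M}{d_{\max}^{2}}\,(x - t_j)^2\right), \qquad j = 1,\dots,M,
\]

where \(M = 15\) is the number of centers, \(t_j\) are the randomly selected centers, and \(d_{\max}\) is the maximum distance between them.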

Fig. 2. Fixed Centers Selected at Random, number of centers = 15 (panels (a)-(e): trials 1-5)

The mean squared error on the test set was recorded for each of the five trials with M = 15 (Trial 1 - Trial 5) together with their average.

Comparison: Comparing exact interpolation with the fixed-centers method, the fixed-centers errors are small in some trials but extremely large in a few others. This happens because the design matrix can be close to singular or badly scaled. The solution therefore depends on the number of centers chosen (i.e. the number of hidden units).

MATLAB CODE:

% Fixed Centers Selected at Random
clear all; clc;
x_train = -1:0.05:1;  x_train = x_train(:);
x_test  = -1:0.01:1;  x_test  = x_test(:);
N_train = length(x_train);
N_test  = length(x_test);
M = 15;
for trial = 1:5
    noise   = randn(N_train,1);
    y_train = 1.2*sin(pi*x_train) - cos(2.4*pi*x_train) + 0.3*noise;
    y_test  = 1.2*sin(pi*x_test)  - cos(2.4*pi*x_test);
    % select M centers randomly from the training points
    centerind = randperm(N_train);
    centers   = x_train(centerind(1:M));
    dmax = (max(centers) - min(centers))^2;   % squared maximum distance between centers
    % design matrix: a bias column followed by the M Gaussian units
    phimat = ones(N_train, M+1);
    for i = 1:N_train
        for j = 1:M
            phimat(i,j+1) = exp(-M*(x_train(i)-centers(j))^2/dmax);
        end
    end
    weights = inv(phimat'*phimat)*(phimat'*y_train);   % least-squares weights
    phimattest = ones(N_test, M+1);
    for i = 1:N_test
        for j = 1:M
            phimattest(i,j+1) = exp(-M*(x_test(i)-centers(j))^2/dmax);
        end
    end
    net_output = phimattest*weights;
    e = y_test - net_output;
    E = mse(e);
    disp(E);
    figure(trial);
    plot(x_train, y_train, 'b^'); hold on;
    plot(x_test,  y_test,  'go'); hold on;
    plot(x_test,  net_output, 'r'); hold off;
    legend('Training data','Test data','RBFN output','Location','NorthWest');
    title(['RBFN - Fixed Centers Selected at Random; trial-', int2str(trial)]);
end

Solution (c): RBFN - Regularization Factor Method

The regularization factor lambda is varied (from small values up to 10) to see its effect on the exact interpolation method, and the mean squared error is recorded for each value of lambda. Compared with the plain exact-interpolation RBFN, the errors of the regularized network are considerably smaller for small lambda values, but the error shoots up as lambda is increased, which under-fits the curve. Increasing lambda also increases the smoothness of the fitted curve.
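For clarity, the weight solution used in the regularization code below is the standard regularized least-squares (ridge) solution; this is a textbook identity rather than something quoted from the report:

\[
J(\mathbf{w}) = \lVert \mathbf{d} - \Phi\mathbf{w}\rVert^{2} + \lambda\lVert \mathbf{w}\rVert^{2}
\;\Rightarrow\;
\mathbf{w} = \left(\Phi^{\top}\Phi + \lambda I\right)^{-1}\Phi^{\top}\mathbf{d}.
\]

A small \(\lambda\) keeps the solution close to exact interpolation, while a large \(\lambda\) penalises the weights strongly, smoothing the curve and eventually under-fitting it.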


Fig. 4. Regularization Factor Method

MATLAB CODE:

clear all;
x_train = -1:0.05:1;
x_test  = -1:0.01:1;
N = length(x_train);
N_test = length(x_test);
noise = randn(N,1);
y_train = 1.2*sin(pi*x_train) - cos(2.4*pi*x_train) + 0.3*noise';
y_test  = 1.2*sin(pi*x_test)  - cos(2.4*pi*x_test);
sigma = 0.1;
% interpolation matrix over all training points
W_train = ones(N,N);
for j = 1:N
    for i = 1:N
        W_train(j,i) = exp(-((x_train(j)-x_train(i))^2)/(2*sigma^2));
    end
end
lambda = [ ];    % regularization factors to test (values as listed in the report)
for trial = 1:length(lambda)
    % regularized least-squares weights
    w_train = inv(W_train'*W_train + lambda(trial)*eye(N,N))*W_train'*y_train';
    W_test = zeros(N_test,N);
    for j = 1:N_test
        for i = 1:N
            W_test(j,i) = exp(-((x_test(j)-x_train(i))^2)/(2*sigma^2));
        end
    end
    net_output = W_test*w_train;
    error = y_test' - net_output;
    E(trial) = mse(error);
    disp(mse(error));
    figure(trial);
    plot(x_train, y_train, 'b^'); hold on;
    plot(x_test,  y_test,  'go'); hold on;
    plot(x_test,  net_output, 'r'); hold off;
    legend('Training','Test','RBFN output','Location','NorthWest');
    title(['Regularization Factor method; lambda = ', num2str(lambda(trial))]);
end
% plot MSE against log(lambda)
figure;
loglam = log(lambda);
plot(loglam, E, '-r*');
xlabel('Log of Regularization Factor'); ylabel('Mean Square Error');

Solution (d): RBFN with Fixed Centers vs Regularization Factor Method

As in the previous case, the error increases as lambda increases in the regularization factor method; the mean squared error is again recorded for each value of lambda. For lambda less than 0.5, the mean square error values are much smaller than those of the fixed-centers RBFN. For lambda greater than 0.5, increasing lambda increases the smoothness of the curve, but it also increases the mean square error beyond the fixed-centers error.

Fig. 5. Regularization Factor Method vs Fixed Centers RBFN

MATLAB CODE:

clear all; clc;
x_train = -1:0.05:1;
x_test  = -1:0.01:1;
N_train = length(x_train);
N_test  = length(x_test);
M = 15;
noise   = randn(N_train,1);
y_train = 1.2*sin(pi*x_train) - cos(2.4*pi*x_train) + 0.3*noise';
y_test  = 1.2*sin(pi*x_test)  - cos(2.4*pi*x_test);
% select M centers randomly from the training points
index = randperm(N_train);
mu = x_train(index(1:M));
minmaximum = minmax(mu);
dmax = minmaximum(2) - minmaximum(1);
dmax = dmax^2;
W_train = ones(N_train, M);
for j = 1:N_train
    for i = 1:M
        W_train(j,i) = exp(-15*((x_train(j)-mu(i))^2)/dmax);
    end
end
lambda = [ ];    % regularization factors to test (values as listed in the report)
for trial = 1:length(lambda)
    % regularized least-squares weights for the fixed-centers network
    w_train = inv(W_train'*W_train + lambda(trial)*eye(M,M))*W_train'*y_train';
    W_test = zeros(N_test, M);
    for j = 1:N_test
        for i = 1:M
            W_test(j,i) = exp(-15*((x_test(j)-mu(i))^2)/dmax);
        end
    end
    net_output = W_test*w_train;
    error = y_test' - net_output;
    E(trial) = mse(error);
    disp(mse(error));
    figure(trial);
    plot(x_train, y_train, 'b^'); hold on;
    plot(x_test,  y_test,  'go'); hold on;
    plot(x_test,  net_output, 'r'); hold off;
    legend('Training','Test','RBFN output','Location','NorthWest');
    title(['Regularization Factor method; lambda = ', num2str(lambda(trial))]);
end
% plot MSE against log(lambda)
figure;
loglam = log(lambda);
plot(loglam, E, '-r*');
xlabel('Log of Regularization Factor'); ylabel('Mean Square Error');

Q2. Handwritten alphabet classification using Self-Organizing Map (20 Marks)

Solution: The handwritten pictures are loaded as binary images from SOM_database.mat.

Learning rate = 0.1
Number of iterations = 1000
Initial sigma, calculated from the size of the lattice (10x10), = 7.07

The labels assigned to the neurons of the trained 10x10 SOM are:

'U' 'U' 'L' 'O' 'O' 'O' 'S' 'S' 'E' 'E'
'U' 'U' 'U' 'L' 'L' 'O' 'S' 'L' 'L' 'L'
'U' 'U' 'U' 'U' 'L' 'L' 'L' 'L' 'L' 'L'
'U' 'U' 'N' 'N' 'N' 'N' 'L' 'L' 'L' 'L'
'M' 'N' 'N' 'A' 'A' 'A' 'A' 'L' 'L' 'L'
'N' 'N' 'N' 'N' 'A' 'A' 'A' 'A' 'R' 'R'
'M' 'M' 'M' 'A' 'A' 'A' 'A' 'A' 'R' 'R'
'M' 'M' 'M' 'A' 'A' 'A' 'A' 'A' 'R' 'R'

Since the SOM is initialized randomly, these label values change on every run.

1) For each neuron in the trained SOM, show the weights on a 10x10 map to visualize the result of the training process and make comments about them, if any. A simple way would be to reshape the weight of a neuron into a 20x16 matrix with double precision (since that is the dimension of the training image) and display it as an image.

As seen, the visualized weights look almost the same as the labels obtained in the previous part, with learning rate = 0.1. A code sketch of the training, labelling, and visualization follows.
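The report does not include its SOM code, so the following is only a minimal MATLAB sketch of how the training, neuron labelling, and weight visualization described above could be implemented. The variable names train_data (320 x N matrix of vectorized 20x16 images) and train_classlabel (1 x N vector of class labels) are assumptions, not taken from SOM_database.mat; the parameter values match those stated above (10x10 lattice, 1000 iterations, learning rate 0.1, initial sigma of about 7.07).

% --- SOM training (assumed variable names; see note above) ---
rows = 10; cols = 10;                       % lattice size
numNeurons = rows*cols;
[gr, gc] = ind2sub([rows cols], 1:numNeurons);
gridPos = [gr; gc];                         % lattice coordinates of each neuron
dim = size(train_data, 1);                  % 320 = 20*16 pixels per image
N   = size(train_data, 2);
w   = rand(dim, numNeurons);                % random initial weights
eta0   = 0.1;                               % initial learning rate
T      = 1000;                              % number of training iterations
sigma0 = sqrt(rows^2 + cols^2)/2;           % ~7.07 for a 10x10 lattice
tau    = T/log(sigma0);                     % time constant for the sigma decay
for t = 1:T
    x = train_data(:, randi(N));                             % random training sample
    [~, win] = min(sum(bsxfun(@minus, w, x).^2, 1));         % winner neuron
    sigma = sigma0*exp(-t/tau);                               % shrinking neighbourhood
    eta   = eta0*exp(-t/T);                                   % decaying learning rate
    d2 = sum(bsxfun(@minus, gridPos, gridPos(:,win)).^2, 1);  % lattice distances to winner
    h  = exp(-d2/(2*sigma^2));                                % neighbourhood function
    w  = w + eta*bsxfun(@times, h, bsxfun(@minus, x, w));     % weight update
end

% --- Label each neuron with the class of its nearest training image ---
neuronLabel = zeros(1, numNeurons);
for k = 1:numNeurons
    [~, nearest] = min(sum(bsxfun(@minus, train_data, w(:,k)).^2, 1));
    neuronLabel(k) = train_classlabel(nearest);
end

% --- Visualize each neuron's weight vector as a 20x16 image on a 10x10 grid ---
figure;
for k = 1:numNeurons
    subplot(rows, cols, (gr(k)-1)*cols + gc(k));   % place neuron at its lattice position
    imagesc(reshape(w(:,k), 20, 16));
    colormap(gray); axis off;
end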

2) Use the generated SOM to classify the test images (found in test_data). The classification can be done in the following fashion: apply a test sample to the SOM and determine the winner neuron, then label the test sample with the label of that winner neuron (the labels of all the neurons in the SOM have already been determined in the first step). Compare the label resulting from this classification scheme with the true label of the test sample and calculate the recognition rate for the whole test set.

The SOM is tested with the test images (found in test_data). The winner neuron is determined by applying each test sample to the SOM, and the sample is then labelled with the label of the corresponding winner neuron. Several trials are run because the initial weights are chosen randomly, so each trial gives a different performance result. The recognition rate (in percent) was recorded for each of the five trials (Trial 1 - Trial 5) together with their average. The labels obtained from the classification of the test images in one trial were:

'U' 'M' 'A' 'L' 'U' 'N' 'M' 'M' 'N' 'A'
'E' 'E' 'E' 'E' 'E' 'E' 'L' 'E' 'E' 'E'
'U' 'U' 'U' 'U' 'U' 'U' 'U' 'R' 'U' 'U'
'L' 'R' 'N' 'N' 'R' 'A' 'R' 'R' 'R' 'A'
'A' 'A' 'A' 'A' 'A' 'A' 'A' 'N' 'A' 'A'
'L' 'L' 'L' 'U' 'U' 'L' 'L' 'U' 'U' 'L'
'M' 'N' 'M' 'M' 'M' 'M' 'M' 'A' 'M' 'M'
'O' 'O' 'O' 'O' 'L' 'L' 'O' 'O' 'O' 'O'
'S' 'S' 'S' 'S' 'S' 'S' 'S' 'S' 'L' 'S'

With epochs = 1000, lattice size = 10x10, and learning rate = 0.1, the recognition rate is 72.22%. A minimal sketch of this winner-neuron classification step is given below.
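The report also omits the classification code; the following minimal sketch implements the winner-neuron classification and recognition-rate calculation described above. It reuses the trained weights w and the neuron labels neuronLabel from the previous sketch; test_data is named in the assignment, but its assumed shape (320 x K) and the companion label vector test_classlabel (1 x K) are assumptions.

% Classify each test image by its winner (best-matching) neuron and
% compute the recognition rate over the whole test set.
K = size(test_data, 2);
predicted = zeros(1, K);
for n = 1:K
    d = sum(bsxfun(@minus, w, test_data(:,n)).^2, 1);  % distance to every neuron's weight
    [~, winner] = min(d);                              % winner (best-matching) neuron
    predicted(n) = neuronLabel(winner);                % test sample inherits the winner's label
end
recognitionRate = 100*sum(predicted == test_classlabel)/K;
fprintf('Recognition rate = %.2f%%\n', recognitionRate);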

Testing of the SOM for other design parameters

The parameters that can be varied are a) the learning rate, b) the size of the lattice, c) sigma, d) the neighbourhood width, and e) the number of iterations. The initial value of sigma depends on the size of the lattice as per the formula given, so we concentrate on the lattice size and the number of iterations.

Size of lattice: When the size of the lattice is increased from 10x10 to 20x20, the accuracy also increases, but the training time increases compared with the smaller lattice. Thus increasing the lattice size helps to improve the performance, but there is a trade-off between computation time and performance.

Iterations: Increasing the number of iterations increases the computation time, but the prediction accuracy is considerably improved. The label values obtained with the larger number of iterations are shown below; the visualized weights here comply almost completely with the labels obtained, and the accuracy obtained is higher than in the previous part. Thus increasing the number of iterations definitely helps to improve the performance, again at the cost of computation time.

'N' 'A' 'L' 'R' 'R' 'E' 'E' 'E' 'S' 'S'
'N' 'A' 'R' 'R' 'E' 'E' 'E' 'S' 'S' 'S'
'N' 'N' 'A' 'R' 'E' 'E' 'E' 'O' 'O' 'S'
'M' 'M' 'M' 'M' 'R' 'R' 'R' 'O' 'O' 'O'
'M' 'M' 'N' 'N' 'M' 'R' 'O' 'O' 'O' 'O'
'N' 'N' 'M' 'M' 'U' 'L' 'L' 'L' 'O' 'O'
'N' 'M' 'M' 'M' 'U' 'L' 'L' 'L' 'U' 'U'
'M' 'M' 'M' 'M' 'U' 'L' 'L' 'L' 'U' 'U'

The corresponding weights can be visualized in the same way as in part 1).
