Assignment 2. Classification and Regression using Linear Networks, Multilayer Perceptron Networks, and Radial Basis Functions


ENEE 739Q: STATISTICAL AND NEURAL PATTERN RECOGNITION
Spring 2002

Assignment 2
Classification and Regression using Linear Networks, Multilayer Perceptron Networks, and Radial Basis Functions

Aravind Sundaresan


1. Pattern Classification using Linear Networks

A set of N = 300 training samples was used to train a 3 × 3 linear network, where the input is a 3-dimensional vector X = [x/100  y/100  0.5]^T (the bias input is chosen to be 0.5). The LMS algorithm was used to train the weights iteratively. The target output is a 3-dimensional vector T whose i-th element is set to 1 if the input is from the i-th class and to zero otherwise. The output of the linear network is calculated as

    Z = W X,

where X is the input and W is the weight matrix, and the class decision is

    O = argmax_i Z_i.

Strategy: The learning rate needs to be chosen carefully, as large values cause the error to diverge, making the algorithm unstable. The learning rate is a function of the iteration index and is given by

    η(t) = η_0 / t.

It is a good idea to normalize the input so that its values lie in [0,1] or [−1,1]; in this implementation the inputs have been scaled so that they lie in [0,1]^d.

Figure 1.1: The performance of the network for different learning rates
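
A minimal sketch of this training loop in Python/NumPy is given below, assuming the raw samples lie in a 100 × 100 domain; the names pts and labels and the value of η_0 are illustrative assumptions, not values from the implementation.

    import numpy as np

    def train_lms(pts, labels, eta0=0.1, epochs=50):
        # pts: N x 2 raw coordinates; labels: N integers in {0, 1, 2}.
        N = pts.shape[0]
        # Scale inputs to [0,1]^2 and append the bias input of 0.5.
        X = np.hstack([pts / 100.0, 0.5 * np.ones((N, 1))])   # N x 3
        # One-of-three target coding: T[n, i] = 1 iff sample n is from class i.
        T = np.eye(3)[labels]                                  # N x 3
        W = np.zeros((3, 3))
        t = 0
        for _ in range(epochs):
            for n in range(N):
                t += 1
                eta = eta0 / t                        # eta(t) = eta0 / t
                z = W @ X[n]                          # Z = W X
                W -= eta * np.outer(z - T[n], X[n])   # LMS gradient step
        return W

    def classify(W, x):
        return int(np.argmax(W @ x))                  # O = argmax_i Z_i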

Results: The rate of convergence of the error for three different values of η_0 is illustrated in Figure 1.1. Convergence is faster and the error (the energy function, set equal to (1/2)‖Z − T‖²) is lower for higher learning rates, but as observed earlier the algorithm becomes unstable for learning rates that are too high, and the error diverges. The original configuration and the classification achieved by the linear network after training are illustrated in Figure 1.2.

Conclusions: The performance of the network is clearly limited by its linearity. As can be observed from Figure 1.2, only linear discrimination can be performed: in this case, where the input is drawn from a 2-dimensional space, the input space is split into regions (classes) separated by lines (hyperplanes in the general case).

Figure 1.2: The output of the network

2. Pattern Classification using Multi-Layer Perceptrons

A set of N = 2000 training samples was used to train a 3–h–1 network (multilayer perceptron) using the back-propagation algorithm. The input is a 3-dimensional vector X = [x  y  50]^T. The desired output is a scalar that takes the value +1 if the input is from the foreground and −1 if it is from the background.

Strategy: The initial weights are uniformly (and independently) distributed in [−0.5C, 0.5C], where C is a scaling constant inversely proportional to the average magnitude of the input. The training rate, η, is calculated as

    η(t) = η_0 / (1 + t/400),

where η_0 is the initial learning rate.
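
The initialization and rate schedule just described can be sketched as follows; C and η_0 are free parameters here, not the values used in the report.

    import numpy as np

    def init_weights(shape, C, rng=np.random.default_rng()):
        # Uniform, independent initial weights in [-0.5C, 0.5C].
        return rng.uniform(-0.5 * C, 0.5 * C, size=shape)

    def learning_rate(t, eta0):
        # eta(t) = eta0 / (1 + t/400)
        return eta0 / (1.0 + t / 400.0)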

The tan-sigmoid is chosen as the activation function; the activation function and its derivative are calculated as

    f(x) = 1.7 tanh(0.7x),    f′(x) = 1.19 (1 − tanh²(0.7x)).

The error function is calculated as

    J = (1/N) Σ_{n=1}^{N} (1/2) (Z_n − T_n)²,

where T_n is the desired output and Z_n is the actual output. The weights are updated for every sample input (online training) according to the back-propagation algorithm. The input is not scaled, and therefore a scaling factor (inversely proportional to the average magnitude of the input vector) multiplies the raw weight increment to obtain the modified weight increment.

The training strategy is to continue training the network until the training-set error is below a predetermined threshold. Since the error functions of both the training set and the validation set may have multiple minima, a stopping decision based on a minimum of either error function becomes complicated, and can in general be quite involved. Here, since there is a very clear demarcation between the foreground and the background, the validation-set error does not attain a minimum even after several iterations; a good stopping criterion is therefore based on the value of the training-set error. In the MLP network implemented, training is stopped after 2000 iterations or when J_t ≤ E_threshold, whichever occurs first.

Results: Table 2.1 illustrates how the validation error varies with the number of hidden units, the stopping criterion being J_t ≤ E_threshold. Figure 2.1 shows the output of the network (without thresholding) for several values of h.

    Hidden units    Stopping iteration    Training-set error    Validation-set error

Table 2.1: Number of hidden units for optimal performance

As both Table 2.1 and Figure 2.1 indicate, the optimal choice for the number of hidden units appears to be 25. Figure 2.2 and Figure 2.3 illustrate the performance of the network with 25 hidden units.
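
For concreteness, a compact sketch of the online back-propagation training described above is given below; the value of η_0, the per-epoch (rather than per-sample) decay of η, and the once-per-epoch stopping check are simplifying assumptions.

    import numpy as np

    def f(x):                       # tan-sigmoid: f(x) = 1.7 tanh(0.7 x)
        return 1.7 * np.tanh(0.7 * x)

    def df(x):                      # its derivative: 1.19 (1 - tanh^2(0.7 x))
        return 1.19 * (1.0 - np.tanh(0.7 * x) ** 2)

    def train_mlp(X, T, h, eta0=0.01, max_epochs=2000, e_thresh=0.045):
        # X: N x 3 inputs (x, y, bias); T: N targets in {-1, +1}.
        N = X.shape[0]
        C = 1.0 / np.abs(X).mean()                    # scaling constant (assumed form)
        rng = np.random.default_rng(0)
        W1 = rng.uniform(-0.5 * C, 0.5 * C, (h, 3))   # input-to-hidden weights
        w2 = rng.uniform(-0.5 * C, 0.5 * C, h)        # hidden-to-output weights
        for epoch in range(max_epochs):
            eta = eta0 / (1.0 + epoch / 400.0)        # decaying training rate
            for n in rng.permutation(N):              # online (per-sample) updates
                a = W1 @ X[n]                         # hidden pre-activations
                y = f(a)                              # hidden outputs
                s = w2 @ y                            # output pre-activation
                delta_o = (f(s) - T[n]) * df(s)       # output-layer delta
                delta_h = df(a) * w2 * delta_o        # hidden-layer deltas
                w2 -= eta * delta_o * y
                W1 -= eta * np.outer(delta_h, X[n])
            out = f(f(X @ W1.T) @ w2)                 # all outputs, batch form
            if np.mean(0.5 * (out - T) ** 2) < e_thresh:  # J_t <= E_threshold
                break
        return W1, w2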

Figure 2.1: The performance of the MLP network for different values of h

Figure 2.2: Performance of the MLP network for h = 25

Figure 2.3: The error of the MLP network with h = 25

Optimal Brain Damage: Because of the random nature of the initialization process, and possibly other factors, the optimal performance of the MLP network is obtained with more hidden units than may actually be necessary. Some of the weights in the network with the optimal number of hidden units may therefore be superfluous or redundant. These redundant weights or units may be removed by a process called Optimal Brain Damage, which sets to zero the weights that do not affect the output or the performance of the network. This has been implemented in the following manner.

1. Train the network using h = h_opt hidden units, the optimal number determined earlier (in this case 25).
2. Determine the saliency of each of the weights in the input-to-hidden layer and set to zero the three weights with the smallest saliency.
3. Train the network (keeping the discarded weights equal to zero) until the training-set error is less than the threshold or until 2000 iterations are completed. If the final error is less than the threshold, there is scope for further pruning: repeat step 2. If the final error is greater than the threshold, the number of non-zero weights has fallen below the number necessary: go to step 4.
4. Use the most recent weight vector that gave an error less than the threshold on the training set.

Using an initial value of h = 25 and pruning the weights with E_threshold = 0.045, we ended up with a network that had 45 non-zero weights and 20 hidden units.
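
The pruning loop can be sketched as follows. The saliency used here is a squared-magnitude stand-in; the Optimal Brain Damage paper computes s_k = h_kk w_k²/2 from the diagonal of the Hessian. The retrain callback is a hypothetical routine that reruns back-propagation with the masked weights clamped to zero.

    import numpy as np

    def prune_obd(retrain, W1, w2, e_thresh=0.045):
        # retrain(W1, w2, mask) is assumed to retrain the network with the
        # masked-out weights held at zero and return (W1, w2, J_final).
        mask = np.ones_like(W1, dtype=bool)            # active input-to-hidden weights
        best = (W1.copy(), w2.copy())
        while True:
            # Saliency stand-in: squared weight magnitude (see note above).
            sal = np.where(mask, W1 ** 2, np.inf)
            for idx in np.argsort(sal, axis=None)[:3]: # zero the 3 least salient (step 2)
                mask.flat[idx] = False
            W1 = np.where(mask, W1, 0.0)
            W1, w2, J = retrain(W1, w2, mask)          # step 3
            if J < e_thresh:
                best = (W1.copy(), w2.copy())          # still under threshold: prune more
            else:
                return best                            # pruned too far: back off (step 4)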

The performance of the pruned network is illustrated in Figure 2.4, and the results of the pruning are summarized in Table 2.2. The number of weights has been reduced by 40%, and 5 (20%) of the hidden units have been removed.

                      Hidden units    Weights    Training-set error    Validation-set error
    Before pruning         25            75
    After pruning          20            45

Table 2.2: Summary of the pruning

Figure 2.4: Performance of the MLP network after pruning

3. Function Approximation using Radial Basis Functions

The objective is to train an RBF network using N = 1000 sample points. Though the input is a 3-dimensional vector as before, the bias makes no difference here, because the bias component of all the "function centres" is the same as that of the input.

Strategy: The strategy is to select the function centres at random from the training set. The function used in the network is the inverse multiquadric basis function, defined as

    φ_i(x) = 1 / √(1 + ‖x − x_i‖² / σ²),

where x is the input and x_i is the function centre.

The "variance" or spread, σ, is set according to the number of function centres (hidden units) chosen, and the experiment is repeated for different values of h. The value of σ for a given h is calculated so that h is proportional to the ratio of the area A of the domain of the mapping to σ²; that is,

    σ ∝ √(A / h).

The weights W are determined iteratively using the LMS algorithm. The weights are trained until the validation-set error increases continuously for 3 epochs or the number of iterations exceeds 200. The network is trained and the results are compared for different values of h.

Figure 3.1: Performance of the RBF network for different values of h

    Hidden units    Training-set error    Validation-set error

Table 3.1: Performance of the RBF network for different values of h
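
A sketch of this construction is given below; the learning rate and its decay are assumptions, and the validation-based early stopping is omitted for brevity.

    import numpy as np

    def rbf_design(X, centres, sigma):
        # Inverse multiquadric: phi_i(x) = 1 / sqrt(1 + ||x - x_i||^2 / sigma^2)
        d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=-1)
        return 1.0 / np.sqrt(1.0 + d2 / sigma ** 2)

    def train_rbf(X, T, h, area, eta0=0.01, rng=np.random.default_rng(0)):
        centres = X[rng.choice(len(X), size=h, replace=False)]  # random centres
        sigma = np.sqrt(area / h)                # sigma proportional to sqrt(A/h)
        Phi = rbf_design(X, centres, sigma)      # N x h design matrix
        w = np.zeros(h)
        for t in range(1, 201):                  # at most 200 LMS iterations
            for n in range(len(X)):
                w -= (eta0 / t) * (Phi[n] @ w - T[n]) * Phi[n]
        return centres, sigma, w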

Results: The results for different values of h are listed in Table 3.1, and the corresponding outputs of the network are illustrated in Figure 3.1. The performance of the network for h = 80 is illustrated in Figure 3.2 and Figure 3.3. The RBF network performs rather poorly because we train neither the function centres nor the "variance" of the radial basis functions; training these parameters using the EM algorithm or gradient descent should give much better performance. Besides, the performance of the RBF network depends strongly on the choice of radial basis function and is better suited to (smooth) function approximation than to the current scenario: the network cannot sharply define the boundary regions because of the inherent smoothness of the basis function.

Figure 3.2: Performance of the RBF network for h = 80

Figure 3.3: The error of the RBF network with h = 80

4. Optical Character Reader

To implement an OCR we require a multi-output multilayer network. The input is a 16×16 grayscale image and a bias. The simplest network architecture would have 257 input nodes, h hidden units, and 10 output nodes: a 257–h–10 MLP network.

Strategy: The training set can be obtained by using manufactured data that provides translational, rotational, and scale invariance in the network. The target output is set as follows:

    T_i = +1 if the input is the digit i, and T_i = −1 otherwise.

The network is trained using the manufactured data, which has a translation (in pixels) uniformly distributed in [−1.5, 1.5], a rotation (in degrees) uniformly distributed in [−9, 9], and a scale factor uniformly distributed in [0.9, 1.1]. A subset of the training set is presented in Figure 4.1. The output of the neural network is chosen as

    O = argmax_i Z_i.
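
The manufactured-data generation can be sketched with scipy.ndimage as follows; the interpolation order and the crop/pad handling after zooming are implementation choices not specified in the report.

    import numpy as np
    from scipy.ndimage import rotate, shift, zoom

    def manufacture(img, rng=np.random.default_rng()):
        # img: 16 x 16 grayscale digit. Random rotation in [-9, 9] degrees,
        # translation in [-1.5, 1.5] pixels, and scale factor in [0.9, 1.1].
        out = rotate(img, rng.uniform(-9, 9), reshape=False, order=1)
        out = shift(out, rng.uniform(-1.5, 1.5, size=2), order=1)
        out = zoom(out, rng.uniform(0.9, 1.1), order=1)
        # Crop or zero-pad back to 16 x 16 around the centre.
        canvas = np.zeros((16, 16))
        r0 = max(0, (out.shape[0] - 16) // 2)
        c0 = max(0, (out.shape[1] - 16) // 2)
        patch = out[r0:r0 + 16, c0:c0 + 16]
        pr = (16 - patch.shape[0]) // 2
        pc = (16 - patch.shape[1]) // 2
        canvas[pr:pr + patch.shape[0], pc:pc + patch.shape[1]] = patch
        return canvas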

Training is continued for 1000 iterations, or until the number of misclassified validation-set samples remains consistently higher than the sum of the minimum value achieved and a threshold.

Figure 4.1: Manufactured data for rotational, translational, and scale invariance

Dimensionality reduction using PCA: In the previous case the input dimension is rather large, which increases the computation, because the number of weights to be trained depends on the number of input nodes. If the image can be represented by a smaller vector, training becomes much less computationally intensive. To this end, the input vector can be transformed using Principal Component Analysis. An estimate of the autocorrelation matrix is obtained from the training-set data, and from this estimate the k principal eigenvectors (those corresponding to the largest eigenvalues) are computed. The projections of the input vector onto these k components are packed into a k-dimensional vector, which retains as much information as is necessary to correctly identify the digit. An additional advantage is that some noise (unnecessary information) is filtered out, which results in better performance. In the implementation k is set to 30; thus, including the bias, the dimension of the input vector is 31.

Results: The results of the training for both the normal case and the PCA case are summarized in Table 4.1. The performances of the normal and PCA cases are illustrated in Figure 4.2 and Figure 4.3 respectively.

    Type      Input dimension    Hidden units    Iterations    Misclassified samples (%)    Training-set error    Validation-set error
    Normal         257
    PCA             31                                                   0.20%

Table 4.1: Summary of performances for the normal and PCA cases
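
A sketch of the PCA step is given below, estimating the autocorrelation matrix from the flattened training images and keeping the k = 30 principal eigenvectors.

    import numpy as np

    def pca_basis(X, k=30):
        # X: N x 256 matrix of flattened 16 x 16 training images.
        R = X.T @ X / len(X)                         # estimated autocorrelation matrix
        vals, vecs = np.linalg.eigh(R)               # eigenvalues in ascending order
        return vecs[:, np.argsort(vals)[::-1][:k]]   # k principal eigenvectors

    # Training and validation images are projected with the same basis, and a
    # bias component is appended to give the 31-dimensional MLP input:
    # V = pca_basis(X_train); features = X_train @ V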

Figure 4.2: Performance of the network: direct input

Figure 4.3: Performance of the network: PCA

As can be seen, using PCA to reduce the dimension of the input leads to far better performance (both in speed of convergence and in validation-set error), with the number of misclassified samples in the validation set falling as low as 0.20% (2 in 1000 samples). In a more general setting it may be a good idea to use a general transformation such as the DCT and select the low-frequency components to represent the image.

5. References

1. Yann Le Cun, John S. Denker and Sara A. Solla, "Optimal Brain Damage". AT&T Bell Laboratories, NJ.
2. Richard Duda, Peter Hart, and David Stork, Pattern Classification, 2nd ed. Wiley-Interscience, New York, 2001.
