CSE531 Neuro-Fuzzy Computing

Tutorial/Assignment 2: Adaline and Multilayer Perceptron

About this tutorial

The objective of this tutorial is to study:

- The Neural Network Toolbox for MATLAB.
- Linear neural networks (Adalines) and related learning algorithms.
- Multilayer perceptrons and related learning algorithms.

2.1 Neural Network Toolbox

This section is an extract from the Neural Network Toolbox User's Guide, which is available in pdf format in $CSE531/nnet.pdf

2.1.1 Notation

Notation used in the Neural Network Toolbox User's Guide differs from that used in the Lecture Notes. Graphically, the networks in the User's Guide are presented using a block-diagram notation.

From the User's Guide section "Network Architectures" (p. 2-9): You can create a single (composite) layer of neurons having different transfer functions simply by putting two of the networks shown earlier in parallel. Both networks would have the same inputs, and each network would create some of the outputs.

The input vector elements enter the network through the weight matrix W:

W = [ w_1,1  w_1,2  ...  w_1,R
      w_2,1  w_2,2  ...  w_2,R
       ...
      w_S,1  w_S,2  ...  w_S,R ]

Note that the row indices on the elements of matrix W indicate the destination neuron of the weight, and the column indices indicate which source is the input for that weight. Thus, the indices in w_1,2 say that the strength of the signal from the second input element to the first neuron is w_1,2.

The S-neuron, R-input one-layer network can also be drawn in abbreviated notation. A layer of neurons is presented in the User's Guide in the following way:

[Block diagram: the input vector p (R x 1) is multiplied by the weight matrix W (S x R); the bias vector b (S x 1) is added to give n (S x 1); the transfer function f produces the output a (S x 1). R = number of elements in input vector, S = number of neurons in layer, and a = f(Wp + b).]

Here p is an R-length input vector, W is an S x R matrix, and a and b are S-length vectors. As defined previously, the neuron layer includes the weight matrix, the multiplication operations, the bias vector b, the summer, and the transfer function boxes.

Equivalently, in the Lecture Notes notation a layer of neurons is represented by the following block-diagram:

[Block diagram: the input x (p components) enters the weight matrix W (m x p), producing the activation potentials v; the transfer function σ gives the output y = σ(W x) (m components).]

Note and record differences/similarities in notations.

Before we start the next part it is sensible to be aware of the existence of cells and structures in MATLAB. Try: help struct and help cell
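To make the notation concrete, here is a small sketch (illustrative only, not part of the tutorial or the User's Guide; all values are arbitrary) that computes one layer's response a = f(Wp + b) directly in MATLAB:

% Illustrative sketch: response of one layer of S neurons to an R-element input
R = 3; S = 2;
p = [0.5; -1; 2];        % R x 1 input vector
W = randn(S, R);         % S x R weight matrix; row i feeds neuron i
b = randn(S, 1);         % S x 1 bias vector
n = W*p + b;             % activation potentials
a = tansig(n)            % layer output; tansig(n) is equivalent to tanh(n)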

2.1.2 Creating a Network (newff)

To create a feedforward neural network (MLP) object in an interactive way you can use the command nntool to open the Neural Network Manager. You can then import or define data and train your network in the GUI, or export your network to the command line for training.

To create a feedforward neural network from the command line use the function newff:

net = newff(MnMx, [S_1 S_2 ... S_K], {TF_1 TF_2 ... TF_K}, BTF, BLF, PF)

where
  MnMx   ((p-1) x 2) matrix of min and max values for the x input vectors,
  S_i    size of the ith layer, for K layers,
  TF_i   transfer function of the ith layer, default = tansig,
  BTF    backpropagation network training function, default = traingdx,
  BLF    backpropagation weight/bias learning function, default = learngdm,
  PF     performance function, default = mse.

Note that the dimensionality of the input space p is defined indirectly through the size of the matrix MnMx. The function newff returns a K-layer feed-forward neural network ready to be trained.

The transfer functions TF_i can be any differentiable transfer function such as tansig (hyperbolic tangent, tanh), logsig (unipolar sigmoidal function), or purelin (linear function). The training function BTF can be any of the backpropagation training functions such as trainlm, trainbfg, trainrp, traingd, etc. Details are discussed in the Lecture Notes.

Example: Consider a two-layer feed-forward network:

[Block diagram: the input x (p components, including the bias) enters the hidden layer with weight matrix W^h (L x p), activation potentials u, transfer function ψ and hidden signals h (L components, including the bias); the output layer has weight matrix W^y (m x L), activation potentials v, transfer function σ and outputs y (m components).]

with:

inputs: p = 4, (x = [x1, x2, x3, 1]). The range of input values is specified as follows: x1 ∈ [-1 ... 2], x2 ∈ [0 ... 5], x3 ∈ [-3 ... 3],
hidden layer: L = 6, (h = [h1, ..., h5, 1]), TF = tansig,
output layer: m = 2, (y = [y1, y2]), TF = purelin.

The training function is to be traingd. The following commands define the network nnet1:

p = 4; L = 6; m = 2 ;
XR = [-1 2; 0 5; -3 3] ;
nnet1 = newff(XR, [L-1, m], {'tansig', 'purelin'}, 'traingd') ;

This command creates the network object nnet1 and also initializes the weights and biases of the network using the default Nguyen-Widrow algorithm (initnw). Therefore the network is ready for training.

2.1.3 Network structure and parameters

For the above example we can examine the structure of the created network nnet1 and all its initial parameters. Typing nnet1 on the command line gives the structure of the top-level neural network object. An extract of the structure is as follows:

Neural Network object:
  ...
  subobject structures:
           inputs: {1x1 cell} of inputs
           layers: {2x1 cell} of layers
          outputs: {1x2 cell} containing 1 output
          targets: {1x2 cell} containing 1 target
           biases: {2x1 cell} containing 2 biases
     inputweights: {2x1 cell} containing 1 input weight
     layerweights: {2x2 cell} containing 1 layer weight
  ...
  weight and bias values:
               IW: {2x1 cell} containing 1 input weight matrix
               LW: {2x2 cell} containing 1 layer weight matrix
                b: {2x1 cell} containing 2 bias vectors

The range matrix MnMx is hidden in nnet1.inputs{1}.range (Try it!). Parameters of the two layers are stored in nnet1.layers{1} and nnet1.layers{2}, respectively.
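For instance (an illustrative check, not from the User's Guide; the exact display depends on the toolbox version), the layer sub-objects expose the layer sizes and transfer functions:

nnet1.layers{1}.size          % number of neurons in the hidden layer (L-1 = 5)
nnet1.layers{1}.transferFcn   % 'tansig'
nnet1.layers{2}.transferFcn   % 'purelin'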

Initial values of the hidden weight matrix W^h ((L-1) x p) (called the input weight matrix in the toolbox) and of the hidden biases are given in nnet1.IW{1} and nnet1.b{1}:

nnet1.IW{1}                          nnet1.b{1}
    1.3465    0.474     0.1727         -4.2433
   -1.747    -0.144     0.5835          1.9954
    0.2627   -0.715     0.5187          1.6448
   -0.559     0.7689    0.4749         -3.912
    1.2225   -0.137    -0.557           2.419

Similarly, initial values of the output weight matrix W^y (m x L) (called the layer weight matrix in the toolbox) and of the output biases are given in nnet1.LW{2,1} and nnet1.b{2}:

nnet1.LW{2,1}                                            nnet1.b{2}
   -0.1886    0.8338    0.7873   -0.2943   -0.983          -0.5945
    0.879    -0.1795   -0.8842    0.6263   -0.7222         -0.626

Examine other parameters of the network nnet1.

2.1.4 Re-Initializing Weights

If you need to re-initialize the weights, or to change the default initialization algorithm, you use the command init. The function init takes a network object as input and returns a network object with all weights and biases initialized. This function is invoked by the newff command and uses the Nguyen-Widrow algorithm as the default initialization.

If, for example, we would like to re-initialise the weights and biases in the first layer to random values using the rands function, we would issue the following commands:

nnet1.layers{1}.initFcn = 'initwb' ;
nnet1.inputWeights{1,1}.initFcn = 'rands' ;
nnet1.biases{1,1}.initFcn = 'rands' ;
nnet1.biases{2,1}.initFcn = 'rands' ;
nnet1 = init(nnet1);
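A quick check (my own illustration, not part of the tutorial) that init has indeed produced new weights:

Wbefore = nnet1.IW{1};
nnet1 = init(nnet1);              % re-initialize with the initFcn settings above
isequal(Wbefore, nnet1.IW{1})     % expected to be 0 (false): the weights have changed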

2.1.5 Simulation (sim)

The function sim simulates a network, that is, it calculates outputs for given inputs. The function sim takes the network input X and the network object net, and returns the network outputs Y:

Y = σ(W^y σ(W^h X))

Here is how sim can be used to simulate the network nnet1 we created above (p - 1 = 3 and m = 2) for a single input pattern:

X = [0.5; 2; 1] ;
Y = sim(nnet1, X)
Y =
    0.5628
    1.2168

(If you try these commands, your output may be different, depending on the state of your random number generator when the network was initialized.)

Below, sim is called to calculate the outputs for a set of N = 3 input vectors:

X = [-0.5 1 1.5; 4 2 3; -2 1 2] ;
Y = sim(nnet1, X)
Y =
    1.5924    0.5821    0.5193
   -0.7313    0.1693    0.2691

2.1.6 Incremental Training (adapt)

In incremental training the weights and biases of the network are updated each time an input is presented to the network. The function adapt is used to train networks in the incremental (pattern) mode. Use help adapt for a description of the function.

The function adapt takes the network object and the inputs and the targets from the training set, and returns the trained network object and the outputs and errors of the network for the final weights and biases.

Inputs and targets can be presented either as cell arrays:

X = {[1;2] [2;1] [2;3] [3;1]}; D = {4 5 7 7};

or matrices:

X = [[1;2] [2;1] [2;3] [3;1]]; D = [4 5 7 7];

The cell array format is the most convenient for networks with multiple inputs and outputs, and allows sequences of inputs to be presented. The matrix format can be used if only one time step is to be simulated, so that sequences of inputs cannot be described.

There are two basic functions available for incremental training: learngd and learngdm. Refer to the manual for details.

2.1.7 Batch Training (train)

In batch mode the weights and biases of the network are updated only after the entire training set has been applied to the network. Batch training is invoked using the function train. All advanced training algorithms work in the batch mode. The format of the data/signals is the same as for incremental learning.

2.1.8 Function approximation example

In this example we use the Levenberg-Marquardt algorithm to approximate two functions of two variables.

% fap2dlm.m
% 28 March 2005
% Approximation of two 2-D functions using the Levenberg-Marquardt
% algorithm
clear
% Generation of training points
na = 16 ; N = na^2;
nn = 0:na-1;
x1 = nn*4/na - 2;
[X1, X2] = meshgrid(x1);
R = -(X1.^2 + X2.^2 + 0.1);
D1 = X1 .* exp(R);
D  = (D1(:))' ;
D2 = 0.25*sin(2*R)./R ;
D  = [D ; (D2(:))'];
Y  = zeros(size(D)) ;
X  = [ X1(:)' ; X2(:)' ];
figure(1), % myfiga(1, 11, 1)
surfc([X1-2 X1+2], [X2 X2], [D1 D2]),
title('Two 2-D target functions'), grid on, drawnow
% print(1, '-depsc2', 'p2lm1')

% network specification

p = 2 ;  % Number of inputs excluding bias
L = 12;  % Number of hidden signals excluding bias
m = 2 ;  % Number of outputs
MnMx = [-2 2; -2 2] ;
Nnet = newff(MnMx, [L, m]) ;    % Network creation
Nnet.trainParam.epochs = 500;
Nnet.trainParam.goal = 0.016;
Nnet = train(Nnet, X, D) ;      % Training
grid on, hf = gcf ;
set(hf, 'PaperUnits', 'centimeters', 'PaperPosition', [0 0 10 11] );
% print(hf, '-depsc2', 'p2lm3')
Y = sim(Nnet, X) ;              % Validation
D1(:) = Y(1,:) ; D2(:) = Y(2,:) ;
% Two 2-D approximated functions are plotted side by side
figure(2), % myfiga(2, 11, 1)
surfc([X1-2 X1+2], [X2 X2], [D1 D2])
title('function approximation')
grid on, drawnow
% print(2, '-depsc2', 'p2lm2')

The results should be similar to those in the following figures:

[Figure: surface plots of the two 2-D target functions and of the corresponding network approximations, shown side by side.]

[Figure: training record for fap2dlm.m, showing the training performance (blue) and the goal (black) on a logarithmic scale; the performance goal was reached after 31 epochs.]

Exercise 2.1 Repeat the above function approximation exercise replacing the Levenberg-Marquardt algorithm with a conjugate gradient algorithm, traincgp. Run it for two different values of the target error (goal) parameter. You might also need to increase the maximum number of epochs to achieve satisfactory results.
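One possible starting point for this exercise (a sketch only; the transfer functions and parameter values below are my assumptions, not prescribed by the tutorial):

% Re-create the network with the Polak-Ribiere conjugate gradient trainer
Nnet = newff(MnMx, [L, m], {'tansig', 'purelin'}, 'traincgp') ;
Nnet.trainParam.epochs = 2000 ;   % conjugate gradients usually need more epochs than LM
Nnet.trainParam.goal   = 0.01 ;   % first run; repeat with a smaller goal, e.g. 0.001
Nnet = train(Nnet, X, D) ;
Y = sim(Nnet, X) ;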

2.2 Linear Networks: Adaline and its applications

Exercise 2.2 Study the following demo files which demonstrate properties and applications of linear neural networks:
$CSE531/Mtlb/adln1.m, $NNET/nndemos/demolin1...8.m

Exercise 2.3 Consider the problem of approximating a collection of points by a regression line. In this case the dimensionality of the augmented input space is p = 2. The objective is to find the regression line y = a x + b, that is, to find the parameters {a, b} specifying the approximation line, from the collection of points {x(n), d(n)}.

1. Re-write the Normal (Wiener-Hopf) equation directly for the two unknown weights, w1, w2, and the points {x1(n), x2(n) = 1, d(n)} for n = 1, ..., N. Derive equations for a = w1 and b = w2 as functions of the training points. In the derived expressions identify the following statistical quantities: the mean of x, the second moment of x, the mean of d, and the correlation of d and x.

2. Illustrate using MATLAB the problem of fitting the regression line to a collection of points from the {x, y} plane. Compare the results obtained using the following three methods (a minimal sketch of methods (a) and (b) is given at the end of this section):
(a) direct solution of the Wiener-Hopf equation as derived above,
(b) the LMS learning law,
(c) the Sequential Regression algorithm.

Exercise 2.4 Study the following MATLAB scripts which demonstrate applications of linear neural networks in adaptive signal processing:
$CSE531/Mtlb/adlpr.m, adsid.m, adlnc.m
$NNET/nndemos/applin1...4.m

Exercise 2.5 Modify the following scripts:
1. adlpr.m (adaptive prediction),
2. adsid.m (adaptive system identification),
3. adlnc.m (adaptive noise cancellation).
The modifications should include: replacement of the LMS learning law by the Sequential Regression (SR) algorithm, and usage of an amplitude- and frequency-modulated sinusoidal input signal in all three cases. Also modify the parameters of the signals and filters involved.
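The sketch referred to in Exercise 2.3 (illustrative only; the data, learning gain and epoch count are my own choices, and the Sequential Regression part is left to the reader):

% Fitting y = a*x + b to noisy points by (a) the Wiener-Hopf (normal)
% equations and (b) the LMS learning law
N = 100;
x = linspace(-2, 3, N);
d = 1.5*x - 0.7 + 0.2*randn(1, N);   % noisy target points
X = [x; ones(1, N)];                 % augmented inputs, p = 2

% (a) Direct Wiener-Hopf solution: R*w = q
R = (X*X')/N;                        % input correlation matrix
q = (X*d')/N;                        % cross-correlation vector
wWH = R\q;                           % wWH = [a; b]

% (b) LMS (Widrow-Hoff) learning law, pattern mode
w   = zeros(2, 1);
eta = 0.02;                          % learning gain
for epoch = 1:50
  for n = 1:N
    e = d(n) - w'*X(:, n);           % output error for pattern n
    w = w + eta*e*X(:, n);           % LMS weight update
  end
end
fprintf('Wiener-Hopf: a = %.3f, b = %.3f\n', wWH(1), wWH(2))
fprintf('LMS:         a = %.3f, b = %.3f\n', w(1), w(2))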

2.3 Multilayer Perceptron: Back-Propagation

Exercise 2.6 Study the following demo files which demonstrate properties and applications of multilayer perceptrons and the backpropagation learning law:
$NNET/nnd12sd1, nnd12sd2, nnd12vl, nnd12cg, nnd12m

Exercise 2.7 Write a function approximation script similar to $CSE531/Mtlb/fap2D.m with the following modifications:

1. The function to be approximated, d = f(x), is of the following form:

d = 0.2 x (x^2 - 13x + 48) cos(2x) + 0.2 noise,  for x in (0, ..., 9)

Note that to plot such a function in MATLAB you can use:

x = 0:dx:9 ;
d = 0.2*polyval([1 -13 48 0], x).*cos(2*x) + 0.2*rand(size(x));
plot(x, d), grid

2. Use the pattern update mode of learning.

3. Use the Nguyen-Widrow initialisation. You can use the script from $CSE531/Mtlb/nwini.m described in the Lecture Notes, sec. 5.4.

Exercise 2.8 Repeat the above exercise using the Neural Network Toolbox as in sec. 2.1.8. Use the conjugate gradient and Levenberg-Marquardt learning laws and compare their efficiency.

Exercise 2.9 For the above neural network plot two error surfaces, J(w_a, w_b), where (w_a, w_b) is a pair of weights of your choice from the hidden and output layers.

Exercise 2.10 Use a two-layer perceptron and a selected backpropagation learning law (conjugate gradient or Levenberg-Marquardt) to design an XOR gate. Aim at the minimal total number of synapses. Once the learning is finished, replace the output activation function with a unipolar step function and verify that the perceptron really acts as an XOR gate. Sketch the detailed structure of the obtained XOR perceptron. (A minimal training sketch is given below.)
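The sketch referred to in Exercise 2.10 (one possible outline only; the layer sizes, transfer functions and training parameters are my assumptions, and the minimal-synapse design is left to the reader):

X = [0 0 1 1; 0 1 0 1];                    % the four XOR input patterns
D = [0 1 1 0];                             % XOR targets
net = newff([0 1; 0 1], [2 1], {'tansig', 'logsig'}, 'trainlm');
net.trainParam.epochs = 200;
net.trainParam.goal = 1e-4;
net = train(net, X, D);
Y = sim(net, X)                            % should be close to [0 1 1 0]
% Replace the output activation with a unipolar step (hard limit) and re-check:
net.layers{2}.transferFcn = 'hardlim';
Yhard = sim(net, X)                        % should be exactly [0 1 1 0]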

2.4 An image coding algorithm using a two-layer perceptron

Exercise 2.11 Implement in MATLAB an image coding algorithm using a multi-layer perceptron and an advanced learning algorithm, as discussed in the Lecture Notes.

Demo images can be found in $MATLAB/toolbox/matlab/demos/*.mat. To examine the demo images run:

load file_name
whos
image(X), colormap(map), title(caption)
imageext

For initial experimentation use a 60 × 80 pixel sub-image from any image available in MATLAB. The block-matrix conversion functions blkm2vc.m and vc2blkm.m are in $CSE531/Mtlb.

Use either the Levenberg-Marquardt or a conjugate gradient algorithm. Determine the speed of training and the SNR.

Submission

In your report/submission include answers/solutions to all exercises. Include:
- brief comments regarding the demo files,
- relevant derivations, equations, scripts and plots with suitable comments.

Note that in recent versions of MATLAB you can invoke the publish command from the editor window:

File → Publish To HTML

It creates the directory html with all the results in it.