Neural and Evolutionary Computing
Lab 2: Neural Networks for Approximation and Prediction

1. Approximation and Prediction Problems

Aim: extract from data a model which describes either the dependence between two variables (the variables can be one- or multi-dimensional) or the variation in time of a variable (in order to predict new values in a time series).

Examples:

- Experimental data analysis:
  o The input data are the measured values of the analyzed variables (one variable is the predictor, the other is the predicted one);
  o The results are values of the predicted variable estimated for some given values of the predictor.
- Time series prediction:
  o The input data are previous values of a time-dependent variable;
  o The result is an estimate of the next value in the time series.

2. Using the Matlab nftool for approximation problems

nftool (Neural Network Fitting Tool) provides a graphical user interface for designing and training a feedforward neural network for solving approximation (fitting) problems. The networks created by nftool are characterized by:

- One hidden layer (the number of hidden units can be changed by the user; the default value is 20).
- The hidden units have a sigmoidal activation function (tansig or logsig) while the output units have a linear activation function.
- The training algorithm is Backpropagation based on a Levenberg-Marquardt minimization method (the corresponding Matlab function is trainlm).
- The learning process is controlled by a cross-validation technique based on a random division of the initial set of data into 3 subsets: one for training (weight adjustment), one for learning process control (validation) and one for evaluating the quality of the approximation (testing).

The quality of the approximation can be evaluated by:

- Mean Squared Error (MSE): it expresses the difference between the correct outputs and those provided by the network; the approximation is better if MSE is smaller (closer to 0).
- Pearson's correlation coefficient (R): it measures the correlation between the correct outputs and those provided by the network; the closer R is to 1, the better the approximation.
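Both measures are easy to compute by hand once a trained network is available. The lines below are a minimal sketch, assuming a trained network net and a dataset (in, d) with one output unit; the variable names are placeholders, not part of nftool.

    % Compute the outputs of the trained network on the inputs
    y = sim(net, in);
    % Mean Squared Error: average of the squared differences
    mseval = mean((d - y).^2);
    % Pearson's correlation coefficient between targets and outputs
    c = corrcoef(d, y);
    R = c(1,2);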
Exercise 1

a) Using nftool, design and train a neural network to discover the dependence between the data in the set "Simple Fitting Problem".
b) Analyze the network behavior by visualizing the dataset and the approximated function. Analyze the quality of the approximation by using the plot of R.
c) Compare the obtained results for different numbers of hidden units: 5, 10, 20.
d) Save the network in an object named netfitting and simulate its functioning for different input data.

Hints:
a) Use nftool and select the corresponding dataset from "Load Example Data Set".
b) Visualize the graphs with Plot Fit and Plot Regression.
c) Change the architecture, retrain and compare the quality measures (MSE, R) for the different cases.
d) After the network is saved, it can be simulated by calling sim(netfitting, dateintrare), where dateintrare holds the new input data.

3. Using Matlab functions to create and train feedforward neural networks

The function used in Matlab to define a multilayer perceptron is newff. For a network having m layers of functional units (i.e., m-1 hidden layers) the syntax of the function is:

    ffnet = newff(inputdata, desiredoutputs, [k1, k2, ..., k(m-1)], {f1, f2, ..., fm}, 'trainingalgorithm')

The significance of the parameters of newff is:

- inputdata: a matrix containing on its columns the input data of the training set. Based on it, newff establishes the number of input units.
- desiredoutputs: a matrix containing on its columns the correct answers of the training set. Based on it, newff establishes the number of output units.
- [k1, k2, ..., k(m-1)]: the number of units on each hidden layer.
- {f1, f2, ..., fm}: the activation functions of all functional layers. Possible values are: logsig, tansig and purelin. The hidden layers should have nonlinear activation functions (logsig or tansig) while the output layer can have a linear activation function.
- 'trainingalgorithm': a parameter specifying the type of the learning algorithm. There are two main training variants: incremental (the training process is initiated by the function adapt) and batch (the training process is initiated by the function train). When the function adapt is used, the possible values of this parameter are: learngd (the serial standard BackPropagation derived from the gradient descent method), learngdm (the momentum variant of BackPropagation) and learngda (the variant with adaptive learning rate). When the function train is used, the possible values are: traingd (the classical batch BackPropagation derived from the gradient descent method), traingdm (the momentum variant of BackPropagation) and trainlm (the variant based on the Levenberg-Marquardt minimization method).

Remark 1: the learning function can differ between layers. For instance, by ffnet.inputweights{1,1}.learnfcn='learngd' one can specify the learning algorithm used for the weights between the input layer and the first layer of functional units.
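To make the syntax concrete, here is a hedged sketch of a network with two hidden layers (m = 3 functional layers); the synthetic dataset and the layer sizes are illustrative assumptions, not values prescribed by the lab.

    % Illustrative dataset: 2 x 50 inputs, 1 x 50 targets
    in = rand(2, 50);
    d = sum(in);   % a simple synthetic target
    % Two hidden layers (10 and 5 units) plus a linear output layer
    ffnet = newff(in, d, [10 5], {'tansig','tansig','purelin'}, 'trainlm');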
Example 1. Definition of a network used to represent the XOR function:

    ffnet = newff([0 0 1 1; 0 1 0 1], [0 1 1 0], 5, {'logsig','logsig'}, 'traingd')

Training. There are two training functions: adapt (incremental learning) and train (batch learning).

Adapt. The syntax of this function is:

    [fftrained, y, err] = adapt(ffnet, inputdata, desiredoutputs)

Several parameters should be set before starting the learning process:

- Learning rate, lr: the default value is 0.01.
- Number of epochs, passes (number of passes through the training set): the default value is 1.
- Momentum coefficient, mc (when learngdm is used): the default value is 0.9.

Example:

    ffnet.adaptparam.passes = 10;
    ffnet.adaptparam.lr = 0.1;

Train. The syntax of this function is:

    [fftrained, y, err] = train(ffnet, inputdata, desiredoutputs)

Several parameters should be set before starting the learning process:

- Learning rate, lr: the default value is 0.01.
- Number of epochs, epochs (number of passes through the training set): the default value is 1.
- Error goal, goal: training stops when the error drops below this value.
- Momentum coefficient, mc (when traingdm is used): the default value is 0.9.

Example:

    ffnet.trainparam.epochs = 1000;
    ffnet.trainparam.lr = 0.1;
    ffnet.trainparam.goal = 0.001;

Simulating multilayer perceptrons. In order to compute the output of a network, one can use the function sim, whose syntax is:

    sim(fftrained, testdata)
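Putting the pieces together, the following is a minimal end-to-end sketch for the XOR network above (creation, batch training and simulation); the parameter values are illustrative.

    % XOR training set: inputs on columns, one target per column
    in = [0 0 1 1; 0 1 0 1];
    d  = [0 1 1 0];
    ffnet = newff(in, d, 5, {'logsig','logsig'}, 'traingd');
    % Batch training parameters
    ffnet.trainparam.epochs = 1000;
    ffnet.trainparam.lr = 0.1;
    ffnet.trainparam.goal = 0.001;
    % Train, then test on the training inputs
    fftrained = train(ffnet, in, d);
    y = sim(fftrained, in)   % should be close to [0 1 1 0]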
RBF networks. These neural networks always contain exactly one layer of hidden units. The hidden units have activation functions with radial symmetry (their graph is symmetric with respect to the Oy axis). The most frequently used activation function is the Gaussian one (specified in Matlab by radbas). A particularity of RBF training in Matlab is that the weights are established during network creation. There are two creation (and training) functions:

    newrbe(in, d, sigma)
    newrb(in, d, goal, sigma)

In both cases the parameters in and d represent the elements of the training set and sigma is the width parameter of the Gaussian function. The function newrbe creates a network having as many hidden units as there are data in the training set. The function newrb constructs the hidden layer by incrementally adding units until the training error becomes smaller than goal.
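A small usage sketch, assuming a one-dimensional fitting task; the noisy sine data and the parameter values are illustrative assumptions.

    % Noisy samples of a sine function (inputs/targets on columns)
    in = 0:0.2:6;
    d  = sin(in) + 0.05*randn(size(in));
    % Exact RBF network: one hidden unit per training example
    net1 = newrbe(in, d, 0.5);
    % Incremental RBF network: add units until the MSE drops below 0.01
    net2 = newrb(in, d, 0.01, 0.5);
    % Compare the two approximations on a finer grid
    x = 0:0.01:6;
    plot(in, d, 'b*', x, sim(net1, x), 'r-', x, sim(net2, x), 'g-');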
Application 1. Linear and nonlinear regression

Let us consider a set of two-dimensional data represented as points in the plane (the coordinates are specified by the user with the mouse; the coordinates of the point where a click is made can be captured by the Matlab function ginput). Our aim is to train a neural network to capture the functional relationship between the coordinates of the points, i.e. the dependence of the ordinate (Oy axis) on the abscissa (Ox axis). The network is trained so that it minimizes the sum of squared distances between the input points and the graph of the estimated function.

    function [in,d]=regression(layers,hiddenunits,training,cycles)
    % if layers = 1 then a single-layer linear neural network is created
    % if layers = 2 then a network with one hidden layer is created and
    %   hiddenunits denotes the number of units on the hidden layer
    % training = 'adapt' or 'train'
    % cycles = the number of passes (for adapt) or epochs (for train)
    % read the data (left-click to add points; any other button stops)
    clf
    axis([0 10 0 10])   % assumed plotting range; choose any convenient limits
    hold on
    in = []; d = [];
    n = 0;
    b = 1;
    while b == 1
        [xi,yi,b] = ginput(1);
        plot(xi,yi,'r*');
        n = n+1;
        in(1,n) = xi;
        d(1,n) = yi;
    end
    inf = min(in); sup = max(in);
    % define the network
    if (layers==1)
        reta = newlind(in,d);  % linear network designed directly from the data
    else
        ret = newff(in,d,hiddenunits,{'tansig','purelin'},'traingd');
        if strcmp(training,'adapt')
            % setting the parameters
            ret.adaptparam.passes = cycles;
            ret.adaptparam.lr = 0.05;
            % network training
            reta = adapt(ret,in,d);
        else
            ret.trainparam.epochs = cycles;
            ret.trainparam.lr = 0.05;
            ret.trainparam.goal = 0.001;
            % network training
            reta = train(ret,in,d);
        end
    end
    % graphical representation of the approximation
    x = inf:(sup-inf)/100:sup;
    y = sim(reta,x);
    plot(in,d,'b*',x,y,'r-');

Exercises:

1. Test the ability of the function regression to approximate different sets of data. Hint: for linear regression call the function as regression(1,0,'adapt',1) (the last two parameters are ignored); for nonlinear regression the call should be regression(2,5,'adapt',5000).
2. In the case of nonlinear regression, analyze the influence of the number of hidden units. Values to test: 5 (as in the previous exercise), 10 and 20. In order to test the behavior of different architectures on the same set of data, save the data returned by the first call and define a new function which does not read the data but receives them as parameters (see the sketch below).
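A possible skeleton for the function required by Exercise 2 is sketched below; the name regression2 and its body are illustrative, not part of the lab files.

    function regression2(in,d,hiddenunits,training,cycles)
    % same as regression, but the dataset (in,d) is received as a parameter
    ret = newff(in,d,hiddenunits,{'tansig','purelin'},'traingd');
    if strcmp(training,'adapt')
        ret.adaptparam.passes = cycles;
        ret.adaptparam.lr = 0.05;
        reta = adapt(ret,in,d);
    else
        ret.trainparam.epochs = cycles;
        ret.trainparam.lr = 0.05;
        ret.trainparam.goal = 0.001;
        reta = train(ret,in,d);
    end
    x = min(in):(max(in)-min(in))/100:max(in);
    plot(in,d,'b*',x,sim(reta,x),'r-');

Typical usage: collect the data once with [in,d] = regression(2,5,'adapt',5000), then compare architectures on the same data with regression2(in,d,10,'adapt',5000), regression2(in,d,20,'adapt',5000), and so on.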
Application 2. Prediction

Let us consider a sequence of data (a time series) x1, x2, ..., xn, which can be interpreted as values recorded at successive moments of time. The goal is to predict the value corresponding to the moment n+1. The main idea is to suppose that the current value xi depends on the N previous values x(i-1), x(i-2), ..., x(i-N). Based on this hypothesis we can design a neural network which is trained to extract the association between any subsequence of N values and the next value in the series. Therefore, the neural network will have N input units, a given number of hidden units and 1 output unit. The training set will contain n-N pairs of (input data, correct output): {((x1,x2,...,xN), x(N+1)), ((x2,x3,...,x(N+1)), x(N+2)), ..., ((x(n-N),x(n-N+1),...,x(n-1)), xn)}.

A solution in Matlab is:

    function [in,d]=prediction(data,inputunits,hiddenunits,training,epochs)
    % data : the series of data (one-row matrix)
    % inputunits : number of previous values which influence the current one
    % hiddenunits : number of hidden units
    % training = 'adapt' or 'train'
    % epochs = number of epochs
    % preparing the training set
    L = size(data,2);
    in = zeros(inputunits,L-inputunits);
    d = zeros(1,L-inputunits);
    for i = 1:L-inputunits
        in(:,i) = data(1,i:i+inputunits-1);
        d(i) = data(1,i+inputunits);
    end
    % creating the network
    ret = newff(in,d,hiddenunits,{'tansig','purelin'},'traingd');
    if strcmp(training,'adapt')
        ret.adaptparam.passes = epochs;
        ret.adaptparam.lr = 0.05;
        reta = adapt(ret,in,d);
    else
        ret.trainparam.epochs = epochs;
        ret.trainparam.lr = 0.05;
        ret.trainparam.goal = 0.001;
        reta = train(ret,in,d);
    end
    % graphical representation
    x = 1:L-inputunits;
    y = sim(reta,in);
    % predict the next value from the last inputunits values of the series
    intest = data(1,L-inputunits+1:L)';
    reztest = sim(reta,intest);
    disp('predicted value:');
    disp(reztest);
    plot(x,d,'b-',x,y,'r-',L+1,reztest,'k*');

Let us use the prediction function to estimate the next value in a series of Euro/Lei exchange rates. The data should first be read:

    data = csvread('dailyrate.csv');

The data in the file are daily rates from 2005 up to October 16, 2009. Since the values are stored in reverse chronological order, we construct a new list:

    data2(1:length(data)) = data(length(data):-1:1);

Then the prediction function can be called by specifying the data set, the number of input units, the number of hidden units, the type of the learning process and the maximal number of epochs:

    prediction(data2, 5, 10, 'adapt', 2000)
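The exercises below ask you to compare several configurations. One way to make the comparison quantitative is to hold out the last values of the series and measure the prediction error on them. The sketch below is one possible approach; it assumes (as a modification, not part of the lab files) that the header of prediction is extended to also return the trained network, i.e. function [in,d,reta] = prediction(...).

    % Hold out the last k values; train on the rest (placeholder values)
    k = 10;                           % number of held-out values (illustrative)
    N = 5;                            % number of input units (illustrative)
    [in,d,reta] = prediction(data2(1:end-k), N, 10, 'adapt', 2000);
    % One-step-ahead predictions over the held-out tail
    err = zeros(1,k);
    for j = 1:k
        idx = length(data2) - k + j;          % index of the value to predict
        window = data2(idx-N:idx-1)';         % the N previous values
        err(j) = data2(idx) - sim(reta, window);
    end
    disp('hold-out MSE:'); disp(mean(err.^2));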
Exercises:

1. Analyze the influence of the number of input units on the ability of the network to make predictions (Hint: try the values 2, 5, 10, 15).
2. Analyze the influence of the number of hidden units on the ability of the network to make predictions (Hint: the number of hidden units should be at least as large as the number of input units).
3. Analyze the influence of the training algorithm on the ability of the network to make predictions.
4. Use an RBF network to solve the prediction problem (Hint: a variant is implemented in predictionrbf.m).

Homework:

Let us consider a set of data (file NrVisits.txt) containing the number of visits to a web server. Approximate the series of values using a neural network and estimate the next value in the series. You can try both a BackPropagation-trained network and an RBF network.
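A minimal starting sketch for the homework, assuming NrVisits.txt is a plain text file with one value per line (an assumption about the file format):

    % Read the series (assumed: one value per line) as a one-row matrix
    data = load('NrVisits.txt')';
    % Predict the next value with the BackPropagation-based network
    prediction(data, 5, 10, 'train', 2000);
    % For the RBF variant, see predictionrbf.m provided with the lab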