Package 'neural'
February 20, 2015

Version 1.4.2.2
Title Neural Networks
Author Adam Nagy
Maintainer Billy Aung Myint <r@use-r.com>
Description RBF and MLP neural networks with graphical user interface
License GPL (>= 2)
Repository CRAN
NeedsCompilation no
Date/Publication 2014-09-28 18:26:46

R topics documented:
    letters_out
    letters_recall
    letters_train
    mlp
    mlptrain
    rbf
    rbftrain
    Index


letters_out             Letters training sample - output

Description
This data set gives the expected response for the letters_train dataset. Each letter has more than one representation.

Usage
data(letters_out)

Format
A matrix containing 40 responses. Each row represents one expected response.

Source
Collected from about 10 people and from some frequently used font types.


letters_recall          Letters recall sample - input

Description
This data set gives the 8x8 bit-matrix representation of the alphabetic letters from A to E. Each letter has more than one representation. It can be useful as a recall set for an mlp network.

Usage
data(letters_recall)

Format
A matrix containing 50 representations of letters. Each row represents one letter.

Source
Collected from about 10 people and from some frequently used font types.


letters_train           Letters training sample - input

Description
This data set gives the 8x8 bit-matrix representation of the alphabetic letters from A to E. Each letter has more than one representation. It can be useful as an input training set for an mlp network.

Usage
data(letters_train)

Format
A matrix containing 40 representations of letters. Each row represents one letter.

Source
Collected from about 10 people and from some frequently used font types.
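As a rough sketch of how these three data sets fit together with mlptrain and mlp (the hidden-layer size, iteration count and visual = FALSE below are illustrative choices, not values taken from this manual):

library(neural)

data(letters_train)    # 40 training inputs, one letter bit pattern per row
data(letters_out)      # 40 expected responses, one per row
data(letters_recall)   # 50 further inputs intended for recall

dim(letters_train)
dim(letters_out)

## Not run: 
## train a small MLP on the letter data, then recall the extra samples
net <- mlptrain(letters_train, 10, letters_out, it = 500, visual = FALSE)
mlp(letters_recall, net$weight, net$dist, net$neurons, net$actfns)
## End(Not run)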

mlp                     MLP neural network

Description
The recalling method of the MLP network which was trained by the mlptrain function.

Usage
mlp(inp, weight, dist, neurons, actfns=c(), layer=NaN, ...)

Arguments
inp       a matrix that contains one input data point in each row.
weight    the weights of the network.
dist      the distortions of the network.
neurons   the ith layer will contain neurons[i] neurons.
actfns    a list which contains the numeric codes of the activation functions, or the activation functions themselves. The length of the vector must be the same as the length of the neurons vector, and each element of the vector must be between 1 and 4 or a function. The possible numeric codes are the following: 1: logistic function, 2: hyperbolic tangent function, 3: Gauss function, 4: identity function.
layer     the number of the layer up to which the function evaluates the network. If the value is NaN, the function will calculate up to the output layer. It can be useful if you need the output of one of the hidden layers.
...       currently not used.

Details
The weight, dist, neurons and actfns arguments can be obtained from the mlptrain algorithm.

Value
A matrix that contains the response data of the network; each row contains one response.

See Also
mlptrain for training an MLP network; rbf and rbftrain for approximation.
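A minimal sketch of recalling from an intermediate layer with the layer argument, assuming a network trained as in the mlptrain example below; whether layer = 1 denotes the first hidden layer is an assumption about how the layers are counted, and visual = FALSE is an illustrative choice:

x <- matrix(c(1,1,0,0,1,0,1,0), 4, 2)
y <- matrix(c(0,1,1,0), 4, 1)
## Not run: 
net <- mlptrain(x, 4, y, it = 2000, visual = FALSE)
## response of the output layer
mlp(x, net$weight, net$dist, net$neurons, net$actfns)
## stop the forward pass at an earlier layer instead of the output layer
mlp(x, net$weight, net$dist, net$neurons, net$actfns, layer = 1)
## End(Not run)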

mlptrain                MLP neural network

Description
A simple MLP neural network that is suitable for classification tasks.

Usage
mlptrain(inp, neurons, out, weight=c(), dist=c(), alfa=0.2, it=200, online=TRUE,
         permute=TRUE, thresh=0, dthresh=0.1, actfns=c(), diffact=c(), visual=TRUE, ...)

Arguments
inp       a matrix that contains one input data point in each row.
neurons   the ith layer will contain neurons[i] neurons.
out       a matrix that contains one output data point in each row.
weight    the starting weights of the network.
dist      the starting distortions of the network.
alfa      the learning-rate parameter of the back-propagation algorithm.
it        the maximum number of training iterations.
online    if TRUE the algorithm will operate in the sequential mode of back-propagation; if FALSE it will operate in the batch mode of back-propagation.
permute   if TRUE the algorithm will use a random permutation of the input data in each epoch.
thresh    the maximal difference between the desired response and the actual response that is regarded as zero.
dthresh   if the difference between the desired response and the actual response is less than this value, the corresponding neuron is drawn in red, otherwise it is drawn in green.
actfns    a list which contains the numeric codes of the activation functions, or the activation functions themselves. The length of the vector must be the same as the length of the neurons vector, and each element of the vector must be between 1 and 4 or a function. The possible numeric codes are the following: 1: logistic function, 2: hyperbolic tangent function, 3: Gauss function, 4: identity function.
diffact   a list which contains the differentials of the activation functions. It is only needed if you use your own activation functions.
visual    a logical value that switches the graphical user interface on or off.
...       currently not used.

Details
The function creates an MLP neural network on the basis of the function parameters. After the creation of the network it is trained with the back-propagation algorithm using the inp and out parameters. The inp and out parameters must have the same number of rows, otherwise the function will stop with an error message.
If you use the weight or dist argument, those variables will not be initialised randomly. This can be useful if you want to retrain your network; in that case use both of these arguments.
From this version of the package onwards you can use your own activation functions via the actfns argument. If you do so, do not forget to supply the differentials of the activation functions in the diffact argument, in the same order and at the same positions where you use the new activation functions. (There is no need to use the diffact argument if you only use the preset activation functions.)
The function has a graphical user interface that can be switched on and off using the visual argument. If the graphical interface is on, the activation functions can also be set manually. If the activation functions are not set, each of them will default to the logistic function.
The result of the function is the set of parameters of the trained MLP neural network. Use the mlp function for information recall.

Value
A list with 5 elements:
weight    the weights of the network.
dist      the distortions of the network.
neurons   the ith layer will contain neurons[i] neurons.
actfns    a list that contains the activation functions. The length of the list is equal to the number of active layers.
diffact   a list that contains the differentials of the activation functions. The length of the list is equal to the number of active layers.

See Also
mlp for recall; rbftrain and rbf for training an RBF network.

Examples
x <- matrix(c(1,1,0,0,1,0,1,0), 4, 2)
y <- matrix(c(0,1,1,0), 4, 1)
neurons <- 4
## Not run: 
data <- mlptrain(x, neurons, y, it=4000)
mlp(x, data$weight, data$dist, data$neurons, data$actfns)
## End(Not run)
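The Details above note that supplying both weight and dist lets you continue training an existing network instead of starting from random values. A minimal sketch of that retraining workflow, assuming the elements returned by mlptrain can be passed back in unchanged; the iteration counts and visual = FALSE are illustrative choices:

x <- matrix(c(1,1,0,0,1,0,1,0), 4, 2)
y <- matrix(c(0,1,1,0), 4, 1)
neurons <- 4
## Not run: 
## first training run
net <- mlptrain(x, neurons, y, it = 1000, visual = FALSE)
## continue training from the previous weights and distortions
## rather than from a fresh random initialisation
net2 <- mlptrain(x, neurons, y, weight = net$weight, dist = net$dist,
                 it = 1000, visual = FALSE)
## End(Not run)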

rbf                     RBF neural network

Description
The recalling method of the RBF network which was trained by the rbftrain function.

Usage
rbf(inp, weight, dist, neurons, sigma, ...)

Arguments
inp       a matrix that contains one input data point in each row.
weight    the weights of the network.
dist      the distortions of the network.
neurons   the ith layer will contain neurons[i] neurons.
sigma     the sigma parameters of the network.
...       currently not used.

Details
The last four arguments can be produced by the rbftrain algorithm.

Value
A matrix that contains the response data of the network in each row.

See Also
rbftrain for training an RBF network; mlp and mlptrain for classification.


rbftrain                RBF neural network

Description
A simple RBF neural network which is suitable for approximation.

Usage
rbftrain(inp, neurons, out, weight=c(), dist=c(), alfa=0.2, it=40, err=0,
         sigma=NaN, online=TRUE, permute=TRUE, visual=TRUE, ...)

Arguments
inp       a matrix that contains one input data point in each row.
neurons   the number of neurons in the hidden layer.
out       a matrix that contains one output data point in each row.
weight    the starting weights of the network.
dist      the starting distortions of the network.
alfa      the learning-rate parameter of the back-propagation algorithm.
it        the maximum number of training iterations.
err       the average error at the training points; if the average error ever drops below this value, the algorithm will stop.
sigma     the width of the Gauss functions.
online    if TRUE the algorithm will operate in the sequential mode of back-propagation; if FALSE it will operate in the batch mode of back-propagation.
permute   if TRUE the algorithm will use a random permutation of the input data in each epoch.
visual    a logical value that switches the graphical user interface on or off.
...       currently not used.

Details
The function creates an RBF neural network on the basis of the function parameters. After the creation of the network the function trains it with the back-propagation algorithm using the inp and out parameters. These two parameters must have the same number of rows, otherwise the function will stop with an error message.
If you use the weight or dist argument, those variables will not be initialised randomly. This can be useful if you want to retrain your network; in that case use both of these arguments at the same time.
The function works with normalised Gauss functions, whose width parameter is the sigma argument. If you want to supply the values yourself, this argument should be a matrix with as many rows as there are neurons in the first layer and as many columns as there are neurons in the second layer. If the sigma argument is NaN, then the width of each Gauss function will be half of the distance between the two nearest training samples, multiplied by 1.1. If the sigma argument is exactly one number, then every sigma value will be that number.
The function has a graphical user interface that can be switched on and off with the visual argument. If the graphical user interface is on, the function can show the result of the approximation in a coordinate system, provided it is a function of one parameter.
The result of the function is the set of parameters of the trained RBF neural network. Use the rbf function for information recall.

Value
A list with 4 elements:
weight    the weights of the network.
dist      the distortions of the network.
neurons   the ith layer will contain neurons[i] neurons.
sigma     the width of the Gauss functions.

See Also
rbf for recall; mlp and mlptrain for classification.

Examples
x <- t(matrix(-5:10*24, 1, 16))
y <- t(matrix(sin(pi/180*(-5:10*24)), 1, 16))
neurons <- 8
## Not run: 
data <- rbftrain(x, neurons, y, sigma=NaN)
rbf(x, data$weight, data$dist, data$neurons, data$sigma)
## End(Not run)
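The Details above state that a single numeric sigma makes every Gauss function use that width instead of the automatically chosen one. A brief sketch of that variant, reusing the data from the example above; the particular width of 20 and visual = FALSE are illustrative choices:

x <- t(matrix(-5:10*24, 1, 16))
y <- t(matrix(sin(pi/180*(-5:10*24)), 1, 16))
## Not run: 
## fixed width for every Gauss function instead of the automatic choice
data <- rbftrain(x, 8, y, sigma = 20, visual = FALSE)
rbf(x, data$weight, data$dist, data$neurons, data$sigma)
## End(Not run)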

Index

Topic datasets
    letters_out
    letters_recall
    letters_train
Topic neural
    mlp
    mlptrain
    rbf
    rbftrain