
Package 'SAENET'
June 4, 2015

Type Package
Title A Stacked Autoencoder Implementation with Interface to 'neuralnet'
Version 1.1
Date 2015-06-04
Description An implementation of a stacked sparse autoencoder for dimension
     reduction of features and pretraining of feed-forward neural networks
     with the 'neuralnet' package. The package also includes a predict
     function for the stacked autoencoder object to generate the compressed
     representation of new data if required. For the purposes of this
     package, 'stacked' is defined in line with
     http://ufldl.stanford.edu/wiki/index.php/Stacked_Autoencoders. The
     underlying sparse autoencoder is defined in the documentation of
     'autoencoder'.
License GPL-3
URL http://ufldl.stanford.edu/wiki/index.php/Stacked_Autoencoders,
     http://cran.r-project.org/package=autoencoder
Imports autoencoder, neuralnet
NeedsCompilation no
Repository CRAN
Author Stephen Hogg [aut, cre], Eugene Dubossarsky [aut]
Maintainer Stephen Hogg <saenet.r@gmail.com>
Date/Publication 2015-06-04 11:50:48

R topics documented:
     SAENET.nnet
     SAENET.predict
     SAENET.train
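A minimal end-to-end sketch of the workflow outlined in the Description
(not part of the original manual; the architecture and penalty values are
illustrative only, adapted from the examples further below):

     library(SAENET)
     X <- as.matrix(iris[, 1:4])
     ## Stack two sparse autoencoder layers on the iris features.
     h <- SAENET.train(X, n.nodes = c(5, 3), lambda = 1e-5, beta = 1e-5,
                       rho = 0.01, epsilon = 0.01)
     ## Compressed representation of the data from the final layer.
     z <- SAENET.predict(h, X, layers = c(2))
     ## Pre-train a feed-forward network; a factor target gives classification.
     fit <- SAENET.nnet(h, X, target = iris$Species)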

SAENET.nnet          Use a stacked autoencoder to pre-train a feed-forward
                     neural network.

Description

     Use a stacked autoencoder to pre-train a feed-forward neural network.

Usage

     SAENET.nnet(h, X.train, target, nn.control = NULL)

Arguments

     h             The object returned from SAENET.train().
     X.train       A matrix of training data.
     target        A vector of target values. If given as a factor,
                   classification will be performed; if an integer or
                   numeric vector is given, neuralnet() will default to
                   regression.
     nn.control    A named list with elements to be passed as control
                   parameters to neuralnet(). Package defaults are used if
                   no values are entered.

Value

     An object of class nn which can be used with the neuralnet package as
     normal.
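The manual ships no example for SAENET.nnet; the sketch below (not from the
original manual) reuses the iris setup of the other examples, and the
nn.control value shown assumes list elements are forwarded to neuralnet()
as its control parameters:

     library(autoencoder)
     data(iris)
     ## Reuse the (5,3) stacked autoencoder from the SAENET.train() example.
     h <- SAENET.train(as.matrix(iris[1:100, 1:4]), n.nodes = c(5, 3),
                       lambda = 1e-5, beta = 1e-5, rho = 0.01,
                       epsilon = 0.01)
     ## A factor target requests classification; threshold is assumed to be
     ## passed through to neuralnet() via nn.control.
     fit <- SAENET.nnet(h, as.matrix(iris[1:100, 1:4]),
                        target = droplevels(iris$Species[1:100]),
                        nn.control = list(threshold = 0.05))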

SAENET.predict       Obtain the compressed representation of new data for
                     specified layers from a stacked autoencoder.

Description

     Obtain the compressed representation of new data for specified layers
     from a stacked autoencoder.

Usage

     SAENET.predict(h, new.data, layers = c(1), all.layers = FALSE)

Arguments

     h             The object returned from SAENET.train().
     new.data      A matrix of new data.
     layers        A numeric vector indicating which layers of the stacked
                   autoencoder to return output for.
     all.layers    A boolean value indicating whether to override layers and
                   return the encoded output for all layers. Defaults to
                   FALSE.

Value

     A list, in which each element corresponds to the output of
     predict.autoencoder() from package autoencoder for the specified layers
     of the stacked autoencoder.

Examples

     library(autoencoder)
     data(iris)
     #### Train a stacked sparse autoencoder with a (5,3) architecture and
     #### a relatively minor sparsity penalty. Try experimenting with the
     #### lambda and beta parameters if you haven't worked with sparse
     #### autoencoders before - it's worth inspecting the final layer
     #### to ensure that output activations haven't simply converged to the
     #### value of rho that you gave (which is the desired activation level
     #### on average). If the lambda/beta parameters are set high, this is
     #### likely to happen.
     output <- SAENET.train(as.matrix(iris[1:100, 1:4]), n.nodes = c(5, 3),
                            lambda = 1e-5, beta = 1e-5, rho = 0.01,
                            epsilon = 0.01)
     predict.out <- SAENET.predict(output, as.matrix(iris[101:150, 1:4]),
                                   layers = c(2))

SAENET.train         Build a stacked autoencoder.

Description

     Build a stacked autoencoder.

Usage

     SAENET.train(X.train, n.nodes = c(4, 3, 2),
                  unit.type = c("logistic", "tanh"),
                  lambda, beta, rho, epsilon,
                  optim.method = c("BFGS", "L-BFGS-B", "CG"),
                  rel.tol = sqrt(.Machine$double.eps),
                  max.iterations = 2000,
                  rescale.flag = F, rescaling.offset = 0.001)

Arguments

     X.train       A matrix of training data.
     n.nodes       A vector of numbers containing the number of units at
                   each hidden layer.
     unit.type     Hidden unit activation type, as per autoencode().
     lambda        Vector of scalars indicating weight decay per layer, as
                   per autoencode().
     beta          Vector of scalars indicating the sparsity penalty per
                   layer, as per autoencode().
     rho           Vector of scalars indicating the sparsity parameter per
                   layer, as per autoencode().
     epsilon       Vector of scalars indicating the initialisation parameter
                   for weights per layer, as per autoencode().

     optim.method  Optimization method, as per optim().
     rel.tol       Relative convergence tolerance, as per optim().
     max.iterations
                   Maximum iterations for optim().
     rescale.flag  A logical flag indicating whether input data should be
                   rescaled.
     rescaling.offset
                   A small non-negative value used for rescaling. Further
                   description is available in the documentation of
                   autoencoder.

Value

     An object of class SAENET containing the following elements for each
     layer of the stacked autoencoder:

     ae.out        An object of class autoencoder containing the autoencoder
                   created in that layer of the stacked autoencoder.
     X.output      In layers subsequent to the first, a matrix containing
                   the activations of the hidden neurons.

Examples

     library(autoencoder)
     data(iris)
     #### Train a stacked sparse autoencoder with a (5,3) architecture and
     #### a relatively minor sparsity penalty. Try experimenting with the
     #### lambda and beta parameters if you haven't worked with sparse
     #### autoencoders before - it's worth inspecting the final layer
     #### to ensure that output activations haven't simply converged to the
     #### value of rho that you gave (which is the desired activation level
     #### on average). If the lambda/beta parameters are set high, this is
     #### likely to happen.
     output <- SAENET.train(as.matrix(iris[, 1:4]), n.nodes = c(5, 3),
                            lambda = 1e-5, beta = 1e-5, rho = 0.01,
                            epsilon = 0.01)
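As a further illustration (not in the original manual), inputs can be
rescaled inside SAENET.train() rather than by hand, using the two rescaling
arguments described above:

     ## Illustrative only: internal rescaling with the default offset.
     output <- SAENET.train(as.matrix(iris[, 1:4]), n.nodes = c(5, 3),
                            lambda = 1e-5, beta = 1e-5, rho = 0.01,
                            epsilon = 0.01, rescale.flag = TRUE,
                            rescaling.offset = 0.001)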

Index

     SAENET.nnet
     SAENET.predict
     SAENET.train