Package enpls. May 14, 2018


Type Package
Title Ensemble Partial Least Squares Regression
Version 6.0
Maintainer Nan Xiao <me@nanx.me>
Description An algorithmic framework for measuring feature importance, outlier detection, model applicability domain evaluation, and ensemble predictive modeling with (sparse) partial least squares regressions.
License GPL-3 | file LICENSE
URL https://nanx.me/enpls/, https://github.com/road2stat/enpls
BugReports https://github.com/road2stat/enpls/issues
Depends R (>= 3.0.2)
LazyData TRUE
VignetteBuilder knitr
Imports pls, spls, foreach, doParallel, ggplot2, reshape2, plotly
Suggests knitr, rmarkdown
Encoding UTF-8
RoxygenNote 6.0.1.9000
NeedsCompilation no
Author Nan Xiao [aut, cre] (<https://orcid.org/0000-0002-0250-5673>), Dong-Sheng Cao [aut], Miao-Zhu Li [aut], Qing-Song Xu [aut]
Repository CRAN
Date/Publication 2018-05-13 22:32:12 UTC
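Installation and loading follow the standard CRAN workflow; vignette("enpls") is the call referenced in the package overview below.

install.packages("enpls")
library("enpls")
vignette("enpls")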

R topics documented:

enpls-package
alkanes
cv.enpls
cv.enspls
enpls.ad
enpls.fit
enpls.fs
enpls.mae
enpls.od
enpls.rmse
enpls.rmsle
enspls.ad
enspls.fit
enspls.fs
enspls.od
logd1k
plot.cv.enpls
plot.cv.enspls
plot.enpls.ad
plot.enpls.fs
plot.enpls.od
plot.enspls.ad
plot.enspls.fs
plot.enspls.od
predict.enpls.fit
predict.enspls.fit
print.cv.enpls
print.cv.enspls
print.enpls.ad
print.enpls.fit
print.enpls.fs
print.enpls.od
print.enspls.ad
print.enspls.fit
print.enspls.fs
print.enspls.od

enpls-package           Ensemble (Sparse) Partial Least Squares Regressions

Description

An algorithmic framework for measuring feature importance, outlier detection, model applicability domain evaluation, and ensemble predictive modeling with (sparse) partial least squares regressions. Browse the package vignette via vignette("enpls").

Details

Package: enpls
Type: Package
License: GPL-3
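A minimal end-to-end sketch assembled from the examples later in this manual, showing how the main entry points fit together:

library("enpls")
data("alkanes")
x = alkanes$x
y = alkanes$y

fit = enpls.fit(x, y, reptimes = 50)  # ensemble model fitting
fs  = enpls.fs(x, y, reptimes = 50)   # feature importance
od  = enpls.od(x, y, reptimes = 50)   # outlier detection
cv  = cv.enpls(x, y, reptimes = 10)   # cross-validation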

alkanes                 Methylalkanes Retention Index Dataset

Description

Methylalkanes retention index dataset from Liang et al.

Usage

data("alkanes")

Format

A list with 2 components:

x - data frame with 207 rows (samples) and 21 columns (predictors)
y - numeric vector of length 207 (response)

Details

This dataset contains the chromatographic retention index (y) of 207 methylalkanes, modeled by 21 molecular descriptors (x).

Molecular descriptor types:

- Chi path, cluster and path/cluster indices
- Kappa shape indices
- E-state indices
- Molecular electricity distance vector index

References

Yi-Zeng Liang, Da-Lin Yuan, Qing-Song Xu, and Olav Martin Kvalheim. "Modeling based on subspace orthogonal projections for QSAR and QSPR research." Journal of Chemometrics 22, no. 1 (2008): 23-35.

Examples

data("alkanes")
str(alkanes)

cv.enpls                Cross Validation for Ensemble Partial Least Squares Regression

Description

K-fold cross validation for ensemble partial least squares regression.

Usage

cv.enpls(x, y, nfolds = 5L, verbose = TRUE, ...)

Arguments

x        Predictor matrix.
y        Response vector.
nfolds   Number of cross-validation folds, default is 5. Note that this is the CV folds for the ensemble PLS model, not the individual PLS models. To control the CV folds for single PLS models, please use the argument cvfolds.
verbose  Shall we print out the progress of cross-validation?
...      Arguments to be passed to enpls.fit.

Value

A list containing:

ypred - a matrix containing two columns: real y and predicted y
residual - cross-validation residual (y.pred - y.real)
RMSE - RMSE
MAE - MAE
Rsquare - Rsquare

Note

To maximize the probability that each observation can be selected in the test set (thus the prediction uncertainty can be measured), please try setting a large reptimes.

See Also

See enpls.fit for ensemble partial least squares regressions.

Examples

data("alkanes")
x = alkanes$x
y = alkanes$y

cvfit = cv.enpls(x, y, reptimes = 10)
print(cvfit)
plot(cvfit)

cv.enspls               Cross Validation for Ensemble Sparse Partial Least Squares Regression

Description

K-fold cross validation for ensemble sparse partial least squares regression.

Usage

cv.enspls(x, y, nfolds = 5L, verbose = TRUE, ...)

Arguments

x        Predictor matrix.
y        Response vector.
nfolds   Number of cross-validation folds, default is 5. Note that this is the CV folds for the ensemble sparse PLS model, not the individual sparse PLS models. To control the CV folds for single sparse PLS models, please use the argument cvfolds.
verbose  Shall we print out the progress of cross-validation?
...      Arguments to be passed to enspls.fit.

Value

A list containing:

ypred - a matrix containing two columns: real y and predicted y
residual - cross-validation residual (y.pred - y.real)
RMSE - RMSE
MAE - MAE
Rsquare - Rsquare

Note

To maximize the probability that each observation can be selected in the test set (thus the prediction uncertainty can be measured), please try setting a large reptimes.

See Also

See enspls.fit for ensemble sparse partial least squares regressions.

Examples

# This example takes one minute to run
## Not run:
data("logd1k")
x = logd1k$x
y = logd1k$y

cvfit = cv.enspls(x, y, reptimes = 10)
print(cvfit)
plot(cvfit)
## End(Not run)

enpls.ad                Ensemble Partial Least Squares for Model Applicability Domain Evaluation

Description

Model applicability domain evaluation with ensemble partial least squares.

Usage

enpls.ad(x, y, xtest, ytest, maxcomp = NULL, cvfolds = 5L,
  space = c("sample", "variable"), method = c("mc", "boot"),
  reptimes = 500L, ratio = 0.8, parallel = 1L)

Arguments

x        Predictor matrix of the training set.
y        Response vector of the training set.
xtest    List, with the i-th component being the i-th test set's predictor matrix (see example code below).
ytest    List, with the i-th component being the i-th test set's response vector (see example code below).
maxcomp  Maximum number of components included within each model. If not specified, will use the maximum number possible (considering cross-validation and special cases where n is smaller than p).
cvfolds  Number of cross-validation folds used in each model for automatic parameter selection, default is 5.
space    Space in which to apply the resampling method. Can be the sample space ("sample") or the variable space ("variable").
method   Resampling method. "mc" (Monte-Carlo resampling) or "boot" (bootstrapping). Default is "mc".
reptimes Number of models to build with Monte-Carlo resampling or bootstrapping.
ratio    Sampling ratio used when method = "mc".
parallel Integer. Number of CPU cores to use. Default is 1 (not parallelized).

Value

A list containing:

tr.error.mean - absolute mean prediction error for training set
tr.error.median - absolute median prediction error for training set
tr.error.sd - prediction error sd for training set
tr.error.matrix - raw prediction error matrix for training set
te.error.mean - list of absolute mean prediction error for test set(s)
te.error.median - list of absolute median prediction error for test set(s)
te.error.sd - list of prediction error sd for test set(s)
te.error.matrix - list of raw prediction error matrix for test set(s)

Note

Note that for space = "variable", method can only be "mc", since bootstrapping in the variable space would create duplicated variables, and that could cause problems.

8 enpls.fit data("alkanes") = alkanes$ y = alkanes$y # training set.tr = [1:100, ] y.tr = y[1:100] # two test sets.te = list( "test.1" = [101:150, ], "test.2" = [151:207, ]) y.te = list( "test.1" = y[101:150], "test.2" = y[151:207]) ad = enpls.ad(.tr, y.tr,.te, y.te, space = "variable", method = "mc", ratio = 0.9, reptimes = 50) print(ad) plot(ad) # the interactive plot requires a HTML viewer ## Not run: plot(ad, type = "interactive") ## End(Not run) enpls.fit Ensemble Partial Least Squares Regression Ensemble partial least squares regression. enpls.fit(, y, macomp = NULL, cvfolds = 5L, reptimes = 500L, method = c("mc", "boot"), ratio = 0.8, parallel = 1L) y macomp Predictor matri. Response vector. Maimum number of components included within each model. If not specified, will use the maimum number possible (considering cross-validation and special cases where n is smaller than p).

cvfolds  Number of cross-validation folds used in each model for automatic parameter selection, default is 5.
reptimes Number of models to build with Monte-Carlo resampling or bootstrapping.
method   Resampling method. "mc" (Monte-Carlo resampling) or "boot" (bootstrapping). Default is "mc".
ratio    Sampling ratio used when method = "mc".
parallel Integer. Number of CPU cores to use. Default is 1 (not parallelized).

Value

A list containing all partial least squares model objects.

See Also

See enpls.fs for measuring feature importance with ensemble partial least squares regressions. See enpls.od for outlier detection with ensemble partial least squares regressions.

Examples

data("alkanes")
x = alkanes$x
y = alkanes$y

fit = enpls.fit(x, y, reptimes = 50)
print(fit)
predict(fit, newx = x)

enpls.fs                Ensemble Partial Least Squares for Measuring Feature Importance

Description

Measuring feature importance with ensemble partial least squares.

Usage

enpls.fs(x, y, maxcomp = NULL, cvfolds = 5L, reptimes = 500L,
  method = c("mc", "boot"), ratio = 0.8, parallel = 1L)

Arguments

x        Predictor matrix.
y        Response vector.
maxcomp  Maximum number of components included within each model. If not specified, will use the maximum number possible (considering cross-validation and special cases where n is smaller than p).
cvfolds  Number of cross-validation folds used in each model for automatic parameter selection, default is 5.
reptimes Number of models to build with Monte-Carlo resampling or bootstrapping.
method   Resampling method. "mc" (Monte-Carlo resampling) or "boot" (bootstrapping). Default is "mc".
ratio    Sampling ratio used when method = "mc".
parallel Integer. Number of CPU cores to use. Default is 1 (not parallelized).

Value

A list containing two components:

variable.importance - a vector of variable importance
coefficient.matrix - original coefficient matrix

See Also

See enpls.od for outlier detection with ensemble partial least squares regressions. See enpls.fit for fitting ensemble partial least squares regression models.

Examples

data("alkanes")
x = alkanes$x
y = alkanes$y

fs = enpls.fs(x, y, reptimes = 50)
print(fs)
plot(fs)

enpls.mae               Mean Absolute Error (MAE)

Description

Compute the Mean Absolute Error (MAE).

Usage

enpls.mae(yreal, ypred)

Arguments

yreal    true response vector
ypred    predicted response vector

Value

MAE
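A quick sanity check on toy vectors (values arbitrary), assuming the helper implements the conventional mean absolute error:

y.real = c(1.0, 2.0, 3.0, 4.0)
y.pred = c(1.1, 1.9, 3.2, 3.6)
enpls.mae(y.real, y.pred)  # assumed equivalent to mean(abs(y.real - y.pred)) = 0.2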

enpls.od                Ensemble Partial Least Squares for Outlier Detection

Description

Outlier detection with ensemble partial least squares.

Usage

enpls.od(x, y, maxcomp = NULL, cvfolds = 5L, reptimes = 500L,
  method = c("mc", "boot"), ratio = 0.8, parallel = 1L)

Arguments

x        Predictor matrix.
y        Response vector.
maxcomp  Maximum number of components included within each model. If not specified, will use the maximum number possible (considering cross-validation and special cases where n is smaller than p).
cvfolds  Number of cross-validation folds used in each model for automatic parameter selection, default is 5.
reptimes Number of models to build with Monte-Carlo resampling or bootstrapping.
method   Resampling method. "mc" (Monte-Carlo resampling) or "boot" (bootstrapping). Default is "mc".
ratio    Sampling ratio used when method = "mc".
parallel Integer. Number of CPU cores to use. Default is 1 (not parallelized).

Value

A list containing four components:

error.mean - error mean for all samples (absolute value)
error.median - error median for all samples
error.sd - error sd for all samples
predict.error.matrix - the original prediction error matrix

Note

To maximize the probability that each observation can be selected in the test set (thus the prediction uncertainty can be measured), please try setting a large reptimes.

See Also

See enpls.fs for measuring feature importance with ensemble partial least squares regressions. See enpls.fit for fitting ensemble partial least squares regression models.

Examples

data("alkanes")
x = alkanes$x
y = alkanes$y

od = enpls.od(x, y, reptimes = 50)
print(od)
plot(od)
plot(od, criterion = 'sd')

enpls.rmse              Root Mean Squared Error (RMSE)

Description

Compute the Root Mean Squared Error (RMSE).

Usage

enpls.rmse(yreal, ypred)

Arguments

yreal    true response vector
ypred    predicted response vector

Value

RMSE

enpls.rmsle             Root Mean Squared Logarithmic Error (RMSLE)

Description

Compute the Root Mean Squared Logarithmic Error (RMSLE).

Usage

enpls.rmsle(yreal, ypred)

Arguments

yreal    true response vector
ypred    predicted response vector

Value

RMSLE
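A quick sanity check of both error metrics on toy vectors (values arbitrary); the comments give the conventional formulas the helpers are assumed to implement:

y.real = c(1.0, 2.0, 3.0, 4.0)
y.pred = c(1.1, 1.9, 3.2, 3.6)
enpls.rmse(y.real, y.pred)   # assumed sqrt(mean((y.real - y.pred)^2))
enpls.rmsle(y.real, y.pred)  # assumed sqrt(mean((log1p(y.pred) - log1p(y.real))^2))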

enspls.ad               Ensemble Sparse Partial Least Squares for Model Applicability Domain Evaluation

Description

Model applicability domain evaluation with ensemble sparse partial least squares.

Usage

enspls.ad(x, y, xtest, ytest, maxcomp = 5L, cvfolds = 5L,
  alpha = seq(0.2, 0.8, 0.2), space = c("sample", "variable"),
  method = c("mc", "boot"), reptimes = 500L, ratio = 0.8,
  parallel = 1L)

Arguments

x        Predictor matrix of the training set.
y        Response vector of the training set.
xtest    List, with the i-th component being the i-th test set's predictor matrix (see example code below).
ytest    List, with the i-th component being the i-th test set's response vector (see example code below).
maxcomp  Maximum number of components included within each model. If not specified, will use 5 by default.
cvfolds  Number of cross-validation folds used in each model for automatic parameter selection, default is 5.
alpha    Parameter (grid) controlling sparsity of the model. If not specified, default is seq(0.2, 0.8, 0.2).
space    Space in which to apply the resampling method. Can be the sample space ("sample") or the variable space ("variable").
method   Resampling method. "mc" (Monte-Carlo resampling) or "boot" (bootstrapping). Default is "mc".
reptimes Number of models to build with Monte-Carlo resampling or bootstrapping.
ratio    Sampling ratio used when method = "mc".
parallel Integer. Number of CPU cores to use. Default is 1 (not parallelized).

Value

A list containing:

tr.error.mean - absolute mean prediction error for training set
tr.error.median - absolute median prediction error for training set
tr.error.sd - prediction error sd for training set

tr.error.matrix - raw prediction error matrix for training set
te.error.mean - list of absolute mean prediction error for test set(s)
te.error.median - list of absolute median prediction error for test set(s)
te.error.sd - list of prediction error sd for test set(s)
te.error.matrix - list of raw prediction error matrix for test set(s)

Note

Note that for space = "variable", method can only be "mc", since bootstrapping in the variable space would create duplicated variables, and that could cause problems.

Examples

data("logd1k")
# remove low variance variables
x = logd1k$x[, -c(17, 52, 59)]
y = logd1k$y

# training set
x.tr = x[1:300, ]
y.tr = y[1:300]

# two test sets
x.te = list(
  "test.1" = x[301:400, ],
  "test.2" = x[401:500, ])
y.te = list(
  "test.1" = y[301:400],
  "test.2" = y[401:500])

ad = enspls.ad(
  x.tr, y.tr, x.te, y.te,
  maxcomp = 3, alpha = c(0.3, 0.6, 0.9),
  space = "variable", method = "mc",
  ratio = 0.8, reptimes = 10)
print(ad)
plot(ad)
# the interactive plot requires a HTML viewer
## Not run:
plot(ad, type = "interactive")
## End(Not run)
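The Note above rules out bootstrapping in the variable space; a tiny self-contained illustration of why, using a hypothetical 4-column matrix unrelated to enspls.ad:

set.seed(42)
m = matrix(rnorm(20), nrow = 5, dimnames = list(NULL, paste0("v", 1:4)))
idx = sample(ncol(m), replace = TRUE)  # bootstrap the columns (variables)
colnames(m[, idx])  # duplicated column names are likely, i.e. duplicated predictors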

enspls.fit              Ensemble Sparse Partial Least Squares Regression

Description

Ensemble sparse partial least squares regression.

Usage

enspls.fit(x, y, maxcomp = 5L, cvfolds = 5L,
  alpha = seq(0.2, 0.8, 0.2), reptimes = 500L,
  method = c("mc", "boot"), ratio = 0.8, parallel = 1L)

Arguments

x        Predictor matrix.
y        Response vector.
maxcomp  Maximum number of components included within each model. If not specified, will use 5 by default.
cvfolds  Number of cross-validation folds used in each model for automatic parameter selection, default is 5.
alpha    Parameter (grid) controlling sparsity of the model. If not specified, default is seq(0.2, 0.8, 0.2).
reptimes Number of models to build with Monte-Carlo resampling or bootstrapping.
method   Resampling method. "mc" (Monte-Carlo resampling) or "boot" (bootstrapping). Default is "mc".
ratio    Sampling ratio used when method = "mc".
parallel Integer. Number of CPU cores to use. Default is 1 (not parallelized).

Value

A list containing all sparse partial least squares model objects.

See Also

See enspls.fs for measuring feature importance with ensemble sparse partial least squares regressions. See enspls.od for outlier detection with ensemble sparse partial least squares regressions.

enspls.fs 17 data("logd1k") = logd1k$ y = logd1k$y fit = enspls.fit(, y, reptimes = 5, macomp = 3, alpha = c(0.3, 0.6, 0.9)) print(fit) predict(fit, new = ) enspls.fs Ensemble Sparse Partial Least Squares for Measuring Feature Importance Measuring feature importance with ensemble sparse partial least squares. enspls.fs(, y, macomp = 5L, cvfolds = 5L, alpha = seq(0.2, 0.8, 0.2), reptimes = 500L, method = c("mc", "boot"), ratio = 0.8, parallel = 1L) y Value macomp cvfolds alpha reptimes Predictor matri. Response vector. Maimum number of components included within each model. If not specified, will use 5 by default. Number of cross-validation folds used in each model for automatic parameter selection, default is 5. Parameter (grid) controlling sparsity of the model. If not specified, default is seq(0.2, 0.8, 0.2). Number of models to build with Monte-Carlo resampling or bootstrapping. method Resampling method. "mc" (Monte-Carlo resampling) or "boot" (bootstrapping). Default is "mc". ratio parallel A list containing two components: Sampling ratio used when method = "mc". Integer. Number of CPU cores to use. Default is 1 (not parallelized). variable.importance - a vector of variable importance coefficient.matri - original coefficient matri

See Also

See enspls.od for outlier detection with ensemble sparse partial least squares regressions. See enspls.fit for fitting ensemble sparse partial least squares regression models.

Examples

data("logd1k")
x = logd1k$x
y = logd1k$y

fs = enspls.fs(x, y, reptimes = 5, maxcomp = 2)
print(fs, nvar = 10)
plot(fs, nvar = 10)
plot(fs, type = 'boxplot', limits = c(0.05, 0.95), nvar = 10)

enspls.od               Ensemble Sparse Partial Least Squares for Outlier Detection

Description

Outlier detection with ensemble sparse partial least squares.

Usage

enspls.od(x, y, maxcomp = 5L, cvfolds = 5L,
  alpha = seq(0.2, 0.8, 0.2), reptimes = 500L,
  method = c("mc", "boot"), ratio = 0.8, parallel = 1L)

Arguments

x        Predictor matrix.
y        Response vector.
maxcomp  Maximum number of components included within each model. If not specified, will use 5 by default.
cvfolds  Number of cross-validation folds used in each model for automatic parameter selection, default is 5.
alpha    Parameter (grid) controlling sparsity of the model. If not specified, default is seq(0.2, 0.8, 0.2).
reptimes Number of models to build with Monte-Carlo resampling or bootstrapping.
method   Resampling method. "mc" (Monte-Carlo resampling) or "boot" (bootstrapping). Default is "mc".
ratio    Sampling ratio used when method = "mc".
parallel Integer. Number of CPU cores to use. Default is 1 (not parallelized).

Value

A list containing four components:

error.mean - error mean for all samples (absolute value)
error.median - error median for all samples
error.sd - error sd for all samples
predict.error.matrix - the original prediction error matrix

Note

To maximize the probability that each observation can be selected in the test set (thus the prediction uncertainty can be measured), please try setting a large reptimes.

See Also

See enspls.fs for measuring feature importance with ensemble sparse partial least squares regressions. See enspls.fit for fitting ensemble sparse partial least squares regression models.

Examples

data("logd1k")
x = logd1k$x
y = logd1k$y

od = enspls.od(
  x, y, reptimes = 5,
  maxcomp = 3, alpha = c(0.3, 0.6, 0.9))
plot(od, prob = 0.1)
plot(od, criterion = "sd", sdtimes = 1)

logd1k                  logD7.4 Data for 1,000 Compounds

Description

Distribution coefficients at pH 7.4 (logD7.4) dataset from Wang et al.

Usage

data(logd1k)

Format

A list with 2 components:

x - data frame with 1,000 rows (samples) and 80 columns (predictors)
y - numeric vector of length 1,000 (response)

The first 1,000 compounds in the original dataset were selected.

Details

This dataset contains distribution coefficients at pH 7.4 (logD7.4) for 1,000 compounds, and 80 molecular descriptors computed with RDKit.

References

Jian-Bing Wang, Dong-Sheng Cao, Min-Feng Zhu, Yong-Huan Yun, Nan Xiao, and Yi-Zeng Liang. "In silico evaluation of logD7.4 and comparison with other prediction methods." Journal of Chemometrics 29, no. 7 (2015): 389-398.

Examples

data(logd1k)
str(logd1k)

plot.cv.enpls           Plot cv.enpls Object

Description

Plot a cv.enpls object.

Usage

## S3 method for class 'cv.enpls'
plot(x, xlim = NULL, ylim = NULL, alpha = 0.8, main = NULL, ...)

Arguments

x        An object of class cv.enpls.
xlim     Vector of length 2 - x axis limits of the plot.
ylim     Vector of length 2 - y axis limits of the plot.
alpha    An alpha transparency value for points, a real number in (0, 1].
main     Plot title, not used currently.
...      Additional graphical parameters, not used currently.

See Also

See cv.enpls for cross-validation of ensemble partial least squares regression models.

Examples

data("alkanes")
x = alkanes$x
y = alkanes$y

cvfit = cv.enpls(x, y, reptimes = 10)
plot(cvfit)

plot.cv.enspls          Plot cv.enspls Object

Description

Plot a cv.enspls object.

Usage

## S3 method for class 'cv.enspls'
plot(x, xlim = NULL, ylim = NULL, alpha = 0.8, main = NULL, ...)

Arguments

x        An object of class cv.enspls.
xlim     Vector of length 2 - x axis limits of the plot.
ylim     Vector of length 2 - y axis limits of the plot.
alpha    An alpha transparency value for points, a real number in (0, 1].
main     Plot title, not used currently.
...      Additional graphical parameters, not used currently.

See Also

See cv.enspls for cross-validation of ensemble sparse partial least squares regression models.

Examples

# This example takes one minute to run
## Not run:
data("logd1k")
x = logd1k$x
y = logd1k$y

cvfit = cv.enspls(x, y, reptimes = 10)
plot(cvfit)
## End(Not run)

plot.enpls.ad           Plot enpls.ad Object

Description

Plot an enpls.ad object.

Usage

## S3 method for class 'enpls.ad'
plot(x, type = c("static", "interactive"), main = NULL, ...)

Arguments

x        An object of class enpls.ad.
type     Plot type. Can be "static" or "interactive".
main     Plot title, not used currently.
...      Additional graphical parameters, not used currently.

See Also

See enpls.ad for model applicability domain evaluation with ensemble partial least squares regressions.

plot.enpls.fs 23 data("alkanes") = alkanes$ y = alkanes$y # training set.tr = [1:100, ] y.tr = y[1:100] # two test sets.te = list( "test.1" = [101:150, ], "test.2" = [151:207, ]) y.te = list( "test.1" = y[101:150], "test.2" = y[151:207]) ad = enpls.ad(.tr, y.tr,.te, y.te, space = "variable", method = "mc", ratio = 0.9, reptimes = 50) plot(ad) # the interactive plot requires a HTML viewer ## Not run: plot(ad, type = "interactive") ## End(Not run) plot.enpls.fs Plot enpls.fs object Plot enpls.fs object ## S3 method for class 'enpls.fs' plot(, nvar = NULL, type = c("dotplot", "boplot"), limits = c(0, 1), main = NULL,...) nvar type limits An object of class enpls.fs. Number of top variables to show. Ignored if sort = FALSE. Plot type. "dotplot" or "boplot". Vector of length 2. Set boplot limits (in quantile) to remove the etreme outlier coefficients.

main     Plot title, not used currently.
...      Additional graphical parameters, not used currently.

See Also

See enpls.fs for measuring feature importance with ensemble partial least squares regressions.

Examples

data("alkanes")
x = alkanes$x
y = alkanes$y

fs = enpls.fs(x, y, reptimes = 50)
plot(fs)
plot(fs, nvar = 10)
plot(fs, type = "boxplot")
plot(fs, type = "boxplot", limits = c(0.05, 0.95))

plot.enpls.od           Plot enpls.od Object

Description

Plot an enpls.od object.

Usage

## S3 method for class 'enpls.od'
plot(x, criterion = c("quantile", "sd"),
  prob = 0.05, sdtimes = 3L, alpha = 1, main = NULL, ...)

Arguments

x         An object of class enpls.od.
criterion Criterion of being classified as an outlier, can be "quantile" or "sd".
prob      Quantile probability as the cut-off value.
sdtimes   Times of standard deviation as the cut-off value.
alpha     An alpha transparency value for points, a real number in (0, 1].
main      Plot title.
...       Additional graphical parameters for plot.

See Also

See enpls.od for outlier detection with ensemble partial least squares regressions.

Examples

data("alkanes")
x = alkanes$x
y = alkanes$y

od = enpls.od(x, y, reptimes = 50)
plot(od, criterion = "quantile")
plot(od, criterion = "sd")

plot.enspls.ad          Plot enspls.ad Object

Description

Plot an enspls.ad object.

Usage

## S3 method for class 'enspls.ad'
plot(x, type = c("static", "interactive"), main = NULL, ...)

Arguments

x        An object of class enspls.ad.
type     Plot type. Can be "static" or "interactive".
main     Plot title.
...      Additional graphical parameters for plot.

See Also

See enspls.ad for model applicability domain evaluation with ensemble sparse partial least squares regressions.

26 plot.enspls.fs data("logd1k") # remove low variance variables = logd1k$[, -c(17, 52, 59)] y = logd1k$y # training set.tr = [1:300, ] y.tr = y[1:300] # two test sets.te = list( "test.1" = [301:400, ], "test.2" = [401:500, ]) y.te = list( "test.1" = y[301:400], "test.2" = y[401:500]) ad = enspls.ad(.tr, y.tr,.te, y.te, macomp = 3, alpha = c(0.3, 0.6, 0.9), space = "variable", method = "mc", ratio = 0.8, reptimes = 10) plot(ad) # the interactive plot requires a HTML viewer ## Not run: plot(ad, type = "interactive") ## End(Not run) plot.enspls.fs Plot enspls.fs object Plot enspls.fs object ## S3 method for class 'enspls.fs' plot(, nvar = NULL, type = c("dotplot", "boplot"), limits = c(0, 1), main = NULL,...) nvar type An object of class enspls.fs. Number of top variables to show. Ignored if sort = FALSE. Plot type, can be "dotplot" or "boplot".

limits   Vector of length 2. Set boxplot limits (in quantile) to remove the extreme outlier coefficients.
main     Plot title, not used currently.
...      Additional graphical parameters, not used currently.

See Also

See enspls.fs for measuring feature importance with ensemble sparse partial least squares regressions.

Examples

data("logd1k")
x = logd1k$x
y = logd1k$y

fs = enspls.fs(x, y, reptimes = 5, maxcomp = 2)
plot(fs, nvar = 10)
plot(fs, type = "boxplot", limits = c(0.05, 0.95), nvar = 10)

plot.enspls.od          Plot enspls.od Object

Description

Plot an enspls.od object.

Usage

## S3 method for class 'enspls.od'
plot(x, criterion = c("quantile", "sd"),
  prob = 0.05, sdtimes = 3L, alpha = 1, main = NULL, ...)

Arguments

x         An object of class enspls.od.
criterion Criterion of being classified as an outlier, can be "quantile" or "sd".
prob      Quantile probability as the cut-off value.
sdtimes   Times of standard deviation as the cut-off value.
alpha     An alpha transparency value for points, a real number in (0, 1].
main      Plot title.
...       Additional graphical parameters for plot.

See Also

See enspls.od for outlier detection with ensemble sparse partial least squares regressions.

Examples

data("logd1k")
x = logd1k$x
y = logd1k$y

od = enspls.od(x, y, reptimes = 4, maxcomp = 2)
plot(od, criterion = "quantile", prob = 0.1)
plot(od, criterion = "sd", sdtimes = 1)

predict.enpls.fit       Make Predictions from a Fitted Ensemble Partial Least Squares Model

Description

Make predictions on new data by a fitted enpls.fit object.

Usage

## S3 method for class 'enpls.fit'
predict(object, newx, method = c("mean", "median"), ...)

Arguments

object   An object of class enpls.fit.
newx     New data to predict with.
method   Use "mean" or "median" to create the final prediction.
...      Additional parameters for predict.

Value

A numeric vector containing the predicted values.

See Also

See enpls.fit for fitting ensemble partial least squares regression models.
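Conceptually, the returned vector aggregates the per-model predictions row by row; a schematic sketch of that aggregation step on a stand-in matrix (not the package's actual internals):

# rows = new observations, columns = predictions from individual PLS models
pred.matrix = cbind(m1 = c(1.0, 2.0), m2 = c(1.2, 1.8), m3 = c(0.8, 2.2))
apply(pred.matrix, 1, mean)    # method = "mean"
apply(pred.matrix, 1, median)  # method = "median"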

predict.enspls.fit 29 data("alkanes") = alkanes$ y = alkanes$y fit = enpls.fit(, y, reptimes = 50) y.pred = predict(fit, new = ) plot(y, y.pred, lim = range(y), ylim = range(y)) abline(a = 0L, b = 1L) y.pred.med = predict(fit, new =, method = "median") plot(y, y.pred.med, lim = range(y), ylim = range(y)) abline(a = 0L, b = 1L) predict.enspls.fit Make Predictions from a Fitted Sparse Ensemble Partial Least Squares Model Make predictions on new data by fitted enspls.fit object. ## S3 method for class 'enspls.fit' predict(object, new, method = c("mean", "median"),...) object An object of class enspls.fit. new New data to predict with. method Use "mean" or "median" to create the final prediction.... Additional parameters for predict. Value A numeric vector containing the predicted values. See enspls.fit for fitting ensemble sparse partial least squares regression models.

30 print.cv.enpls data("logd1k") = logd1k$ y = logd1k$y fit = enspls.fit(, y, reptimes = 5, macomp = 2) y.pred = predict(fit, new = ) plot(y, y.pred, lim = range(y), ylim = range(y)) abline(a = 0L, b = 1L) y.pred.med = predict(fit, new =, method = "median") plot(y, y.pred.med, lim = range(y), ylim = range(y)) abline(a = 0L, b = 1L) print.cv.enpls Print cv.enpls Object Print cv.enpls object. ## S3 method for class 'cv.enpls' print(,...) An object of class cv.enpls.... Additional parameters for print. See cv.enpls for cross-validation of ensemble partial least squares regression models. data("alkanes") = alkanes$ y = alkanes$y cvfit = cv.enpls(, y, reptimes = 10) print(cvfit)

print.cv.enspls         Print cv.enspls Object

Description

Print a cv.enspls object.

Usage

## S3 method for class 'cv.enspls'
print(x, ...)

Arguments

x        An object of class cv.enspls.
...      Additional parameters for print.

See Also

See cv.enspls for cross-validation of ensemble sparse partial least squares regression models.

Examples

# This example takes one minute to run
## Not run:
data("logd1k")
x = logd1k$x
y = logd1k$y

cvfit = cv.enspls(x, y, reptimes = 10)
print(cvfit)
## End(Not run)

print.enpls.ad          Print enpls.ad Object

Description

Print an enpls.ad object.

Usage

## S3 method for class 'enpls.ad'
print(x, ...)

Arguments

x        An object of class enpls.ad.
...      Additional parameters for print.

See Also

See enpls.ad for model applicability domain evaluation with ensemble partial least squares regressions.

Examples

data("alkanes")
x = alkanes$x
y = alkanes$y

# training set
x.tr = x[1:100, ]
y.tr = y[1:100]

# two test sets
x.te = list(
  "test.1" = x[101:150, ],
  "test.2" = x[151:207, ])
y.te = list(
  "test.1" = y[101:150],
  "test.2" = y[151:207])

ad = enpls.ad(
  x.tr, y.tr, x.te, y.te,
  space = "variable", method = "mc",
  ratio = 0.9, reptimes = 50)
print(ad)

print.enpls.fit         Print Fitted Ensemble Partial Least Squares Object

Description

Print coefficients of each model in the enpls.fit object.

Usage

## S3 method for class 'enpls.fit'
print(x, ...)

Arguments

x        An object of class enpls.fit.
...      Additional parameters for print.

See Also

See enpls.fit for fitting ensemble partial least squares regression models.

Examples

data("alkanes")
x = alkanes$x
y = alkanes$y

fit = enpls.fit(x, y, reptimes = 50)
print(fit)

print.enpls.fs          Print enpls.fs Object

Description

Print an enpls.fs object.

Usage

## S3 method for class 'enpls.fs'
print(x, sort = TRUE, nvar = NULL, ...)

Arguments

x        An object of class enpls.fs.
sort     Should the variables be sorted in decreasing order of importance?
nvar     Number of top variables to show. Ignored if sort = FALSE.
...      Additional parameters for print.

See Also

See enpls.fs for measuring feature importance with ensemble partial least squares regressions.

Examples

data("alkanes")
x = alkanes$x
y = alkanes$y

fs = enpls.fs(x, y, reptimes = 100)
print(fs)
print(fs, nvar = 10L)

print.enpls.od          Print enpls.od Object

Description

Print an enpls.od object.

Usage

## S3 method for class 'enpls.od'
print(x, ...)

Arguments

x        An object of class enpls.od.
...      Additional parameters for print.

See Also

See enpls.od for outlier detection with ensemble partial least squares regressions.

Examples

data("alkanes")
x = alkanes$x
y = alkanes$y

od = enpls.od(x, y, reptimes = 40)
print(od)

print.enspls.ad         Print enspls.ad Object

Description

Print an enspls.ad object.

Usage

## S3 method for class 'enspls.ad'
print(x, ...)

Arguments

x        An object of class enspls.ad.
...      Additional parameters for print.

See Also

See enspls.ad for model applicability domain evaluation with ensemble sparse partial least squares regressions.

Examples

data("logd1k")
# remove low variance variables
x = logd1k$x[, -c(17, 52, 59)]
y = logd1k$y

# training set
x.tr = x[1:300, ]
y.tr = y[1:300]

# two test sets
x.te = list(
  "test.1" = x[301:400, ],
  "test.2" = x[401:500, ])
y.te = list(
  "test.1" = y[301:400],
  "test.2" = y[401:500])

ad = enspls.ad(
  x.tr, y.tr, x.te, y.te,
  maxcomp = 3, alpha = c(0.3, 0.6, 0.9),
  space = "variable", method = "mc",
  ratio = 0.8, reptimes = 10)
print(ad)

print.enspls.fit        Print Fitted Ensemble Sparse Partial Least Squares Object

Description

Print coefficients of each model in the enspls.fit object.

Usage

## S3 method for class 'enspls.fit'
print(x, ...)

Arguments

x        An object of class enspls.fit.
...      Additional parameters for print.

See Also

See enspls.fit for fitting ensemble sparse partial least squares regression models.

print.enspls.fs 37 data("logd1k") = logd1k$ y = logd1k$y fit = enspls.fit(, y, reptimes = 5, macomp = 3, alpha = c(0.3, 0.6, 0.9)) print(fit) print.enspls.fs Print enspls.fs Object Print enspls.fs object. ## S3 method for class 'enspls.fs' print(, sort = TRUE, nvar = NULL,...) sort nvar An object of class enspls.fs. Should the variables be sorted in decreasing order of importance? Number of top variables to show. Ignored if sort = FALSE.... Additional parameters for print. See enspls.fs for measuring feature importance with ensemble sparse partial least squares regressions. data("logd1k") = logd1k$ y = logd1k$y fs = enspls.fs(, y, reptimes = 5, macomp = 3,

print.enspls.od         Print enspls.od Object

Description

Print an enspls.od object.

Usage

## S3 method for class 'enspls.od'
print(x, ...)

Arguments

x        An object of class enspls.od.
...      Additional parameters for print.

See Also

See enspls.od for outlier detection with ensemble sparse partial least squares regressions.

Examples

data("logd1k")
x = logd1k$x
y = logd1k$y

od = enspls.od(
  x, y, reptimes = 5,
  maxcomp = 3, alpha = c(0.3, 0.6, 0.9))
print(od)
