Package 'SafeBayes'                                        October 20, 2016
Type Package
Title Generalized and Safe-Bayesian Ridge and Lasso Regression
Version 1.1
Depends R (>= 3.1.2), stats
Description Functions for Generalized and Safe-Bayesian Ridge and Lasso
     Regression models with both fixed and varying variance.
License GPL-2
NeedsCompilation yes
LazyLoad true
Author Rianne de Heide [aut, cre],
     Gustavo de los Campos [ctb],
     Paulino Perez Rodriguez [ctb],
     Bob Wheeler [ctb]
Maintainer Rianne de Heide <r.de.heide@cwi.nl>
Repository CRAN
Date/Publication 2016-10-20

R topics documented:

     GBLasso
     GBLassoFV
     GBRidge
     GBRidgeFV
     metroplambda
     rinvgauss
     SBLassoIlog
     SBLassoISq
     SBLassoRlog
     SBLassoRSq
     SBRidgeIlog
     SBRidgeISq
     SBRidgeRlog
     SBRidgeRSq

GBLasso                    Generalized Bayesian Lasso

Description

The function GBLasso (Generalized Bayesian Lasso) provides a Gibbs sampler to sample from the posterior of generalized Bayesian lasso regression models with learning rate η.

Usage

GBLasso(y, X = NULL, eta = 1, prior = NULL, niter = 1100, burnin = 100,
        thin = 10, minabsbeta = 1e-09, weights = NULL, piter = TRUE)

Arguments

y            Vector of outcome variables, numeric, NA allowed, length n.
X            Design matrix, numeric, dimension n × p, n ≥ 2.
eta          Learning rate η, numeric, 0 ≤ η ≤ 1. Default 1.
prior        List containing the following elements:
             prior$vare: prior for the variance parameter σ², with parameters $df and $S for, respectively, the degrees-of-freedom and scale parameters of an inverse-chi-square distribution. Default (0, 0).
             prior$lambda: prior for the penalty parameter λ, with three items. $value: initial value for λ; default 50. $type: can be "fixed", in which case the initial value is used as a fixed penalty parameter, or "random", in which case a prior for λ is specified; default "random". For a Gamma prior on λ²: $shape for the shape parameter and $rate for the rate parameter; for a Beta prior on λ: $shape1, $shape2 and $max, for λ proportional to Beta(λ/max, shape1, shape2). Default: Gamma(0, 0).
niter        Number of iterations, integer. Default 1100.
burnin       Number of iterations for burn-in, integer. Default 100.
thin         Number of iterations for thinning, integer. Default 10.
minabsbeta   Minimum absolute value of sampled coefficients beta, to avoid numerical problems, numeric. Default 1e-09.
weights      Vector of weights, numeric, length n. Default NULL, in which case all weights are set to 1.
piter        Print iterations, logical. Default TRUE.

Details

Details on the generalized Bayesian lasso can be found in Chapter 2 of (de Heide, 2016). The implementation is heavily based on the BLR package of (de los Campos et al., 2009).

Several authors have brought forward the idea of equipping Bayesian updating with a learning rate η, resulting in an η-generalized posterior (Vovk (1990), McAllester (2003), Seeger (2002), Catoni (2007), Audibert (2004), Zhang (2004)). Grunwald (2012) suggested its use as a method to deal with model misspecification. In the η-generalized posterior, the likelihood is raised to the power η in order to trade off the relative weight of the likelihood and the prior, where η = 1 corresponds to standard Bayes.

Value

$y           Vector of original outcome variables.
$weights     Vector of weights.
$mu          Posterior mean of the intercept.
$vare        Posterior mean of the variance.
$yhat        Posterior mean of mu + X*beta + epsilon.
$SD.yHat     Corresponding standard deviation.
$whichna     Vector with indices of missing values of y.
$fit$pd      Estimated number of effective parameters.
$fit$dic     Deviance Information Criterion.
$lambda      Posterior mean of λ.
$bl          Posterior mean of β.
$SD.bL       Corresponding standard deviation.
$tau2        Posterior mean of τ².
$prior       List containing the priors used.
$niter       Number of iterations.
$burnin      Number of iterations for burn-in.
$thin        Number of iterations for thinning.
$eta         Learning rate η.

Author(s)

R. de Heide

References

de Heide, R. 2016. The Safe-Bayesian Lasso. Master Thesis, Leiden University.

de los Campos G., H. Naya, D. Gianola, J. Crossa, A. Legarra, E. Manfredi, K. Weigel and J. Cotes. 2009. Predicting Quantitative Traits with Regression Models for Dense Molecular Markers and Pedigree. Genetics 182.

Audibert, J.Y. 2004. Bayesian generalized double pareto shrinkage. Statistica Sinica.

Catoni, O. 2007. PAC-Bayesian Supervised Classification. Lecture Notes - Monograph Series. IMS.

Grunwald, P.D. 2012. The Safe Bayesian. In: Algorithmic Learning Theory: 23rd International Conference, ALT 2012, Lyon, France, October 29-31, 2012. Proceedings. Springer Berlin Heidelberg.

McAllester, D. 2003. PAC-Bayesian stochastic model selection. Machine Learning 51(1):5-21.

Vovk, V.G. 1990. Aggregating strategies. In: Proc. COLT.

Zhang, T. 2004. Learning bounds for a generalized family of Bayesian posterior distributions. In: Advances in Neural Information Processing Systems 16, Thrun, L.S. and Schoelkopf, B., eds., MIT Press.

Examples

rm(list=ls())
library(SafeBayes)

# Simulate data
x <- runif(100, -1, 1)  # 100 random uniform x's between -1 and 1
y <- NULL
# for each x, a y that is 0 + Gaussian noise
for (i in 1:100) {
  y[i] <- 0 + rnorm(1, mean=0, sd=1/4)
}

# Now sample 100 zeros and ones (coin toss)
cointoss <- sample(0:1, 100, replace=TRUE)
# indices of the ones
indices <- which(cointoss==1)
# we replace x and y with (0,0) at the indices where the coin toss
# landed tails (1)
x[indices] <- 0
y[indices] <- 0

plot(x,y)

# Determine the generalized posterior for eta = 0.25
obj <- GBLasso(y, x, eta=0.25)
# posterior means of the coefficients beta and intercept mu
betafour <- obj$bl
mufour <- obj$mu
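A sketch of how the prior list described under Arguments can be specified. The element names follow the Arguments description; the numeric values are purely illustrative, and passing "random"/"fixed" as quoted strings for $type is an assumption, not confirmed by the original manual.

# inverse-chi-square prior on sigma^2, Gamma prior on lambda^2 (illustrative values)
prior <- list(vare=list(df=3, S=0.5),
              lambda=list(value=50, type="random", shape=0.1, rate=0.1))
obj2 <- GBLasso(y, x, eta=0.25, prior=prior)

# or keep the penalty fixed at its initial value
prior.fixed <- list(vare=list(df=3, S=0.5),
                    lambda=list(value=50, type="fixed"))
obj3 <- GBLasso(y, x, eta=0.25, prior=prior.fixed)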

GBLassoFV                  Generalized Bayesian Lasso with fixed variance

Description

The function GBLassoFV (Generalized Bayesian Lasso with Fixed Variance) provides a Gibbs sampler to sample from the posterior of generalized Bayesian lasso regression models with fixed variance and with learning rate η.

Usage

GBLassoFV(y, X = NULL, sigma2 = NULL, eta = 1, prior = NULL, niter = 1100,
          burnin = 100, thin = 10, minabsbeta = 1e-09, weights = NULL,
          piter = TRUE)

Arguments

y            Vector of outcome variables, numeric, NA allowed, length n.
X            Design matrix, numeric, dimension n × p, n ≥ 2.
sigma2       Fixed variance parameter σ², numeric. Default NULL, in which case the variance will be estimated from the data.
eta          Learning rate η, numeric, 0 ≤ η ≤ 1. Default 1.
prior        List containing the following elements:
             prior$vare: prior for the variance parameter σ², with parameters $df and $S for, respectively, the degrees-of-freedom and scale parameters of an inverse-chi-square distribution. Default (0, 0).
             prior$lambda: prior for the penalty parameter λ, with three items. $value: initial value for λ; default 50. $type: can be "fixed", in which case the initial value is used as a fixed penalty parameter, or "random", in which case a prior for λ is specified; default "random". For a Gamma prior on λ²: $shape for the shape parameter and $rate for the rate parameter; for a Beta prior on λ: $shape1, $shape2 and $max, for λ proportional to Beta(λ/max, shape1, shape2). Default: Gamma(0, 0).
niter        Number of iterations, integer. Default 1100.
burnin       Number of iterations for burn-in, integer. Default 100.
thin         Number of iterations for thinning, integer. Default 10.
minabsbeta   Minimum absolute value of sampled coefficients beta, to avoid numerical problems, numeric. Default 1e-09.
weights      Vector of weights, numeric, length n. Default NULL, in which case all weights are set to 1.
piter        Print iterations, logical. Default TRUE.

Details

Details on the generalized Bayesian lasso can be found in Chapter 2 of (de Heide, 2016). The implementation is heavily based on the BLR package of (de los Campos et al., 2009).

Several authors have brought forward the idea of equipping Bayesian updating with a learning rate η, resulting in an η-generalized posterior (Vovk (1990), McAllester (2003), Seeger (2002), Catoni (2007), Audibert (2004), Zhang (2004)). Grunwald (2012) suggested its use as a method to deal with model misspecification. In the η-generalized posterior, the likelihood is raised to the power η in order to trade off the relative weight of the likelihood and the prior, where η = 1 corresponds to standard Bayes.

Value

$y           Vector of original outcome variables.
$weights     Vector of weights.
$mu          Posterior mean of the intercept.
$vare        Posterior mean of the variance.
$yhat        Posterior mean of mu + X*beta + epsilon.
$SD.yHat     Corresponding standard deviation.
$whichna     Vector with indices of missing values of y.
$fit$pd      Estimated number of effective parameters.
$fit$dic     Deviance Information Criterion.
$lambda      Posterior mean of λ.
$bl          Posterior mean of β.
$SD.bL       Corresponding standard deviation.
$tau2        Posterior mean of τ².
$prior       List containing the priors used.
$niter       Number of iterations.
$burnin      Number of iterations for burn-in.
$thin        Number of iterations for thinning.
$eta         Learning rate η.

Author(s)

R. de Heide

References

de Heide, R. 2016. The Safe-Bayesian Lasso. Master Thesis, Leiden University.

de los Campos G., H. Naya, D. Gianola, J. Crossa, A. Legarra, E. Manfredi, K. Weigel and J. Cotes. 2009. Predicting Quantitative Traits with Regression Models for Dense Molecular Markers and Pedigree. Genetics 182.

Audibert, J.Y. 2004. Bayesian generalized double pareto shrinkage. Statistica Sinica.

Catoni, O. 2007. PAC-Bayesian Supervised Classification. Lecture Notes - Monograph Series. IMS.

Grunwald, P.D. 2012. The Safe Bayesian. In: Algorithmic Learning Theory: 23rd International Conference, ALT 2012, Lyon, France, October 29-31, 2012. Proceedings. Springer Berlin Heidelberg.

McAllester, D. 2003. PAC-Bayesian stochastic model selection. Machine Learning 51(1):5-21.

Vovk, V.G. 1990. Aggregating strategies. In: Proc. COLT.

Zhang, T. 2004. Learning bounds for a generalized family of Bayesian posterior distributions. In: Advances in Neural Information Processing Systems 16, Thrun, L.S. and Schoelkopf, B., eds., MIT Press.

Examples

rm(list=ls())
library(SafeBayes)

# Simulate data
x <- runif(100, -1, 1)  # 100 random uniform x's between -1 and 1
y <- NULL
# for each x, a y that is 0 + Gaussian noise
for (i in 1:100) {
  y[i] <- 0 + rnorm(1, mean=0, sd=1/4)
}

# Now sample 100 zeros and ones (coin toss)
cointoss <- sample(0:1, 100, replace=TRUE)
# indices of the ones
indices <- which(cointoss==1)
# we replace x and y with (0,0) at the indices where the coin toss
# landed tails (1)
x[indices] <- 0
y[indices] <- 0

plot(x,y)

# Determine the generalized posterior for eta = 0.25
obj <- GBLassoFV(y, x, eta=0.25)
# posterior means of the coefficients beta and intercept mu
betafour <- obj$bl
mufour <- obj$mu
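Since the noise in the simulation above has standard deviation 1/4, a sketch (illustrative, not part of the original manual) of supplying the known variance σ² = 1/16 instead of letting GBLassoFV estimate it:

# fixed, known variance sigma^2 = (1/4)^2 = 1/16
objfv <- GBLassoFV(y, x, sigma2=1/16, eta=0.25)
objfv$bl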

GBRidge                    Generalized Bayesian Ridge Regression

Description

The function GBRidge (Generalized Bayesian Ridge Regression) provides a Gibbs sampler to sample from the posterior of generalized Bayesian ridge regression models with learning rate η.

Usage

GBRidge(y, X = NULL, eta = 1, prior = NULL, niter = 1100, burnin = 100,
        thin = 10, minabsbeta = 1e-09, weights = NULL, piter = TRUE)

Arguments

y            Vector of outcome variables, numeric, NA allowed, length n.
X            Design matrix, numeric, dimension n × p, n ≥ 2.
eta          Learning rate η, numeric, 0 ≤ η ≤ 1. Default 1.
prior        List containing the following elements:
             prior$vare: prior for the variance parameter σ², with parameters $df and $S for, respectively, the degrees-of-freedom and scale parameters of an inverse-chi-square distribution. Default (0, 0).
             prior$varbr: prior for the variance of the Gaussian prior for the coefficients β, with parameters $df and $S for, respectively, the degrees-of-freedom and scale parameters of an inverse-chi-square distribution. Default (0, 0).
niter        Number of iterations, integer. Default 1100.
burnin       Number of iterations for burn-in, integer. Default 100.
thin         Number of iterations for thinning, integer. Default 10.
minabsbeta   Minimum absolute value of sampled coefficients beta, to avoid numerical problems, numeric. Default 1e-09.
weights      Vector of weights, numeric, length n. Default NULL, in which case all weights are set to 1.
piter        Print iterations, logical. Default TRUE.

Details

Details on generalized Bayesian regression can be found in (de Heide, 2016). The implementation is heavily based on the BLR package of (de los Campos et al., 2009).

Several authors have brought forward the idea of equipping Bayesian updating with a learning rate η, resulting in an η-generalized posterior (Vovk (1990), McAllester (2003), Seeger (2002), Catoni (2007), Audibert (2004), Zhang (2004)). Grunwald (2012) suggested its use as a method to deal with model misspecification. In the η-generalized posterior, the likelihood is raised to the power η in order to trade off the relative weight of the likelihood and the prior, where η = 1 corresponds to standard Bayes.

Value

$y           Vector of original outcome variables.
$weights     Vector of weights.
$mu          Posterior mean of the intercept.
$vare        Posterior mean of the variance.
$yhat        Posterior mean of mu + X*beta + epsilon.
$SD.yHat     Corresponding standard deviation.
$whichna     Vector with indices of missing values of y.
$fit$pd      Estimated number of effective parameters.
$fit$dic     Deviance Information Criterion.
$br          Posterior mean of β.
$SD.bR       Corresponding standard deviation.
$prior       List containing the priors used.
$niter       Number of iterations.
$burnin      Number of iterations for burn-in.
$thin        Number of iterations for thinning.
$eta         Learning rate η.

Author(s)

R. de Heide

References

de Heide, R. 2016. The Safe-Bayesian Lasso. Master Thesis, Leiden University.

de los Campos G., H. Naya, D. Gianola, J. Crossa, A. Legarra, E. Manfredi, K. Weigel and J. Cotes. 2009. Predicting Quantitative Traits with Regression Models for Dense Molecular Markers and Pedigree. Genetics 182.

Audibert, J.Y. 2004. Bayesian generalized double pareto shrinkage. Statistica Sinica.

Catoni, O. 2007. PAC-Bayesian Supervised Classification. Lecture Notes - Monograph Series. IMS.

Grunwald, P.D. 2012. The Safe Bayesian. In: Algorithmic Learning Theory: 23rd International Conference, ALT 2012, Lyon, France, October 29-31, 2012. Proceedings. Springer Berlin Heidelberg.

McAllester, D. 2003. PAC-Bayesian stochastic model selection. Machine Learning 51(1):5-21.

Vovk, V.G. 1990. Aggregating strategies. In: Proc. COLT.

Zhang, T. 2004. Learning bounds for a generalized family of Bayesian posterior distributions. In: Advances in Neural Information Processing Systems 16, Thrun, L.S. and Schoelkopf, B., eds., MIT Press.

Examples

rm(list=ls())
library(SafeBayes)

# Simulate data
x <- runif(100, -1, 1)  # 100 random uniform x's between -1 and 1
y <- NULL
# for each x, a y that is 0 + Gaussian noise
for (i in 1:100) {
  y[i] <- 0 + rnorm(1, mean=0, sd=1/4)
}

# Now sample 100 zeros and ones (coin toss)
cointoss <- sample(0:1, 100, replace=TRUE)
# indices of the ones
indices <- which(cointoss==1)
# we replace x and y with (0,0) at the indices where the coin toss
# landed tails (1)
x[indices] <- 0
y[indices] <- 0

plot(x,y)

# Determine the generalized posterior for eta = 0.25
obj <- GBRidge(y, x, eta=0.25)
# posterior means of the coefficients beta and intercept mu
betafour <- obj$br
mufour <- obj$mu
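The ridge prior list replaces prior$lambda with prior$varbr, the inverse-chi-square prior on the variance of the Gaussian prior for the coefficients. A sketch with element names taken from the Arguments description and purely illustrative values:

prior <- list(vare=list(df=3, S=0.5),    # prior for sigma^2
              varbr=list(df=3, S=0.5))   # prior for the variance of the prior on beta
obj2 <- GBRidge(y, x, eta=0.25, prior=prior)
obj2$br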

GBRidgeFV                  Generalized Bayesian Ridge Regression with fixed variance

Description

The function GBRidgeFV (Generalized Bayesian Ridge Regression with Fixed Variance) provides a Gibbs sampler to sample from the posterior of generalized Bayesian ridge regression models with fixed variance and with learning rate η.

Usage

GBRidgeFV(y, X = NULL, sigma2 = NULL, eta = 1, prior = NULL, niter = 1100,
          burnin = 100, thin = 10, minabsbeta = 1e-09, weights = NULL,
          piter = TRUE)

Arguments

y            Vector of outcome variables, numeric, NA allowed, length n.
X            Design matrix, numeric, dimension n × p, n ≥ 2.
sigma2       Fixed variance parameter σ², numeric. Default NULL, in which case the variance will be estimated from the data.
eta          Learning rate η, numeric, 0 ≤ η ≤ 1. Default 1.
prior        List containing the following elements:
             prior$vare: prior for the variance parameter σ², with parameters $df and $S for, respectively, the degrees-of-freedom and scale parameters of an inverse-chi-square distribution. Default (0, 0).
             prior$varbr: prior for the variance of the Gaussian prior for the coefficients β, with parameters $df and $S for, respectively, the degrees-of-freedom and scale parameters of an inverse-chi-square distribution. Default (0, 0).
niter        Number of iterations, integer. Default 1100.
burnin       Number of iterations for burn-in, integer. Default 100.
thin         Number of iterations for thinning, integer. Default 10.
minabsbeta   Minimum absolute value of sampled coefficients beta, to avoid numerical problems, numeric. Default 1e-09.
weights      Vector of weights, numeric, length n. Default NULL, in which case all weights are set to 1.
piter        Print iterations, logical. Default TRUE.

Details

Details on generalized Bayesian regression can be found in (de Heide, 2016). The implementation is heavily based on the BLR package of (de los Campos et al., 2009).

Several authors have brought forward the idea of equipping Bayesian updating with a learning rate η, resulting in an η-generalized posterior (Vovk (1990), McAllester (2003), Seeger (2002), Catoni (2007), Audibert (2004), Zhang (2004)). Grunwald (2012) suggested its use as a method to deal with model misspecification. In the η-generalized posterior, the likelihood is raised to the power η in order to trade off the relative weight of the likelihood and the prior, where η = 1 corresponds to standard Bayes.

Value

$y           Vector of original outcome variables.
$weights     Vector of weights.
$mu          Posterior mean of the intercept.
$vare        Posterior mean of the variance.
$yhat        Posterior mean of mu + X*beta + epsilon.
$SD.yHat     Corresponding standard deviation.
$whichna     Vector with indices of missing values of y.
$fit$pd      Estimated number of effective parameters.
$fit$dic     Deviance Information Criterion.
$br          Posterior mean of β.
$SD.bR       Corresponding standard deviation.
$prior       List containing the priors used.
$niter       Number of iterations.
$burnin      Number of iterations for burn-in.
$thin        Number of iterations for thinning.
$eta         Learning rate η.

Author(s)

R. de Heide

References

de Heide, R. 2016. The Safe-Bayesian Lasso. Master Thesis, Leiden University.

de los Campos G., H. Naya, D. Gianola, J. Crossa, A. Legarra, E. Manfredi, K. Weigel and J. Cotes. 2009. Predicting Quantitative Traits with Regression Models for Dense Molecular Markers and Pedigree. Genetics 182.

Audibert, J.Y. 2004. Bayesian generalized double pareto shrinkage. Statistica Sinica.

Catoni, O. 2007. PAC-Bayesian Supervised Classification. Lecture Notes - Monograph Series. IMS.

Grunwald, P.D. 2012. The Safe Bayesian. In: Algorithmic Learning Theory: 23rd International Conference, ALT 2012, Lyon, France, October 29-31, 2012. Proceedings. Springer Berlin Heidelberg.

McAllester, D. 2003. PAC-Bayesian stochastic model selection. Machine Learning 51(1):5-21.

Vovk, V.G. 1990. Aggregating strategies. In: Proc. COLT.

Zhang, T. 2004. Learning bounds for a generalized family of Bayesian posterior distributions. In: Advances in Neural Information Processing Systems 16, Thrun, L.S. and Schoelkopf, B., eds., MIT Press.

Examples

rm(list=ls())
library(SafeBayes)

# Simulate data
x <- runif(100, -1, 1)  # 100 random uniform x's between -1 and 1
y <- NULL
# for each x, a y that is 0 + Gaussian noise
for (i in 1:100) {
  y[i] <- 0 + rnorm(1, mean=0, sd=1/4)
}

# Now sample 100 zeros and ones (coin toss)
cointoss <- sample(0:1, 100, replace=TRUE)
# indices of the ones
indices <- which(cointoss==1)
# we replace x and y with (0,0) at the indices where the coin toss
# landed tails (1)
x[indices] <- 0
y[indices] <- 0

plot(x,y)

# Determine the generalized posterior for eta = 0.25
obj <- GBRidgeFV(y, x, eta=0.25)
# posterior means of the coefficients beta and intercept mu
betafour <- obj$br
mufour <- obj$mu

metroplambda               Metropolis-Hastings algorithm to sample lambda with a Beta prior for the Bayesian Lasso

Description

Metropolis-Hastings algorithm to sample lambda with a Beta prior, from (de los Campos et al., 2009), for the Bayesian lasso regression model.

Usage

metroplambda(tau2, lambda, shape1 = 1.2, shape2 = 1.2, max = 200, ncp = 0)

Arguments

tau2         Latent parameter tau-squared, used to form the Laplace prior on the coefficients of the lasso from a normal mixture.
lambda       Initial value for lambda.
shape1       First shape parameter for the Beta distribution.
shape2       Second shape parameter for the Beta distribution.
max          Maximum value of lambda.
ncp          Dummy parameter.

Details

Metropolis-Hastings algorithm to sample lambda with a Beta prior, from (de los Campos et al., 2009), for the Bayesian lasso regression model.

Value

Returns a value for lambda to use in the Gibbs samplers of the functions in the SafeBayes package.

Author(s)

Copied from (de los Campos et al., 2009).

References

de los Campos G., H. Naya, D. Gianola, J. Crossa, A. Legarra, E. Manfredi, K. Weigel and J. Cotes. 2009. Predicting Quantitative Traits with Regression Models for Dense Molecular Markers and Pedigree. Genetics 182.

Examples

rm(list=ls())
library(SafeBayes)

tau2 <- 1/4
lambda <- 50
metroplambda(tau2=tau2, lambda=lambda)
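A sketch of how metroplambda might be iterated, as it would be inside a Gibbs sampler that alternates lambda updates with other draws; here tau2 is held fixed purely for brevity, and all values are illustrative:

lambda <- 50
chain <- numeric(100)
for (k in 1:100) {
  # one Metropolis-Hastings update of lambda given tau2
  lambda <- metroplambda(tau2=1/4, lambda=lambda, shape1=1.2, shape2=1.2, max=200)
  chain[k] <- lambda
}
mean(chain)  # average of the sampled lambda values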

rinvgauss                  The inverse Gaussian and Wald distributions

Description

Random generator for the inverse Gaussian and Wald distributions.

Usage

rinvgauss(n, nu, lambda)

Arguments

n            Vector of numbers of observations.
nu           Vector, real and non-negative parameter; the Wald distribution results when nu = 1.
lambda       Vector, real and non-negative parameter.

Details

This function is copied from the SuppDists package by Bob Wheeler. I have copied this function here because the SuppDists package is no longer maintained, so that I can maintain the rinvgauss function for use in the functions in this package.

Probability functions:

    f(x; ν, λ) = sqrt(λ / (2πx³)) exp(−λ(x − ν)² / (2ν²x))                         (the density)

    F(x; ν, λ) = Φ[sqrt(λ/x)(x/ν − 1)] + exp(2λ/ν) Φ[−sqrt(λ/x)(x/ν + 1)]          (the distribution function)

where Φ[] is the standard normal distribution function.

The calculations are accurate to at least seven significant figures over an extended range - much larger than that of any existing tables. We have tested them for λ/ν = 10e-20 and λ/ν = 10e4. Accessible tables are those of Wasan and Roy (1969), which, unfortunately, are sometimes good to only two significant digits. Much better tables are available in an expensive CRC Handbook (1989), which are accurate to at least 7 significant digits for λ/ν from 0.02 to 4000.

These are first passage time distributions of Brownian motion with positive drift. See Whitmore and Seshadri (1987) for a heuristic derivation. The Wald (1947) form represents the average sample number in sequential analysis. The distribution has a non-monotonic failure rate, and is of considerable interest in lifetime studies: Chhikara and Folks (1977). A general reference is Seshadri (1993).

This is an extremely difficult distribution to treat numerically, and it would not have been possible without some extraordinary contributions. An elegant derivation of the distribution function is to be found in Shuster (1968). The first such derivation seems to be that of Zigangirov (1962), which, because of its inaccessibility, the author has not read. The method of generating random numbers is due to Michael, Schucany, and Haas (1976). The approximation of Whitmore and Yalovsky (1978) makes it possible to find starting values for inverting the distribution. All three papers are short, elegant, and non-trivial.

Value

The output values conform to the output from other such functions in R: rinvgauss() generates random numbers.

Author(s)

Bob Wheeler <bwheelerg@gmail.com>

References

Chhikara, R.S. and Folks, J.L. (1977). The inverse Gaussian distribution as a lifetime model. Technometrics.

CRC Handbook. (1989). Percentile points of the inverse Gaussian distribution. J.A. Koziol (ed.), Boca Raton, FL.

Michael, J.R., Schucany, W.R. and Haas, R.W. (1976). Generating random variates using transformations with multiple roots. American Statistician.

Seshadri, V. (1993). The inverse Gaussian distribution. Clarendon, Oxford.

Shuster, J. (1968). On the inverse Gaussian distribution function. Jour. Am. Stat. Assoc.

Wasan, M.T. and Roy, L.K. (1969). Tables of inverse Gaussian percentage points. Technometrics.

Wald, A. (1947). Sequential analysis. Wiley, NY.

Whitmore, G.A. and Seshadri, V. (1987). A heuristic derivation of the inverse Gaussian distribution. American Statistician.

Whitmore, G.A. and Yalovsky, M. (1978). A normalizing logarithmic transformation for inverse Gaussian random variables. Technometrics.

Zigangirov, K.S. (1962). Expression for the Wald distribution in terms of normal distribution. Radiotech. Electron.

Examples

rinvgauss(1, 1, 16)
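A quick check of the parameterization (the moment facts below are standard properties of the inverse Gaussian distribution, not taken from this manual): a draw with parameters nu and lambda has mean nu and variance nu³/lambda, so the empirical moments of a large sample should come close.

set.seed(1)
draws <- rinvgauss(100000, nu=1, lambda=16)
mean(draws)  # should be close to nu = 1
var(draws)   # should be close to nu^3/lambda = 1/16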

SBLassoIlog                I-log-Safe-Bayesian Lasso

Description

The function SBLassoIlog (I-log-Safe-Bayesian Lasso) provides a Gibbs sampler together with the I-log-Safe-Bayesian algorithm for Bayesian lasso regression models with varying variance.

Usage

SBLassoIlog(y, X = NULL, etaseq = 1, prior = NULL, niter = 1100,
            burnin = 100, thin = 10, minabsbeta = 1e-09, piter = TRUE)

Arguments

y            Vector of outcome variables, numeric, NA allowed, length n.
X            Design matrix, numeric, dimension n × p, n ≥ 2.
etaseq       Vector of learning rates η, numeric, 0 ≤ η ≤ 1. Default 1.
prior        List containing the following elements:
             prior$vare: prior for the variance parameter σ², with parameters $df and $S for, respectively, the degrees-of-freedom and scale parameters of an inverse-chi-square distribution. Default (0, 0).
             prior$lambda: prior for the penalty parameter λ, with three items. $value: initial value for λ; default 50. $type: can be "fixed", in which case the initial value is used as a fixed penalty parameter, or "random", in which case a prior for λ is specified; default "random". For a Gamma prior on λ²: $shape for the shape parameter and $rate for the rate parameter; for a Beta prior on λ: $shape1, $shape2 and $max, for λ proportional to Beta(λ/max, shape1, shape2). Default: Gamma(0, 0).
niter        Number of iterations, integer. Default 1100.
burnin       Number of iterations for burn-in, integer. Default 100.
thin         Number of iterations for thinning, integer. Default 10.
minabsbeta   Minimum absolute value of sampled coefficients beta, to avoid numerical problems, numeric. Default 1e-09.
piter        Print iterations, logical. Default TRUE.

Details

Details on the Safe-Bayesian lasso can be found in Chapter 2 of (de Heide, 2016). The implementation of the Gibbs sampler is based on the BLR package of (de los Campos et al., 2009).

The Safe-Bayesian algorithm was proposed by Grunwald (2012) as a method to learn the learning rate for the generalized posterior to deal with model misspecification.

Value

$y           Vector of original outcome variables.
$mu          Posterior mean of the intercept.
$vare        Posterior mean of the variance.
$yhat        Posterior mean of mu + X*beta + epsilon.
$SD.yHat     Corresponding standard deviation.
$whichna     Vector with indices of missing values of y.
$fit$pd      Estimated number of effective parameters.
$fit$dic     Deviance Information Criterion.
$lambda      Posterior mean of λ.
$bl          Posterior mean of β.
$SD.bL       Corresponding standard deviation.
$tau2        Posterior mean of τ².
$prior       List containing the priors used.
$niter       Number of iterations.
$burnin      Number of iterations for burn-in.
$thin        Number of iterations for thinning.
$CEallen     List of cumulative eta-in-model-log-loss per η.
$eta.min     Learning rate η minimizing the cumulative eta-in-model-log-loss.

Author(s)

R. de Heide

References

de Heide, R. 2016. The Safe-Bayesian Lasso. Master Thesis, Leiden University.

de los Campos G., H. Naya, D. Gianola, J. Crossa, A. Legarra, E. Manfredi, K. Weigel and J. Cotes. 2009. Predicting Quantitative Traits with Regression Models for Dense Molecular Markers and Pedigree. Genetics 182.

Grunwald, P.D. 2012. The Safe Bayesian. In: Algorithmic Learning Theory: 23rd International Conference, ALT 2012, Lyon, France, October 29-31, 2012. Proceedings. Springer Berlin Heidelberg.

Examples

rm(list=ls())

# Simulate data
x <- runif(10, -1, 1)  # 10 random uniform x's between -1 and 1
y <- NULL
# for each x, a y that is 0 + Gaussian noise
for (i in 1:10) {
  y[i] <- 0 + rnorm(1, mean=0, sd=1/4)
}
plot(x,y)

## Not run:
# Let I-log-SafeBayes learn the learning rate
sbobj <- SBLassoIlog(y, x, etaseq=c(1, 0.5, 0.25))
# eta
sbobj$eta.min
## End(Not run)
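A sketch of what one might do next with the fitted object; this continues the example above and is illustrative rather than part of the original manual. CEallen and eta.min are the components documented under Value, and the refit uses GBLasso (the varying-variance generalized lasso) at the learned rate:

## Not run:
# cumulative eta-in-model-log-loss for each candidate learning rate
sbobj$CEallen
# refit the generalized posterior at the learned learning rate
final <- GBLasso(y, x, eta=sbobj$eta.min)
final$bl
## End(Not run)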

SBLassoISq                 I-square-Safe-Bayesian Lasso

Description

The function SBLassoISq (I-square-Safe-Bayesian Lasso) provides a Gibbs sampler together with the I-square-Safe-Bayesian algorithm for Bayesian lasso regression models with fixed variance.

Usage

SBLassoISq(y, X = NULL, sigma2 = NULL, etaseq = 1, prior = NULL,
           niter = 1100, burnin = 100, thin = 10, minabsbeta = 1e-09,
           piter = TRUE)

Arguments

y            Vector of outcome variables, numeric, NA allowed, length n.
X            Design matrix, numeric, dimension n × p, n ≥ 2.
sigma2       Fixed variance parameter σ², numeric. Default NULL, in which case the variance will be estimated from the data at each addition of a new data point in the Safe-Bayesian algorithm.
etaseq       Vector of learning rates η, numeric, 0 ≤ η ≤ 1. Default 1.
prior        List containing the following elements:
             prior$vare: prior for the variance parameter σ², with parameters $df and $S for, respectively, the degrees-of-freedom and scale parameters of an inverse-chi-square distribution. Default (0, 0).
             prior$lambda: prior for the penalty parameter λ, with three items. $value: initial value for λ; default 50. $type: can be "fixed", in which case the initial value is used as a fixed penalty parameter, or "random", in which case a prior for λ is specified; default "random". For a Gamma prior on λ²: $shape for the shape parameter and $rate for the rate parameter; for a Beta prior on λ: $shape1, $shape2 and $max, for λ proportional to Beta(λ/max, shape1, shape2). Default: Gamma(0, 0).
niter        Number of iterations, integer. Default 1100.
burnin       Number of iterations for burn-in, integer. Default 100.
thin         Number of iterations for thinning, integer. Default 10.
minabsbeta   Minimum absolute value of sampled coefficients beta, to avoid numerical problems, numeric. Default 1e-09.
piter        Print iterations, logical. Default TRUE.

Details

Details on the Safe-Bayesian lasso can be found in Chapter 2 of (de Heide, 2016). The implementation of the Gibbs sampler is based on the BLR package of (de los Campos et al., 2009).

The Safe-Bayesian algorithm was proposed by Grunwald (2012) as a method to learn the learning rate for the generalized posterior to deal with model misspecification.

Value

$y           Vector of original outcome variables.
$mu          Posterior mean of the intercept.
$vare        Posterior mean of the variance.
$yhat        Posterior mean of mu + X*beta + epsilon.
$SD.yHat     Corresponding standard deviation.
$whichna     Vector with indices of missing values of y.
$fit$pd      Estimated number of effective parameters.
$fit$dic     Deviance Information Criterion.
$lambda      Posterior mean of λ.
$bl          Posterior mean of β.
$SD.bL       Corresponding standard deviation.
$tau2        Posterior mean of τ².
$prior       List containing the priors used.
$niter       Number of iterations.
$burnin      Number of iterations for burn-in.
$thin        Number of iterations for thinning.
$CEallen     List of cumulative eta-in-model-square-loss per η.
$eta.min     Learning rate η minimizing the cumulative eta-in-model-square-loss.

Author(s)

R. de Heide

References

de Heide, R. 2016. The Safe-Bayesian Lasso. Master Thesis, Leiden University.

de los Campos G., H. Naya, D. Gianola, J. Crossa, A. Legarra, E. Manfredi, K. Weigel and J. Cotes. 2009. Predicting Quantitative Traits with Regression Models for Dense Molecular Markers and Pedigree. Genetics 182.

Grunwald, P.D. 2012. The Safe Bayesian. In: Algorithmic Learning Theory: 23rd International Conference, ALT 2012, Lyon, France, October 29-31, 2012. Proceedings. Springer Berlin Heidelberg.

Examples

rm(list=ls())

# Simulate data
x <- runif(10, -1, 1)  # 10 random uniform x's between -1 and 1
y <- NULL
# for each x, a y that is 0 + Gaussian noise
for (i in 1:10) {
  y[i] <- 0 + rnorm(1, mean=0, sd=1/4)
}
plot(x,y)

## Not run:
# Let I-square-SafeBayes learn the learning rate
sbobj <- SBLassoISq(y, x, etaseq=c(1, 0.5, 0.25))
# eta
sbobj$eta.min
## End(Not run)


SBLassoRlog                R-log-Safe-Bayesian Lasso

Description

The function SBLassoRlog (R-log-Safe-Bayesian Lasso) provides a Gibbs sampler together with the R-log-Safe-Bayesian algorithm for Bayesian lasso regression models.

Usage

SBLassoRlog(y, X = NULL, etaseq = 1, prior = NULL, niter = 1100,
            burnin = 100, thin = 10, minabsbeta = 1e-09, piter = TRUE)

Arguments

y            Vector of outcome variables, numeric, NA allowed, length n.
X            Design matrix, numeric, dimension n × p, n ≥ 2.
etaseq       Vector of learning rates η, numeric, 0 ≤ η ≤ 1. Default 1.
prior        List containing the following elements:
             prior$vare: prior for the variance parameter σ², with parameters $df and $S for, respectively, the degrees-of-freedom and scale parameters of an inverse-chi-square distribution. Default (0, 0).
             prior$lambda: prior for the penalty parameter λ, with three items. $value: initial value for λ; default 50. $type: can be "fixed", in which case the initial value is used as a fixed penalty parameter, or "random", in which case a prior for λ is specified; default "random". For a Gamma prior on λ²: $shape for the shape parameter and $rate for the rate parameter; for a Beta prior on λ: $shape1, $shape2 and $max, for λ proportional to Beta(λ/max, shape1, shape2). Default: Gamma(0, 0).
niter        Number of iterations, integer. Default 1100.
burnin       Number of iterations for burn-in, integer. Default 100.
thin         Number of iterations for thinning, integer. Default 10.
minabsbeta   Minimum absolute value of sampled coefficients beta, to avoid numerical problems, numeric. Default 1e-09.
piter        Print iterations, logical. Default TRUE.

Details

Details on the Safe-Bayesian lasso can be found in Chapter 2 of (de Heide, 2016). The implementation of the Gibbs sampler is based on the BLR package of (de los Campos et al., 2009).

The Safe-Bayesian algorithm was proposed by Grunwald (2012) as a method to learn the learning rate for the generalized posterior to deal with model misspecification.

Value

$y           Vector of original outcome variables.
$mu          Posterior mean of the intercept.
$vare        Posterior mean of the variance.
$yhat        Posterior mean of mu + X*beta + epsilon.
$SD.yHat     Corresponding standard deviation.
$whichna     Vector with indices of missing values of y.
$fit$pd      Estimated number of effective parameters.
$fit$dic     Deviance Information Criterion.
$lambda      Posterior mean of λ.
$bl          Posterior mean of β.
$SD.bL       Corresponding standard deviation.
$tau2        Posterior mean of τ².
$prior       List containing the priors used.
$niter       Number of iterations.
$burnin      Number of iterations for burn-in.
$thin        Number of iterations for thinning.
$CMRlogEallen List of cumulative posterior-expected posterior-randomized log-loss per η.
$eta.min     Learning rate η minimizing the cumulative posterior-expected posterior-randomized log-loss.

Author(s)

R. de Heide

References

de Heide, R. 2016. The Safe-Bayesian Lasso. Master Thesis, Leiden University.

de los Campos G., H. Naya, D. Gianola, J. Crossa, A. Legarra, E. Manfredi, K. Weigel and J. Cotes. 2009. Predicting Quantitative Traits with Regression Models for Dense Molecular Markers and Pedigree. Genetics 182.

Grunwald, P.D. 2012. The Safe Bayesian. In: Algorithmic Learning Theory: 23rd International Conference, ALT 2012, Lyon, France, October 29-31, 2012. Proceedings. Springer Berlin Heidelberg.

Examples

rm(list=ls())

# Simulate data
x <- runif(10, -1, 1)  # 10 random uniform x's between -1 and 1
y <- NULL
# for each x, a y that is 0 + Gaussian noise
for (i in 1:10) {
  y[i] <- 0 + rnorm(1, mean=0, sd=1/4)
}
plot(x,y)

## Not run:
# Let R-log-SafeBayes learn the learning rate
sbobj <- SBLassoRlog(y, x, etaseq=c(1, 0.5, 0.25))
# eta
sbobj$eta.min
## End(Not run)
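An illustrative sketch (not from the original manual): the I-log and R-log variants minimize different cumulative losses, so on the same data they may select different learning rates; one can run both and compare the selected rates.

## Not run:
ilog <- SBLassoIlog(y, x, etaseq=c(1, 0.5, 0.25))
rlog <- SBLassoRlog(y, x, etaseq=c(1, 0.5, 0.25))
# learning rates selected by the two losses
c(ilog$eta.min, rlog$eta.min)
## End(Not run)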

SBLassoRSq                 R-square-Safe-Bayesian Lasso

Description

The function SBLassoRSq (R-square-Safe-Bayesian Lasso) provides a Gibbs sampler together with the R-square-Safe-Bayesian algorithm for Bayesian lasso regression models with fixed variance.

Usage

SBLassoRSq(y, X = NULL, sigma2 = NULL, etaseq = 1, prior = NULL,
           niter = 1100, burnin = 100, thin = 10, minabsbeta = 1e-09,
           piter = TRUE)

Arguments

y            Vector of outcome variables, numeric, NA allowed, length n.
X            Design matrix, numeric, dimension n × p, n ≥ 2.
sigma2       Fixed variance parameter σ², numeric. Default NULL, in which case the variance will be estimated from the data at each addition of a new data point in the Safe-Bayesian algorithm.
etaseq       Vector of learning rates η, numeric, 0 ≤ η ≤ 1. Default 1.
prior        List containing the following elements:
             prior$vare: prior for the variance parameter σ², with parameters $df and $S for, respectively, the degrees-of-freedom and scale parameters of an inverse-chi-square distribution. Default (0, 0).
             prior$lambda: prior for the penalty parameter λ, with three items. $value: initial value for λ; default 50. $type: can be "fixed", in which case the initial value is used as a fixed penalty parameter, or "random", in which case a prior for λ is specified; default "random". For a Gamma prior on λ²: $shape for the shape parameter and $rate for the rate parameter; for a Beta prior on λ: $shape1, $shape2 and $max, for λ proportional to Beta(λ/max, shape1, shape2). Default: Gamma(0, 0).
niter        Number of iterations, integer. Default 1100.
burnin       Number of iterations for burn-in, integer. Default 100.
thin         Number of iterations for thinning, integer. Default 10.
minabsbeta   Minimum absolute value of sampled coefficients beta, to avoid numerical problems, numeric. Default 1e-09.
piter        Print iterations, logical. Default TRUE.

Details

Details on the Safe-Bayesian lasso can be found in Chapter 2 of (de Heide, 2016). The implementation of the Gibbs sampler is based on the BLR package of (de los Campos et al., 2009).

The Safe-Bayesian algorithm was proposed by Grunwald (2012) as a method to learn the learning rate for the generalized posterior to deal with model misspecification.

Value

$y           Vector of original outcome variables.
$mu          Posterior mean of the intercept.
$vare        Posterior mean of the variance.
$yhat        Posterior mean of mu + X*beta + epsilon.
$SD.yHat     Corresponding standard deviation.
$whichna     Vector with indices of missing values of y.
$fit$pd      Estimated number of effective parameters.
$fit$dic     Deviance Information Criterion.
$lambda      Posterior mean of λ.
$bl          Posterior mean of β.
$SD.bL       Corresponding standard deviation.
$tau2        Posterior mean of τ².
$prior       List containing the priors used.
$niter       Number of iterations.
$burnin      Number of iterations for burn-in.
$thin        Number of iterations for thinning.
$CMRSEallen  List of cumulative posterior-expected posterior-randomized square-loss per η.
$eta.min     Learning rate η minimizing the cumulative posterior-expected posterior-randomized square-loss.

Author(s)

R. de Heide

References

de Heide, R. 2016. The Safe-Bayesian Lasso. Master Thesis, Leiden University.

de los Campos G., H. Naya, D. Gianola, J. Crossa, A. Legarra, E. Manfredi, K. Weigel and J. Cotes. 2009. Predicting Quantitative Traits with Regression Models for Dense Molecular Markers and Pedigree. Genetics 182.

Grunwald, P.D. 2012. The Safe Bayesian. In: Algorithmic Learning Theory: 23rd International Conference, ALT 2012, Lyon, France, October 29-31, 2012. Proceedings. Springer Berlin Heidelberg.

Examples

rm(list=ls())

# Simulate data
x <- runif(10, -1, 1)  # 10 random uniform x's between -1 and 1
y <- NULL
# for each x, a y that is 0 + Gaussian noise
for (i in 1:10) {
  y[i] <- 0 + rnorm(1, mean=0, sd=1/4)
}
plot(x,y)

## Not run:
# Let R-square-SafeBayes learn the learning rate
sbobj <- SBLassoRSq(y, x, etaseq=c(1, 0.5, 0.25))
# eta
sbobj$eta.min
## End(Not run)

SBRidgeIlog                I-log-Safe-Bayesian Ridge Regression

Description

The function SBRidgeIlog (I-log-Safe-Bayesian Ridge Regression) provides a Gibbs sampler together with the I-log-Safe-Bayesian algorithm for ridge regression models with varying variance.

Usage

SBRidgeIlog(y, X = NULL, etaseq = 1, prior = NULL, niter = 1100,
            burnin = 100, thin = 10, minabsbeta = 1e-09, piter = TRUE)

Arguments

y            Vector of outcome variables, numeric, NA allowed, length n.
X            Design matrix, numeric, dimension n × p, n ≥ 2.
etaseq       Vector of learning rates η, numeric, 0 ≤ η ≤ 1. Default 1.
prior        List containing the following elements:
             prior$vare: prior for the variance parameter σ², with parameters $df and $S for, respectively, the degrees-of-freedom and scale parameters of an inverse-chi-square distribution. Default (0, 0).
             prior$varbr: prior for the variance of the Gaussian prior for the coefficients β, with parameters $df and $S for, respectively, the degrees-of-freedom and scale parameters of an inverse-chi-square distribution. Default (0, 0).
niter        Number of iterations, integer. Default 1100.
burnin       Number of iterations for burn-in, integer. Default 100.
thin         Number of iterations for thinning, integer. Default 10.
minabsbeta   Minimum absolute value of sampled coefficients beta, to avoid numerical problems, numeric. Default 1e-09.
piter        Print iterations, logical. Default TRUE.

Details

Details on generalized Bayesian regression can be found in (de Heide, 2016). The implementation of the Gibbs sampler is based on the BLR package of (de los Campos et al., 2009).

The Safe-Bayesian algorithm was proposed by Grunwald (2012) as a method to learn the learning rate for the generalized posterior to deal with model misspecification.

Value

$y           Vector of original outcome variables.
$mu          Posterior mean of the intercept.
$vare        Posterior mean of the variance.
$yhat        Posterior mean of mu + X*beta + epsilon.
$SD.yHat     Corresponding standard deviation.
$whichna     Vector with indices of missing values of y.
$fit$pd      Estimated number of effective parameters.
$fit$dic     Deviance Information Criterion.
$br          Posterior mean of β.
$SD.bR       Corresponding standard deviation.
$prior       List containing the priors used.
$niter       Number of iterations.
$burnin      Number of iterations for burn-in.
$thin        Number of iterations for thinning.
$CEallen     List of cumulative eta-in-model-log-loss per η.
$eta.min     Learning rate η minimizing the cumulative eta-in-model-log-loss.

Author(s)

R. de Heide

References

de Heide, R. 2016. The Safe-Bayesian Lasso. Master Thesis, Leiden University.

de los Campos G., H. Naya, D. Gianola, J. Crossa, A. Legarra, E. Manfredi, K. Weigel and J. Cotes. 2009. Predicting Quantitative Traits with Regression Models for Dense Molecular Markers and Pedigree. Genetics 182.

Grunwald, P.D. 2012. The Safe Bayesian. In: Algorithmic Learning Theory: 23rd International Conference, ALT 2012, Lyon, France, October 29-31, 2012. Proceedings. Springer Berlin Heidelberg.

Examples

rm(list=ls())

# Simulate data
x <- runif(10, -1, 1)  # 10 random uniform x's between -1 and 1
y <- NULL
# for each x, a y that is 0 + Gaussian noise
for (i in 1:10) {
  y[i] <- 0 + rnorm(1, mean=0, sd=1/4)
}
plot(x,y)

## Not run:
# Let I-log-SafeBayes learn the learning rate
sbobj <- SBRidgeIlog(y, x, etaseq=c(1, 0.5, 0.25))
# eta
sbobj$eta.min
## End(Not run)


SBRidgeISq                 I-square-Safe-Bayesian Ridge Regression

Description

The function SBRidgeISq (I-square-Safe-Bayesian Ridge Regression) provides a Gibbs sampler together with the I-square-Safe-Bayesian algorithm for ridge regression models with fixed variance.

Usage

SBRidgeISq(y, X = NULL, sigma2 = NULL, etaseq = 1, prior = NULL,
           niter = 1100, burnin = 100, thin = 10, minabsbeta = 1e-09,
           piter = TRUE)

Arguments

y            Vector of outcome variables, numeric, NA allowed, length n.
X            Design matrix, numeric, dimension n × p, n ≥ 2.
sigma2       Fixed variance parameter σ², numeric. Default NULL, in which case the variance will be estimated from the data at each addition of a new data point in the Safe-Bayesian algorithm.
etaseq       Vector of learning rates η, numeric, 0 ≤ η ≤ 1. Default 1.
prior        List containing the following elements:
             prior$vare: prior for the variance parameter σ², with parameters $df and $S for, respectively, the degrees-of-freedom and scale parameters of an inverse-chi-square distribution. Default (0, 0).
             prior$varbr: prior for the variance of the Gaussian prior for the coefficients β, with parameters $df and $S for, respectively, the degrees-of-freedom and scale parameters of an inverse-chi-square distribution. Default (0, 0).
niter        Number of iterations, integer. Default 1100.
burnin       Number of iterations for burn-in, integer. Default 100.
thin         Number of iterations for thinning, integer. Default 10.
minabsbeta   Minimum absolute value of sampled coefficients beta, to avoid numerical problems, numeric. Default 1e-09.
piter        Print iterations, logical. Default TRUE.

Details

Details on generalized Bayesian regression can be found in (de Heide, 2016). The implementation of the Gibbs sampler is based on the BLR package of (de los Campos et al., 2009).

The Safe-Bayesian algorithm was proposed by Grunwald (2012) as a method to learn the learning rate for the generalized posterior to deal with model misspecification.

Value

$y           Vector of original outcome variables.
$mu          Posterior mean of the intercept.
$vare        Posterior mean of the variance.
$yhat        Posterior mean of mu + X*beta + epsilon.
$SD.yHat     Corresponding standard deviation.
$whichna     Vector with indices of missing values of y.
$fit$pd      Estimated number of effective parameters.
$fit$dic     Deviance Information Criterion.
$br          Posterior mean of β.
$SD.bR       Corresponding standard deviation.
$prior       List containing the priors used.
$niter       Number of iterations.
$burnin      Number of iterations for burn-in.
$thin        Number of iterations for thinning.
$CEallen     List of cumulative eta-in-model-square-loss per η.
$eta.min     Learning rate η minimizing the cumulative eta-in-model-square-loss.

Author(s)

R. de Heide

References

de Heide, R. 2016. The Safe-Bayesian Lasso. Master Thesis, Leiden University.

de los Campos G., H. Naya, D. Gianola, J. Crossa, A. Legarra, E. Manfredi, K. Weigel and J. Cotes. 2009. Predicting Quantitative Traits with Regression Models for Dense Molecular Markers and Pedigree. Genetics 182.

Grunwald, P.D. 2012. The Safe Bayesian. In: Algorithmic Learning Theory: 23rd International Conference, ALT 2012, Lyon, France, October 29-31, 2012. Proceedings. Springer Berlin Heidelberg.

Examples

rm(list=ls())

# Simulate data
x <- runif(10, -1, 1)  # 10 random uniform x's between -1 and 1
y <- NULL
# for each x, a y that is 0 + Gaussian noise
for (i in 1:10) {
  y[i] <- 0 + rnorm(1, mean=0, sd=1/4)
}
plot(x,y)

## Not run:
# Let I-square-SafeBayes learn the learning rate
sbobj <- SBRidgeISq(y, x, etaseq=c(1, 0.5, 0.25))
# eta
sbobj$eta.min
## End(Not run)


SBRidgeRlog                R-log-Safe-Bayesian Ridge Regression

Description

The function SBRidgeRlog (R-log-Safe-Bayesian Ridge Regression) provides a Gibbs sampler together with the R-log-Safe-Bayesian algorithm for ridge regression models with varying variance.

Usage

SBRidgeRlog(y, X = NULL, etaseq = 1, prior = NULL, niter = 1100,
            burnin = 100, thin = 10, minabsbeta = 1e-09, piter = TRUE)

Arguments

y            Vector of outcome variables, numeric, NA allowed, length n.
X            Design matrix, numeric, dimension n × p, n ≥ 2.
etaseq       Vector of learning rates η, numeric, 0 ≤ η ≤ 1. Default 1.
prior        List containing the following elements:
             prior$vare: prior for the variance parameter σ², with parameters $df and $S for, respectively, the degrees-of-freedom and scale parameters of an inverse-chi-square distribution. Default (0, 0).
             prior$varbr: prior for the variance of the Gaussian prior for the coefficients β, with parameters $df and $S for, respectively, the degrees-of-freedom and scale parameters of an inverse-chi-square distribution. Default (0, 0).
niter        Number of iterations, integer. Default 1100.
burnin       Number of iterations for burn-in, integer. Default 100.
thin         Number of iterations for thinning, integer. Default 10.
minabsbeta   Minimum absolute value of sampled coefficients beta, to avoid numerical problems, numeric. Default 1e-09.
piter        Print iterations, logical. Default TRUE.

Details

Details on generalized Bayesian regression can be found in (de Heide, 2016). The implementation of the Gibbs sampler is based on the BLR package of (de los Campos et al., 2009).

The Safe-Bayesian algorithm was proposed by Grunwald (2012) as a method to learn the learning rate for the generalized posterior to deal with model misspecification.

Value

$y           Vector of original outcome variables.
$mu          Posterior mean of the intercept.
$vare        Posterior mean of the variance.
$yhat        Posterior mean of mu + X*beta + epsilon.
$SD.yHat     Corresponding standard deviation.
$whichna     Vector with indices of missing values of y.
$fit$pd      Estimated number of effective parameters.
$fit$dic     Deviance Information Criterion.
$br          Posterior mean of β.
$SD.bR       Corresponding standard deviation.
$prior       List containing the priors used.
$niter       Number of iterations.
$burnin      Number of iterations for burn-in.
$thin        Number of iterations for thinning.
$CMRlogEallen List of cumulative posterior-expected posterior-randomized log-loss per η.
$eta.min     Learning rate η minimizing the cumulative posterior-expected posterior-randomized log-loss.

Author(s)

R. de Heide

References

de Heide, R. 2016. The Safe-Bayesian Lasso. Master Thesis, Leiden University.

de los Campos G., H. Naya, D. Gianola, J. Crossa, A. Legarra, E. Manfredi, K. Weigel and J. Cotes. 2009. Predicting Quantitative Traits with Regression Models for Dense Molecular Markers and Pedigree. Genetics 182.

Grunwald, P.D. 2012. The Safe Bayesian. In: Algorithmic Learning Theory: 23rd International Conference, ALT 2012, Lyon, France, October 29-31, 2012. Proceedings. Springer Berlin Heidelberg.

Examples

rm(list=ls())

# Simulate data
x <- runif(10, -1, 1)  # 10 random uniform x's between -1 and 1
y <- NULL
# for each x, a y that is 0 + Gaussian noise
for (i in 1:10) {
  y[i] <- 0 + rnorm(1, mean=0, sd=1/4)
}
plot(x,y)

## Not run:
# Let R-log-SafeBayes learn the learning rate
sbobj <- SBRidgeRlog(y, x, etaseq=c(1, 0.5, 0.25))
# eta
sbobj$eta.min
## End(Not run)


SBRidgeRSq                 R-square-Safe-Bayesian Ridge Regression

Description

The function SBRidgeRSq (R-square-Safe-Bayesian Ridge Regression) provides a Gibbs sampler together with the R-square-Safe-Bayesian algorithm for ridge regression models with fixed variance.

Usage

SBRidgeRSq(y, X = NULL, sigma2 = NULL, etaseq = 1, prior = NULL,
           niter = 1100, burnin = 100, thin = 10, minabsbeta = 1e-09,
           piter = TRUE)


More information

Calibration and emulation of TIE-GCM

Calibration and emulation of TIE-GCM Calibration and emulation of TIE-GCM Serge Guillas School of Mathematics Georgia Institute of Technology Jonathan Rougier University of Bristol Big Thanks to Crystal Linkletter (SFU-SAMSI summer school)

More information

Package MCMC4Extremes

Package MCMC4Extremes Type Package Package MCMC4Extremes July 14, 2016 Title Posterior Distribution of Extreme Models in R Version 1.1 Author Fernando Ferraz do Nascimento [aut, cre], Wyara Vanesa Moura e Silva [aut, ctb] Maintainer

More information

A Nonparametric Bayesian Approach to Detecting Spatial Activation Patterns in fmri Data

A Nonparametric Bayesian Approach to Detecting Spatial Activation Patterns in fmri Data A Nonparametric Bayesian Approach to Detecting Spatial Activation Patterns in fmri Data Seyoung Kim, Padhraic Smyth, and Hal Stern Bren School of Information and Computer Sciences University of California,

More information

Package acebayes. R topics documented: November 21, Type Package

Package acebayes. R topics documented: November 21, Type Package Type Package Package acebayes November 21, 2018 Title Optimal Bayesian Experimental Design using the ACE Algorithm Version 1.5.2 Date 2018-11-21 Author Antony M. Overstall, David C. Woods & Maria Adamou

More information

Package bayesdp. July 10, 2018

Package bayesdp. July 10, 2018 Type Package Package bayesdp July 10, 2018 Title Tools for the Bayesian Discount Prior Function Version 1.3.2 Date 2018-07-10 Depends R (>= 3.2.3), ggplot2, survival, methods Functions for data augmentation

More information

Package RAMP. May 25, 2017

Package RAMP. May 25, 2017 Type Package Package RAMP May 25, 2017 Title Regularized Generalized Linear Models with Interaction Effects Version 2.0.1 Date 2017-05-24 Author Yang Feng, Ning Hao and Hao Helen Zhang Maintainer Yang

More information

EE613 Machine Learning for Engineers LINEAR REGRESSION. Sylvain Calinon Robot Learning & Interaction Group Idiap Research Institute Nov.

EE613 Machine Learning for Engineers LINEAR REGRESSION. Sylvain Calinon Robot Learning & Interaction Group Idiap Research Institute Nov. EE613 Machine Learning for Engineers LINEAR REGRESSION Sylvain Calinon Robot Learning & Interaction Group Idiap Research Institute Nov. 4, 2015 1 Outline Multivariate ordinary least squares Singular value

More information

The grplasso Package

The grplasso Package The grplasso Package June 27, 2007 Type Package Title Fitting user specified models with Group Lasso penalty Version 0.2-1 Date 2007-06-27 Author Lukas Meier Maintainer Lukas Meier

More information

GS3. Andrés Legarra. March 5, Genomic Selection Gibbs Sampling Gauss Seidel

GS3. Andrés Legarra. March 5, Genomic Selection Gibbs Sampling Gauss Seidel GS3 Genomic Selection Gibbs Sampling Gauss Seidel Andrés Legarra March 5, 2008 andres.legarra [at] toulouse.inra.fr INRA, UR 631, F-31326 Auzeville, France 1 Contents 1 Introduction 3 1.1 History...............................

More information

Package hergm. R topics documented: January 10, Version Date

Package hergm. R topics documented: January 10, Version Date Version 3.1-0 Date 2016-09-22 Package hergm January 10, 2017 Title Hierarchical Exponential-Family Random Graph Models Author Michael Schweinberger [aut, cre], Mark S.

More information

Bayesian Estimation for Skew Normal Distributions Using Data Augmentation

Bayesian Estimation for Skew Normal Distributions Using Data Augmentation The Korean Communications in Statistics Vol. 12 No. 2, 2005 pp. 323-333 Bayesian Estimation for Skew Normal Distributions Using Data Augmentation Hea-Jung Kim 1) Abstract In this paper, we develop a MCMC

More information

Package SeleMix. R topics documented: November 22, 2016

Package SeleMix. R topics documented: November 22, 2016 Package SeleMix November 22, 2016 Type Package Title Selective Editing via Mixture Models Version 1.0.1 Date 2016-11-22 Author Ugo Guarnera, Teresa Buglielli Maintainer Teresa Buglielli

More information

EE613 Machine Learning for Engineers LINEAR REGRESSION. Sylvain Calinon Robot Learning & Interaction Group Idiap Research Institute Nov.

EE613 Machine Learning for Engineers LINEAR REGRESSION. Sylvain Calinon Robot Learning & Interaction Group Idiap Research Institute Nov. EE613 Machine Learning for Engineers LINEAR REGRESSION Sylvain Calinon Robot Learning & Interaction Group Idiap Research Institute Nov. 9, 2017 1 Outline Multivariate ordinary least squares Matlab code:

More information

What is machine learning?

What is machine learning? Machine learning, pattern recognition and statistical data modelling Lecture 12. The last lecture Coryn Bailer-Jones 1 What is machine learning? Data description and interpretation finding simpler relationship

More information

CSSS 510: Lab 2. Introduction to Maximum Likelihood Estimation

CSSS 510: Lab 2. Introduction to Maximum Likelihood Estimation CSSS 510: Lab 2 Introduction to Maximum Likelihood Estimation 2018-10-12 0. Agenda 1. Housekeeping: simcf, tile 2. Questions about Homework 1 or lecture 3. Simulating heteroskedastic normal data 4. Fitting

More information

Package spikeslab. February 20, 2015

Package spikeslab. February 20, 2015 Version 1.1.5 Date 2013-04-18 Package spikeslab February 20, 2015 Title Prediction and variable selection using spike and slab regression Author Hemant Ishwaran Maintainer Udaya

More information

Package endogenous. October 29, 2016

Package endogenous. October 29, 2016 Package endogenous October 29, 2016 Type Package Title Classical Simultaneous Equation Models Version 1.0 Date 2016-10-25 Maintainer Andrew J. Spieker Description Likelihood-based

More information

Markov Chain Monte Carlo (part 1)

Markov Chain Monte Carlo (part 1) Markov Chain Monte Carlo (part 1) Edps 590BAY Carolyn J. Anderson Department of Educational Psychology c Board of Trustees, University of Illinois Spring 2018 Depending on the book that you select for

More information

GS3. Andrés Legarra 1 2 Anne Ricard 3 4 Olivier Filangi 5 6. June 17, Genomic Selection Gibbs Sampling Gauss Seidel.

GS3. Andrés Legarra 1 2 Anne Ricard 3 4 Olivier Filangi 5 6. June 17, Genomic Selection Gibbs Sampling Gauss Seidel. GS3 Genomic Selection Gibbs Sampling Gauss Seidel (and BayesCπ) Andrés Legarra 1 2 Anne Ricard 3 4 Olivier Filangi 5 6 June 17, 2011 1 andres.legarra [at] toulouse.inra.fr 2 INRA, UR 631, F-31326 Auzeville,

More information

Package TBSSurvival. July 1, 2012

Package TBSSurvival. July 1, 2012 Package TBSSurvival July 1, 2012 Version 1.0 Date 2012-06-30 Title TBS Model R package Author Adriano Polpo , Cassio de Campos , D. Sinha , Stuart

More information

Package extweibquant

Package extweibquant Type Package Package extweibquant February 19, 2015 Title Estimate Lower Extreme Quantile with the Censored Weibull MLE and Censored Weibull Mixture Version 1.1 Date 2014-12-03 Author Yang (Seagle) Liu

More information

Markov chain Monte Carlo methods

Markov chain Monte Carlo methods Markov chain Monte Carlo methods (supplementary material) see also the applet http://www.lbreyer.com/classic.html February 9 6 Independent Hastings Metropolis Sampler Outline Independent Hastings Metropolis

More information

GAMs semi-parametric GLMs. Simon Wood Mathematical Sciences, University of Bath, U.K.

GAMs semi-parametric GLMs. Simon Wood Mathematical Sciences, University of Bath, U.K. GAMs semi-parametric GLMs Simon Wood Mathematical Sciences, University of Bath, U.K. Generalized linear models, GLM 1. A GLM models a univariate response, y i as g{e(y i )} = X i β where y i Exponential

More information

Package ssa. July 24, 2016

Package ssa. July 24, 2016 Title Simultaneous Signal Analysis Version 1.2.1 Package ssa July 24, 2016 Procedures for analyzing simultaneous signals, e.g., features that are simultaneously significant in two different studies. Includes

More information

Package ridge. R topics documented: February 15, Title Ridge Regression with automatic selection of the penalty parameter. Version 2.

Package ridge. R topics documented: February 15, Title Ridge Regression with automatic selection of the penalty parameter. Version 2. Package ridge February 15, 2013 Title Ridge Regression with automatic selection of the penalty parameter Version 2.1-2 Date 2012-25-09 Author Erika Cule Linear and logistic ridge regression for small data

More information

A New Method of Using Polytomous Independent Variables with Many Levels for the Binary Outcome of Big Data Analysis

A New Method of Using Polytomous Independent Variables with Many Levels for the Binary Outcome of Big Data Analysis Paper 2641-2015 A New Method of Using Polytomous Independent Variables with Many Levels for the Binary Outcome of Big Data Analysis ABSTRACT John Gao, ConstantContact; Jesse Harriott, ConstantContact;

More information

Package RobustGaSP. R topics documented: September 11, Type Package

Package RobustGaSP. R topics documented: September 11, Type Package Type Package Package RobustGaSP September 11, 2018 Title Robust Gaussian Stochastic Process Emulation Version 0.5.6 Date/Publication 2018-09-11 08:10:03 UTC Maintainer Mengyang Gu Author

More information

Linear Modeling with Bayesian Statistics

Linear Modeling with Bayesian Statistics Linear Modeling with Bayesian Statistics Bayesian Approach I I I I I Estimate probability of a parameter State degree of believe in specific parameter values Evaluate probability of hypothesis given the

More information

Package Kernelheaping

Package Kernelheaping Type Package Package Kernelheaping October 10, 2017 Title Kernel Density Estimation for Heaped and Rounded Data Version 2.1.8 Date 2017-10-04 Depends R (>= 2.15.0), MASS, ks, sparr Imports sp, plyr, fastmatch,

More information

Package EMC. February 19, 2015

Package EMC. February 19, 2015 Package EMC February 19, 2015 Type Package Title Evolutionary Monte Carlo (EMC) algorithm Version 1.3 Date 2011-12-08 Author Gopi Goswami Maintainer Gopi Goswami

More information

Package DTRlearn. April 6, 2018

Package DTRlearn. April 6, 2018 Type Package Package DTRlearn April 6, 2018 Title Learning Algorithms for Dynamic Treatment Regimes Version 1.3 Date 2018-4-05 Author Ying Liu, Yuanjia Wang, Donglin Zeng Maintainer Ying Liu

More information

Package citools. October 20, 2018

Package citools. October 20, 2018 Type Package Package citools October 20, 2018 Title Confidence or Prediction Intervals, Quantiles, and Probabilities for Statistical Models Version 0.5.0 Maintainer John Haman Functions

More information

Package rgcvpack. February 20, Index 6. Fitting Thin Plate Smoothing Spline. Fit thin plate splines of any order with user specified knots

Package rgcvpack. February 20, Index 6. Fitting Thin Plate Smoothing Spline. Fit thin plate splines of any order with user specified knots Version 0.1-4 Date 2013/10/25 Title R Interface for GCVPACK Fortran Package Author Xianhong Xie Package rgcvpack February 20, 2015 Maintainer Xianhong Xie

More information

Modeling Criminal Careers as Departures From a Unimodal Population Age-Crime Curve: The Case of Marijuana Use

Modeling Criminal Careers as Departures From a Unimodal Population Age-Crime Curve: The Case of Marijuana Use Modeling Criminal Careers as Departures From a Unimodal Population Curve: The Case of Marijuana Use Donatello Telesca, Elena A. Erosheva, Derek A. Kreader, & Ross Matsueda April 15, 2014 extends Telesca

More information

Package lgarch. September 15, 2015

Package lgarch. September 15, 2015 Type Package Package lgarch September 15, 2015 Title Simulation and Estimation of Log-GARCH Models Version 0.6-2 Depends R (>= 2.15.0), zoo Date 2015-09-14 Author Genaro Sucarrat Maintainer Genaro Sucarrat

More information

Bayesian Computation with JAGS

Bayesian Computation with JAGS JAGS is Just Another Gibbs Sampler Cross-platform Accessible from within R Bayesian Computation with JAGS What I did Downloaded and installed JAGS. In the R package installer, downloaded rjags and dependencies.

More information

Package semisup. March 10, Version Title Semi-Supervised Mixture Model

Package semisup. March 10, Version Title Semi-Supervised Mixture Model Version 1.7.1 Title Semi-Supervised Mixture Model Package semisup March 10, 2019 Description Useful for detecting SNPs with interactive effects on a quantitative trait. This R packages moves away from

More information

Package ordinalnet. December 5, 2017

Package ordinalnet. December 5, 2017 Type Package Title Penalized Ordinal Regression Version 2.4 Package ordinalnet December 5, 2017 Fits ordinal regression models with elastic net penalty. Supported model families include cumulative probability,

More information

Introduction to Bayesian Analysis in Stata

Introduction to Bayesian Analysis in Stata tools Introduction to Bayesian Analysis in Gustavo Sánchez Corp LLC September 15, 2017 Porto, Portugal tools 1 Bayesian analysis: 2 Basic Concepts The tools 14: The command 15: The bayes prefix Postestimation

More information

Bayes Estimators & Ridge Regression

Bayes Estimators & Ridge Regression Bayes Estimators & Ridge Regression Readings ISLR 6 STA 521 Duke University Merlise Clyde October 27, 2017 Model Assume that we have centered (as before) and rescaled X o (original X) so that X j = X o

More information

Package treethresh. R topics documented: June 30, Version Date

Package treethresh. R topics documented: June 30, Version Date Version 0.1-11 Date 2017-06-29 Package treethresh June 30, 2017 Title Methods for Tree-Based Local Adaptive Thresholding Author Ludger Evers and Tim Heaton

More information

I How does the formulation (5) serve the purpose of the composite parameterization

I How does the formulation (5) serve the purpose of the composite parameterization Supplemental Material to Identifying Alzheimer s Disease-Related Brain Regions from Multi-Modality Neuroimaging Data using Sparse Composite Linear Discrimination Analysis I How does the formulation (5)

More information

Package StVAR. February 11, 2017

Package StVAR. February 11, 2017 Type Package Title Student's t Vector Autoregression (StVAR) Version 1.1 Date 2017-02-10 Author Niraj Poudyal Maintainer Niraj Poudyal Package StVAR February 11, 2017 Description Estimation

More information

Package clustvarsel. April 9, 2018

Package clustvarsel. April 9, 2018 Version 2.3.2 Date 2018-04-09 Package clustvarsel April 9, 2018 Title Variable Selection for Gaussian Model-Based Clustering Description Variable selection for Gaussian model-based clustering as implemented

More information

Package visualize. April 28, 2017

Package visualize. April 28, 2017 Type Package Package visualize April 28, 2017 Title Graph Probability Distributions with User Supplied Parameters and Statistics Version 4.3.0 Date 2017-04-27 Depends R (>= 3.0.0) Graphs the pdf or pmf

More information

Package TBSSurvival. January 5, 2017

Package TBSSurvival. January 5, 2017 Version 1.3 Date 2017-01-05 Package TBSSurvival January 5, 2017 Title Survival Analysis using a Transform-Both-Sides Model Author Adriano Polpo , Cassio de Campos , D.

More information

Package BAMBI. R topics documented: August 28, 2017

Package BAMBI. R topics documented: August 28, 2017 Type Package Title Bivariate Angular Mixture Models Version 1.1.1 Date 2017-08-23 Author Saptarshi Chakraborty, Samuel W.K. Wong Package BAMBI August 28, 2017 Maintainer Saptarshi Chakraborty

More information

Package samplesizelogisticcasecontrol

Package samplesizelogisticcasecontrol Package samplesizelogisticcasecontrol February 4, 2017 Title Sample Size Calculations for Case-Control Studies Version 0.0.6 Date 2017-01-31 Author Mitchell H. Gail To determine sample size for case-control

More information

Package CompGLM. April 29, 2018

Package CompGLM. April 29, 2018 Type Package Package CompGLM April 29, 2018 Title Conway-Maxwell-Poisson GLM and Distribution Functions Version 2.0 Date 2018-04-29 Author Jeffrey Pollock Maintainer URL https://github.com/jeffpollock9/compglm

More information

Package Kernelheaping

Package Kernelheaping Type Package Package Kernelheaping December 7, 2015 Title Kernel Density Estimation for Heaped and Rounded Data Version 1.2 Date 2015-12-01 Depends R (>= 2.15.0), evmix, MASS, ks, sparr Author Marcus Gross

More information

Package addhaz. September 26, 2018

Package addhaz. September 26, 2018 Package addhaz September 26, 2018 Title Binomial and Multinomial Additive Hazard Models Version 0.5 Description Functions to fit the binomial and multinomial additive hazard models and to estimate the

More information

PIANOS requirements specifications

PIANOS requirements specifications PIANOS requirements specifications Group Linja Helsinki 7th September 2005 Software Engineering Project UNIVERSITY OF HELSINKI Department of Computer Science Course 581260 Software Engineering Project

More information

The glmmml Package. August 20, 2006

The glmmml Package. August 20, 2006 The glmmml Package August 20, 2006 Version 0.65-1 Date 2006/08/20 Title Generalized linear models with clustering A Maximum Likelihood and bootstrap approach to mixed models. License GPL version 2 or newer.

More information

Package robets. March 6, Type Package

Package robets. March 6, Type Package Type Package Package robets March 6, 2018 Title Forecasting Time Series with Robust Exponential Smoothing Version 1.4 Date 2018-03-06 We provide an outlier robust alternative of the function ets() in the

More information

Package coloc. February 24, 2018

Package coloc. February 24, 2018 Type Package Package coloc February 24, 2018 Imports ggplot2, snpstats, BMA, reshape, methods, flashclust, speedglm Suggests knitr, testthat Title Colocalisation Tests of Two Genetic Traits Version 3.1

More information

Package CVR. March 22, 2017

Package CVR. March 22, 2017 Type Package Title Canonical Variate Regression Version 0.1.1 Date 2017-03-17 Author Chongliang Luo, Kun Chen. Package CVR March 22, 2017 Maintainer Chongliang Luo Perform canonical

More information

Package SAENET. June 4, 2015

Package SAENET. June 4, 2015 Type Package Package SAENET June 4, 2015 Title A Stacked Autoencoder Implementation with Interface to 'neuralnet' Version 1.1 Date 2015-06-04 An implementation of a stacked sparse autoencoder for dimension

More information

Package merror. November 3, 2015

Package merror. November 3, 2015 Version 2.0.2 Date 2015-10-20 Package merror November 3, 2015 Author Title Accuracy and Precision of Measurements N>=3 methods are used to measure each of n items. The data are used

More information

Machine Learning / Jan 27, 2010

Machine Learning / Jan 27, 2010 Revisiting Logistic Regression & Naïve Bayes Aarti Singh Machine Learning 10-701/15-781 Jan 27, 2010 Generative and Discriminative Classifiers Training classifiers involves learning a mapping f: X -> Y,

More information

Package MfUSampler. June 13, 2017

Package MfUSampler. June 13, 2017 Package MfUSampler June 13, 2017 Type Package Title Multivariate-from-Univariate (MfU) MCMC Sampler Version 1.0.4 Date 2017-06-09 Author Alireza S. Mahani, Mansour T.A. Sharabiani Maintainer Alireza S.

More information

Package calibrar. August 29, 2016

Package calibrar. August 29, 2016 Version 0.2.0 Package calibrar August 29, 2016 Title Automated Parameter Estimation for Complex (Ecological) Models Automated parameter estimation for complex (ecological) models in R. This package allows

More information

Package plgp. February 20, 2015

Package plgp. February 20, 2015 Type Package Title Particle Learning of Gaussian Processes Version 1.1-7 Date 2014-11-27 Package plgp February 20, 2015 Author Robert B. Gramacy Maintainer Robert B. Gramacy

More information

Package smoothmest. February 20, 2015

Package smoothmest. February 20, 2015 Package smoothmest February 20, 2015 Title Smoothed M-estimators for 1-dimensional location Version 0.1-2 Date 2012-08-22 Author Christian Hennig Depends R (>= 2.0), MASS Some

More information

Fathom Dynamic Data TM Version 2 Specifications

Fathom Dynamic Data TM Version 2 Specifications Data Sources Fathom Dynamic Data TM Version 2 Specifications Use data from one of the many sample documents that come with Fathom. Enter your own data by typing into a case table. Paste data from other

More information

Package hiernet. March 18, 2018

Package hiernet. March 18, 2018 Title A Lasso for Hierarchical Interactions Version 1.7 Author Jacob Bien and Rob Tibshirani Package hiernet March 18, 2018 Fits sparse interaction models for continuous and binary responses subject to

More information