Package optimization


Type: Package
Title: Flexible Optimization of Complex Loss Functions with State and
      Parameter Space Constraints
Version: 1.0-7
Date: 2017-10-20
Author: Kai Husmann and Alexander Lange
Maintainer: Kai Husmann <khusman1@uni-goettingen.de>
Description: Flexible optimizer with numerous input specifications for detailed
      parameterisation. Designed for complex loss functions with state and
      parameter space constraints. Visualization tools for validation and
      analysis of the convergence are included.
License: GPL (>= 2)
Depends: R (>= 3.2.0), Rcpp (>= 0.12.12)
Imports: colorspace
LinkingTo: Rcpp
RoxygenNote: 6.0.1
Suggests: R.rsp
VignetteBuilder: R.rsp
NeedsCompilation: yes
Repository: CRAN
Date/Publication: 2017-10-24 15:08:59 UTC

R topics documented:
      optimization-package
      optim_nm
      optim_sa
      plot.optim_nmsa

optimization-package    Flexible Optimization of Complex Loss Functions with
                        State and Parameter Space Constraints

Description

Flexible optimizer with numerous input specifications for detailed parameterisation. Designed
for complex loss functions with state and parameter space constraints. Visualization tools for
validation and analysis of the convergence are included.

Details

Package:  optimization
Type:     Package
Version:  1.0-6
Date:     2017-09-23
License:  GPL-2

Author(s)

Kai Husmann and Alexander Lange

Maintainer: Kai Husmann <khusman1@uni-goettingen.de>

References

Corana, A., Marchesi, M., Martini, C. and Ridella, S. (1987). Minimizing Multimodal Functions
of Continuous Variables with the Simulated Annealing Algorithm. ACM Transactions on
Mathematical Software, 13(3):262-280.

Gao, F. and Han, L. (2012). Implementing the Nelder-Mead simplex algorithm with adaptive
parameters. Computational Optimization and Applications, 51(1):259-277.

Geiger, C. and Kanzow, C. (1999). Das Nelder-Mead-Verfahren. Numerische Verfahren zur
Lösung unrestringierter Optimierungsaufgaben.

Kirkpatrick, S., Gelatt, C. D. and Vecchi, M. P. (1983). Optimization by Simulated Annealing.
Science, 220(4598):671-680.

Nelder, J. and Mead, R. (1965). A simplex method for function minimization. Computer
Journal, 7(4).

Pronzato, L., Walter, E., Venot, A. and Lebruchec, J.-F. (1984). A general-purpose global
optimizer: Implementation and applications. Mathematics and Computers in Simulation,
26(5):412-422.

See Also

optim_nm, optim_sa, optim, plot

Examples

hi <- function(x) {
  (x[1]**2 + x[2] - 11)**2 + (x[1] + x[2]**2 - 7)**2
}

optim_nm(fun = hi, k = 2)

optim_sa(fun = hi, start = c(runif(2, min = -1, max = 1)),
         trace = FALSE,
         lower = c(-4, -4), upper = c(4, 4),
         control = list(dyn_rf = FALSE, rf = 1.2, t0 = 10,
                        nlimit = 100, r = 0.6, t_min = 0.1))

optim_nm    Optimization with Nelder-Mead

Description

This function contains a direct search algorithm that minimizes or maximizes an objective
function with respect to its input parameters.

Usage

optim_nm(fun, k = 0, start, maximum = FALSE, trace = FALSE,
         alpha = 1, beta = 2, gamma = 1/2, delta = 1/2,
         tol = 0.00001, exit = 500, edge = 1)

Arguments

fun      Function to minimize or maximize. It should return a single scalar value.

k        Number of parameters of the objective function.

start    Optional vector with starting values. The number of values must be equal to k. The
         initial simplex is constructed around this start vector.

maximum  Logical. The default is FALSE.

trace    Logical. If TRUE, interim results are stored. Necessary for the plot function.
         Default is FALSE.

alpha    A positive scalar which indicates the size of the reflected simplex. The value 1
         leads to a reflected simplex of the same size as the former iteration.

beta     A positive scalar which indicates the size of the expanded simplex. It is usually
         twice as high as alpha and must be higher than alpha.

gamma    A positive scalar which indicates the size of either the outside contracted simplex
         or the inside contracted simplex. It is usually half as high as alpha and must be
         smaller than alpha.

delta    A positive scalar which indicates the size of the shrunken simplex. It is usually
         half as high as alpha and must be smaller than alpha.

tol      A positive scalar describing the tolerance at which the distances between the
         function responses of the simplex vertices are close enough to zero to terminate
         the algorithm.

exit     A positive scalar giving the maximum number of iterations the algorithm is allowed
         to take. It is used to prevent infinite loops. When optimizing functions of higher
         dimension, the algorithm is quite likely to need more than 500 iterations, so the
         value should be adjusted to the specific optimization problem.

edge     A positive scalar providing the edge length of the initial simplex. It is useful to
         adjust the edge length if the initial guess is close to the global optimum or if the
         parameter space of the loss function is relatively small.

Details

The Nelder-Mead method is a comparatively simple heuristic optimization algorithm. It is,
however, only useful for relatively simple optimization problems without many local minima
and with low dimension (n < 10); for such problems its speed and accuracy are rather useful.
Moreover, Nelder-Mead is able to optimize functions without derivatives, and the handling of
the optimization function is quite easy because there are only a few parameters to adjust.
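For orientation, the roles of alpha, beta, gamma and delta correspond to the trial points of a
standard Nelder-Mead iteration. The sketch below illustrates the textbook update rules
(Nelder and Mead, 1965); it is not the package's internal code. Here x_worst denotes the
worst vertex and x_cent the centroid of the remaining vertices:

# Illustration of the standard Nelder-Mead trial points (not package internals).
alpha <- 1; beta <- 2; gamma <- 1/2; delta <- 1/2   # defaults from the Usage section
x_cent  <- c(0, 0)   # centroid of all vertices except the worst
x_worst <- c(1, 1)   # vertex with the worst response

x_reflect  <- x_cent + alpha * (x_cent - x_worst)   # reflection
x_expand   <- x_cent + beta  * (x_cent - x_worst)   # expansion (beta > alpha)
x_out_con  <- x_cent + gamma * (x_cent - x_worst)   # outside contraction (gamma < alpha)
x_in_con   <- x_cent - gamma * (x_cent - x_worst)   # inside contraction
# Shrink step: every vertex x_i moves towards the best vertex x_best:
# x_i <- x_best + delta * (x_i - x_best)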

Value

The output is a nmsa_optim object with the following entries:

par             Function parameters after optimization.
function_value  Function response after optimization.
trace           Matrix with interim results. NULL if trace was not activated.
fun             The loss function.
start           The initial function parameters.
lower           The lower boundaries of the function parameters.
upper           The upper boundaries of the function parameters.
control         The number of parameters and iterations of the algorithm.

Author(s)

Alexander Lange

References

Gao, F. and Han, L. (2012). Implementing the Nelder-Mead simplex algorithm with adaptive
parameters. Computational Optimization and Applications, 51(1):259-277.

Geiger, C. and Kanzow, C. (1999). Das Nelder-Mead-Verfahren. Numerische Verfahren zur
Lösung unrestringierter Optimierungsaufgaben.

Nelder, J. and Mead, R. (1965). A simplex method for function minimization. Computer
Journal, 7(4).

See Also

optim_sa, optim, plot.optim_nmsa

Examples

##### Rosenbrock function
# minimum at f(1, 1) = 0
ro <- function(x) {
  100 * (x[2] - x[1]^2)^2 + (1 - x[1])^2
}

##### Minimization with an initial guess at c(-2.048, 2.048)
optim_nm(ro, start = c(-2.048, 2.048))

##### Himmelblau's function
# 4 minima at
# f( 3.000,  2.000) = 0
# f(-2.805,  3.131) = 0
# f(-3.779, -3.283) = 0
# f( 3.584, -1.848) = 0
hi <- function(x) {
  (x[1]^2 + x[2] - 11)^2 + (x[1] + x[2]^2 - 7)^2
}

##### Minimization with defined number of parameters
optim_nm(fun = hi, k = 2)

##### Colville function with 4 parameters
co <- function(x) {
  x1 <- x[1]
  x2 <- x[2]
  x3 <- x[3]
  x4 <- x[4]

  term1 <- 100 * (x1^2 - x2)^2
  term2 <- (x1 - 1)^2
  term3 <- (x3 - 1)^2
  term4 <- 90 * (x3^2 - x4)^2
  term5 <- 10.1 * ((x2 - 1)^2 + (x4 - 1)^2)
  term6 <- 19.8 * (x2 - 1) * (x4 - 1)

  term1 + term2 + term3 + term4 + term5 + term6
}
optim_nm(co, k = 4)

##### Minimization with trace
out_nm <- optim_nm(hi, k = 2, trace = TRUE)
plot(out_nm)
plot(out_nm, 'contour')
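The entries listed under Value can also be read off the returned object directly; for example,
reusing out_nm from the trace example above (element names as given in the Value section,
printed values depend on the run):

out_nm$par             # parameter vector at the located optimum
out_nm$function_value  # function response at out_nm$par
out_nm$control         # number of parameters and iterations used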

optim_sa    Flexible Optimization with Simulated Annealing

Description

Random search optimization method with a systematic component that searches for the global
optimum. The loss function is allowed to be non-linear, non-differentiable and multimodal.
Undefined responses are allowed as well.

Usage

optim_sa(fun, start, maximization = FALSE, trace = FALSE,
         lower, upper, control = list())

Arguments

fun           Loss function to be optimized. It must return a scalar value. The variables must
              be assigned as a vector. See details.

start         Vector of initial values for the function variables. Must be of the same length as
              the variables vector of the loss function. The response of the initial variable
              combination must be defined (NA or NaN responses are not allowed).

maximization  Logical. Default is FALSE.

trace         Logical. If TRUE, interim results are stored. Necessary for the plot function.
              Default is FALSE.

lower         Vector of lower boundaries for the function variables. Must be of the same
              length as the variables vector of the function.

upper         Vector of upper boundaries for the function variables. Must be of the same
              length as the variables vector of the function.

control       List with optional further arguments to modify the optimization specifically to
              the loss function:

              vf       Function that determines the variation of the function variables for the
                       next iteration. The variation function is allowed to depend on the vector
                       of variables of the current iteration, the vector of random factors rf and
                       the temperature of the current iteration. Default is a uniformly
                       distributed random number with relative range rf.
              rf       Numeric vector. Random factor vector that determines the variation of
                       the random number of vf in relation to the dimension of the function
                       variables for the following iteration. Default is 1. If dyn_rf is enabled,
                       rf changes dynamically over time.
              dyn_rf   Logical. If TRUE, rf changes dynamically over time to ensure
                       increasing precision with an increasing number of iterations. Default is
                       TRUE, see details.
              t0       Numeric. Initial temperature. Default is 1000.
              nlimit   Integer. Maximum number of iterations of the inner loop. Default is 100.
              r        Numeric. Temperature reduction in the outer loop. Default is 0.6.
              k        Numeric. Constant for the Metropolis function. Default is 1.
              t_min    Numeric. Temperature at which the outer loop stops. Default is 0.1.
              maxgood  Integer. Break criterion to improve the algorithm performance.
                       Maximum number of loss function improvements in the inner loop.
                       Breaks the inner loop. Default is 100.

              stopac   Integer. Break criterion to improve the algorithm performance.
                       Maximum number of repetitions where the loss improvement is lower
                       than ac_acc. Breaks the inner loop. Default is 30.
              ac_acc   Numeric. Accuracy of the stopac break criterion in relation to the
                       response. Default is 1/10000 of the function value at the initial variable
                       combination.

Details

Simulated Annealing is an optimization algorithm for solving complex functions that may have
several optima. The method is composed of a random and a systematic component. Basically, it
randomly modifies the variable combination nlimit times to compare their response values.
Depending on the temperature and the constant k, there is also a likelihood of choosing variable
combinations with a worse response, so there is a time-decreasing likelihood of leaving local
optima. The Simulated Annealing method is therefore advantageous for multimodal functions.
Undefined response values (NA) are allowed as well, which can be useful for loss functions with
variable restrictions.

The high number of parameters allows a very flexible parameterization. optim_sa is able to solve
mathematical formulas as well as complex rule sets. The performance therefore highly depends
on the settings, and it is indispensable to parameterize the algorithm carefully. The control list is
pre-parameterized for loss functions of medium complexity. When solving relatively simple
functions (e.g. three-dimensional multimodal functions), the settings should be changed to
improve the performance; for complex functions, they should be changed to improve the
accuracy. The most important parameters are nlimit, r and t0.

The dynamic rf adjustment depends on the number of loss function calls which fall outside the
variable boundaries as well as on the temperature of the current iteration. The obligatory
decreasing rf ensures a relatively wide search grid at the beginning of the optimization process
that shrinks over time, and thus automatically adjusts for the trade-off between the range of the
search grid and the accuracy. See Pronzato et al. (1984) for more details. It is sometimes useful
to disable the dynamic rf changing when the most performant rf values are known. As dyn_rf
usually improves the performance as well as the accuracy, the default is TRUE.
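The manual does not spell out the acceptance rule. In the classic Metropolis scheme of the cited
literature (Kirkpatrick et al., 1983), a worse response is accepted with a probability that decays
with the size of the loss increase and with the temperature. The following is a minimal sketch of
that conventional rule, assuming the constant k from the control list enters as shown; it is an
illustration based on the references, not the package's internal code:

# Classic Metropolis acceptance rule (assumed conventional form, for illustration):
# a worse candidate is accepted with probability exp(-delta_f / (k * temp)).
metropolis_accept <- function(f_old, f_new, temp, k = 1) {
  if (f_new <= f_old) return(TRUE)   # improvements are always accepted
  runif(1) < exp(-(f_new - f_old) / (k * temp))
}

metropolis_accept(f_old = 1.0, f_new = 1.5, temp = 100)  # high temperature: likely TRUE
metropolis_accept(f_old = 1.0, f_new = 1.5, temp = 0.1)  # low temperature: almost surely FALSE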

Value

The output is a nmsa_optim list object with the following entries:

par             Function variables after optimization.
function_value  Loss function response after optimization.
trace           Matrix with interim results. NULL if trace was not activated.
fun             The loss function.
start           The initial function variables.
lower           The lower boundaries of the function variables.
upper           The upper boundaries of the function variables.
control         Control arguments, see details.

Author(s)

Kai Husmann

References

Corana, A., Marchesi, M., Martini, C. and Ridella, S. (1987). Minimizing Multimodal Functions
of Continuous Variables with the Simulated Annealing Algorithm. ACM Transactions on
Mathematical Software, 13(3):262-280.

Kirkpatrick, S., Gelatt, C. D. and Vecchi, M. P. (1983). Optimization by Simulated Annealing.
Science, 220(4598):671-680.

Pronzato, L., Walter, E., Venot, A. and Lebruchec, J.-F. (1984). A general-purpose global
optimizer: Implementation and applications. Mathematics and Computers in Simulation,
26(5):412-422.

See Also

optim_nm, optim, plot.optim_nmsa

Examples

##### Rosenbrock function
# minimum at f(1, 1) = 0
ro <- function(x) {
  100 * (x[2] - x[1]^2)^2 + (1 - x[1])^2
}

# Random start values. Example arguments for the relatively simple Rosenbrock function.
ro_sa <- optim_sa(fun = ro,
                  start = c(runif(2, min = -1, max = 1)),
                  lower = c(-5, -5),
                  upper = c(5, 5),
                  trace = TRUE,
                  control = list(t0 = 100, nlimit = 550, t_min = 0.1,
                                 dyn_rf = FALSE, rf = 1, r = 0.7))

# Visual inspection.
plot(ro_sa)
plot(ro_sa, type = "contour")

##### Holder table function
# 4 minima at
# f( 8.055,  9.665) = -19.2085
# f(-8.055,  9.665) = -19.2085
# f( 8.055, -9.665) = -19.2085
# f(-8.055, -9.665) = -19.2085

ho <- function(x) {
  x1 <- x[1]
  x2 <- x[2]

  fact1 <- sin(x1) * cos(x2)
  fact2 <- exp(abs(1 - sqrt(x1^2 + x2^2) / pi))

  -abs(fact1 * fact2)
}

# Example arguments for the relatively complex Holder table function.
optim_sa(fun = ho,
         start = c(1, 1),
         lower = c(-10, -10),
         upper = c(10, 10),
         trace = TRUE,
         control = list(dyn_rf = FALSE, rf = 1.6, t0 = 10,
                        nlimit = 200, r = 0.6, t_min = 0.1))

##### Himmelblau's function
# 4 minima at
# f( 3.000,  2.000) = 0
# f(-2.805,  3.131) = 0
# f(-3.779, -3.283) = 0
# f( 3.584, -1.848) = 0
hi <- function(x) {
  (x[1]^2 + x[2] - 11)^2 + (x[1] + x[2]^2 - 7)^2
}

# Random integer start values. Example arguments for integer programming.
# Only the integer solution will be found.
var_func_int <- function(para_0, fun_length, rf, temp = NA) {
  ret_var_func <- para_0 + sample.int(rf, fun_length, replace = TRUE) *
    ((rbinom(fun_length, 1, 0.5) * -2) + 1)
  return(ret_var_func)
}

optim_sa(fun = hi,
         start = round(c(runif(2, min = -1, max = 1))),
         trace = TRUE,
         lower = c(-4, -4),
         upper = c(4, 4),
         control = list(t0 = 1000, nlimit = 1500, r = 0.8,
                        vf = var_func_int, rf = 3))
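As noted under Details, vf may also use the temperature of the current iteration. The following
is a hypothetical continuous variation function whose step width shrinks as the system cools;
the argument names mirror the var_func_int example above, and the scaling rule is an
assumption made for illustration, not a package default:

# Hypothetical temperature-aware variation function (illustration only):
# the uniform step width is scaled down by the relative temperature.
var_func_temp <- function(para_0, fun_length, rf, temp = NA) {
  scale <- rf * min(1, temp / 1000)   # narrow the search range as temp drops
  para_0 + runif(fun_length, min = -scale, max = scale)
}

optim_sa(fun = hi, start = c(runif(2, min = -1, max = 1)),
         lower = c(-4, -4), upper = c(4, 4),
         control = list(vf = var_func_temp, t0 = 1000, nlimit = 200, r = 0.8))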

plot.optim_nmsa    Plot an optim_nmsa Object

Description

Creates convergence or contour plots for visual inspection of the optimization result. Note that
trace must be activated for this function.

In the case of a bivariate optimization, the contour plot gives an overview of the parameter
development over time in the entire state space. This is useful for the evaluation of the algorithm
settings and therefore helps to improve the performance. The development of the response can
be visualized via the convergence plot.

Usage

## S3 method for class 'optim_nmsa'
plot(x, type = 'convergence', lower = NA, upper = NA, ...)

Arguments

x      Object of type optim_nmsa to be plotted. The trace entry must not be empty.

type   Character string which determines the plot type. Either 'convergence' or 'contour' is
       possible.

lower  Vector containing the lower limits of the variables in the plot. Only useful for
       contour plots.

upper  Vector containing the upper limits of the variables in the plot. Only useful for
       contour plots.

...    Further arguments for the generic plot function.

Author(s)

Kai Husmann, Alexander Lange

See Also

optim_nm, optim_sa

Examples

# Himmelblau's function
hi <- function(x) {
  (x[1]^2 + x[2] - 11)^2 + (x[1] + x[2]^2 - 7)^2
}

out_nm <- optim_nm(hi, k = 2, trace = TRUE)

out_sa <- optim_sa(fun = hi, start = c(runif(2, min = -1, max = 1)),
                   trace = TRUE,
                   lower = c(-4, -4), upper = c(4, 4),
                   control = list(t0 = 1000, nlimit = 1500, r = 0.8))

# Examples for optimization results via the 'Nelder-Mead' method.
plot(out_nm)
plot(out_nm, type = "contour", lower = c(-4, -4), upper = c(4, 4))

# Examples for optimization results via the 'Simulated Annealing' method.
plot(out_sa)
plot(out_sa, type = "contour")
