Optimization in Scilab

Scilab provides a high-level matrix language and allows the user to define complex mathematical models and to connect easily to existing libraries. That is why optimization is an important and practical topic in Scilab, which offers a large collection of tools to solve linear and nonlinear optimization problems.

The table below gives an overview of the industrial-grade solvers available in Scilab and the types of optimization problems they can solve.

Objective                 Bounds  Equality  Inequalities  Problem size  Gradient needed  Solver
Linear                      y        l           l             m              -          linpro
Quadratic                   y        l           l             m              -          quapro
Quadratic                   y        l           l             m              -          qld, qpsolve
Nonlinear                   y        -           -             l           optional      optim
Nonlinear                   y        -           -             s              n          neldermead, fminsearch
Nonlinear                   y        -           -             l              n          optim_ga, optim_sa
Nonlinear Least Squares     y        -           -             l           optional      lsqrsolve, leastsq
Min-Max                     y        -           -             m              y          optim/"nd"
Multi-Obj.                  y        -           -             s              n          optim_moga
Semi-Def.                   -        l*          l*            l              n          semidef, lmisolve

For the constraint columns, the letter "l" means linear, the letter "n" means nonlinear and "l*" means linear constraints in the spectral sense. For the problem size column, the letters "s", "m" and "l" respectively mean small, medium and large.

Focus on nonlinear optimization

The optim function solves optimization problems with nonlinear objectives, with or without bound constraints on the unknowns. The quasi-Newton method optim/"qn" uses the Broyden-Fletcher-Goldfarb-Shanno (BFGS) formula to update the approximate Hessian matrix; it has an O(n²) memory requirement. The limited-memory BFGS algorithm optim/"gc" is efficient for large problems thanks to its O(n) memory requirement. Finally, the optim/"nd" algorithm is a bundle method which may be used to solve unconstrained, non-differentiable problems. For all these solvers, a function that computes the gradient g must be provided. That gradient can be computed with finite differences based on an optimal step, for example with the derivative function.

The fminsearch function is based on the simplex algorithm of Nelder and Mead (not to be confused with Dantzig's simplex for linear optimization). This unconstrained algorithm does not require the gradient of the cost function. It is efficient for small problems, i.e. up to about 10 parameters, and its memory requirement is only O(n²). It generally requires only one or two function evaluations per iteration. This algorithm is known to be able to manage "noisy" functions, i.e. situations where the cost function is the sum of a general nonlinear function and a low-magnitude noise function.

The neldermead component provides three simplex-based algorithms for solving unconstrained and nonlinearly constrained optimization problems. It gives object-oriented access to the options. The fminsearch function is, in fact, a specialized use of the neldermead component.

The flagship of Scilab is certainly the optim function, an unconstrained (or bound-constrained) nonlinear optimization solver that provides several algorithms. The calling sequence is:

fopt = optim(costf, x)
fopt = optim(costf, "b", lb, ub, x)
fopt = optim(costf, "b", lb, ub, x, algo)
[fopt, xopt] = optim(...)
[fopt, xopt, gopt] = optim(...)

where
- costf is the objective function,
- x is the initial guess,
- lb is the lower bound,
- ub is the upper bound,
- algo is the algorithm,
- fopt is the minimum function value,
- xopt is the optimal point,
- gopt is the optimal gradient.

The optim function provides 3 different algorithms:
- algo = "qn": quasi-Newton (the default solver), based on the BFGS formula,
- algo = "gc": limited-memory BFGS algorithm for large-scale optimization,
- algo = "nd": bundle method for non-differentiable problems (e.g. min-max).

Features:
- efficient optimization solvers based on robust algorithms,
- objective function defined as a Scilab macro or an external (dynamic link),
- extra parameters can be passed to the objective function (with a list or array),
- robust implementation:
  - quasi-Newton based on the update of the Cholesky factors,
  - line search of optim/"qn" based on a safeguarded cubic interpolation designed by Lemaréchal.

Figure 1: Optimization of the Rosenbrock function by the optim function.

In the following script, we compute the unconstrained optimum of the Rosenbrock function:

function [f, g, ind] = rosenbrock(x, ind)
    f = 100*(x(2)-x(1)^2)^2 + (1-x(1))^2
    g(1) = -400*(x(2)-x(1)^2)*x(1) - 2*(1-x(1))
    g(2) = 200*(x(2)-x(1)^2)
endfunction

x = [-1.2 1.0];
[fopt, xopt] = optim(rosenbrock, x)

The previous script produces the following output:

-->[fopt, xopt] = optim(rosenbrock, x)
 xopt  =
    1.    1.
 fopt  =
    0.
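The same Rosenbrock problem can also be run with bounds, or without gradients via fminsearch. The following sketch is not part of the original sheet: it only assumes the optim calling sequences listed above and Scilab's fminsearch; the helper rosenF and the bounds are introduced here for illustration.

// Sketch: bound-constrained optim run and gradient-free fminsearch run
// on the Rosenbrock function defined above.
function f = rosenF(x)
    // Cost only (no gradient), as expected by fminsearch
    f = 100*(x(2)-x(1)^2)^2 + (1-x(1))^2
endfunction

x = [-1.2 1.0];
lb = [-2 -2];            // illustrative bounds
ub = [ 2  2];
// Quasi-Newton with bound constraints ("b" calling sequence)
[fopt, xopt] = optim(rosenbrock, "b", lb, ub, x, "qn")
// Nelder-Mead simplex, no gradient required
[xopt2, fopt2] = fminsearch(rosenF, x)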

Computing the derivatives by finite differences

Scilab provides the derivative function, which computes approximate gradients based on finite differences. The calling sequence is:

g = derivative(f, x)
g = derivative(f, x, h)
g = derivative(f, x, h, order)
[g, H] = derivative(...)

where
- f is the objective function,
- x is the point where the gradient is evaluated,
- h is the step,
- order is the order of the finite difference formula,
- g is the gradient (or Jacobian matrix),
- H is the Hessian matrix.

Features:
- uses an optimal step (manages the limited precision of floating point numbers),
- can handle any type of objective function: macro, external program or library,
- provides order 1, 2 or 4 formulas.

In the following script, we compute the optimum of the Rosenbrock problem with finite differences:

function f = rosenbrockF(x)
    f = 100*(x(2)-x(1)^2)^2 + (1-x(1))^2
endfunction

function [f, g, ind] = rosenbrockCost2(x, ind)
    f = rosenbrockF(x)
    g = derivative(rosenbrockF, x', order=4)
endfunction

Figure 2: Pattern of the order 1, 2 and 4 finite difference formulas in the derivative function (gradient evaluation points in the (X1, X2) plane).
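As a quick check, the finite-difference gradient returned by derivative can be compared with the analytical gradient of the Rosenbrock function. This sketch is not part of the original sheet; it reuses rosenbrockF from the script above.

// Sketch: compare the analytical Rosenbrock gradient with the
// order-4 finite-difference approximation computed by derivative.
x = [-1.2 1.0];
g1 = -400*(x(2)-x(1)^2)*x(1) - 2*(1-x(1));
g2 = 200*(x(2)-x(1)^2);
gexact = [g1 g2];                           // analytical gradient, 1-by-2
gfd = derivative(rosenbrockF, x', order=4); // finite-difference gradient, 1-by-2
// The two rows should agree closely.
disp(gexact)
disp(gfd)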

Linear and Quadratic Optimization

The Quapro module defines linear and linear-quadratic programming solvers. The matrices defining the cost and the constraints must be full, but the quadratic term matrix is not required to be full rank.
- linpro: linear programming solver,
- quapro: linear quadratic programming solver,
- mps2linpro: converts a linear programming problem from MPS format to the linpro format.

The Quapro module is available through ATOMS:
http://atoms.scilab.org/toolboxes/quapro
To install the Quapro module:
atomsinstall("quapro");
and then restart Scilab.

The linpro function can solve linear programs in general form:

Minimize    c'*x
such that   A*x <= b
            Aeq*x = beq
            lb <= x <= ub

The following example is extracted from "Operations Research: Applications and Algorithms", Wayne L. Winston, Section 5.2, "The Computer and Sensitivity Analysis", in the "Degeneracy and Sensitivity Analysis" subsection. We consider the problem:

Min -6*x1 - 4*x2 - 3*x3 - 2*x4
such that:
2*x1 + 3*x2 +   x3 + 2*x4   <= 400
  x1 +   x2 + 2*x3 +   x4   <= 150
2*x1 +   x2 +   x3 + 0.5*x4 <= 200
3*x1 +   x2        +   x4   <= 250
x >= 0

The following script solves the problem:

c = [-6 -4 -3 -2]';
A = [
    2 3 1 2
    1 1 2 1
    2 1 1 0.5
    3 1 0 1
];
b = [400 150 200 250]';
ci = [0 0 0 0]';
cs = [%inf %inf %inf %inf]';
[xopt, lagr, fopt] = linpro(c, A, b, ci, cs)

This produces:

xopt = [50, 100, 2.842D-14, 0]

Nonlinear Least Squares

Scilab provides 2 solvers for nonlinear least squares:
- lsqrsolve: solves nonlinear least squares problems with the Levenberg-Marquardt algorithm,
- leastsq: solves nonlinear least squares problems (built over optim).

Features:
- can handle any type of objective function: macro, external program or library,
- the gradient is optional.

In the following example, we are searching for the parameters of a system of ordinary differential equations which best fit experimental data. The context is a chemical reaction for processing waters with phenolic compounds. We use the lsqrsolve function in this practical case:

function dy = mymodel(t, y, a, b)
    // The right-hand side of the Ordinary Differential Equation.
    dy(1) = -a*y(2) + y(1) + t^2 + 6*t + b
    dy(2) = b*y(1) - a*y(2) + 4*t + (a+b)*(1-t^2)
endfunction

function f = mydifferences(x, t, yexp)
    // Returns the difference between the simulated differential
    // equation and the experimental data.
    a = x(1)
    b = x(2)
    y0 = yexp(1,:)
    t0 = 0
    y_calc = ode(y0', t0, t, list(mymodel, a, b))
    diffmat = y_calc' - yexp
    // Make a column vector
    f = diffmat(:)
endfunction

function f = myadapter(x, m, t, yexp)
    // Adapts the header for lsqrsolve
    f = mydifferences(x, t, yexp)
endfunction

// 1. Experimental data
t = [0 1 2 3 4 5 6]';
yexp(:,1) = [-1 2 11 26 47 74 107]';
yexp(:,2) = [ 1 3  9 19 33 51  73]';

// 2. Optimize
x0 = [0.1; 0.4];
y0 = mydifferences(x0, t, yexp);
m = size(y0, "*");
objfun = list(myadapter, t, yexp);
[xopt, diffopt] = lsqrsolve(x0, objfun, m)

The previous script produces the following output:

-->[xopt, diffopt] = lsqrsolve(x0, objfun, m)
lsqrsolve: relative error between two consecutive iterates is at most xtol.
 diffopt  =
    0.
  - 2.195D-08
  - 4.88D-08
  - 8.743D-09
    2.539D-08
    7.487D-09
  - 2.196D-08
    0.
    6.534D-08
  - 6.167D-08
  - 7.148D-08
  - 6.18D-09
    2.43D-08
    1.671D-08
 xopt  =
    2.
    3.

Figure 3: Searching for the best parameters fitting the experimental data associated with a set of 2 ordinary differential equations (Y1 and Y2 versus t, before and after optimization).
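The same residual function can be passed to leastsq, which is built over optim and minimizes the sum of squared residuals. This sketch is not part of the original sheet; it assumes leastsq's list-based calling sequence for passing the extra parameters t and yexp, and reuses mydifferences and the data defined above.

// Sketch: solving the same fitting problem with leastsq.
// mydifferences(x, t, yexp) returns the residual vector.
x0 = [0.1; 0.4];
[ssqopt, xopt2] = leastsq(list(mydifferences, t, yexp), x0)
// ssqopt is the sum of squared residuals, xopt2 holds the fitted parameters.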

Genetic Algorithms

The genetic algorithms module in Scilab provides the following functions:
- optim_ga: a flexible genetic algorithm,
- optim_moga: a multi-objective genetic algorithm,
- optim_nsga: a multi-objective Niched Sharing Genetic Algorithm,
- optim_nsga2: a multi-objective Niched Sharing Genetic Algorithm, version 2.

The following example is the minimization of the Rastrigin function defined by

y = x1^2 + x2^2 - cos(12*x1) - cos(18*x2).

This function has several local minima, but only one global minimum, (x1*, x2*) = (0, 0), associated with the value f(x1*, x2*) = -2. We use a binary encoding of the input variables, which performs better in this case:

// 1. Define the Rastrigin function.
function y = rastriginV(x1, x2)
    // Vectorized function for contouring
    y = x1.^2 + x2.^2 - cos(12*x1) - cos(18*x2)
endfunction

function y = rastrigin(x)
    // Non-vectorized function for optimization
    y = rastriginV(x(1), x(2))
endfunction

function y = rastriginBinary(x)
    BinLen = 8
    lb = [-1; -1];
    ub = [1; 1];
    tmp = convert_to_float(x, BinLen, ub, lb)
    y = rastrigin(tmp)
endfunction

// 2. Compute the optimum.
PopSize = 100;
Proba_cross = 0.7;
Proba_mut = 0.1;
NbGen = 10;
Log = %T;
ga_params = init_param();
ga_params = add_param(ga_params, "minbound", [-1; -1]);
ga_params = add_param(ga_params, "maxbound", [1; 1]);
ga_params = add_param(ga_params, "dimension", 2);
ga_params = add_param(ga_params, "binary_length", 8);
ga_params = add_param(ga_params, "crossover_func", ..
    crossover_ga_binary);
ga_params = add_param(ga_params, "mutation_func", ..
    mutation_ga_binary);
ga_params = add_param(ga_params, "codage_func", ..
    coding_ga_binary);
ga_params = add_param(ga_params, "multi_cross", %T);

[pop_opt, fobj_pop_opt, pop_init, fobj_pop_init] = ..
    optim_ga(rastriginBinary, PopSize, NbGen, ..
    Proba_mut, Proba_cross, Log, ga_params);

The previous script produces the following output:

-->[pop_opt, fobj_pop_opt, pop_init, fobj_pop_init] = optim_ga(rastriginBinary, ..
-->    PopSize, NbGen, Proba_mut, Proba_cross, Log, ga_params);
Initialization of the population
Iter. 1 - min/max value = -1.942974/0.85538
Iter. 2 - min/max value = -1.942974/-0.492852
Iter. 3 - min/max value = -1.942974/-0.753347
Iter. 4 - min/max value = -1.942974/-0.841115
Iter. 5 - min/max value = -1.942974/-0.9851
Iter. 6 - min/max value = -1.942974/-1.94454
Iter. 7 - min/max value = -1.942974/-1.17877
Iter. 8 - min/max value = -1.98747/-1.255388
Iter. 9 - min/max value = -1.98747/-1.333186
Iter. 10 - min/max value = -1.98747/-1.4598

Figure 4: Optimization of the Rastrigin function by the optim_ga function.
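For comparison, optim_ga can also be run with its default real-coded operators, without the binary coding shown above. This sketch is not part of the original sheet; it assumes the default crossover, mutation and coding functions of the genetic algorithms module, and reuses rastrigin and the settings from the previous script.

// Sketch: real-coded run of optim_ga on the same Rastrigin function.
ga_default = init_param();
ga_default = add_param(ga_default, "minbound", [-1; -1]);
ga_default = add_param(ga_default, "maxbound", [1; 1]);
ga_default = add_param(ga_default, "dimension", 2);
[pop_opt2, fobj_pop_opt2] = optim_ga(rastrigin, PopSize, NbGen, ..
    Proba_mut, Proba_cross, Log, ga_default);
// Best individual of the final population
[fbest, ibest] = min(fobj_pop_opt2);
xbest = pop_opt2(ibest)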

Open-Source libraries
- optim is based on Modulopt, a collection of optimization solvers.
  http://www-rocq.inria.fr/~gilbert/modulopt/
- lsqrsolve is based on Minpack, a Fortran 77 code for solving nonlinear equations and nonlinear least squares problems.
  http://www.netlib.org/minpack/
- Quapro: Eduardo Casas Renteria and Cecilia Pola Mendez (Universidad de Cantabria), improved by Serge Steer (INRIA) and maintained by Allan Cornet and Michaël Baudin (Consortium Scilab - Digiteo).
- qld: designed by M.J.D. Powell (1983) and modified by K. Schittkowski, with minor modifications by A. Tits and J.L. Zhou.
- qp_solve, qpsolve: the Goldfarb-Idnani algorithm and the Quadprog package developed by Berwin A. Turlach.
- linpro, quapro: the algorithm developed by Eduardo Casas Renteria and Cecilia Pola Mendez.

Thanks to Marcio Barbalho for providing the material of the Nonlinear Least Squares example.

All scripts are available on the Scilab wiki at:
http://wiki.scilab.org/scilab%20optimization%20sheet

Scilab sheet, updated in September 2011.