Fachgebiet Simulation und Optimale Prozesse
Fakultät für Informatik und Automatisierung
Institut für Automatisierungs- und Systemtechnik

Laboratory experiment OPT-1: Nonlinear Optimization

Responsible professor: Prof. Dr.-Ing. habil. P. Li
Responsible for lab experiment: Dr.-Ing. S. Hopfgarten

Name, surname / Matrikel (registration) no. / Coworker / Date, mark, signature
1 Aim

The lab experiment serves to deepen the knowledge of the corresponding lectures and exercises and illustrates the procedure for the solution of unconstrained nonlinear optimization problems

    min_x f(x),  x ∈ Rⁿ,  f: Rⁿ → R¹

with different methods. Based on the software package MATLAB¹, it permits the investigation of the properties of numerical methods of unconstrained nonlinear optimization. These methods can be evaluated using either prepared test functions or user-defined optimization problems with regard to effort, convergence rate, and other criteria. A visualization program with a graphical user interface is provided, allowing a 3D representation of the cost function and an isolines diagram for two-dimensional optimization problems (n = 2). Start points can be selected graphically or given by values. Search paths of different algorithms, or of multiple computations using the same algorithm, can be compared. The graphical illustration of the iterative procedures facilitates the evaluation.

2 Realisation of the lab experiment

The software package MATLAB is the basis of this lab exercise. This package enables scientific and engineering numerical computations (numerical analysis, matrix calculations, signal processing, graphical illustrations, etc.) in an easy-to-use environment. The matrix is the basic data element (with, in general, complex elements). Problems, expressions, algorithms, etc. can be noted in a manner close to mathematical notation.
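Before the concrete methods are listed, the basic shape of such an iterative descent method can be pictured with a short sketch. The following Python fragment is illustrative only; it is not part of the lab software, and the start point and step-size parameters are chosen ad hoc. It implements the gradient method with an Armijo-type backtracking line search on the Rosenbrock valley, one of the test functions used later.

```python
def f(x):
    # Rosenbrock valley: f(x) = 100*(x2 - x1^2)^2 + (x1 - 1)^2, minimum at (1, 1)
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (x[0] - 1.0) ** 2

def grad(x):
    # analytic gradient of the Rosenbrock valley
    return [-400.0 * x[0] * (x[1] - x[0] ** 2) + 2.0 * (x[0] - 1.0),
            200.0 * (x[1] - x[0] ** 2)]

def steepest_descent(x, iters=50000, beta=0.5, c=1e-4):
    for _ in range(iters):
        g = grad(x)
        d = [-gi for gi in g]                       # search direction: -grad f(x)
        slope = sum(gi * di for gi, di in zip(g, d))
        alpha, fx = 1.0, f(x)
        # Armijo rule: halve alpha until sufficient decrease is achieved
        while f([xi + alpha * di for xi, di in zip(x, d)]) > fx + c * alpha * slope:
            alpha *= beta
        x = [xi + alpha * di for xi, di in zip(x, d)]
    return x

x_opt = steepest_descent([-1.0, 0.0])
```

Steepest descent is notoriously slow in the curved valley of this function, which is exactly the behaviour the lab asks to observe; the conjugate gradient and quasi-Newton variants listed below need far fewer iterations.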
In the framework of this lab experiment, the following derivative-free and gradient-based numerical methods of unconstrained optimization are made available:

Gradient-based methods:
- Gradient method (steepest descent)
- Conjugate gradient method according to Fletcher-Reeves, Polak-Ribiere, Hestenes-Stiefel
- Quasi-Newton methods according to Wolfe (rank-1 update), Davidon-Fletcher-Powell (rank-2 update), Broyden-Fletcher-Goldfarb-Shanno (rank-2 update), each with (approximately) exact line search
- Quasi-Newton method according to Broyden-Fletcher-Goldfarb-Shanno with Armijo step-size rule

Derivative-free methods:
- Gauss-Seidel method (coordinate search method with line search)
- Hooke-Jeeves method (pattern search)
- Rosenbrock method (rotating coordinates)
- Nelder-Mead simplex search method

Evolutionary strategies:
- Single-mutant (1+1) evolutionary strategy according to Schwefel [6]
- Multi-mutant (5/5,20) evolutionary strategy according to Rechenberg [7]
- Cascaded (1,5(5/5,20)) evolutionary strategy according to Rechenberg [7]

Hybrid methods:
- Hybrid of (1,5) evolutionary strategy and Rosenbrock method (combined by the method of direct integration)
- Hybrid of (1,5) evolutionary strategy and simplex method according to Nelder-Mead (combined by the method of direct integration)

Besides these optimization methods, a set of test functions is implemented, e.g.:
- f(x) = (1/2) xᵀPx, P a symmetric (n, n) matrix
- f(x) = (x₁² + x₂² − 2x₁)² + 0.25x₁ (function of Zettl)
- f(x) = 100(x₂ − x₁²)² + (x₁ − 1)² (Rosenbrock valley)
- f(x) = Σᵢ₌₁ⁿ xᵢ¹⁰

See the appendix for more details about the implemented search methods, a complete listing of the test functions, and hints regarding the graphical user interface.

¹ MATLAB is a registered trademark of The MathWorks, Inc.

3 Preparation (written homework)

3.1 Establish a positive definite, a negative definite, and an indefinite quadratic form, respectively, for the two-dimensional case (x ∈ R²)!

3.2 Calculate the location and the type of the stationary points of the following cost functions:
a) f(x) = 100(x₂ − x₁²)² + (x₁ − 1)² (Rosenbrock valley)
b) f(x) = x₁ exp(−x₁² − x₂²) (problem "Nice")
c) f(x) = 2x₁³ − 3x₁² − 6x₁x₂(x₁ − x₂ − 1) (problem "Fletcher25")

3.3 Repeat the theoretical fundamentals, procedure, and essential properties of selected numerical methods for unconstrained optimization (derivative-free methods: Gauss-Seidel, Hooke-Jeeves; gradient-based methods: gradient method, conjugate gradient method, quasi-Newton method)!
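For homework 3.1, and for classifying the stationary points in 3.2 via the Hessian, the definiteness of a symmetric 2×2 matrix can be checked with Sylvester's criterion. A small Python sketch follows; the example matrices are illustrative choices, not prescribed solutions.

```python
def classify(P):
    # Sylvester's criterion for a symmetric 2x2 matrix P = [[a, b], [b, d]]:
    # positive definite  <=> a > 0 and det(P) > 0
    # negative definite  <=> a < 0 and det(P) > 0
    # indefinite         <=> det(P) < 0 (eigenvalues of opposite sign)
    (a, b), (_, d) = P
    det = a * d - b * b
    if det > 0:
        return "positive definite" if a > 0 else "negative definite"
    if det < 0:
        return "indefinite"
    return "semidefinite"

quad_forms = {
    "positive definite": [[2.0, 0.0], [0.0, 3.0]],   # f(x) = x1^2 + 1.5*x2^2
    "negative definite": [[-2.0, 0.0], [0.0, -3.0]],
    "indefinite":        [[2.0, 0.0], [0.0, -3.0]],  # saddle surface
}
```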
3.4 As a result of a theoretical process analysis for a given system, a static behaviour (static characteristic curve) ŷ = (1 + a₁u)^a₂ with the unknown parameters a₁ and a₂ was determined. Using the measurement results (uᵢ, ŷᵢ), a₁ and a₂ are to be calculated by means of the least-squares method. For this optimization problem, formulate a suitable cost function f(x) with x = [a₁ a₂]ᵀ and a corresponding MATLAB M-file, which for the "Nice" problem looks like

function f=f_nice(x)
f=x(1)*exp(-x(1)^2-x(2)^2);

with x as optimization variable and f as cost function value!

4 Execution of the laboratory experiment

All following investigations are performed by means of the MATLAB program opt1 (visualization, user interface), see appendix. Please use Table 1 from the appendix for the evaluation of the convergence behaviour of the numerical methods!

4.1 Display the 3D graphs corresponding to the quadratic forms established under 3.1! For that purpose, load the cost function f_quad (data set Quad.mat) and modify the parameter P1 (Hessian matrix) according to your choice (homework)! In addition, investigate a positive-semidefinite and a negative-semidefinite quadratic form!

4.2 Solve the following two-dimensional quadratic optimization problems (cost function f_quad and data set Quad.mat, resp.; parameter P1: Hessian matrix) by means of the Gauss-Seidel method, the gradient method (steepest descent), the conjugate gradient method, and the quasi-Newton method (BFGS), starting from different start points! Answer the questions below!

a) P1 = [ … ]   b) P1 = [ … ]   c) P1 = [ … ]

Proposed start points: α) x0 = [ … ]   β) x0 = [ … ]   γ) x0 = [ … ]   δ) x0 = [ … ]

How do different start points influence the convergence behaviour of the gradient, conjugate gradient, and quasi-Newton methods? What influence does the axis position of the isolines relative to the coordinate system have on the convergence behaviour of the Gauss-Seidel method?
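The least-squares task 3.4 can also be sketched outside MATLAB. The Python fragment below is illustrative only: the characteristic ŷ = (1 + a₁u)^a₂ and the synthetic "measurements" are assumptions standing in for the real model and data, and the minimizer is a simple Gauss-Seidel-style coordinate search with a shrinking step length, in the spirit of the coordinate search method used in 4.2.

```python
def model(u, a1, a2):
    # assumed static characteristic y = (1 + a1*u)**a2
    return (1.0 + a1 * u) ** a2

# hypothetical measurements, here generated from a1 = 0.5, a2 = 2.0
u_meas = [0.0, 0.5, 1.0, 1.5, 2.0]
y_meas = [model(u, 0.5, 2.0) for u in u_meas]

def cost(x):
    # least-squares cost f(x) = sum_i (model(u_i; x) - y_i)^2
    a1, a2 = x
    total = 0.0
    for u, y in zip(u_meas, y_meas):
        if 1.0 + a1 * u <= 0.0:          # keep the power real-valued
            return float("inf")
        total += (model(u, a1, a2) - y) ** 2
    return total

def coordinate_search(x, iters=300, h=0.25, shrink=0.9):
    # Gauss-Seidel-style search: improve one coordinate at a time
    # in steps of +/- h, then shrink the step length
    for _ in range(iters):
        for i in range(len(x)):
            for step in (h, -h):
                trial = list(x)
                trial[i] += step
                while cost(trial) < cost(x):
                    x = list(trial)
                    trial[i] += step
        h *= shrink
    return x

a_fit = coordinate_search([0.1, 1.0])
```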
4.3 Investigate the behaviour of selected derivative-free and gradient-based methods using the following simple non-quadratic optimization problems:

a) 3.2a (Rosenbrock valley; cost function f_rose, data set Rose.mat); start points: [-1,0]ᵀ, [-1,1]ᵀ, [1,-1]ᵀ
b) 3.2b (cost function f_nice, data set Nice.mat); start point: [0.3,0.3]ᵀ
c) problem according to Zettl (cost function f_zettl, data set Zettl.mat); start points: [2,0.25]ᵀ, [1.2,0]ᵀ

Summarize the advantages and disadvantages of the investigated methods and derive recommendations for their usage!

4.4 Solve the model-building problem 3.4 by means of a method of your choice! Write an M-file for cost function and gradient evaluation (if necessary)! Display the identified static characteristic curve together with the measurements!

4.5 Test selected optimization methods on pathological cost functions!

a) f(x) = Σᵢ₌₁ⁿ xᵢ¹⁰ (f_10, data set F_10.mat)
b) f(x) = Σᵢ₌₁ⁿ |xᵢ| (f_abs, data set Abs.mat)
c) cost function f_patho, data set Patho.mat

References

[1] P. Li. Lecture "Steady-state optimization". TU Ilmenau.
[2] Taschenbuch Elektrotechnik. 1. Auflage, Berlin 1977, Bd. 2; 3. Auflage, Berlin 1987, Bd. 1.
[3] R. Fletcher. Practical Methods of Optimization. Vol. 1: Unconstrained Optimization. Wiley, Chichester.
[4] The MathWorks, Inc., Natick, Massachusetts: Using MATLAB.
[5] The MathWorks, Inc., Natick, Massachusetts: Optimization Toolbox for Use with MATLAB.
[6] H.-P. Schwefel. Numerische Optimierung von Computer-Modellen mittels der Evolutionsstrategie. Birkhäuser, Basel.
[7] I. Rechenberg. Evolutionsstrategie '94. frommann-holzboog, Stuttgart.
[8] T. Bäck. Handbook of Evolutionary Computation. Inst. of Physics Publ., Bristol, 1997.

A Appendix: Table 1

See next page. Appendix B (MATLAB programs) is not immediately necessary for performing the laboratory experiment. For a deeper understanding of the structure and call of the optimization routines (also for more than two optimization variables), the cost function and gradient calculation procedures, examples of procedure calls, and the visualization, the appendix delivers useful hints and can be used if needed.
Table 1: Table for convergence behaviour

Columns: method | start point | optimal solution | opt. cost f. value | no. of iterations | CPU time (value) | no. of c. f. eval. | no. of grad. eval.
B Appendix: MATLAB programs

B.1 Optimization routines

ovmeth: gradient-based search methods with (approximately) exact line search: gradient method (steepest descent); conjugate gradient method according to Fletcher-Reeves, Polak-Ribiere, Hestenes-Stiefel; quasi-Newton method according to Wolfe (rank-1 update), Davidon-Fletcher-Powell (rank-2 update), Broyden-Fletcher-Goldfarb-Shanno (rank-2 update)
ovbfgs: quasi-Newton method according to Broyden-Fletcher-Goldfarb-Shanno with Armijo step-size rule
ovevol: single-mutant (1+1) evolutionary strategy according to Schwefel
ovevol520: (5/5,20) evolutionary strategy according to Rechenberg
ovfmins: simplex method of Nelder-Mead, corresponds to fmins from the MATLAB Optimization Toolbox [5]
ovgs: Gauss-Seidel method (coordinate search method with line search)
ovhoje: Hooke-Jeeves method (pattern search)
ovrose: Rosenbrock method (rotating coordinate system)
oveses: cascaded (1,5(5/5,20)) evolutionary strategy according to Rechenberg
ovesrose: hybrid method ((1,5) evolutionary strategy and Rosenbrock method, combined by the direct integration method)
ovesfmins: hybrid method ((1,5) evolutionary strategy and simplex method according to Nelder-Mead, combined by direct integration)

The parameter lists of the optimization routines were unified as far as possible and correspond to those of the MATLAB Optimization Toolbox [5]. They contain the following parameters for all methods:

fun: cost function procedure; either the name of an M-file (e.g. f_rose) calculating the cost function value at the given point (f=fun(x)), or the cost function as a character string of MATLAB statements (e.g. x(1)^2+2*x(2)^4 with the optimization variable x)
x: start point (column or row vector)
options: specification of truncation thresholds, parameters for the methods, etc. options is a vector of length 18; it is sufficient to give only the values that differ from the standard values (standard values are given in square brackets below); options is completed up to length 18.
options(1): control of output (−1: no output, 0: standard, 1: iteration course numerically) [0]
options(2): truncation threshold (change of variables) [1.0E-4]
options(3): truncation threshold (change of cost function; and gradient norm for gradient-based methods) [1.0E-4]
options(4): not used
options(5): not used
options(6): method variant (ovmeth: calculation of search direction, start approximation of the Hessian matrix) [0]
options(7): line search algorithm (step length calculation, method dependent) [0]
options(8): cost function value at the point x after truncation
options(9): test of the gradient calculation (0: no test, 1: check of the calculated gradient with gradfun by difference approximation; only for derivative-free methods) [0]
options(10): no. of cost function evaluations
options(11): no. of gradient calculations
options(14): maximum no. of iterations [100]
options(16): step length factor (method dependent)
options(17): step length factor (method dependent)
options(18): start step length for line search [0] (only for gradient-based methods)

gradfun: procedure for the calculation of the gradient of the cost function (column vector), see fun
P1,...,P10: at most 10 parameters (matrices), given to the cost function and gradient calculation procedures (they serve to avoid global variables)

Return parameters of the optimization routines:

x: solution vector, i.e. the value of the optimization variables after truncation of the iterations
options: see above
xpath: search path. The matrix xpath contains the optimization variables, the cost function value, the CPU time, and the cumulative no. of cost function evaluations at each iteration.

B.2 Prepared cost function and gradient evaluation procedures

The names of the M-files start with f (cost function evaluation procedure) and df (gradient evaluation procedure), respectively. The data set name given in brackets is used in the graphical user interface.

f_abs (Abs.mat): sum of absolute values:
f(x) = Σᵢ₌₁ⁿ |xᵢ|

f_ackley (Ackley.mat): problem of Ackley:
f(x) = 20 + e − 20 exp(−0.2 ‖x‖/√n) − exp((Σᵢ₌₁ⁿ cos 2πxᵢ)/n)
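The completion of the options vector described above can be pictured with a small sketch. The helper below is hypothetical (the lab's actual completion routine is not shown in this manual); it pads a user-supplied vector to length 18 and fills the documented default values, with None meaning "keep the default".

```python
# defaults as documented above: options(1)=0, options(2)=options(3)=1.0e-4,
# options(6)=options(7)=options(9)=0, options(14)=100, options(18)=0
DEFAULTS = {1: 0.0, 2: 1.0e-4, 3: 1.0e-4, 6: 0.0, 7: 0.0, 9: 0.0, 14: 100.0, 18: 0.0}

def complete_options(user):
    # hypothetical helper: 1-based positions as in the MATLAB options vector
    opts = {i: 0.0 for i in range(1, 19)}
    opts.update(DEFAULTS)
    for i, v in enumerate(user, start=1):
        if v is not None:
            opts[i] = v          # a user value overrides the default
    return [opts[i] for i in range(1, 19)]

# verbose output, tighter variable-change threshold, everything else default
opts = complete_options([1, 1.0e-6])
```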
f_beale (Beale.mat): problem of Beale:
f(x) = (1.5 − x₁(1 − x₂))² + (2.25 − x₁(1 − x₂²))² + (2.625 − x₁(1 − x₂³))²

(Flet23.mat): problem of Fletcher, p. 23:
f(x) = 2x₁³ − 3x₁² − 6x₁x₂(x₁ − x₂ − 1)

(Flet25.mat): problem of Fletcher, p. 25:
f(x) = 2x₁² + x₂² − 2x₁x₂ + 2x₁³ + x₁⁴

(Flet59.mat): problem of Fletcher, p. 59:
f(x) = ((x₂ − 1)² + (2x₁ − 1)² + (2x₂ − 1)² − 2/3)²

f_kowa: parameter estimation problem with the least-squares method (n = 4)

(Kursaw.mat): problem of Kursawe:
f(x) = Σᵢ₌₁ⁿ |xᵢ| sin(xᵢ³)

f_leon (Leon.mat): problem of Leon:
f(x) = 100(x₂ − x₁³)² + (x₁ − 1)²

f_nice (F_Nice.mat, Nice.mat):
f(x) = x₁ exp(−x₁² − x₂²)

f_patho (Patho.mat): non-differentiable cost function:
f(x) = (1/2) max{|x₁|, |x₂|} + min{[x₁] − x₁, [x₂] − x₂}

f_quad (Quad.mat): quadratic cost function (with cost function parameter matrix P):
f(x) = (1/2) xᵀPx

(Peaks.mat):
f(x) = 3(1 − x₁)² exp(−x₁² − (x₂ + 1)²) − 10(x₁/5 − x₁³ − x₂⁵) exp(−x₁² − x₂²) − (1/3) exp(−(x₁ + 1)² − x₂²)

f_rast (Rast.mat): problem of Rastrigin:
f(x) = 10n + Σᵢ₌₁ⁿ (xᵢ² − 10 cos(2πxᵢ))

f_regler: control problem example (design of a PD controller)
f_rose (Rose.mat): problem of Rosenbrock (Rosenbrock valley, banana function):
f(x) = 100(x₂ − x₁²)² + (x₁ − 1)²

f_foxholes (Foxholes.mat): problem of Shekel (Shekel's foxholes):
1/f(x) = 1/K + Σⱼ₌₁²⁵ 1/(cⱼ + Σᵢ₌₁² (xᵢ − aᵢⱼ)⁶),  K = 500, cⱼ = j, (aᵢⱼ): matrix of the foxhole positions

(Sixhump.mat):
f(x) = (4 − 2.1x₁² + x₁⁴/3)x₁² + x₁x₂ + (−4 + 4x₂²)x₂²

f_walsh (Walsh.mat): model-building problem of Walsh

f_zettl (Zettl.mat): problem of Zettl:
f(x) = (x₁² + x₂² − 2x₁)² + 0.25x₁

f_10 (F_10.mat): 10th power:
f(x) = Σᵢ₌₁ⁿ xᵢ¹⁰

B.3 Examples of procedure calls

B.3.1 Cost function

The optimization routines require a MATLAB M-file for the evaluation of the cost function, getting the optimization variable x as an argument and delivering the cost function value f(x) as the result.

function f=f_nice(x)
f=x(1)*exp(-x(1)^2-x(2)^2);

The name of this M-file (f_nice) has to be given during the call of the optimization routine in the MATLAB Command Window if the graphical user interface is not used.

>> x0=[-1 -1]';
>> ovfmins('f_nice',x0)

x0 is the start point for the simplex search method according to Nelder-Mead used here. Alternatively, the cost function can also be entered as a MATLAB statement in a character string. The identifier x must be used for the optimization variable.

>> ovfmins('x(1)*exp(-x(1)^2-x(2)^2)',x0)
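The dual calling convention above (an M-file name or a character string in the variable x) can be mimicked in Python for illustration. The helper below is hypothetical, not part of the lab software; it uses ** instead of MATLAB's ^ and a small shim so that the 1-based x(1), x(2) style still reads naturally.

```python
import math

def make_costfun(fun):
    # accept either a callable or an expression string in the variable x
    if callable(fun):
        return fun
    def f(x):
        # bind x(i) to 1-based component access, as in the manual's strings
        return eval(fun, {"exp": math.exp, "x": lambda i: x[i - 1]})
    return f

f = make_costfun("x(1)*exp(-x(1)**2 - x(2)**2)")   # the "Nice" cost function
```

A callable passes through unchanged, so both forms can be fed to the same optimization routine.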
B.3.2 Gradient

Some of the implemented optimization algorithms use the gradient ∇f(x) of the cost function to determine the search direction. The gradient calculation can be done in a MATLAB M-file. The optimization variable x is given as an argument to the gradient calculation procedure, which delivers the n-dimensional column vector as the result.

function df=df_nice(x)
df=exp(-x(1)^2-x(2)^2)*[1-2*x(1)^2; -2*x(1)*x(2)];

>> x0=[1 1]';
>> ovbfgs('f_nice',x0,[],'df_nice')

Alternatively, the gradient can also be entered as a MATLAB statement in a character string. The identifier x must be used for the optimization variable.

>> ovbfgs('x(1)*exp(-x(1)^2-x(2)^2)',x0,[],...
'exp(-x(1)^2-x(2)^2)*[1-2*x(1)^2; -2*x(1)*x(2)]')

If no gradient is given during a call of a gradient-based optimization routine, the needed derivatives are approximately calculated by finite differences.

>> ovbfgs('x(1)*exp(-x(1)^2-x(2)^2)',x0)

B.3.3 Cost function parameters

In many cases the cost function depends on additional parameters besides the optimization variables. The parameters themselves are not optimized, but their influence on the optimal solution is of interest. To avoid global variables in such cases, up to 10 such cost function parameters can directly be given to the cost function as additional arguments at the end of the parameter list.

function f=f_nice(x,p)
if nargin<2, p=0; end
f=x(1)*exp(-x(1)^2-x(2)^2)+p/2*(x(1)^2+x(2)^2);

>> x0=[1 1]';
>> p=0.1;
>> ovfmins('f_nice',x0,[],p)

If the cost function (or the gradient) is given as a MATLAB statement in character string form, the identifiers for the cost function parameters must be P1, P2, etc.

>> ovfmins('x(1)*exp(-x(1)^2-x(2)^2)+P1/2*(x(1)^2+x(2)^2)',x0,[],p)
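The finite-difference fallback just mentioned, and the gradient check controlled by options(9), can be sketched as follows. The difference scheme actually used by the routines is not specified in this manual; central differences are assumed here for the sketch.

```python
import math

def f_nice(x):
    # the "Nice" cost function f(x) = x1 * exp(-x1^2 - x2^2)
    return x[0] * math.exp(-x[0] ** 2 - x[1] ** 2)

def df_nice(x):
    # analytic gradient of f_nice
    e = math.exp(-x[0] ** 2 - x[1] ** 2)
    return [e * (1.0 - 2.0 * x[0] ** 2), -2.0 * x[0] * x[1] * e]

def fd_grad(f, x, h=1e-6):
    # central-difference approximation of the gradient (assumed scheme)
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2.0 * h))
    return g

x0 = [0.4, -0.3]
```

Comparing fd_grad(f_nice, x0) against df_nice(x0) is exactly the kind of consistency check the gradient-test option performs.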
B.4 Visualization / graphical user interface

3D graphs and search paths of the solution routines can be visualized for optimization problems with two variables (n = 2). For that purpose the graphical user interface opt1 is available and can be started from the MATLAB Command Window.

>> cd OPT1
>> opt1('english')

The graphical user interface consists of 4 windows. The optimization problem to be investigated is defined by entering in the window "Optimization problem":

- the cost function (M-file or MATLAB statement)
- the gradient (M-file or MATLAB statement)
- cost function parameters
- the graphical display area (grid points in the (x₁, x₂) plane)
- the isolines to be displayed (no. of isolines or vector of cost function levels)

The gradient calculation can be validated by comparison with a numerical approximation of the gradient. If no gradient is given and the cost function is entered as a MATLAB statement, the gradient is computed symbolically; otherwise the gradient-based routines use an approximate gradient calculation. Further dialog elements permit saving and loading a prepared optimization problem, and closing the program.
The cost function is visualized in the window "Cost function" in a (pseudo-)3D manner. The color map, the type of display, and the horizontal and vertical view angles can be modified by the corresponding dialog elements. Optionally, the search path can be displayed. The isolines and the search paths of optimization runs are shown in the window "Cost function levels" in dependence on the optimization variables.
Up to 4 optimization runs, identifiable via different colors, can be selected in the window "Optimization runs". The method used, the options for the routine call, and the start point can be selected. The start point can be set numerically or graphically (in the window "Cost function levels"). After termination of an optimization run, the solution found can be inspected numerically, and the iteration course, i.e. the dependence of the cost function value on the no. of iterations, on the no. of function evaluations, and on the CPU time, is displayed in diagrams.
More informationA Scaled Gradient Descent Method for. Unconstrained Optimiziation Problems With A. Priori Estimation of the Minimum Value
A Scaled Gradient Descent Method for Unconstrained Optimiziation Problems With A Priori Estimation of the Minimum Value A SCALED GRADIENT DESCENT METHOD FOR UNCONSTRAINED OPTIMIZIATION PROBLEMS WITH A
More informationCS281 Section 3: Practical Optimization
CS281 Section 3: Practical Optimization David Duvenaud and Dougal Maclaurin Most parameter estimation problems in machine learning cannot be solved in closed form, so we often have to resort to numerical
More informationIntroduction. Optimization
Introduction to Optimization Amy Langville SAMSI Undergraduate Workshop N.C. State University SAMSI 6/1/05 GOAL: minimize f(x 1, x 2, x 3, x 4, x 5 ) = x 2 1.5x 2x 3 + x 4 /x 5 PRIZE: $1 million # of independent
More informationA Survey of Basic Deterministic, Heuristic, and Hybrid Methods for Single-Objective Optimization and Response Surface Generation
Orlande/Thermal Measurements and Inverse Techniques K12031_C010 Page Proof page 355 21.12.2010 4:56am Compositor Name: PG1421 10 A Survey of Basic Deterministic, Heuristic, and Hybrid Methods for Single-Objective
More informationHartley - Zisserman reading club. Part I: Hartley and Zisserman Appendix 6: Part II: Zhengyou Zhang: Presented by Daniel Fontijne
Hartley - Zisserman reading club Part I: Hartley and Zisserman Appendix 6: Iterative estimation methods Part II: Zhengyou Zhang: A Flexible New Technique for Camera Calibration Presented by Daniel Fontijne
More informationHartmut Pohlheim. DaimlerChrysler AG, Research and Technology Alt-Moabit 96a, Berlin, Germany.
Multidimensional Scaling for Evolutionary Algorithms - Visualization of the Path through Search Space and Solution Space using SAMMON Mapping Hartmut Pohlheim DaimlerChrysler AG, Research and Technology
More informationAn Asynchronous Implementation of the Limited Memory CMA-ES
An Asynchronous Implementation of the Limited Memory CMA-ES Viktor Arkhipov, Maxim Buzdalov, Anatoly Shalyto ITMO University 49 Kronverkskiy prosp. Saint-Petersburg, Russia, 197101 Email: {arkhipov, buzdalov}@rain.ifmo.ru,
More informationComparative Analysis of Various Evolutionary and Memetic Algorithms
Comparative Analysis of Various Evolutionary and Memetic Algorithms Krisztián Balázs 1, János Botzheim 2, László T. Kóczy 1,3 1 Department of Telecommunications and Media Informatics, Budapest University
More information2. Linear Regression and Gradient Descent
Pattern Recognition And Machine Learning - EPFL - Fall 2015 Emtiyaz Khan, Timur Bagautdinov, Carlos Becker, Ilija Bogunovic & Ksenia Konyushkova 2. Linear Regression and Gradient Descent 2.1 Goals The
More informationComparison of Interior Point Filter Line Search Strategies for Constrained Optimization by Performance Profiles
INTERNATIONAL JOURNAL OF MATHEMATICS MODELS AND METHODS IN APPLIED SCIENCES Comparison of Interior Point Filter Line Search Strategies for Constrained Optimization by Performance Profiles M. Fernanda P.
More informationOptimization Plugin for RapidMiner. Venkatesh Umaashankar Sangkyun Lee. Technical Report 04/2012. technische universität dortmund
Optimization Plugin for RapidMiner Technical Report Venkatesh Umaashankar Sangkyun Lee 04/2012 technische universität dortmund Part of the work on this technical report has been supported by Deutsche Forschungsgemeinschaft
More informationTutorial on Convex Optimization for Engineers
Tutorial on Convex Optimization for Engineers M.Sc. Jens Steinwandt Communications Research Laboratory Ilmenau University of Technology PO Box 100565 D-98684 Ilmenau, Germany jens.steinwandt@tu-ilmenau.de
More informationShort Reminder of Nonlinear Programming
Short Reminder of Nonlinear Programming Kaisa Miettinen Dept. of Math. Inf. Tech. Email: kaisa.miettinen@jyu.fi Homepage: http://www.mit.jyu.fi/miettine Contents Background General overview briefly theory
More informationCamera calibration. Robotic vision. Ville Kyrki
Camera calibration Robotic vision 19.1.2017 Where are we? Images, imaging Image enhancement Feature extraction and matching Image-based tracking Camera models and calibration Pose estimation Motion analysis
More informationA Brief Look at Optimization
A Brief Look at Optimization CSC 412/2506 Tutorial David Madras January 18, 2018 Slides adapted from last year s version Overview Introduction Classes of optimization problems Linear programming Steepest
More informationOptimization. (Lectures on Numerical Analysis for Economists III) Jesús Fernández-Villaverde 1 and Pablo Guerrón 2 February 20, 2018
Optimization (Lectures on Numerical Analysis for Economists III) Jesús Fernández-Villaverde 1 and Pablo Guerrón 2 February 20, 2018 1 University of Pennsylvania 2 Boston College Optimization Optimization
More informationISCTE/FCUL - Mestrado Matemática Financeira. Aula de Janeiro de 2009 Ano lectivo: 2008/2009. Diana Aldea Mendes
ISCTE/FCUL - Mestrado Matemática Financeira Aula 5 17 de Janeiro de 2009 Ano lectivo: 2008/2009 Diana Aldea Mendes Departamento de Métodos Quantitativos, IBS - ISCTE Business School Gab. 207 AA, diana.mendes@iscte.pt,
More informationCMU-Q Lecture 9: Optimization II: Constrained,Unconstrained Optimization Convex optimization. Teacher: Gianni A. Di Caro
CMU-Q 15-381 Lecture 9: Optimization II: Constrained,Unconstrained Optimization Convex optimization Teacher: Gianni A. Di Caro GLOBAL FUNCTION OPTIMIZATION Find the global maximum of the function f x (and
More informationCalibration by Optimization Without Using Derivatives
Calibration by Optimization Without Using Derivatives Markus Lazar 1, Fakultät für Ingenieurwissenschaften University of Applied Sciences, Rosenheim, Germany Florian Jarre 1, Mathematisches Institut, University
More informationIterative Algorithms I: Elementary Iterative Methods and the Conjugate Gradient Algorithms
Iterative Algorithms I: Elementary Iterative Methods and the Conjugate Gradient Algorithms By:- Nitin Kamra Indian Institute of Technology, Delhi Advisor:- Prof. Ulrich Reude 1. Introduction to Linear
More informationTwo-Dimensional Fitting of Brightness Profiles in Galaxy Images with a Hybrid Algorithm
Two-Dimensional Fitting of Brightness Profiles in Galaxy Images with a Hybrid Algorithm Juan Carlos Gomez, Olac Fuentes, and Ivanio Puerari Instituto Nacional de Astrofísica Óptica y Electrónica Luis Enrique
More informationMinima, Maxima, Saddle points
Minima, Maxima, Saddle points Levent Kandiller Industrial Engineering Department Çankaya University, Turkey Minima, Maxima, Saddle points p./9 Scalar Functions Let us remember the properties for maxima,
More informationParameters Estimation of Material Constitutive Models using Optimization Algorithms
The University of Akron IdeaExchange@UAkron Honors Research Projects The Dr. Gary B. and Pamela S. Williams Honors College Spring 2015 Parameters Estimation of Material Constitutive Models using Optimization
More informationOptimization. 1. Optimization. by Prof. Seungchul Lee Industrial AI Lab POSTECH. Table of Contents
Optimization by Prof. Seungchul Lee Industrial AI Lab http://isystems.unist.ac.kr/ POSTECH Table of Contents I. 1. Optimization II. 2. Solving Optimization Problems III. 3. How do we Find x f(x) = 0 IV.
More informationM. Sc. (Artificial Intelligence and Machine Learning)
Course Name: Advanced Python Course Code: MSCAI 122 This course will introduce students to advanced python implementations and the latest Machine Learning and Deep learning libraries, Scikit-Learn and
More informationNMath Analysis User s Guide
NMath Analysis User s Guide Version 2.0 CenterSpace Software Corvallis, Oregon NMATH ANALYSIS USER S GUIDE 2009 Copyright CenterSpace Software, LLC. All Rights Reserved. The correct bibliographic reference
More informationModule 1 Lecture Notes 2. Optimization Problem and Model Formulation
Optimization Methods: Introduction and Basic concepts 1 Module 1 Lecture Notes 2 Optimization Problem and Model Formulation Introduction In the previous lecture we studied the evolution of optimization
More informationPerformance Evaluation of an Interior Point Filter Line Search Method for Constrained Optimization
6th WSEAS International Conference on SYSTEM SCIENCE and SIMULATION in ENGINEERING, Venice, Italy, November 21-23, 2007 18 Performance Evaluation of an Interior Point Filter Line Search Method for Constrained
More informationQuasi-Newton algorithm for best multilinear rank approximation of tensors
Quasi-Newton algorithm for best multilinear rank of tensors and Lek-Heng Lim Department of Mathematics Linköpings Universitet 6th International Congress on Industrial and Applied Mathematics Outline 1
More informationTHE COMPUTER MODELLING OF GLUING FLAT IMAGES ALGORITHMS. Alekseí Yu. Chekunov. 1. Introduction
MATEMATIQKI VESNIK Corrected proof Available online 01.10.2016 originalni nauqni rad research paper THE COMPUTER MODELLING OF GLUING FLAT IMAGES ALGORITHMS Alekseí Yu. Chekunov Abstract. In this paper
More informationA penalty based filters method in direct search optimization
A penalty based filters method in direct search optimization Aldina Correia CIICESI / ESTG P.PORTO Felgueiras, Portugal aic@estg.ipp.pt João Matias CM-UTAD UTAD Vila Real, Portugal j matias@utad.pt Pedro
More informationA CONJUGATE DIRECTION IMPLEMENTATION OF THE BFGS ALGORITHM WITH AUTOMATIC SCALING. Ian D Coope
i A CONJUGATE DIRECTION IMPLEMENTATION OF THE BFGS ALGORITHM WITH AUTOMATIC SCALING Ian D Coope No. 42 December 1987 A CONJUGATE DIRECTION IMPLEMENTATION OF THE BFGS ALGORITHM WITH AUTOMATIC SCALING IAN
More informationCOMPARISON OF ALGORITHMS FOR NONLINEAR REGRESSION ESTIMATES
COMPSTAT 2004 Symposium c Physica-Verlag/Springer 2004 COMPARISON OF ALGORITHMS FOR NONLINEAR REGRESSION ESTIMATES Tvrdík J. and Křivý I. Key words: Global optimization, evolutionary algorithms, heuristics,
More informationTitle. Syntax. optimize( ) Function optimization. S = optimize init() (varies) optimize init which(s [, { "max" "min" } ] )
Title optimize( ) Function optimization Syntax S = optimize init() (varies) optimize init which(s [, { "max" "min" } ] ) (varies) optimize init evaluator(s [, &function() ] ) (varies) optimize init evaluatortype(s
More informationOptimization 3.1. for GAUSS TM. Aptech Systems, Inc.
Optimization 3.1 for GAUSS TM Aptech Systems, Inc. Information in this document is subject to change without notice and does not represent a commitment on the part of Aptech Systems, Inc. The software
More informationTitle. Description. stata.com
Title stata.com optimize( ) Function optimization Description Syntax Remarks and examples Conformability Diagnostics References Also see Description These functions find parameter vector or scalar p such
More informationUnidimensional Search for solving continuous high-dimensional optimization problems
2009 Ninth International Conference on Intelligent Systems Design and Applications Unidimensional Search for solving continuous high-dimensional optimization problems Vincent Gardeux, Rachid Chelouah,
More informationGood luck! First name Legi number Computer slabhg Points
Surname First name Legi number Computer slabhg... Note 1 2 4 5 Points Fill in the cover sheet. (Computer: write the number of the PC as printed on the table). Leave your Legi on the table. Switch off your
More informationarxiv: v1 [cs.ne] 11 Mar 2015
Benchmarking NLopt and state-of-art algorithms for Continuous Global Optimization via Hybrid IACO R Udit Kumar, Sumit Soman, Jayavdeva Department of Electrical Engineering, Indian Institute of Technology,
More informationTUNING COMPLEX FUZZY SYSTEMS BY SUPERVISED LEARNING ALGORITHMS
TUIG COPLEX FUZZY SYSTES BY SUPERVISED LEARIG ALGORITHS F. J. oreno-velo, I. Baturone, R. Senhadji, S. Sánchez-Solano Instituto de icroelectrónica de Sevilla - Centro acional de icroelectrónica Avda. Reina
More informationCSCE 5160 Parallel Processing. CSCE 5160 Parallel Processing
HW #9 10., 10.3, 10.7 Due April 17 { } Review Completing Graph Algorithms Maximal Independent Set Johnson s shortest path algorithm using adjacency lists Q= V; for all v in Q l[v] = infinity; l[s] = 0;
More informationMAT 275 Laboratory 2 Matrix Computations and Programming in MATLAB
MAT 75 Laboratory Matrix Computations and Programming in MATLAB In this laboratory session we will learn how to. Create and manipulate matrices and vectors.. Write simple programs in MATLAB NOTE: For your
More informationOptical Design with Zemax
Optical Design with Zemax Lecture 7: Optimization I 2012-12-11 Herbert Gross Winter term 2012 www.iap.uni-jena.de Time schedule 2 1 16.10. Introduction Introduction, Zemax interface, menues, file handling,
More informationConditional Random Fields for Word Hyphenation
Conditional Random Fields for Word Hyphenation Tsung-Yi Lin and Chen-Yu Lee Department of Electrical and Computer Engineering University of California, San Diego {tsl008, chl260}@ucsd.edu February 12,
More informationA penalty based filters method in direct search optimization
A penalty based filters method in direct search optimization ALDINA CORREIA CIICESI/ESTG P.PORTO Felgueiras PORTUGAL aic@estg.ipp.pt JOÃO MATIAS CM-UTAD Vila Real PORTUGAL j matias@utad.pt PEDRO MESTRE
More informationConvexity Theory and Gradient Methods
Convexity Theory and Gradient Methods Angelia Nedić angelia@illinois.edu ISE Department and Coordinated Science Laboratory University of Illinois at Urbana-Champaign Outline Convex Functions Optimality
More informationAccelerating the Hessian-free Gauss-Newton Full-waveform Inversion via Preconditioned Conjugate Gradient Method
Accelerating the Hessian-free Gauss-Newton Full-waveform Inversion via Preconditioned Conjugate Gradient Method Wenyong Pan 1, Kris Innanen 1 and Wenyuan Liao 2 1. CREWES Project, Department of Geoscience,
More information