Search direction improvement for gradient-based optimization problems


S. Ganguly & W. L. Neu
Aerospace and Ocean Engineering, Virginia Tech, USA

Abstract

Most gradient-based optimization algorithms calculate the search vector using the gradient or the Hessian of the objective function. This causes the optimization algorithm to perform poorly in cases where the dimensionality of the objective function is less than that of the problem. Though some methods, like the Modified Method of Feasible Directions, tend to overcome this shortcoming, they again perform poorly in situations of competing constraints. This paper introduces a simple modification in the calculation of the search vector that not only provides significant improvements in the solutions of optimization problems but also helps to reduce or, in some cases, overcome the problem of competing constraints.

Keywords: optimization, multidisciplinary design optimization, gradient-based algorithm, method of feasible directions, search direction

1 Introduction

An optimization problem can be defined, in terms of a set of decision variables, X, as:

Minimize objective function: $F(X)$,
Subject to: $g_j(X) \le 0, \quad j = 1, \dots, m$ (inequality constraints), (1)
$h_k(X) = 0, \quad k = 1, \dots, l$ (equality constraints),
$X_i^l \le X_i \le X_i^u, \quad i = 1, \dots, n$ (side constraints).

A number of numerical algorithms have been devised to solve this problem. Gradient-based algorithms are based on the following recursive equation:

$X^q = X^{q-1} + \alpha S^q$ (2)

where $X^{q-1}$ and $X^q$ are the vectors of the decision variables in the $(q-1)$th and the $q$th iterations, respectively.
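To make eqn (2) concrete, here is a minimal sketch (ours, not the paper's) that iterates the update with a steepest-descent search vector and a simple backtracking rule for α; the test objective and the tolerances are illustrative assumptions.

```python
import numpy as np

def gradient_iteration(F, grad_F, x0, alpha0=1.0, tol=1e-8, max_iter=500):
    """Minimal sketch of eqn (2): X^q = X^{q-1} + alpha * S^q, using a
    steepest-descent search vector and a backtracking rule for alpha."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        S = -grad_F(x)                       # search vector S^q
        if np.linalg.norm(S) < tol:          # gradient (nearly) zero: stop
            break
        alpha = alpha0
        while F(x + alpha * S) > F(x) and alpha > 1e-12:
            alpha *= 0.5                     # shrink step until F decreases
        x = x + alpha * S                    # eqn (2)
    return x

# Example objective (ours, for illustration): F(x, y) = (x - 1)^2 + 2(y + 3)^2
F = lambda v: (v[0] - 1.0) ** 2 + 2.0 * (v[1] + 3.0) ** 2
grad_F = lambda v: np.array([2.0 * (v[0] - 1.0), 4.0 * (v[1] + 3.0)])
print(gradient_iteration(F, grad_F, [5.0, 5.0]))   # approaches (1, -3)
```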

Considering the optimization problem in an n-dimensional space with each dimension corresponding to a decision variable, S represents the search vector. It determines the direction of change in the decision variable space. Each point in the decision space represents a vector of decision variables. α is a scalar which represents the desired amount of change in the decision variables along the S vector. Thus the optimization problem can be split into finding the search vector and calculating the parameter α. This paper focuses on finding an improved search vector.

2 Finding the search vector

For unconstrained problems, the search vector in the $q$th iteration, $S^q$, is usually calculated by a standard method, such as Steepest Descent ($S^q = -\nabla F(X^{q-1})$), Conjugate Direction ($S^q = -\nabla F(X^{q-1}) + \beta S^{q-1}$, where $\beta = \|\nabla F(X^{q-1})\|^2 / \|\nabla F(X^{q-2})\|^2$), or Newton's method ($S^q = -[H(X^{q-1})]^{-1} \nabla F(X^{q-1})$). In many situations, particularly in cases of competing constraints as discussed below, the calculation of an appropriate search direction can be a challenge for an optimization algorithm.

Most engineering design problems are nonlinear constrained problems. Gradient-based optimization algorithms generally use a combination of unconstrained and constrained search methods. A method for unconstrained optimization is used to calculate the search direction to arrive at a point in the solution space where one or more constraints are active or violated. For a constraint to be active, its function value must lie within a specified range. Once a constraint is active or violated, the search direction is obtained by one of the methods used for constrained optimization. The constrained search direction method gives the algorithm the ability to move along a constraint boundary to arrive at a better solution point.

One of the simpler direction-finding methods is Zoutendijk's [1] Method of Feasible Directions. Here, a sub-optimization problem, referred to as the Direction Finding Sub-problem (DFS), finds a usable and feasible search direction when a constraint is active. A usable search direction is a direction which improves the objective function value, whereas a feasible search direction maintains feasibility or reduces constraint violation. For all points in the solution space that are within the active region, search directions are calculated using the DFS. When a constraint is active, the DFS is mathematically represented as:

Minimize: $\nabla F(X^{q-1})^T S^q$ (usability condition),
Subject to: $\nabla g_j(X^{q-1})^T S^q \le 0$ for each active constraint $j$ (feasibility condition), (3)
$(S^q)^T S^q \le 1$ (bounds on $S$).
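A sketch of the DFS of eqn (3) follows. To keep the sub-problem a linear program solvable by scipy.optimize.linprog, the quadratic bound $(S^q)^T S^q \le 1$ is replaced here by box bounds $-1 \le S_i \le 1$; that substitution and the example gradients are assumptions of the sketch, not part of the paper.

```python
import numpy as np
from scipy.optimize import linprog

def zoutendijk_dfs(grad_F, active_grads):
    """Sketch of the DFS of eqn (3): minimize grad_F . S subject to
    grad_gj . S <= 0 for each active constraint j. Box bounds
    -1 <= S_i <= 1 stand in for the quadratic bound (S^T S <= 1) so the
    sub-problem stays a linear program."""
    n = len(grad_F)
    A_ub = np.array(active_grads)        # feasibility rows: grad_gj . S <= 0
    b_ub = np.zeros(len(active_grads))
    res = linprog(grad_F, A_ub=A_ub, b_ub=b_ub, bounds=[(-1.0, 1.0)] * n)
    return res.x

# One active constraint with gradient (1, 1); objective gradient (1, 0.5).
S = zoutendijk_dfs(np.array([1.0, 0.5]), [np.array([1.0, 1.0])])
print(S)   # usable (grad_F . S < 0) and feasible (grad_g . S <= 0)
```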

Fig. 1 demonstrates the DFS in the active region for a two-dimensional case.

Figure 1: DFS in the active region at point $X^0$ in a 2-D solution space, showing the usable and feasible sectors bounded by $F(X) = \text{const}$ and $g(X) = 0$.

When a constraint is violated, the DFS finds a search direction that rapidly decreases the constraint violation while also trying to reduce the objective function. The DFS is given by:

Minimize: $\nabla F(X^{q-1})^T S^q - \Phi W$ (4)
Subject to: $\nabla g_j(X^{q-1})^T S^q + \theta_j W \le 0$ for active and violated constraints, (5)
$(S^q)^T S^q + W^2 \le 1$. (6)

Here Φ is a large positive number which makes the second term in eqn (4) dominant. W is an artificial variable that is used only to formulate the DFS and has no physical significance. In order to minimize eqn (4), W tends to increase. If $\theta_j$ is positive, as W increases, the first term in eqn (5) is driven more negative, i.e., $S^q$ is pushed more in the direction opposite that of $\nabla g_j$. This, in effect, drives the search direction to a direction that reduces constraint violation. Further details can be found in Vanderplaats [2].

It can be shown that the algorithmic map of the feasible directions method is not closed. Wolfe [3] showed through a counterexample that Zoutendijk's algorithm, given by eqn (3), does not converge to a Karush-Kuhn-Tucker point. Topkis and Veinott's [4] modification to the feasible direction algorithm guarantees convergence to a Fritz-John point. The DFS is represented as:

Minimize: $z$
Subject to: $\nabla F(X^{q-1})^T S^q - z \le 0$,
$\nabla g_j(X^{q-1})^T S^q - z \le -g_j(X^{q-1})$, (7)
$(S^q)^T S^q \le 1$,
where $z = \max\{\nabla F(X^{q-1})^T S^q,\; g_j(X^{q-1}) + \nabla g_j(X^{q-1})^T S^q\}$.

Here, both active and non-active constraints are involved in the direction-finding algorithm. Although this method derives a direction that incorporates all the dimensions of the problem, it fails when the decision space contains regions of competing constraints, as defined in the next section.
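These direction-finding sub-problems are small constrained programs in their own right. As an illustration, the DFS of eqns (4)-(6) can be sketched as a nonlinear program; the value of Φ, the push-off factors, and the example data below are assumptions, and the norm bound follows eqn (6) as reconstructed above.

```python
import numpy as np
from scipy.optimize import minimize

def dfs_violated(grad_F, grad_gs, thetas, Phi=1000.0):
    """Sketch of the DFS of eqns (4)-(6) for active/violated constraints.
    Decision vector v = [S, W]; the norm bound follows eqn (6) as
    reconstructed above (S^T S + W^2 <= 1)."""
    n = len(grad_F)
    obj = lambda v: np.dot(grad_F, v[:n]) - Phi * v[n]            # eqn (4)
    cons = [{'type': 'ineq',                                      # eqn (5)
             'fun': lambda v, gg=gg, th=th: -(np.dot(gg, v[:n]) + th * v[n])}
            for gg, th in zip(grad_gs, thetas)]
    cons.append({'type': 'ineq',                                  # eqn (6)
                 'fun': lambda v: 1.0 - np.dot(v[:n], v[:n]) - v[n] ** 2})
    res = minimize(obj, np.zeros(n + 1), method='SLSQP', constraints=cons)
    return res.x[:n], res.x[n]                                    # S^q and W

# One violated constraint (gradient and push-off factor are example values).
S, W = dfs_violated(np.array([1.0, 0.0]), [np.array([0.0, 1.0])], [1.0])
print(S, W)
```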

3 Problems with gradient-based algorithms

In some optimization problems, the dimensionality of the objective function is less than that of the problem. In other words, some of the decision variables do not appear in the objective function. These decision variables increase the dimensionality of the problem through the constraints. In such cases, the search vector has a zero component for the decision variables that are not explicitly present in the objective function until a constraint that explicitly contains these decision variables becomes active. Simply stated, the search direction cannot see the dimensions added to the problem by decision variables not explicitly present in the objective function until a constraint that is a function of these decision variables becomes active. This often forces the algorithm to terminate before reaching even a local minimum.

A more important problem, which inevitably results in the premature termination of the optimization process, is the problem of competing constraints. Fig. 2 demonstrates a situation of competing constraints in a two-dimensional case. A region of competing constraints exists if the solution space has an equality and an inequality constraint placed in such a way that the initial set of design variables satisfies the inequality constraint but violates the equality constraint. If the equality constraint is represented as two inequality constraints, or if the decision variables are split into dependent and independent variables (reduced gradient method, see [2]), the optimizer will attempt to satisfy the equality constraint, resulting in a solution point that violates the inequality constraint. The optimizer's attempt at reducing the violation of the inequality constraint will then tend to violate the equality. This leads to oscillations about the two constraints, as illustrated by the trajectory of solid arrows in the figure, until the optimizer finds no feasible solution and terminates. We say that the optimization process is trapped within the region of competing constraints. The solution does not follow a constraint to find a feasible solution. If the objective function does not contain decision variables which are present in the competing constraints (the dimensionality issue above), the optimizer terminates even earlier. The objective of this work is to modify the search vector such that the solution follows a trajectory similar to that described by the dashed arrows.
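The dimensionality issue is easy to see numerically; the point and objective in this fragment are illustrative assumptions.

```python
import numpy as np

# The dimensionality issue: if F(x1, x2) = x1, then grad F = (1, 0)
# everywhere, so a gradient-only search vector has a zero x2 component.
# The optimizer cannot move in x2 until a constraint containing x2 is
# active or violated. (Point and functions are illustrative assumptions.)
grad_F = lambda x: np.array([1.0, 0.0])      # objective F = x1 only
x = np.array([2.0, 7.0])                     # arbitrary current point
S = -grad_F(x)
print(S)                                     # [-1.  0.]: blind to x2
```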

Figure 2: Example of competing constraints for the problem: minimize $x_1$ subject to inequality constraint $g(x_1, x_2) \le 0$ and equality constraint $h(x_1, x_2) = 0$. Solid arrows show a typical Method of Feasible Directions solution path from the starting point; dashed arrows show the desired solution path.

4 Modification of the search direction

The above-mentioned problems can be reduced or overcome by involving the constraints in the calculation of the search vector even when there are no active or violated constraints. Experience shows that the DFS is the most computationally intensive part of the optimization process. If a constraint becomes active, the solution follows the constraint boundary until it reaches a local minimum. This means that after a constraint becomes active, the DFS is solved every iteration, a behaviour that makes the algorithm computationally expensive. It is desirable to find a simpler algorithm that causes the search vector to turn along the contours of the constraints, in the direction that improves the value of the objective function, well before the constraints become active or violated.

Consider the two-dimensional optimization problem of fig. 2. If the equality constraint is represented by two inequality constraints, then the search vector is calculated from the DFS given by eqns (4) to (6). Notice that, for the example given above, the first term in eqn (4) has a zero $x_2$ component. Thus the search vector will try to reduce the constraint violation only by moving in a direction opposite that of the gradient of the violated constraint, with no heed to the direction of increasing $x_2$. The solid arrows in the diagram show this behavior. Once in the region of competing constraints, the optimizer finds no feasible solution and terminates. We can also see that the smaller the starting value of $x_2$, the more difficult it becomes for the optimizer to get through the region of competing constraints. This algorithm can only be successful if started at a large enough value of $x_2$.

Now, if we were to modify the search direction so that the optimizer followed a path similar to that of the dashed arrows, we could reasonably reduce the chances of being trapped in the region of competing constraints even for low starting values of $x_2$.

We can also imagine that even when there is no region of competing constraints, by taking the path of the dashed arrows, the optimizer can actually improve the rate at which it reaches the optimum. Clearly, for the optimizer to follow this path, it needs a search direction that has a component along the constraint contours as it reduces the objective function. This turning of the direction vector along the constraint contour needs to be activated from a position well before the constraint is active. In other words, the solution should move up or down a constraint, depending on which direction improves the objective, from a point well before the constraint is active. Fig. 3 shows the basis of the proposed modification.

Figure 3: Modification of the search direction: the original direction S is turned toward the constraint contour $g(x_1, x_2) = 0$ by a turning vector T, giving the modified direction S'.

To turn the search vector toward a constraint contour at a point where the constraint is not active, a vector is added to the search vector that is normal to the search vector but has a magnitude proportional to the projection of the gradient of the constraint onto the search vector. For the problem stated earlier, where the objective function is not a function of $x_2$, the search direction is now aware of the full dimensionality of the problem and can move the solution along the dashed-arrow path of fig. 2.

The turning vector is constructed as follows (refer to fig. 4 for a two-dimensional visualization). The unit vector in the direction of S is $\hat{s} = S/\|S\|$. The projection of the gradient of the constraint onto the search vector is $R_1 = \nabla g \cdot \hat{s}$. The projection of the gradient of the constraint onto the hyperplane normal to the search vector is $R_2 = \nabla g - R_1 \hat{s}$. The unit vector in the direction of $R_2$ is then $\hat{r}_2 = R_2/\|R_2\|$. Note that as the search direction approaches a direction normal to the gradient of the constraint, less correction to the search direction is needed, and vice versa. Thus the correction vector should be proportional to the magnitude of $R_1$ but in the direction of $R_2$. Hence we define the turning vector as $T = |R_1|\,\hat{r}_2$.
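The construction translates directly into a few lines of code; the guard for the degenerate case where S is parallel to $\nabla g$ is our addition.

```python
import numpy as np

def turning_vector(S, grad_g):
    """Sketch of the turning-vector construction of fig. 4:
    s_hat = S/|S|, R1 = grad_g . s_hat, R2 = grad_g - R1 s_hat,
    T = |R1| r2_hat. The zero-R2 guard handles the degenerate case
    where S is parallel to grad_g (our addition)."""
    s_hat = S / np.linalg.norm(S)
    R1 = np.dot(grad_g, s_hat)        # projection of grad g onto S
    R2 = grad_g - R1 * s_hat          # component of grad g normal to S
    nR2 = np.linalg.norm(R2)
    if nR2 < 1e-12:                   # S parallel to grad g: no turning plane
        return np.zeros_like(S)
    return abs(R1) * (R2 / nR2)       # T = |R1| r2_hat

# S along -x1 and a constraint gradient with an x2 component: T points
# along x2, the dimension the raw search vector cannot see.
print(turning_vector(np.array([-1.0, 0.0]), np.array([-2.8, 1.0])))
```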

Figure 4: Construction of the turning vector T and the modified search direction $S' = S + \lambda T$ from the search vector S and the constraint gradient $\nabla g$.

The new search direction is then:

$S' = S + \lambda T$ (8)

where λ is a scalar multiplier that adjusts the magnitude of the correction depending on the position of the solution point. Typically, we would like λ to have smaller values for points far away from the active region and larger values for points closer to the active region. It is proposed that:

$$\lambda = \begin{cases} 0, & g(x_1, x_2) \le C_1\,CT \\ 1 - g(x_1, x_2)/(C_1\,CT), & C_1\,CT < g(x_1, x_2) \le CT \\ C_2\,g(x_1, x_2), & g(x_1, x_2) > CT \end{cases}$$ (9)

where CT is a small negative number called the constraint tolerance. When a constraint takes a value greater than CT, it is considered active. $C_1$ and $C_2$ are tuning constants. In the examples that follow, setting each to 50 was found to work well.

The formulation can be generalized to the case of multiple constraints, $g_i(X)$, each producing a turning vector, $T_i$, and an associated multiplier, $\lambda_i$. The modified search direction is then:

$S' = S + \sum_i \lambda_i T_i$ (10)
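Putting eqns (8)-(10) together, and reusing turning_vector from the sketch above: the exact piecewise form of eqn (9) is partially garbled in the source, so the schedule below is only a ramp consistent with the stated behaviour, with CT, $C_1$, and $C_2$ playing the roles described in the text.

```python
import numpy as np

def lam(g_val, CT=-0.05, C1=50.0, C2=50.0):
    """Multiplier schedule in the spirit of eqn (9). The exact piecewise
    form is partially garbled in the source; this ramp follows the stated
    behaviour: zero far from the active region, growing as g approaches
    the constraint tolerance CT, largest once the constraint is active."""
    if g_val <= C1 * CT:
        return 0.0                       # far from the active region
    if g_val <= CT:
        return 1.0 - g_val / (C1 * CT)   # ramp up as g approaches CT
    return C2 * abs(g_val)               # active or violated (assumed form)

def modified_direction(S, grads, vals):
    """Eqn (10): S' = S + sum_i lambda_i T_i over all constraints.
    Reuses turning_vector() from the previous sketch."""
    S_new = np.array(S, dtype=float)
    for gg, gv in zip(grads, vals):
        S_new += lam(gv) * turning_vector(S, gg)
    return S_new
```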

5 Implementation of the search direction modification

The modification proposed above was implemented in the commercial optimizer DOT (Design Optimization Tools)™, version 4.20, produced and marketed by Vanderplaats Research and Development, Inc. This implementation was solely for academic purposes. Each of the examples below used DOT's implementation of the Modified Method of Feasible Directions.

5.1 A simple two-dimensional example

Consider the problem in two design variables, x and y, stated as:

Minimize: $x$
Subject to: $y - 2.8x - 10 = 0$ (equality constraint),
$y - x - 14 \le 0$ (inequality constraint),
$0 \le x \le 5, \quad 0 \le y$ (side constraints).

Fig. 5 shows the progression of the solution using the original Modified Method of Feasible Directions algorithm, starting from the point (3, 35). It is seen that the solution becomes trapped in the region of competing constraints and terminates before finding an optimum.

Figure 5: Performance of the original DOT implementation of the Modified Method of Feasible Directions, showing the solution path from the infeasible initial design. Feasible and infeasible labels pertain to the inequality constraint.

Fig. 6 demonstrates how the performance of DOT changes when the search direction modification is incorporated into the algorithm. The solution quickly converges to the optimum, (0, 10), within the specified tolerance.

Figure 6: Same as fig. 5 but with modification of the search direction.
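For orientation, here is the modification evaluated at the example's starting point, using the constraint expressions as reconstructed above and reusing the helpers from the sketches in section 4; it shows the modified direction acquiring the y component that the raw steepest-descent direction lacks.

```python
import numpy as np

# Modified direction at the starting point (3, 35) of the example, using
# the constraints as reconstructed above and treating the equality as two
# inequalities (as in section 3). Reuses turning_vector() and
# modified_direction() from the sketches in section 4.
x0 = np.array([3.0, 35.0])
S = np.array([-1.0, 0.0])                   # steepest descent for F = x

h = lambda v: v[1] - 2.8 * v[0] - 10.0      # equality, split into +h and -h
g = lambda v: v[1] - v[0] - 14.0            # inequality g <= 0
grads = [np.array([-2.8, 1.0]), np.array([2.8, -1.0]), np.array([-1.0, 1.0])]
vals = [h(x0), -h(x0), g(x0)]

S_mod = modified_direction(S, grads, vals)
print(S_mod / np.linalg.norm(S_mod))        # gains a y component that the
                                            # unmodified direction lacks
```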

5.2 Multidisciplinary design optimization of a containership

The search direction modification was implemented in the multidisciplinary optimization of a containership design. The details of the containership synthesis model are extensive and are summarized in Neu et al. [5]. The objective function for the optimization is the required freight rate, a measure of how much it costs the owner to operate the ship per freight unit. The eight design variables are ship geometry and performance characteristics. Among the design variables is the amount of ballast that the ship carries. The ballast does not enter directly into the objective function calculation but is important for the stability of the ship. One of the ten constraints is that the ship must have a certain minimum stability. Adding ballast increases the stability, while stacking containers on the deck of the ship reduces the stability. By increasing the amount of ballast, the ship can increase the amount of freight it can carry while still meeting the stability constraint, thereby reducing the required freight rate.

Using the unmodified search direction algorithm, the optimizer terminated with a relatively small amount of ballast, yielding a design with a significantly larger than optimal required freight rate. The optimizer got trapped in the competing constraints of minimum stability and the requirement that the weight of the ship be equal to the weight of the water it displaces. After implementation of the search direction modification to the stability constraint, the optimizer was able to follow the stability constraint to a design with a significantly larger amount of ballast and a smaller required freight rate. A comparison of the iteration histories of objective function values for each case is shown in fig. 7.

Figure 7: Iteration history of the containership optimization with and without the search direction modification.

6 Conclusion

In this paper, a class of nonlinear, constrained optimization problems with regions of competing constraints, which are particularly difficult to solve using conventional gradient-based algorithms, has been addressed.

In conventional gradient-based algorithms, constraints are involved in calculating the search direction only if they are active. This may cause optimization problems with regions of competing constraints to terminate prematurely. The problem is compounded if some decision variables are not explicitly present in the objective function. The practice of starting from various points within the decision space may help us avoid regions of competing constraints, but as the nonlinearity and the dimensionality of a problem grow, the chances of finding a starting point that avoids these regions may be small. Involving the constraints in calculating the search direction at every iteration of the optimization process is a more effective way of solving this class of problems; however, solution of a direction-finding sub-optimization problem at each step can be expensive.

A modification of the search direction such that the solution path is partly driven by constraints, with little computational effort, before any constraints are active or violated is proposed. This not only avoids regions of competing constraints, which can lead to premature termination, but also captures the full dimensionality of the problem in calculating the search direction. This modification was implemented in a standard commercial gradient-based optimization program and applied to a simple two-dimensional problem to illustrate the mechanics of the modification. The same modification was then found to produce superior results for a multidimensional, multidisciplinary design optimization problem. It is to be noted that in the MDO problem, the presence of competing constraints was identified a posteriori, and the modification was applied only to the constraints identified to be competing. It remains a topic for further research to automate the process of identifying the constraints that are competing and using just those constraints to modify the search direction.

References

[1] Zoutendijk, G., Methods of Feasible Directions, Elsevier: Amsterdam, 1960.
[2] Vanderplaats, G.N., Numerical Optimization Techniques for Engineering Design, 3rd ed., Vanderplaats Research & Development, Inc.: Colorado Springs, 2001.
[3] Wolfe, P., On the convergence of gradient methods under constraint. IBM Journal of Research and Development, 16, 1972.
[4] Topkis, D.M. & Veinott, A.F., On the convergence of some feasible direction algorithms for nonlinear programming. SIAM Journal on Control, 5, 1967.
[5] Neu, W.L., Mason, W.H., Ni, S., Lin, Z., Dasgupta, A. & Chen, Y., A multidisciplinary design optimization scheme for containerships. 8th AIAA Symposium on Multidisciplinary Analysis and Optimization, Long Beach, CA, 6-8 September 2000.
