Numerical Method in Optimization as a Multi-stage Decision Control System


B. S. GOH
Institute of Mathematical Sciences
University of Malaya
Kuala Lumpur
MALAYSIA
gohoptimum@gmail.com

Abstract: - Numerical methods to solve optimization problems are none other than multi-stage decision control systems. Furthermore, such a method is intrinsically a two-phase system. When the local search region does not contain the minimum point, iterations should be defined so that the next point is on the boundary of the local search region. These iterations are then approximated. Currently, most numerical methods tend to use quadratic models to construct iterations, with assumptions some of which are analyzed here. In phase II, the local search region contains the minimum point. The Newton method should then be used, and fast convergence is achieved.

Key-Words: - Optimization, Newton Iteration, Steepest Descent, Equality Constraints, Lagrange Multipliers.

1 Introduction
Numerical methods to solve nonlinear optimization problems are none other than dynamic multi-stage decision control systems. We have a sequence of local closed search regions, Z1, Z2, ..., ZN. Ideally, iterations should generate points on the boundary of Z1, Z2, ..., ZN-1, and the Newton iteration or an equivalent should only be used in the final local search region ZN; see [1], [2].

A good model of a numerical method in optimization is the problem of driving a car. We need one set of tactics for driving along the main road. Once we are very close to the final destination, we switch to tactics for parking. Numerical methods in optimization are essentially two-phase methods. Two key questions are: how to construct an iteration when the current local search region does not contain the solution, and how to use the Newton method when it does. The latter requires some signals that the current point is near the solution.

The most important value of formulating a numerical method for an optimization problem as a multi-stage decision control system is that it can simplify the theory of numerical methods in optimization. We shall examine this approach, firstly by an analysis of unconstrained optimization. Secondly, we define how iterations should be constructed for an optimization problem with equality constraints.

2 Unconstrained Optimization
We wish to minimize a function f(x) for all values of the state vector x. We have the control system (iterative equations)

    x(k+1) = x(k) + u(k) = x(k) + α(k) d(k).   (1)

The state vector x(k) defines the current position. The control vector u(k) represents the decision variables which are available at the k-th iteration. It can be expressed in terms of a relative steplength α(k) and a direction vector d(k). Both α(k) and d(k) are decision variables.

Generally, at a point far from the solution and for a given steplength α(k), the best choice of the control direction is to generate the next position on the boundary of the local search region such that the objective function is minimized. Thus, given α(k), we choose d(k) which minimizes

    f[x(k) + α(k) d(k)],   (2)

such that

    ||α(k) d(k)||^2 = α(k)^2 d(k)^T d(k) = Δk^2,   (3)

where Δk is the radius of the local search region. Using a Lagrange multiplier we conclude that the optimal search direction is

    d(k) = −∇f[x(k) + α(k) d(k)] = −∇f[x(k+1)].   (4)

The Lagrange multiplier is

    λ = 1/[2 α(k)].   (5)

Substituting (4) into (3), we deduce that the optimal steplength is

    α(k) = Δk / ||∇f[x(k+1)]|| ≈ Δk / ||∇f[x(k)]||.   (6)

For convenience, let

    μ(k) = 1/α(k) = ||∇f(x(k))|| / Δk.   (7)

This important choice of the Levenberg-Marquardt parameter was first suggested by Kelley in [3]. Subsequently it was used by Fan and Yuan [4], who showed that it is a very effective choice.

We approximate the greatest descent direction (4) and get

    d(k) = −∇f[x(k)] − α(k) ∇^2 f(x) d(k).   (8)

Substituting (7) and (8) into (1), we get the approximate greatest descent iteration, GDI,

    x(k+1) = x(k) − [μ(k) I + ∇^2 f(x)]^(-1) ∇f(x).   (9)

By a backtracking line search, we choose the parameter μ(k) so that the function decreases monotonically. If so, and if the Lyapunov function V(x) = f(x) − f(x*) is properly nested globally, we can conclude that the GDI method is globally convergent [1]. A function is properly nested if its level sets are topologically equivalent to concentric spherical surfaces.

The approximate greatest descent iteration (9) is a special case of the Levenberg-Marquardt formula. It exists everywhere. The steepest descent and Newton iterations are limiting cases of GDI. The literature says that the Levenberg-Marquardt formula approximates the Newton iteration. However, it is more correct to say that, near a solution, the Newton iteration approximates GDI.

Example 2.1. Let

    f(x) = 0.5[(x1 + 1.5)^2 + x2^2] + 5(x1^2 + x2^2 − 1)^2.   (10)

Let the initial point be A = (−0.1, 0.4). We find that the first GDI step jumps over a hill.
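As an illustration, here is a minimal sketch of the iteration (9) applied to the function of Example 2.1 as reconstructed above. It is not the author's implementation; the radius Δk, the backtracking factor and the stopping tolerance are illustrative assumptions.

```python
import numpy as np

def f(x):
    # Objective of Example 2.1 (as reconstructed here).
    return 0.5 * ((x[0] + 1.5)**2 + x[1]**2) + 5.0 * (x[0]**2 + x[1]**2 - 1.0)**2

def grad(x):
    return np.array([x[0] + 1.5, x[1]]) + 20.0 * (x[0]**2 + x[1]**2 - 1.0) * x

def hess(x):
    r = x[0]**2 + x[1]**2 - 1.0
    return (1.0 + 20.0 * r) * np.eye(2) + 40.0 * np.outer(x, x)

def gdi_minimize(x0, delta=0.5, tol=1e-8, max_iter=200):
    """Approximate greatest descent iteration (9): x+ = x - [mu*I + H]^(-1) grad f,
    with mu = ||grad f|| / delta as in (7) and a simple backtracking safeguard on mu."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        mu = np.linalg.norm(g) / delta
        while True:
            step = np.linalg.solve(mu * np.eye(2) + hess(x), g)
            if f(x - step) < f(x):      # accept only a monotone decrease
                x = x - step
                break
            mu *= 4.0                   # otherwise shrink the local search region
    return x

print(gdi_minimize([-0.1, 0.4]))
```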
3 Equality Constraints
Consider the problem:

    Minimize f(x), x in R^n,   (11)

such that

    c(x) = 0, c(x) in R^m, m < n.   (12)

To enable us to understand the new concepts easily, we shall analyse a simple example where we know exactly what needs to be done to construct a good numerical method.

Example 3.1. Consider:

    Minimize f(x) = x1 + x2^2,   (13)

such that

    c(x) = x1^2 + x2^2 − 1 = 0.   (14)

The global minimum point is at M = (−1, 0) and there is a local minimum point at LM = (1, 0).

Again, we have a sequence of local search regions Z1, Z2, ..., ZN. Ideally, iterations should generate points on the boundary of Z1, Z2, ..., ZN-1, and the Newton iteration or an equivalent should only be used in the final local search region ZN. Thus we have a two-phase method: in phase I we construct iterations which generate points on the boundaries of the local search regions, and in phase II we use the Newton method or an equivalent.

Let x* be the global minimum point. We note that the level set {x | f(x) = f(x*)} divides the state space into two subregions, ΩL = {x | f(x) < f(x*)} and ΩU = {x | f(x) > f(x*)}. For a current state x(k) in ΩL, a good iteration must increase the value of the objective function. This challenges the logic of the common practice of minimizing a quadratic model at every point to construct iterations in an optimization problem with equality constraints. There is in fact a logical explanation to this counterexample: when we use a quadratic model, we are in fact solving another optimization problem whose solution may converge to the minimum point of the original problem.

We have a bird's eye view of the best long term optimal trajectories from any point to the global minimum point.

Fig. 1. Bird's eye view of the problem: the constraint set, the optimal level set f(x) = f(x*), and the points M and LM. Points P2 and P4 represent good iterations.

Consider the current state at A = (−6, 1) and a local spherical search region of radius Δ = 0.5. This search region does not intersect the constraint set. Thus, the control system (1), restricted to this local search region and subject to the constraint (14), has no feasible solution.
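This counterexample is easy to verify numerically. The following quick check (a sketch, using only the point A, the radius Δ and the constraint of Example 3.1 stated above) confirms that the distance from A to the unit circle exceeds Δ, so no point of the local search region satisfies (14).

```python
import numpy as np

A = np.array([-6.0, 1.0])       # current state
delta = 0.5                     # radius of the local search region

# Distance from A to the constraint set {x : x1^2 + x2^2 = 1}, a unit circle.
dist_to_circle = abs(np.linalg.norm(A) - 1.0)
print(dist_to_circle)           # about 5.08, which is greater than delta
print(dist_to_circle > delta)   # True: the search region and the constraint set are disjoint
```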

Hence, we cannot define and use Lagrange multipliers of the constraint set at this point. This raises the question of what the basis is for using a quadratic model and Lagrange multipliers at the point A. Again, we shall provide an explanation to this second counterexample later on.

Thus, the important question is: how should we define a good iteration at a point far away from the solution and from the constraint set? To see what needs to be done, we take a bird's eye view of the geometry of the problem in the simple example. The line joining A to the global minimum point provides the best long term iteration from the current point to a point on the boundary of the search region. The neighbouring points on the arc from P2 to P4 are acceptable iterations at the current state. Thus the question is: how do we construct these points?

We define the constraint violation function as

    v(x) = c(x)^T c(x).   (15)

We map the images of f(x) and v(x) as a point traces the boundary of the local search region.

Fig. 2. Image of the boundary of the local search region in the (constraint violation, objective function) space. Iterations which generate points on the arc from P4 to P2 are acceptable.

From an image space analysis we find that we can generate acceptable iterations if we can construct iterations which generate points in the South West and partly in the North West sectors, as in Fig. 2, on the arc from P4 to P2. How can we construct such iterations? It turns out that these logical iterations are surprisingly easy to construct. We should minimize the accessory function

    F(x, σ) = σ1 f(x) + σ2 v(x),   (16)

such that

    ||x − x(k)||^2 = Δ^2.   (17)

From studies of many sample points, e.g. B = (2, 3), we find that the best long term iterations can be on the South West, North West and South East boundaries in the image space, as in Fig. 2.

Use of the accessory function to construct iterations implies that we should abandon the penalty function concept in optimization. In a penalty function we are restricted to positive values of the parameters σ1, σ2. A more serious objection to the penalty function concept is the common expectation that we should use a large penalty parameter when the current point is on or near the constraint set. In fact, the choice of the penalty parameter depends more on whether or not the current local search region contains the minimum point. At a local search region which does not contain the solution, and at the beginning of an iterative process, the accessory parameter σ2 should be small or comparable relative to σ1. This is a way to move away from a local minimum point like LM in Fig. 1. If the constraint violation function exceeds an upper bound, then the accessory parameter σ2 can be increased to prevent divergence.
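The following is a minimal sketch of one phase-I step for Example 3.1: it minimizes the accessory function (16) over the boundary of the local search region (17) by parametrizing the circle of radius Δ around the current point. The sampling strategy, the sample count and the weights σ1, σ2 are illustrative assumptions, not the author's implementation.

```python
import numpy as np

def f(x):  return x[0] + x[1]**2                  # objective (13)
def c(x):  return x[0]**2 + x[1]**2 - 1.0         # constraint (14)
def v(x):  return c(x)**2                         # constraint violation (15)

def phase1_step(xk, delta, sigma1=1.0, sigma2=0.1, n=720):
    """Minimize the accessory function F = sigma1*f + sigma2*v on the
    boundary ||x - xk|| = delta by sampling the circle (illustrative only)."""
    angles = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    best, best_val = None, np.inf
    for t in angles:
        x = xk + delta * np.array([np.cos(t), np.sin(t)])
        val = sigma1 * f(x) + sigma2 * v(x)
        if val < best_val:
            best, best_val = x, val
    return best

xk = np.array([-6.0, 1.0])
print(phase1_step(xk, delta=0.5))   # next point, on the boundary of the local search region
```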
Next, we consider what to do when a local search region contains the minimum point. Firstly, we need signals that the minimum point is inside the current local search region, so we need some criteria. Let ξ be a small parameter, say of value 0.1. A constraint c_r(x) is said to be ξ-active if |c_r(x)| < ξ. The first criterion signalling that the current local search region contains the minimum point is that all the constraints must be ξ-active.

When this happens we need to define the best initial values for the Lagrange multipliers. At the k-th iteration, these are defined by

    p(x) = [J(x) J^T(x)]^(-1) J(x) ∇f(x).   (18)

Here, J(x) is the Jacobian of the constraints. The Lagrangian is

    L(x, p) = f(x) − p^T c(x).   (19)

The Lyapunov/merit function is

    V(x, p) = (1/2)(||∇x L||^2 + ||∇p L||^2).   (20)

Let γ be a relatively small positive number. If V(x, p) < γ, then the second criterion, which confirms that the current point is close to the minimum point, is satisfied. If these two criteria are satisfied, then we have signals that the minimum point is inside the current local search region.
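A small sketch of these two switching criteria for Example 3.1 follows: all constraints must be ξ-active, and the merit function (20), evaluated with the least-squares multiplier estimate (18), must be below γ. The helper names and the tolerance values ξ and γ used here are illustrative assumptions.

```python
import numpy as np

def grad_f(x): return np.array([1.0, 2.0 * x[1]])             # gradient of (13)
def c(x):      return np.array([x[0]**2 + x[1]**2 - 1.0])      # constraint (14)
def jac_c(x):  return np.array([[2.0 * x[0], 2.0 * x[1]]])     # Jacobian J(x)

def multiplier_estimate(x):
    """Least-squares Lagrange multiplier estimate (18)."""
    J = jac_c(x)
    return np.linalg.solve(J @ J.T, J @ grad_f(x))

def merit(x, p):
    """Lyapunov/merit function (20) for the Lagrangian (19)."""
    grad_x_L = grad_f(x) - jac_c(x).T @ p
    grad_p_L = -c(x)
    return 0.5 * (grad_x_L @ grad_x_L + grad_p_L @ grad_p_L)

def near_minimum(x, xi=0.1, gamma=0.5):
    """Both criteria: all constraints xi-active and merit value below gamma."""
    if not np.all(np.abs(c(x)) < xi):
        return False
    return merit(x, multiplier_estimate(x)) < gamma

print(near_minimum(np.array([-6.0, 1.0])))     # False: far from the solution
print(near_minimum(np.array([-0.99, 0.05])))   # True: near the global minimum M
```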

The Newton method operates in the (x, p)-space. The iterative equations are:

    x(k+1) = x(k) + α(k) u(k),   (21)
    p(k+1) = p(k) + α(k) w(k).   (22)

The Newton method sets out to solve the nonlinear equations

    ∇x L(x, p) = ∇f(x) − J^T(x) p = 0,   (23)
    ∇p L(x, p) = −c(x) = 0.   (24)

The Lyapunov/merit function (20) is used in a line search for the steplength α(k) in (21) and (22).

Example 3.2. In phase I of problem (13) and (14), we minimize the accessory function (16) with σ1 = 1, σ2 = 0.1. Furthermore, we treat x as unconstrained, because (17) is none other than the boundary of the local search region in an unconstrained problem. This defines an unconstrained optimization problem with a global minimum point MA on the x1-axis. With suitable choices of parameters, we determine that the global minimum point of problem (13) and (14) is inside a local search region with centre at MA. Thus, we switch to phase II, use the Newton method (18) to (24), and achieve rapid convergence. Hence, this two-phase method converges globally to (−1, 0). This is surprising, given that the constrained optimization problem has a local minimum at LM = (1, 0) and global maximum points at (0.5, 0.866) and (0.5, −0.866). A numerical method which uses a quadratic model to construct iterations would not provide the solution (−1, 0) globally: if the initial point is at the local minimum (1, 0), it will remain there.

4 Surrogate Optimization Problem
In a numerical method to solve an optimization problem with equality constraints, it is not possible to use Lagrange multipliers immediately most of the time. This is because, most of the time, the local search region does not intersect the constraint set and thus there is no feasible iteration. Hence, in an optimization problem with equality constraints, we have to define and use another associated optimization problem in order to use Lagrange multipliers and to construct iterations. Clarification of this issue helps us understand more deeply the roles of methods such as CDT [5] and Vardi [6] for a linear quadratic model, which are used to construct iterations.

A fundamental concept in optimization is that the objective function and the feasible set of a problem must be clearly defined. Changes to the objective function or the feasible set create a new optimization problem. The challenge then is to determine how the new optimization problem is related to the original optimization problem and to demonstrate the usefulness of their relationship.

By definition, we get a surrogate optimization problem if the feasible set of an optimization problem is altered in any way. Suppose we start with a constrained optimization problem, which we shall designate as problem O, the original problem. We may not be able to construct iterations for problem O. We then have to change the constraint set on a temporary basis, with the use of parameters. As these parameters tend to some limits, they can steer the current point to a position where iterations can be defined in problem O. We shall now explain how this is done.

Fig. 3. Feasible shifted substitute constraints for two values of the parameter theta; with theta = 1.0 the shifted substitute constraint passes through the current point.

At a point x(k) we call the new constraints

    c(x) − θ c[x(k)] = 0   (25)

shifted substitute constraints with parameter θ. The current point x(k) is a known vector and thus it should be treated as a constant vector in the construction of an iteration. The parameter θ is positive and it should be chosen in the range 0 ≤ θ ≤ 1 so that the intersection of the current local search region and the shifted substitute constraint set is not empty. With θ = 1, the shifted substitute constraints pass through the current point x(k).
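A brief sketch of how a shifted substitute constraint (25) restores feasibility at A = (−6, 1) in Example 3.1: for θ = 1 the substitute constraint passes through A, so its intersection with the local search region is non-empty, while for θ = 0 it reduces to the original constraint (14), which the region does not meet. The function names and the closed-form intersection test below are illustrative and specific to this example.

```python
import numpy as np

def c(x):
    return x[0]**2 + x[1]**2 - 1.0         # original constraint (14)

def shifted_c(x, xk, theta):
    """Shifted substitute constraint (25): c(x) - theta*c(xk) = 0."""
    return c(x) - theta * c(xk)

def intersects_region(xk, delta, theta):
    """Check whether the shifted constraint set meets the ball ||x - xk|| <= delta.
    Here the shifted set is the circle x1^2 + x2^2 = 1 + theta*c(xk)."""
    rhs = 1.0 + theta * c(xk)
    if rhs < 0.0:
        return False
    r = np.sqrt(rhs)                        # radius of the shifted substitute circle
    return abs(np.linalg.norm(xk) - r) <= delta

A = np.array([-6.0, 1.0])
print(intersects_region(A, 0.5, theta=1.0))   # True: passes through A
print(intersects_region(A, 0.5, theta=0.0))   # False: the original constraint is too far away
```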
From Fig. 3 we find that, at the point A = (−6, 1), we can only define Lagrange multipliers for (25) at the current point if θ is close to one. It is undesirable to use parameter values greater than one. For smaller values of the parameter there is no feasible solution, and then the Lagrange multipliers cannot be used. As the current point moves closer to the constraints, the lower limit of the parameter can be reduced, until zero is attained. Thus, in any numerical scheme we should initially choose the parameter θ to be close to one and reduce it monotonically as the iterative method progresses.
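One way to realize this schedule is sketched below: at each iteration try to reduce θ by a fixed factor, but keep the reduction only while the shifted substitute constraint (25) still meets the current local search region (using the intersects_region helper sketched above). The reduction factor and the sample point are illustrative assumptions.

```python
def update_theta(theta, xk, delta, shrink=0.7):
    """Reduce theta monotonically, but only while the shifted substitute
    constraint still meets the local search region (illustrative schedule)."""
    candidate = shrink * theta
    if intersects_region(xk, delta, candidate):
        return candidate
    return theta   # keep the previous value; feasibility would be lost otherwise

# Example: near the constraint set, theta can be driven towards zero.
theta = 1.0
x_near = np.array([-1.2, 0.1])
for _ in range(10):
    theta = update_theta(theta, x_near, delta=0.5)
print(theta)
```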

Note that larger values of this parameter ensure feasibility but they generate poorer iterations. In this nonlinear form of the CDT method there is a logical rationale for the choice of the parameter θ: it should be chosen close to the value one at the start of an iterative process, and it is then monotonically decreased as the iterative process proceeds. This is because we can expect good iterations to move the current point closer and closer to the original constraint equations. Furthermore, a smaller value of this parameter provides better iterations, but too small a value may not generate feasible iterations (Fig. 4). This way of prescribing the parameter θ arises naturally in the nonlinear analysis above, but it is not obvious how to do the same in a CDT method applied to a linear quadratic model. Nocedal and Wright [7], in their classic book on page 555, noted the difficulties in choosing the equivalent parameter in the CDT method for a linear quadratic model, which is normally used to construct iterations.

Fig. 4. Two feasible iterations Q5 and Q4 on a feasible shifted substitute constraint, together with a non-feasible shifted substitute constraint, the original constraint (theta = 0), the optimal level set f(x) = f(x*), and the points M and LM. The usable values of theta also depend on the position of the current point A.

Fig. 4 provides a global picture of what happens when we replace the equality constraint by a shifted substitute equality constraint which now intersects the local search region centred at A. In this example, two feasible iterations can be constructed for a suitable value of θ. The iteration Q5 maximizes the objective function in this surrogate optimization problem, while the iteration Q4 minimizes it. It is interesting to note that both iterations are acceptable. In fact, the maximizing iteration Q5 is the better iteration, as it provides a value closer to the optimal value of the objective function. This means that at such a point, far from the original equality constraint, the need to minimize or maximize the original objective function may not be critical. In this example, the choice of the value of the parameter θ is more important in the construction of good iterations.

In the two-phase method for an optimization problem with equality constraints in Section 3, we also need to use a surrogate problem. We ignore the constraint set altogether in the construction of an iteration and construct iterations as for an unconstrained optimization problem. This is thus a surrogate optimization problem, as we have changed the feasible set to construct the iterations.

5 Proxy Optimization Problem
A new optimization problem is called a proxy optimization problem, and designated as problem P, if we use a new objective function instead of the objective function of the original optimization problem O. Proxy optimization problems are in fact frequently used in the existing literature, through the use of various merit functions in place of the original objective function. But the relationship between the original optimization problem and a proxy optimization problem is usually not explicitly defined.

Example 5.1. Consider the ordinary differential equation steepest descent method to compute the minimum point of the unconstrained function

    f(x) = x1^2 / (1 + x1^2) + x2^2.   (33)

This function does not have properly nested level sets, because a level set f(x) = K > 1 is not a closed surface.
Thus, we cannot use it to construct a Lyapunov function to establish global convergence of a numerical method to compute its minimum. We can, however, use the steepest descent dynamics of (33) and replace the objective function by

    V(x) = x1^2 + x2^2.   (34)

This new function has properly nested level sets, globally. We have a proxy optimization problem. Next, we seek the minimum point of (34) using the steepest descent trajectories of (33). We have a globally convergent method for (34) because the level sets of (34) are properly nested globally. Furthermore, the function (34) is strictly monotonically decreasing along the steepest descent trajectories of (33). Both problems have the same minimum point. Thus, the proxy problem P in (34) enables us to conclude that the steepest descent trajectories of (33) are globally convergent in the original optimization problem.
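A minimal numerical sketch of Example 5.1, assuming a simple forward-Euler integration of the steepest descent dynamics of (33): along the trajectory the proxy function (34) decreases monotonically even though (33) itself is not properly nested. The step size, step count and starting point are illustrative choices.

```python
import numpy as np

def grad_f(x):
    # Gradient of (33): f(x) = x1^2/(1 + x1^2) + x2^2.
    return np.array([2.0 * x[0] / (1.0 + x[0]**2)**2, 2.0 * x[1]])

def V(x):
    # Proxy Lyapunov function (34).
    return x[0]**2 + x[1]**2

x = np.array([5.0, 2.0])
values = [V(x)]
for _ in range(5000):
    x = x - 0.05 * grad_f(x)      # forward-Euler steepest descent step on (33)
    values.append(V(x))

print(all(b < a for a, b in zip(values, values[1:])))   # True: (34) decreases monotonically
print(x)                                                 # approaches the common minimum (0, 0)
```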

The important thing to note is that we have a new optimization problem whenever we change the objective function. The challenge then is to define carefully the relationships between the two optimization problems, O and P.

In our proposed two-phase method to solve an optimization problem with equality constraints, we also use a proxy optimization problem, (16) and (17), to construct an iteration. The new objective function is the accessory function (16), a combination of the objective function and the constraint violation function. We cannot use the original objective function on its own to construct feasible iterations in Example 3.1. We pointed out earlier, as a counterexample, that at a point like A = (−6, 1) it is not logical to try to minimize the objective function, because good iterations must increase its value: the optimal value of the objective function is higher than its value at the current point. Thus, the use of a quadratic model and its minimization to construct an iteration everywhere in an optimization problem with equality constraints can only be justified because a proxy optimization problem is used to construct the iteration. The use of the Lyapunov/merit function (20) in phase II with the Newton method is another example of the use of a proxy optimization problem. From experience, the augmented Lagrangian function has been found to be a good replacement for the original objective function in optimization problems with equality constraints [8]. This is also an example of the use of a proxy optimization problem.

Finally, we note that in practice we often use a new problem which is both a P and an S problem to solve a constrained O problem with equality constraints. It is important to understand the changes in the transition from the original problem to a surrogate and/or proxy problem. In a P and S problem we seek to construct iterations for the O problem by minimizing locally a new objective function subject to new constraints. Then the use of a quadratic model and its minimization at a point far away from the solution may have a logical basis in an optimization problem with equality constraints. As pointed out earlier, at a point like A in Example 3.1 a good iteration must increase the value of the objective function of the O problem, and thus there is no immediate logic in minimizing a quadratic model at such a point to construct an iteration.

6 Conclusion
New concepts of a surrogate optimization problem and a proxy optimization problem are defined. These new concepts are needed to understand better the assumptions behind the use of a quadratic model and Lagrange multipliers to construct an iteration at a point far away from the solution in an optimization problem with equality constraints. Most of the time, at a local search region, it is not possible to define and use Lagrange multipliers directly. Generally, the bounded and closed local search region at a current point does not intersect the equality constraint set. Thus, at such a current point, there is no feasible iteration in a numerical method for the original optimization problem. We conclude that, to construct an iteration in an optimization problem with equality constraints, there is generally no choice but to use a surrogate and/or a proxy optimization problem.

References:
[1] Goh, B. S., Greatest descent algorithms in unconstrained optimization, J. Optim. Theory Appl., Vol. 142, 2009.
[2] Goh, B. S., Greatest descent methods in optimization with equality constraints, J. Optim. Theory Appl., Vol. 148, 2011.
[3] Kelley, C. T., Iterative Methods for Optimization, SIAM, Philadelphia, 1999.
[4] Fan, J. Y. and Yuan, Y. X., On the quadratic convergence of the Levenberg-Marquardt method without nonsingularity assumption, Computing, Vol. 74, 2005.
[5] Celis, M. R., Dennis, J. E. and Tapia, R. A., A trust-region strategy for nonlinear equality constrained optimization, in Numerical Optimization 1984 (P. Boggs, R. Byrd and R. Schnabel, eds.), SIAM, Philadelphia.
[6] Vardi, A., A trust region algorithm for equality constrained minimization: Convergence properties and implementation, SIAM J. Numer. Anal., Vol. 22, 1985.
[7] Nocedal, J. and Wright, S. J., Numerical Optimization, Springer-Verlag, New York, 2006.
[8] Bertsekas, D. P., Constrained Optimization and Lagrange Multiplier Methods, Academic Press, New York, 1982.
