Numerical Method in Optimization as a Multi-stage Decision Control System

B.S. GOH
Institute of Mathematical Sciences, University of Malaya, 50603 Kuala Lumpur, MALAYSIA
gohoptimum@gmail.com

Abstract: Numerical methods to solve optimization problems are none other than multi-stage decision control systems. Furthermore, such a method is intrinsically a two-phase system. When the local search region does not contain the minimum point, iterations should be defined so that the next point is on the boundary of the local search region; these boundary iterations are then approximated. Currently, most numerical methods use quadratic models to construct iterations, under assumptions some of which are analyzed here. In phase II the local search region contains the minimum point; the Newton method should then be used, and fast convergence is achieved.

Key-Words: Optimization, Newton Iteration, Steepest Descent, Equality Constraints, Lagrange Multipliers.

1 Introduction

Numerical methods to solve a nonlinear optimization problem are none other than dynamic multi-stage decision control systems. We have a sequence of closed local search regions Z_1, Z_2, ..., Z_N. Ideally, iterations should generate points on the boundaries of Z_1, Z_2, ..., Z_{N-1}, and the Newton iteration or an equivalent should only be used in the final local search region Z_N; see [1], [2].

A good model of a numerical method in optimization is the problem of driving a car. We need one set of tactics for driving along the main road; once we are very close to the final destination, we switch to tactics for parking. Numerical methods in optimization are essentially two-phase methods. Two key questions are how to construct an iteration when the current local search region does not contain the solution, and how to use the Newton method when it does. The latter requires some signals that the current point is near the solution.

The most important value of formulating a numerical method in optimization as a multi-stage decision control system is that it can simplify the theory of numerical methods in optimization. We shall examine this approach, first through an analysis of unconstrained optimization, and then by defining how iterations should be constructed for an optimization problem with equality constraints.

2 Unconstrained Optimization

We wish to minimize a function f(x) over all values of the state vector x. We have the control system (iterative equations)

    x(k+1) = x(k) + u(k) = x(k) + \alpha(k) d(k).    (1)

The state vector x(k) defines the current position. The control vector u(k) represents the decision variables available at the k-th iteration; it can be expressed in terms of a relative steplength \alpha(k) and a direction vector d(k). Both \alpha(k) and d(k) are decision variables.

Generally, at a point far from the solution and for a given steplength \alpha(k), the best choice of the control direction is the one that places the next point on the boundary of the local search region while minimizing the objective function. Thus, given \alpha(k), we choose d(k) to minimize

    f[x(k) + \alpha(k) d(k)],    (2)

such that

    \|\alpha(k) d(k)\| = \alpha(k) \sqrt{d^T(k) d(k)} = R_k,    (3)

where R_k is the radius of the local search region. Using a Lagrange multiplier, we conclude that the optimal search direction is

    d(k) = -\nabla f[x(k) + \alpha(k) d(k)] = -\nabla f[x(k+1)].    (4)

The Lagrange multiplier is

    \lambda = 1/[2\alpha(k)].    (5)

Substituting (4) into (3), we deduce that the optimal steplength is

    \alpha(k) = R_k / \|\nabla f[x(k+1)]\| \approx R_k / \|\nabla f[x(k)]\|.    (6)

For convenience, let

    \mu(k) = 1/\alpha(k) = \|\nabla f(x(k))\| / R_k.    (7)

This important choice of the Levenberg-Marquardt parameter was first suggested by Kelley in [3]. Subsequently it was used by Fan and Yuan [4], who showed that it is a very effective choice.

We approximate the greatest descent direction (4) and get

    d(k) = -\nabla f[x(k)] - \alpha(k) \nabla^2 f(x) d(k).    (8)

Substituting (7) and (8) into (1), we get the approximate greatest descent iteration, GDI,

    x(k+1) = x(k) - [\mu(k) I + \nabla^2 f(x)]^{-1} \nabla f(x).    (9)

By a backtracking line search, we choose the parameter \mu(k) so that the function decreases monotonically. If so, and if the Lyapunov function V(x) = f(x) - f(x*) is properly nested globally, we can conclude that the GDI method is globally convergent [1]. A function is properly nested if its level sets are topologically equivalent to concentric spherical surfaces.

The approximate greatest descent iteration (9) is a special case of the Levenberg-Marquardt formula, and it exists everywhere. The steepest descent and Newton iterations are limiting cases of GDI. The literature says that the Levenberg-Marquardt formula approximates the Newton iteration; however, it is more correct to say that near a solution the Newton iteration approximates GDI.

Example 2.1. Let

    f(x) = 0.5[(x_1 + 1.5)^2 + x_2^2] + 5(x_1^2 + x_2^2 - 1)^2.    (10)

Let the initial point be A = (-0.1, 0.4). We find that the first GDI step jumps over a hill.
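As a concrete illustration, the following minimal Python sketch implements iteration (9) with the choice (7) of \mu(k) and a backtracking safeguard, applied to the reconstructed objective (10) of Example 2.1. The search radius R and the iteration count are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Approximate greatest descent iteration (9) with mu(k) = ||grad f||/R from (7),
# applied to the objective (10) of Example 2.1.

def f(x):
    return 0.5*((x[0] + 1.5)**2 + x[1]**2) + 5.0*(x[0]**2 + x[1]**2 - 1.0)**2

def grad(x):
    r = x[0]**2 + x[1]**2 - 1.0
    return np.array([x[0] + 1.5 + 20.0*r*x[0], x[1] + 20.0*r*x[1]])

def hess(x):
    r = x[0]**2 + x[1]**2 - 1.0
    return np.array([[1.0 + 20.0*r + 40.0*x[0]**2, 40.0*x[0]*x[1]],
                     [40.0*x[0]*x[1],              1.0 + 20.0*r + 40.0*x[1]**2]])

def gdi_step(x, R):
    """One step of (9); mu(k) is doubled (backtracking) until f decreases."""
    g, H = grad(x), hess(x)
    mu = np.linalg.norm(g) / R          # the choice (7)
    for _ in range(60):
        x_new = x - np.linalg.solve(mu*np.eye(2) + H, g)
        if f(x_new) < f(x):
            return x_new
        mu *= 2.0
    return x

x = np.array([-0.1, 0.4])               # initial point of Example 2.1
for k in range(40):
    x = gdi_step(x, R=0.5)
print(x, f(x))   # should settle near the minimizer, approximately (-1.01, 0)
```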

3 Equality Constraints

Consider the problem:

    Minimize f(x), x \in R^n,    (11)

such that

    c(x) = 0, c(x) \in R^m, m < n.    (12)

To make the new concepts easy to understand, we shall analyze a simple example in which we know exactly what needs to be done to construct a good numerical method, because we have a bird's-eye view of the best long-term optimal trajectories from any point to the global minimum point.

Example 3.1. Consider:

    Minimize f(x) = x_1 + x_2^2,    (13)

such that

    c(x) = x_1^2 + x_2^2 - 1 = 0.    (14)

The global minimum point is at M = (-1, 0), and there is a local minimum point at LM = (1, 0).

Again we have a sequence of local search regions Z_1, Z_2, ..., Z_N. Ideally, iterations should generate points on the boundaries of Z_1, Z_2, ..., Z_{N-1}, and the Newton iteration or an equivalent should only be used in the final local search region Z_N. Thus we have a two-phase method: in phase I we construct iterations which generate points on the boundaries of the local search regions, and in phase II we use the Newton method or an equivalent.

Let x* be the global minimum point. The level set {x | f(x) = f(x*)} divides the state space into two subregions, \Omega_L = {x | f(x) < f(x*)} and \Omega_U = {x | f(x) > f(x*)}. For a current state x(k) \in \Omega_L, a good iteration must increase the value of the objective function. This challenges the logic of the common practice, in an optimization problem with equality constraints, of minimizing a quadratic model at every point to construct iterations. There is in fact a logical explanation of this counterexample: when we use a quadratic model, we are solving another optimization problem, whose solution may converge to the minimum point of the original problem.

Fig. 1. Bird's-eye view of the problem, showing the constraint circle, the optimal level set f(x) = f(x*), the minima M and LM, and candidate iterates P1, P2 and P4 from the current point A. Points P2 and P4 represent good iterations; P2 is best.

Consider the current state at A = (-6, 1) and a local spherical search region of radius R = 0.5. This search region does not intersect the constraint set.
Thus the control system (1), restricted to this local search region and subject to the constraint (14), has no feasible solution. Hence we cannot define and use Lagrange multipliers of the constraint set at this point.
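This infeasibility is easy to check numerically; a short Python verification under the reconstructed data of Example 3.1:

```python
import numpy as np

# The constraint set (14) is the unit circle, so the distance from the
# current point A to it is | ||A|| - 1 |; compare with the search radius.
A = np.array([-6.0, 1.0]); R = 0.5
dist = abs(np.linalg.norm(A) - 1.0)
print(dist, dist > R)   # ~5.08, True: the ball around A holds no feasible point
```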

This raises the question of what the basis is for using a quadratic model and Lagrange multipliers at the point A. Again, we shall provide an explanation of this second counterexample later on.

Thus the important question is how we should define a good iteration at a point far away from both the solution and the constraint set. To see what needs to be done, we take a bird's-eye view of the geometry of the simple example. The straight line joining A to the global minimum point provides the best long-term iteration from the current point to a point on the boundary of the search region; the neighbouring points on the arc from P2 to P4 are also acceptable iterations at the current state. The question, then, is how to construct these points.

We define the constraint violation function as

    v(x) = c^T(x) c(x).    (15)

We map the images of f(x) and v(x) as a point traces the boundary of the local search region.

Fig. 2. Image space of the problem (constraint violation against objective function) as a point traces the boundary of the local search region. Iterations which generate points on the arc P4 to P2 are acceptable.

From an image-space analysis we find that we can generate acceptable iterations if we can construct iterations which generate points in the South-West and partly in the North-West sectors of Fig. 2, on the arc from P4 to P2. It turns out that such iterations are surprisingly easy to construct: we should minimize the accessory function

    F(x, \sigma) = \sigma_1 f(x) + \sigma_2 v(x),    (16)

such that

    \|x - x(k)\| = R_k.    (17)

From studies of many sample points, e.g. B = (2, 3), we find that the best long-term iterations can lie on the South-West, North-West and South-East boundaries of the image space, as in Fig. 2.

Use of the accessory function to construct iterations implies that we should abandon the penalty function concept in optimization. In a penalty function we are restricted to positive values of the parameters \sigma_1, \sigma_2. A more serious objection to the penalty function concept is the common expectation that we should use a large penalty parameter when the current point is on or near the constraint set. In fact, the choice of the parameter depends more on whether or not the current local search region contains the minimum point. In a local search region which does not contain the solution, and at the beginning of an iterative process, the accessory parameter \sigma_2 should be small or comparable relative to \sigma_1; this is a way to move away from a local minimum point like LM in Fig. 1. If the constraint violation function exceeds an upper bound, the accessory parameter \sigma_2 can be increased to prevent divergence.
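Before turning to phase II, here is a minimal Python sketch of a phase-I iteration for Example 3.1: it minimizes the accessory function (16) over the boundary (17) of the local search region by a brute-force angular sweep, which is adequate in two dimensions. The sweep size, the radius R and the sigma values are illustrative choices.

```python
import numpy as np

def f(x): return x[0] + x[1]**2                   # objective (13)
def c(x): return x[0]**2 + x[1]**2 - 1.0          # constraint (14)
def v(x): return c(x)**2                          # constraint violation (15)

def phase1_step(x, R, s1=1.0, s2=0.1, n=3600):
    """Minimize the accessory function (16) over the boundary (17)."""
    t = np.linspace(0.0, 2.0*np.pi, n, endpoint=False)
    cand = x + R*np.stack([np.cos(t), np.sin(t)], axis=1)
    F = s1*(cand[:, 0] + cand[:, 1]**2) \
        + s2*(cand[:, 0]**2 + cand[:, 1]**2 - 1.0)**2   # accessory function (16)
    return cand[np.argmin(F)]

x = np.array([-6.0, 1.0])                          # current point A
for k in range(25):
    x = phase1_step(x, R=0.5)
# Far from the solution a good iterate may raise f while it cuts down v;
# the sequence drifts toward the unconstrained minimizer of (16).
print(x, f(x), v(x))
```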
Next, we consider what to do when a local search region contains the minimum point. First, we need signals that the minimum point is inside the current local search region, and criteria to detect this. Let \xi be a small parameter, say of value 0.1. A constraint c_r(x) is said to be \xi-active if |c_r(x)| < \xi. The first criterion signalling that the current local search region contains the minimum point is that all the constraints are \xi-active.

When this happens, we need to define the best initial values of the Lagrange multipliers. At the k-th iteration these are defined by

    p(x) = [J(x) J^T(x)]^{-1} J(x) \nabla f(x).    (18)

Here J(x) is the Jacobian of the constraints. The Lagrangian is

    L(x, p) = f(x) - p^T c(x).    (19)

The Lyapunov/merit function is

    V(x, p) = (1/2)(\|\nabla_x L\|^2 + \|\nabla_p L\|^2).    (20)

Let \gamma be a relatively small positive number, say 2. If V(x, p) < \gamma = 2, then the second criterion, which confirms that the current point is close to the minimum point, is satisfied. If these two criteria are satisfied, we have signals that the minimum point is inside the current local search region.

The Newton method operates in the (x, p)-space. The iterative equations are:

    x(k+1) = x(k) + \alpha(k) u(k),    (21)
    p(k+1) = p(k) + \alpha(k) w(k).    (22)

The Newton method sets out to solve the nonlinear equations

    \nabla_x L(x, p) = \nabla f(x) - J^T(x) p = 0,    (23)
    \nabla_p L(x, p) = -c(x) = 0.    (24)

The Lyapunov/merit function (20) is used in a line search for the steplength \alpha(k) in (21) and (22).
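The following minimal Python sketch, again under the reconstructed data of Example 3.1, carries out phase II: Newton's method on the equations (23) and (24) in (x, p)-space, with the multipliers initialized by (18) and the merit function (20) used for backtracking on the steplength \alpha(k). The starting point, iteration count and tolerance are illustrative assumptions.

```python
import numpy as np

def gradf(x): return np.array([1.0, 2.0*x[1]])            # gradient of (13)
def c(x):     return np.array([x[0]**2 + x[1]**2 - 1.0])   # constraint (14)
def J(x):     return np.array([[2.0*x[0], 2.0*x[1]]])      # Jacobian of c

def kkt(x, p):
    """Residual of (23)-(24): [grad f - J^T p ; -c]."""
    return np.concatenate([gradf(x) - J(x).T @ p, -c(x)])

def merit(x, p):                                            # merit function (20)
    r = kkt(x, p)
    return 0.5 * (r @ r)

def kkt_jacobian(x, p):
    """Jacobian of the residual w.r.t. (x, p) for this example."""
    H = np.array([[-2.0*p[0], 0.0],                         # Hess f - p * Hess c
                  [0.0, 2.0 - 2.0*p[0]]])
    return np.vstack([np.hstack([H, -J(x).T]),
                      np.hstack([-J(x), np.zeros((1, 1))])])

x = np.array([-0.95, 0.1])                                  # inside phase II
p = np.linalg.solve(J(x) @ J(x).T, J(x) @ gradf(x))         # initial multipliers (18)
for k in range(20):
    step = np.linalg.solve(kkt_jacobian(x, p), -kkt(x, p))
    a = 1.0                                                 # steplength alpha(k)
    while merit(x + a*step[:2], p + a*step[2:]) > merit(x, p) and a > 1e-8:
        a *= 0.5                                            # backtrack on (20)
    x, p = x + a*step[:2], p + a*step[2:]
print(x, p)   # should converge rapidly to M = (-1, 0) with p = -0.5
```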

Example 3.2. In phase I of problem (13)-(14), we minimize the accessory function (16) with \sigma_1 = 1, \sigma_2 = 0.1. Furthermore, we treat x as unconstrained, because (17) is none other than the boundary of the local search region in an unconstrained problem. This defines an unconstrained optimization problem with a global minimum point at A_M = (-1.9044, 0). With suitable choices of the parameters, we determine that the global minimum point of problem (13)-(14) is inside a local search region with centre at A_M. We then switch to phase II, use the Newton method (18)-(24), and achieve rapid convergence. Hence this two-phase method converges globally to (-1, 0). This is remarkable, given that the constrained optimization problem has a local minimum at LM = (1, 0) and global maximum points at (0.5, 0.866) and (0.5, -0.866). Any numerical method which uses a quadratic model to construct iterations would not provide the solution (-1, 0) globally: if the initial point is at the local minimum (1, 0), it will remain there.

4 Surrogate Optimization Problem

In a numerical method for an optimization problem with equality constraints, it is usually not possible to use Lagrange multipliers immediately, because most of the time the local search region does not intersect the constraint set and thus there is no feasible iteration. Hence, in an optimization problem with equality constraints, we have to define and use another, associated optimization problem in order to use Lagrange multipliers and to construct iterations. Clarifying this issue helps us understand more deeply the roles of methods such as CDT [5] and Vardi [6] for a linear quadratic model, which are used to construct iterations.

A fundamental concept in optimization is that the objective function and the feasible set of a problem must be clearly defined. Changes to the objective function or the feasible set create a new optimization problem. The challenge then is to determine how the new optimization problem is related to the original one and to demonstrate the usefulness of their relationship.

By definition, we get a surrogate optimization problem if the feasible set of an optimization problem is altered in any way. Suppose we start with a constrained optimization problem, which we designate as problem O, the original problem. We may not be able to construct iterations for problem O. We then have to change the constraint set on a temporary basis, somehow, with the use of parameters. As these parameters tend to some limits, they can steer the current point to a position where iterations can be defined for problem O. We now explain how this is done.

Fig. 3. Feasible shifted substitute constraints near the point A, with candidate iterates Q2 and Q3: the shifted substitute constraint with \theta = 1.0 passes through A, and the one with \theta = 0.8380 is the smallest that still meets the local search region.

At a point x(k) we call the new constraints

    c(x) - \theta c[x(k)] = 0    (25)

shifted substitute constraints with parameter \theta. The current point x(k) is a known vector and should therefore be treated as a constant vector in the construction of an iteration. The parameter \theta is positive, and it should be chosen in the range 0 \le \theta \le 1 so that the intersection of the current local search region and the shifted substitute constraint set is not empty.
With \theta = 1, the shifted substitute constraint passes through the current point x(k). From Fig. 3 we find that at the point A = (-6, 1) we can only define Lagrange multipliers for (25) at the current point if 0.8380 \le \theta \le 1. It is undesirable to use parameter values greater than one. For smaller values of the parameter there is no feasible solution, and then Lagrange multipliers cannot be used. As the current point moves closer to the constraints, the lower limit of the parameter can be reduced, until zero is attained. Thus, in any numerical scheme we should initially choose the parameter \theta close to one and reduce it monotonically as the iterative method progresses.
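The quoted lower limit 0.8380 can be reproduced directly. In Example 3.1 the shifted substitute constraint (25) at A is the circle x_1^2 + x_2^2 = 1 + \theta c(A), which meets the spherical search region exactly when its radius lies between \|A\| - R and \|A\| + R. A short Python check (the variable names are illustrative):

```python
import numpy as np

# Feasible range of theta for the shifted substitute constraint (25) at A.
# The shifted set is a circle of radius sqrt(1 + theta*c(A)) about the origin;
# it intersects the ball of radius R about A iff that radius lies in
# [||A|| - R, ||A|| + R].
A = np.array([-6.0, 1.0]); R = 0.5
cA = A @ A - 1.0                          # c(A) = 36
norm_A = np.linalg.norm(A)                # ~6.083
theta_lo = ((norm_A - R)**2 - 1.0) / cA   # ~0.8380, the quoted lower limit
theta_hi = ((norm_A + R)**2 - 1.0) / cA   # ~1.176, but only theta <= 1 is used
print(theta_lo, theta_hi)
```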

Note that larger values of this parameter ensure feasibility, but they generate poorer iterations. In this nonlinear form of the CDT method there is a logical rationale for the choice of the parameter \theta: it should be chosen close to one at the start of an iterative process and then monotonically decreased as the process proceeds, because good iterations can be expected to move the current point closer and closer to the original constraint set. Furthermore, a smaller value of this parameter provides better iterations, but too small a value may not generate feasible iterations (Fig. 4). This way of prescribing the parameter \theta arises naturally in the nonlinear analysis above, but it is not obvious how to carry out the same procedure in the CDT method applied to a linear quadratic model, which is normally used to construct iterations. Nocedal and Wright [7], in their classic book (page 555), noted the difficulties of choosing the equivalent parameter in the CDT method for a linear quadratic model.

Fig. 4. Two feasible iterations, Q5 and Q4, on a feasible shifted substitute constraint (\theta = 0.8714); the shifted substitute constraint with \theta = 0.0833 is non-feasible, and \theta = 0 recovers the original constraint. Feasible values of \theta also depend on the position of the current point A.

Fig. 4 provides a global picture of what happens when we replace the equality constraint by a shifted substitute equality constraint which intersects the local search region centred at A. In this example, two feasible iterations can be constructed when \theta = 0.8714. The iteration Q5 maximizes the objective function of this surrogate optimization problem, while the iteration Q4 minimizes it. It is interesting that both iterations are acceptable. In fact, the maximizing iteration Q5 is the better iteration, as it provides a value closer to the optimal value of the objective function. This means that at such a point, far from the original equality constraint, the need to minimize or maximize the original objective function may not be critical; here, the choice of the value of the parameter \theta is more important in the construction of good iterations.

In the two-phase method of Section 3 for an optimization problem with equality constraints, we also use a surrogate problem: we ignore the constraint set altogether and construct iterations as in an unconstrained optimization problem. This is a surrogate optimization problem, since we have changed the feasible set to construct the iterations.
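The two iterations of Fig. 4 can be recomputed with a short sweep. This Python sketch, under the reconstructed Example 3.1 data with \theta = 0.8714, finds the feasible arc of the shifted substitute constraint inside the local search region and reports its minimizing and maximizing points; the discretization is an implementation choice.

```python
import numpy as np

# Recompute the two feasible iterations Q4 and Q5 of Fig. 4 for Example 3.1
# at A = (-6, 1) with theta = 0.8714. The feasible set is the arc of the
# shifted substitute constraint (25) inside the local search region.
A = np.array([-6.0, 1.0]); R = 0.5; theta = 0.8714
r = np.sqrt(1.0 + theta*(A @ A - 1.0))            # radius of the shifted circle
t = np.linspace(0.0, 2.0*np.pi, 200000)
pts = r*np.stack([np.cos(t), np.sin(t)], axis=1)
arc = pts[np.linalg.norm(pts - A, axis=1) <= R]   # feasible arc
fvals = arc[:, 0] + arc[:, 1]**2                  # objective (13) on the arc
Q4, Q5 = arc[np.argmin(fvals)], arc[np.argmax(fvals)]
print("Q4:", Q4, fvals.min())                     # minimizing iteration
print("Q5:", Q5, fvals.max())                     # maximizing iteration, whose
                                                  # f value is closer to f(x*) = -1
```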
5 Proxy Optimization Problem

A new optimization problem is called a proxy optimization problem, designated problem P, if we use a new objective function in place of the objective function of the original optimization problem O. Proxy optimization problems are in fact frequently used in the existing literature, through the use of various merit functions in place of the original objective function; but the relationship between the original optimization problem and the proxy optimization problem is usually not explicitly defined.

Example 5.1. Consider the ordinary differential equation steepest descent method for computing the minimum point of the unconstrained function

    f(x) = x_1^2/(1 + x_1^2) + x_2^2.    (33)

This function does not have properly nested level sets, because a level set f(x) = K > 1 is not a closed surface. Thus we cannot use it to construct a Lyapunov function to establish global convergence of a numerical method for computing its minimum. We can, however, use the steepest descent dynamics of (33) and replace the objective function by

    V(x) = x_1^2 + x_2^2.    (34)

This new function has properly nested level sets globally, and we have a proxy optimization problem. We then seek the minimum point of (34) using the steepest descent trajectories of (33). We have a globally convergent method for (34) because the level sets of (34) are properly nested globally, and the function (34) is strictly monotonically decreasing along the steepest descent trajectories of (33). Both problems have the same minimum point. Thus the proxy problem P in (34) enables us to conclude that the steepest descent trajectories of (33) are globally convergent for the original optimization problem.
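The key property of Example 5.1, that the proxy function (34) decreases strictly along the steepest descent flow of (33), can be checked numerically. A minimal Python sketch, with an illustrative Euler step size and starting point:

```python
import numpy as np

def grad_f(x):
    """Gradient of (33): f(x) = x1^2/(1 + x1^2) + x2^2."""
    return np.array([2.0*x[0]/(1.0 + x[0]**2)**2, 2.0*x[1]])

x = np.array([3.0, -2.0])        # arbitrary starting point
h = 0.05                         # Euler step size (illustrative)
V = x @ x                        # proxy Lyapunov function (34)
for k in range(2000):
    x = x - h*grad_f(x)          # steepest descent flow of (33)
    assert x @ x < V             # (34) decreases strictly along the flow
    V = x @ x
print(x)   # drifts to the common minimizer (0, 0) of (33) and (34)
```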

The important point is that we have a new optimization problem whenever we change the objective function. The challenge then is to define carefully the relationships between the two optimization problems O and P.

In our proposed two-phase method for solving an optimization problem with equality constraints, we also use a proxy optimization problem, (16) and (17), to construct an iteration. The new objective function is the accessory function (16), a combination of the objective function and the constraint violation function. We cannot use the original objective function on its own to construct feasible iterations in Example 3.1. We pointed out earlier, as a counterexample, that at a point like A = (-6, 1) it is not logical to try to minimize the objective function, because good iterations must increase its value: the optimal value of the objective function is higher than its value at the current point. Thus, the use of a quadratic model and its minimization to construct an iteration everywhere in an optimization problem with equality constraints can only be justified because a proxy optimization problem is used to construct the iteration.

The use of the Lyapunov/merit function (20) with the Newton method in phase II is an example of the use of a proxy optimization problem. From experience, the augmented Lagrangian function has been found to be a good replacement for the original objective function in optimization problems with equality constraints [8]; this is another example of the use of a proxy optimization problem.

Finally, we note that in practice we often use a new problem which is both a P and an S problem to solve a constrained O problem with equality constraints. It is important to understand the changes in the transition from the original problem to a surrogate and/or proxy problem. In a P and S problem we seek to construct iterations for the O problem by locally minimizing a new objective function subject to new constraints. Then the use of a quadratic model and its minimization at a point far away from the solution may have a logical basis in an optimization problem with equality constraints. As pointed out earlier, at a point like A in Example 3.1 a good iteration must increase the value of the objective function of the O problem, so there is no immediate logic in minimizing a quadratic model at such a point to construct an iteration.

6 Conclusion

The new concepts of a surrogate optimization problem and a proxy optimization problem have been defined. These concepts are needed to understand better the assumptions behind the use of a quadratic model and Lagrange multipliers to construct an iteration at a point far away from the solution in an optimization problem with equality constraints. Most of the time it is not possible to define and use Lagrange multipliers directly in a local search region: generally, the bounded and closed local search region at the current point does not intersect the equality constraint set, so at such a point there is no feasible iteration in a numerical method for the original optimization problem. We conclude that, to construct an iteration in an optimization problem with equality constraints, there is in general no choice but to use a surrogate and/or a proxy optimization problem.

References:
[1] Goh, B. S., Greatest descent algorithms in unconstrained optimization, J. Optim. Theory Appl., Vol. 142, 2009, pp. 275-289.
[2] Goh, B. S., Greatest descent methods in optimization with equality constraints, J. Optim. Theory Appl., Vol. 148, 2011, pp. 505-527.
[3] Kelley, C. T., Iterative Methods for Optimization, SIAM, Philadelphia, 1999.
[4] Fan, J. Y. and Yuan, Y. X., On the quadratic convergence of the Levenberg-Marquardt method without nonsingularity assumption, Computing, Vol. 74, 2005, pp. 23-39.
[5] Celis, M. R., Dennis, J. E. and Tapia, R. A., A trust-region strategy for nonlinear equality constrained optimization, in Numerical Optimization 1984 (P. Boggs, R. Byrd and R. Schnabel, eds.), SIAM, Philadelphia, pp. 71-82.
[6] Vardi, A., A trust region algorithm for equality constrained minimization: convergence properties and implementation, SIAM J. Numer. Anal., Vol. 22, 1985, pp. 575-591.
[7] Nocedal, J. and Wright, S. J., Numerical Optimization, Springer-Verlag, New York, 2006.
[8] Bertsekas, D. P., Constrained Optimization and Lagrange Multiplier Methods, Academic Press, New York, 1982.