Optimization under uncertainty: modeling and solution methods

1 Optimization under uncertainty: modeling and solution methods. Paolo Brandimarte, Dipartimento di Scienze Matematiche, Politecnico di Torino. Lecture 2: Refresher on Optimization Theory and Methods

2 MOTIVATION Review essential concepts in convexity: convex sets and functions; subgradients; extreme points and extreme rays of polyhedra; convex cones. Introduce some essential concepts for solution algorithms: cutting planes; penalty functions; Lagrangian multipliers and KKT conditions. Emphasize the role of duality in: decomposition methods for large-scale problems; primal-dual interior point methods.

3 REFERENCES
S. Boyd, L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004. The book PDF can be downloaded from http://www.stanford.edu/~boyd/cvxbook/
R.J. Vanderbei. Linear Programming: Foundations and Extensions (3rd ed.). Springer, 2008.
M.S. Bazaraa, H.D. Sherali, C.M. Shetty. Nonlinear Programming: Theory and Algorithms (3rd ed.). Wiley, 2006.
B. Gärtner, J. Matousek. Approximation Algorithms and Semidefinite Programming. Springer, 2012.
P. Brandimarte. Numerical Methods in Finance and Economics: A MATLAB-Based Introduction (2nd ed.). Wiley, 2006.
P. Brandimarte. Quantitative Methods: An Introduction for Business Management. Wiley, 2011.

4 OUTLINE 1 BITS OF CONVEX ANALYSIS 2 SOLUTION METHODS 3 DUALITY IN OPTIMIZATION 4 LINEAR PROGRAMMING 5 CONVEX OPTIMIZATION

5 CONVEXITY Convexity is arguably the most important attribute of an optimization problem. Convexity is introduced for sets and then generalized to functions. Relevant convex sets are polyhedra and polytopes, ellipsoids, and convex cones, including polar and dual cones. Convexity plays a key role in duality for linear programming, and more generally in conic programming.

6 CONVEX SETS A set $S \subseteq \mathbb{R}^n$ is convex if $$x, y \in S \;\Rightarrow\; \lambda x + (1-\lambda) y \in S, \quad \forall \lambda \in [0, 1].$$ The concept of convexity can be grasped intuitively by observing that the points of the form $\lambda x + (1-\lambda) y$, where $0 \le \lambda \le 1$, are simply the points on the straight line segment joining x and y. So, a set S is convex if the segment joining any pair of points $x, y \in S$ is contained in S.

7 CONVEX SETS [Figure: three sets $S_1$, $S_2$, $S_3$ in the plane.] $S_1$ is convex, but $S_2$ is not. $S_3$ is a discrete set and it is not convex; this fact has important consequences for discrete optimization problems. It is easy to see that the intersection of convex sets is a convex set. This does not necessarily apply to the union.

8 CONVEX HULLS The convex combination of p points $x^1, x^2, \ldots, x^p \in \mathbb{R}^n$ is defined as $$x = \sum_{i=1}^p \mu_i x^i, \qquad \mu_1, \ldots, \mu_p \ge 0, \quad \sum_{i=1}^p \mu_i = 1.$$ The requirements on the weights $\mu_i$ are that the linear combination is conic (nonnegative weights) and affine (weights summing to one). Note the similarity with the expected value of a discrete random variable. Given a set $S \subseteq \mathbb{R}^n$, the set of points that are convex combinations of points in S is the convex hull of S (denoted by [S]).

9 CONVEX HULLS The convex hull of a generic set S is the smallest convex set containing S; it can also be regarded as the intersection of all the convex sets containing S.

10 POLYHEDRAL SETS Polyhedral sets are essential in describing the feasible set of a Linear Programming problem. A hyperplane $a_i^T x = b_i$, where $b_i \in \mathbb{R}$ and $a_i, x \in \mathbb{R}^n$, divides $\mathbb{R}^n$ into two half-spaces expressed by the linear inequalities $a_i^T x \le b_i$ and $a_i^T x \ge b_i$. A polyhedron $P \subseteq \mathbb{R}^n$ is a set of points satisfying a finite collection of linear inequalities, i.e., $P = \{x \in \mathbb{R}^n \mid Ax \le b\}$, where the matrix A collects the row vectors $a_i^T$. A polyhedron is therefore the intersection of a finite collection of half-spaces and is convex. The feasible set of an LP problem, $S = \{x \in \mathbb{R}^n \mid Ax = b,\ x \ge 0\}$ or $S = \{x \in \mathbb{R}^n \mid Ax \le b\}$, is a polyhedron.

11 POLYHEDRAL SETS Polyhedra may be bounded or not. Sometimes, a bounded polyhedron is called a polytope. There are alternative ways to describe a polyhedron.

12 DESCRIPTION OF POLYHEDRAL SETS Definition. A point x is an extreme point of a polyhedron P if $x \in P$ and it is not possible to express x as $x = \frac{1}{2}x' + \frac{1}{2}x''$ with $x', x'' \in P$ and $x' \ne x''$. A polytope P is the convex hull of its extreme points $x^1, \ldots, x^J$ (a finite set). Any point $x \in P$ can be represented as $$x = \sum_{j=1}^J \lambda_j x^j, \qquad \sum_{j=1}^J \lambda_j = 1, \quad \lambda_j \ge 0.$$ For unbounded polyhedra, we need to define extreme rays. A vector $r \in \mathbb{R}^n$ is called a ray of the polyhedron $P = \{x \in \mathbb{R}^n \mid Ax \le b\}$ if $Ar \le 0$: if $x^0 \in P$, then $y = x^0 + \lambda r \in P$ for all $\lambda \ge 0$. It is easy to adapt the definition to the case $P = \{x \in \mathbb{R}^n \mid Ax = b,\ x \ge 0\}$.

13 DESCRIPTION OF POLYHEDRAL SETS Definition. A ray r of a polyhedron P is called an extreme ray if it cannot be expressed as $r = \frac{1}{2}r^1 + \frac{1}{2}r^2$, where $r^1, r^2$ are rays of P such that $r^1 \ne \lambda r^2$ for any number $\lambda > 0$. A polyhedron P can be described in terms of its extreme rays and points, in the sense that any point $x \in P$ can be expressed by combining extreme points and rays: $$x = \sum_{j=1}^J \lambda_j x^j + \sum_{k=1}^K \mu_k r^k, \qquad \sum_{j=1}^J \lambda_j = 1, \quad \lambda_j, \mu_k \ge 0.$$ So, the feasible set of an LP problem may be characterized in terms of extreme points and rays. Actually, we do not need all of them to spot the optimal solution, and this plays a key role in some decomposition strategies for large-scale problems (Dantzig-Wolfe).

14 CONVEX CONES Another way to express the feasible set of an LP problem with equality constraints Ax = b is obtained by thinking of $A \in \mathbb{R}^{m \times n}$ in terms of its column vectors $a_j \in \mathbb{R}^m$, $j = 1, \ldots, n$: $A = [a_1\ a_2\ \cdots\ a_n]$. Since $x \ge 0$, we are expressing the right-hand side vector b as a linear combination of the columns of A, $$\sum_{j=1}^n a_j x_j = b,$$ with non-negative weights $x_j \ge 0$. Thus, feasibility requires that b be a conic combination of the columns of A.

15 CONVEX CONES A set C is called a cone if for every $x \in C$ and $\lambda \ge 0$ we have $\lambda x \in C$. Cones can be convex or not, closed or not, pointed or not, polyhedral or not. A non-polyhedral cone is the Lorentz (or ice-cream) cone: $$C = \{(x, t) \in \mathbb{R}^{n+1} \mid \|x\|_2 \le t\}.$$ This is also called the second-order cone and is a particular case of a norm cone. It is important in second-order cone programs, which are relevant in robust optimization.

16 CONVEX CONES A trivial example of a cone is the nonnegative orthant $\mathbb{R}^n_+$. The set of right-hand sides b for which an LP problem in standard form is feasible is the conic hull of the columns of the matrix A. We may also regard the feasible set of an LP problem as the sum of a bounded polyhedron (the convex hull of extreme points) and the cone of extreme rays (recession directions). A less trivial cone is the convex cone $S^n_+$ of (symmetric) positive semidefinite matrices. Indeed, if $Q_1, Q_2 \in S^n_+$, i.e., $x^T Q_i x \ge 0$ for any x and i = 1, 2, it is easy to see that $\lambda_1 Q_1 + \lambda_2 Q_2 \in S^n_+$ for any $\lambda_1, \lambda_2 \ge 0$. To streamline notation, we may write $Q \succeq 0$ rather than $Q \in S^n_+$.
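This closure property is easy to verify numerically; a minimal NumPy sketch, with arbitrary example matrices:

```python
import numpy as np

# A quick check that a conic combination of PSD matrices is PSD.
rng = np.random.default_rng(0)
B1 = rng.normal(size=(3, 3))
B2 = rng.normal(size=(3, 3))
Q1, Q2 = B1 @ B1.T, B2 @ B2.T     # any B @ B.T is positive semidefinite

Q = 2.0 * Q1 + 0.5 * Q2           # nonnegative weights
print(np.linalg.eigvalsh(Q))      # all eigenvalues are nonnegative
```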

17 CONVEX CONES The following concepts are also useful in stochastic, conic, and robust optimization. Given a subset $K \subseteq \mathbb{R}^n$, its polar cone is the set $$K^\circ = \{y \in \mathbb{R}^n \mid y^T x \le 0,\ \forall x \in K\}.$$ The polar cone is a (closed) convex cone even if K is not convex. Given a subset $K \subseteq \mathbb{R}^n$, its dual cone is the set $$K^* = \{y \in \mathbb{R}^n \mid y^T x \ge 0,\ \forall x \in K\}.$$ We see that $K^* = -K^\circ$.

18 DUAL CONES: TRIVIAL EXAMPLES 1. The nonnegative orthant $\mathbb{R}^n_+$ is self-dual, i.e., $(\mathbb{R}^n_+)^* = \mathbb{R}^n_+$. In fact, if $y \ge 0$, then we have $y^T x \ge 0$ for all $x \ge 0$, so $\mathbb{R}^n_+ \subseteq (\mathbb{R}^n_+)^*$. However, if we have a component $y_i < 0$, then $y^T e_i < 0$, where $e_i$ is the i-th unit vector. 2. It is also easy to see that $(\mathbb{R}^n)^* = \{0\}$ and $(\{0\})^* = \mathbb{R}^n$.

19 DUAL CONES: POSITIVE SEMIDEFINITE CONE Dual cones can be introduced in a generalized setting by considering inner products. On the set $S^n$ we introduce the inner product using the trace of the matrix product: $$\langle A, B \rangle \equiv \mathrm{tr}(AB) = \sum_{i,j=1}^n a_{ij} b_{ij}.$$ The notation $A \bullet B$ is also used; note that in the general (nonsymmetric) case the inner product should be defined as $\mathrm{tr}(A^T B)$. A notable fact is that the cone $S^n_+$ is self-dual, i.e., $(S^n_+)^* = S^n_+$, since it can be shown that $$\mathrm{tr}(AB) \ge 0 \ \ \forall B \succeq 0 \quad \Longleftrightarrow \quad A \succeq 0.$$

20 DUAL CONES: NORM CONE Given a norm $\|\cdot\|$ on $\mathbb{R}^n$, we have defined the norm cone $K = \{(x, t) \in \mathbb{R}^{n+1} \mid \|x\| \le t\}$. The dual norm is defined as $$\|u\|_* = \sup\{u^T x \mid \|x\| \le 1\}.$$ For instance: the dual of the $\ell_\infty$ norm is the $\ell_1$ norm, $$\sup\{u^T x \mid \|x\|_\infty \le 1\} = \sum_{i=1}^n |u_i| = \|u\|_1;$$ by a similar token, the dual of the $\ell_1$ norm is the $\ell_\infty$ norm; the Euclidean norm $\ell_2$ is self-dual. Then, the dual cone turns out to be $$K^* = \{(u, v) \in \mathbb{R}^{n+1} \mid \|u\|_* \le v\}.$$

21 CONVEX FUNCTIONS A function $f : \mathbb{R}^n \to \mathbb{R}$, defined over a convex set $S \subseteq \mathbb{R}^n$, is a convex function on S if, for any y and z in S and for any $\lambda \in [0, 1]$, we have $$f(\lambda y + (1-\lambda) z) \le \lambda f(y) + (1-\lambda) f(z). \quad (1)$$ [Figure: the chord through $(y, f(y))$ and $(z, f(z))$ lies above the graph of f between y and z.]

22 CONVEX FUNCTIONS In other words, a function is convex if its epigraph, i.e., the region above the function graph, is a convex set. A further link between convex sets and convex functions is that the set $S = \{x \in \mathbb{R}^n \mid g(x) \le 0\}$ is convex if g is a convex function. If condition (1) is satisfied with strict inequality for all $y \ne z$, the function is strictly convex. A function f is concave if $-f$ is convex.

23 CONVEX FUNCTIONS [Figure: three functions, panels (a)-(c).] The first function is convex, whereas the second is not. Note that in the second case we have a local minimum that is not a global one. Indeed, convexity is so relevant in minimization problems because it rules out local minima (in maximization problems, concavity is relevant). The third function is a polyhedral convex function: a convex function need not be differentiable everywhere.

24 CONVEX FUNCTIONS Convexity of functions is preserved by some operations: A linear combination of convex functions $f_i$, $$f(x) = \sum_{i=1}^m \lambda_i f_i(x),$$ is a convex function if $\lambda_i \ge 0$ for every i. The pointwise maximum of convex functions $f_i$, $$f(x) = \max\{f_1(x), f_2(x), \ldots, f_m(x)\},$$ is convex as well. Some function compositions also preserve convexity, such as the composition with an affine mapping: $g(x) = f(Ax + b)$.

25 DIFFERENTIABLE CONVEX FUNCTIONS If f is a differentiable function, it is convex (over S) if and only if $$f(x) \ge f(x^0) + \nabla f(x^0)^T (x - x^0), \qquad \forall x, x^0 \in S. \quad (2)$$ Note that the hyperplane $z = f(x^0) + \nabla f(x^0)^T (x - x^0)$ is the usual tangent hyperplane, i.e., the first-order Taylor expansion of f at $x^0$. For a differentiable function, convexity implies that the first-order approximation at a certain point $x^0$ consistently underestimates the true value of the function at all the other points $x \in S$.

26 SUPPORT HYPERPLANES [Figure: tangent and support hyperplanes lying below the graph of a convex function.] It is easy to see why stationarity at a point $x^*$, i.e., $\nabla f(x^*) = 0$, is a necessary and sufficient condition for optimality in the unconstrained, convex, differentiable case. The concept of a tangent hyperplane applies only to differentiable convex functions, but it can be generalized by the concept of a support hyperplane.

27 SUBGRADIENTS AND SUBDIFFERENTIALS OF CONVEX FUNCTIONS Definition. Given a convex function f and a point $x^0$, the hyperplane (in $\mathbb{R}^{n+1}$) given by $z = f(x^0) + \gamma^T (x - x^0)$, which meets the epigraph of f in $(x^0, f(x^0))$ and lies below it, is called the support hyperplane of f at $x^0$. A support hyperplane at $x^0$ is essentially defined by a vector $\gamma$ such that $$f(x) \ge f(x^0) + \gamma^T (x - x^0), \qquad \forall x \in S. \quad (3)$$ The vector $\gamma$ in inequality (3) plays the same role as the gradient does in inequality (2). If f is differentiable at $x^0$, the support hyperplane is the usual tangent hyperplane and $\gamma = \nabla f(x^0)$. This is why a vector $\gamma$ such that inequality (3) holds is called a subgradient of f at $x^0$.

28 SUBGRADIENTS AND SUBDIFFERENTIALS OF CONVEX FUNCTIONS If f is non-differentiable, the support hyperplane need not be unique, and there is a set of subgradients. The set of subgradients at a point $x^0$ is called the subdifferential of f at $x^0$, and it is denoted by $\partial f(x^0)$. It can be shown that a convex function on a set S is subdifferentiable on the interior of S, i.e., we can always find a subgradient (on the boundary of the set S some difficulties may occur due, e.g., to discontinuities, but we need not be concerned with this technicality in the following). Then, the stationarity condition may be extended to $0 \in \partial f(x^*)$.

29 OVERVIEW The generic constrained problem $\min_{x \in S} f(x)$ is normally stated in more concrete terms of equality and inequality constraints as: $$\min f(x) \quad \text{s.t.} \quad h_i(x) = 0,\ i \in E; \quad g_i(x) \le 0,\ i \in I; \quad x \in X \subseteq \mathbb{R}^n.$$ The condition $x \in X$ may include additional restrictions, such as integrality of some decision variables, which destroy the convexity of S. We have a convex problem if both $f(\cdot)$ and S are convex. As a general rule, convex problems are relatively easy (no trouble with local optima). There is a wide array of methods depending on the nature of the objective function and constraints.

30 OVERVIEW Unconstrained optimization. Penalty function methods. Lagrange multipliers and KKT conditions (continuous case). Duality: decomposition and primal-dual methods.

31 UNCONSTRAINED OPTIMIZATION To solve the unconstrained problem $\min_{x \in \mathbb{R}^n} f(x)$, we may use derivative-based methods such as: Steepest descent, $$x^{(k+1)} = x^{(k)} - \alpha^{(k)} \frac{\nabla f(x^{(k)})}{\|\nabla f(x^{(k)})\|},$$ for some step-size $\alpha^{(k)}$ (difficulties in convergence, zig-zagging). The Newton method, relying on a second-order local model of the objective to find the displacement $\delta$: $$f(x^{(k)} + \delta) \approx f(x^{(k)}) + [\nabla f(x^{(k)})]^T \delta + \frac{1}{2} \delta^T H(x^{(k)}) \delta,$$ where H is the Hessian matrix. If H is positive definite, we find a minimizer of the quadratic approximation by solving the system of linear equations $$H(x^{(k)}) \delta = -\nabla f(x^{(k)})$$ and then set $x^{(k+1)} = x^{(k)} + \delta$.
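A minimal sketch of the Newton iteration in Python; the quadratic test function is illustrative, and there is no line search or Hessian modification, so a well-behaved f is assumed:

```python
import numpy as np

def newton(grad, hess, x0, tol=1e-8, max_iter=50):
    """Basic Newton's method for unconstrained minimization:
    solve H(x) delta = -grad(x) and step to x + delta."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        delta = np.linalg.solve(hess(x), -g)   # Newton step
        x = x + delta
    return x

# Example: minimize f(x) = 0.5 x^T Q x - b^T x.
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_star = newton(lambda x: Q @ x - b, lambda x: Q, np.zeros(2))
print(x_star, np.linalg.solve(Q, b))   # a quadratic is solved in one step
```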

32 UNCONSTRAINED OPTIMIZATION There are several variants, such as quasi-Newton and trust-region methods; often finite differences are used to approximate derivatives. If the function is non-differentiable but convex, a subgradient method and its variants can be applied. However, sometimes the function is just a black box, as in stochastic simulation-based optimization. A host of derivative-free methods are available: simplex search (not to be confused with the simplex method for LP), pattern search, genetic algorithms (OK for the nonconvex case), particle swarm optimization (OK for the nonconvex case). Derivative-free methods do not even require continuity, but there are intermediate cases, where we do not know the form of the function, but we may find its value and a subgradient at any given point.

33 KELLEY'S CUTTING PLANES Consider the convex problem $\min_{x \in S} f(x)$, where the objective function f is actually not known in analytical form. Suppose that, for a given point $x^k$, we are not only able to compute the function value $f(x^k)$, but also a subgradient $\gamma_k$, which does exist if the function is convex on the set S. In other words, we are able to find an affine function such that $$f(x^k) = \alpha_k + \gamma_k^T x^k \quad (4)$$ $$f(x) \ge \alpha_k + \gamma_k^T x, \qquad \forall x \in S. \quad (5)$$ In stochastic programming with recourse, we can do so for the recourse function.

34 KELLEY'S CUTTING PLANES The availability of such a support hyperplane suggests the possibility of approximating f from below, by the upper envelope of support hyperplanes. [Figure: a convex function approximated from below by the pointwise maximum of a few cutting planes.] Kelley's cutting plane algorithm exploits this idea by building and improving a lower bounding function until some convergence criterion is met. If S is polyhedral, we solve a sequence of LPs.

35 KELLEY'S CUTTING PLANES
STEP 0. Let $x^1 \in S$ be an initial feasible solution; initialize the iteration counter $k \leftarrow 0$, the upper bound $u_0 = f(x^1)$, the lower bound $l_0 = -\infty$, and the lower bounding function $\beta_0(x) \equiv -\infty$.
STEP 1. Increment the iteration counter $k \leftarrow k + 1$. Find a subgradient of f at $x^k$, such that equation (4) and condition (5) hold.
STEP 2. Update the upper bound $u_k = \min\{u_{k-1}, f(x^k)\}$ and the lower bounding function $\beta_k(x) = \max\{\beta_{k-1}(x), \alpha_k + \gamma_k^T x\}$.
STEP 3. Solve the problem $l_k = \min_{x \in S} \beta_k(x)$, and let $x^{k+1}$ be the optimal solution.
STEP 4. If $u_k - l_k < \epsilon$, stop: $x^{k+1}$ is a satisfactory approximation of the optimal solution; otherwise, go to step 1.
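A minimal sketch of Kelley's method in Python, under illustrative assumptions: S is a box, f is differentiable (so the gradient serves as the subgradient), and the master problem in Step 3 is solved as an LP in the variables (x, z) with scipy.optimize.linprog:

```python
import numpy as np
from scipy.optimize import linprog

def kelley(f, subgrad, bounds, x1, eps=1e-6, max_iter=200):
    """Kelley's cutting-plane method over a box S.
    Each cut is f(x) >= alpha_k + gamma_k^T x; the master problem
    minimizes z subject to z >= alpha_k + gamma_k^T x for all cuts."""
    n = len(x1)
    cuts_A, cuts_b = [], []              # rows: gamma_k^T x - z <= -alpha_k
    x, upper = np.asarray(x1, float), np.inf
    for _ in range(max_iter):
        fx, g = f(x), subgrad(x)
        upper = min(upper, fx)
        alpha = fx - g @ x               # so that f(x) = alpha + g^T x
        cuts_A.append(np.append(g, -1.0))
        cuts_b.append(-alpha)
        res = linprog(c=np.append(np.zeros(n), 1.0),
                      A_ub=np.array(cuts_A), b_ub=np.array(cuts_b),
                      bounds=list(bounds) + [(None, None)])
        x, lower = res.x[:n], res.fun    # new candidate and lower bound
        if upper - lower < eps:
            break
    return x, upper, lower

# Example: a smooth convex function on the box [-2, 2]^2.
f = lambda x: (x[0] - 1) ** 2 + (x[1] + 0.5) ** 2
grad = lambda x: np.array([2 * (x[0] - 1), 2 * (x[1] + 0.5)])
print(kelley(f, grad, [(-2, 2), (-2, 2)], np.zeros(2)))
```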

36 PENALTY FUNCTIONS The problem with equality constraints $$\min f(x) \quad \text{s.t.} \quad h_i(x) = 0,\ i \in E,$$ can be approximated by the unconstrained problem $$\min \Phi(x, \sigma) = f(x) + \sigma \sum_{i \in E} h_i^2(x).$$ If $\sigma$ is large enough, the optimization algorithm will, in some sense, first drive the solution toward the feasible region by minimizing the penalty term; then it will try to minimize the objective f. Actually, convergence difficulties will arise if we try solving the unconstrained problem directly with a large value of the penalty coefficient $\sigma$. So it is advisable to solve a sequence of unconstrained problems, using the optimal solution of each subproblem as the initial solution of the next one.

37 PENALTY FUNCTIONS In the case of inequality constraints, $$\min f(x) \quad \text{s.t.} \quad g_i(x) \le 0,\ i \in I,$$ we must only penalize positive values of the constraint functions $g_i$. Using the notation $y^+ = \max\{y, 0\}$, we may use a penalty function like $$f(x) + \sigma \sum_{i \in I} \left[g_i^+(x)\right]^2 \quad \text{or} \quad f(x) + \sigma \sum_{i \in I} g_i^+(x)$$ for increasing values of $\sigma$.
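A minimal sketch of the sequential exterior penalty scheme in Python; the objective, constraint, and penalty schedule are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

# Example problem: min x1 + x2  s.t.  g(x) = x1^2 + x2^2 - 2 <= 0.
f = lambda x: x[0] + x[1]
g = lambda x: x[0] ** 2 + x[1] ** 2 - 2.0

def penalty_method(x0, sigmas=(1.0, 10.0, 100.0, 1000.0)):
    """Exterior quadratic penalty: min f(x) + sigma * (g(x)^+)^2,
    warm-starting each subproblem at the previous solution."""
    x = np.asarray(x0, dtype=float)
    for sigma in sigmas:
        phi = lambda x, s=sigma: f(x) + s * max(g(x), 0.0) ** 2
        x = minimize(phi, x, method="Nelder-Mead").x
    return x

print(penalty_method([0.0, 0.0]))   # approaches the optimum (-1, -1)
```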

38 INTERIOR VS. EXTERIOR PENALTIES So far, we have seen exterior penalty functions: the feasible set is approached from outside for increasing values of the penalty coefficient $\sigma$. If the optimal solution is on the boundary of the feasible set (which is usually the case, since some inequality constraints are active), a feasible solution is obtained only in the limit. In some cases, this is quite natural, as the constraints may be soft or elastic, expressing some desirable feature rather than a hard requirement. In other cases, we would like to be able to stop the algorithm whenever we want and still come up with a strictly feasible solution. To overcome this problem, an interior penalty approach can be pursued.

39 INTERIOR VS. EXTERIOR PENALTIES [Figure: exterior penalty terms growing outside the feasible set S for increasing $\sigma$, and interior penalty terms diverging at the boundary of S for decreasing $\sigma$.]

40 BARRIER FUNCTIONS Interior penalty methods are based on a suitable barrier function. One example is $$B(x) = -\sum_{i \in I} \frac{1}{g_i(x)}.$$ The barrier function goes to infinity when x tends to the boundary of the feasible region from inside. Then an unconstrained problem, $$\min f(x) + \sigma B(x),$$ is solved for decreasing values of $\sigma$, until the term $\sigma B(x)$ is small enough. An alternative is the logarithmic barrier function: $$B(x) = -\sum_{i \in I} \log(-g_i(x)).$$
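A minimal sketch of the log-barrier scheme on the same illustrative problem as above; note that every iterate remains strictly feasible:

```python
import numpy as np
from scipy.optimize import minimize

# Example problem: min x1 + x2  s.t.  g(x) = x1^2 + x2^2 - 2 <= 0.
f = lambda x: x[0] + x[1]
g = lambda x: x[0] ** 2 + x[1] ** 2 - 2.0

def barrier_method(x0, sigmas=(1.0, 0.1, 0.01, 0.001)):
    """Interior log-barrier: min f(x) - sigma * log(-g(x)),
    for decreasing sigma, starting from a strictly feasible point."""
    x = np.asarray(x0, dtype=float)
    for sigma in sigmas:
        phi = lambda x, s=sigma: (f(x) - s * np.log(-g(x))
                                  if g(x) < 0 else np.inf)
        x = minimize(phi, x, method="Nelder-Mead").x
    return x

print(barrier_method([0.0, 0.0]))   # stays inside, tends to (-1, -1)
```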

41 CLASSICAL LAGRANGE MULTIPLIERS Given the equality-constrained case $$\min f(x) \quad \text{s.t.} \quad h_j(x) = 0,\ j = 1, \ldots, m, \quad (6)$$ a necessary condition for local optimality of $x^*$ is that there exist numbers $\lambda_j^*$, $j = 1, \ldots, m$, called Lagrange multipliers, such that $$\nabla f(x^*) + \sum_{j=1}^m \lambda_j^* \nabla h_j(x^*) = 0.$$ Note that the theorem is somewhat weak, in that it gives a necessary condition for local optimality, assuming differentiability and some regularity condition on the constraints.

42 CONSTRAINT QUALIFICATION Consider the problem $$\min x_1 + x_2 \quad \text{s.t.} \quad h_1(x) = x_2 - x_1^3 = 0, \quad h_2(x) = x_2 = 0,$$ and build the Lagrangian function $$L(x_1, x_2, \lambda_1, \lambda_2) = x_1 + x_2 + \lambda_1 (x_2 - x_1^3) + \lambda_2 x_2.$$ The stationarity conditions yield the system $$\frac{\partial L}{\partial x_1} = 1 - 3\lambda_1 x_1^2 = 0, \quad \frac{\partial L}{\partial x_2} = 1 + \lambda_1 + \lambda_2 = 0, \quad \frac{\partial L}{\partial \lambda_1} = x_2 - x_1^3 = 0, \quad \frac{\partial L}{\partial \lambda_2} = x_2 = 0.$$ This system of equations has no solution.

43 CONSTRAINT QUALIFICATION In the example, the feasible set boils down to (0, 0), which is the (trivial) optimal solution. Unfortunately, the gradients of the two constraints are parallel at the origin, and they are not a basis able to express the gradient of f. There are different constraint qualification conditions that may be applied (see Bazaraa et al.), such as: the gradients of the functions $h_j$ are linearly independent at $x^*$; the constraints are linear; the candidate point is an interior (Slater) point (this applies only to inequality constraints; a more precise statement is needed for equality constraints).
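The failure of constraint qualification in this example is easy to check numerically; a small sketch:

```python
import numpy as np

# Gradients at the feasible point x* = (0, 0):
grad_f  = np.array([1.0, 1.0])    # f(x)  = x1 + x2
grad_h1 = np.array([0.0, 1.0])    # h1(x) = x2 - x1^3, gradient (-3 x1^2, 1)
grad_h2 = np.array([0.0, 1.0])    # h2(x) = x2

G = np.column_stack([grad_h1, grad_h2])
print(np.linalg.matrix_rank(G))   # 1: the constraint gradients are parallel

# Best least-squares multipliers still leave a nonzero stationarity residual:
lam, *_ = np.linalg.lstsq(G, -grad_f, rcond=None)
print(np.linalg.norm(grad_f + G @ lam))   # > 0: no multipliers exist
```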

44 KARUSH-KUHN-TUCKER CONDITIONS Consider a general constrained problem $$(P_{EI}) \quad \min f(x) \quad \text{s.t.} \quad h_i(x) = 0,\ i \in E; \quad g_i(x) \le 0,\ i \in I,$$ and build the Lagrangian function $$L(x, \lambda, \mu) = f(x) + \sum_{i \in E} \lambda_i h_i(x) + \sum_{i \in I} \mu_i g_i(x). \quad (7)$$ Then (subject to the aforementioned conditions) a necessary condition for the local optimality of $x^*$ is that there exist numbers $\lambda_i^*$ ($i \in E$) and $\mu_i^* \ge 0$ ($i \in I$) such that $$\nabla f(x^*) + \sum_{i \in E} \lambda_i^* \nabla h_i(x^*) + \sum_{i \in I} \mu_i^* \nabla g_i(x^*) = 0, \qquad \mu_i^* g_i(x^*) = 0,\ i \in I.$$

45 KARUSH-KUHN-TUCKER CONDITIONS The KKT conditions are similar to the classical Lagrange theorem, with two differences: 1. Multipliers associated with inequality constraints are restricted in sign. 2. There is an additional condition, known as complementary slackness. If we interpret Lagrange multipliers economically, as shadow prices for resources, we may understand the two conditions: Prices cannot be negative. If a resource is not used to the limit and the associated constraint is not active, i.e., $g_i(x^*) < 0$, its shadow price must be zero. If the shadow price is positive, then the resource budget constraint must be active, i.e., $g_i(x^*) = 0$.

46 MULTIPLIER METHODS From an algorithmic point of view, the KKT conditions are not solved directly. They are the conceptual basis for computationally viable methods. There are methods integrating multipliers with penalty functions, such as augmented Lagrangians. Multipliers are also fundamental in decomposition strategies. For that, we need duality theory.

47 THE ROLE OF DUALITY Duality is essential in developing solution methods: the dual simplex method (essential in integer programming); primal-dual methods, like interior point methods; decomposition methods. Duality also has a useful economic interpretation.

48 WEAK DUALITY Consider the inequality-constrained problem $$(P) \quad \min f(x) \quad \text{s.t.} \quad g_i(x) \le 0,\ i \in I, \quad (8) \qquad x \in S \subseteq \mathbb{R}^n.$$ This problem is called the primal problem. The set S is any subset of $\mathbb{R}^n$, possibly a discrete one. Here, we do not assume differentiability nor convexity of the objective function. The results we get are therefore extremely general. Build the Lagrangian function by dualizing constraints (8): $$L(x, \mu) = f(x) + \sum_{i \in I} \mu_i g_i(x) = f(x) + \mu^T g(x).$$ For a given multiplier vector $\mu$ (dual variables), the minimization of the Lagrangian function with respect to $x \in S$ is called the relaxed problem.

49 WEAK DUALITY The approach makes sense when the inequality constraints are the complicating ones, and the relaxed problem is easy to solve. The solution of the relaxed problem defines a function $w(\mu)$, called the dual function: $$w(\mu) = \min_{x \in S} L(x, \mu).$$ Consider the dual problem: $$(D) \quad \max_{\mu \ge 0} w(\mu) = \max_{\mu \ge 0} \left\{ \min_{x \in S} L(x, \mu) \right\}. \quad (9)$$ (Weak duality theorem) For any $\mu \ge 0$, the dual function is a lower bound for the optimum $f(x^*)$ of the primal problem (P), i.e., $$w(\mu) \le f(x^*), \qquad \forall \mu \ge 0.$$

50 WEAK DUALITY Proof. Let us adopt the notation $\nu(P)$ to denote the optimal value of the objective function for an optimization problem P. Under the hypothesis $\mu \ge 0$, it is easy to see that $$\nu(P) = \min \{ f(x) \mid x \in S,\ g(x) \le 0 \} \ge \min \{ f(x) \mid x \in S,\ \mu^T g(x) \le 0 \} \quad (10)$$ $$\ge \min \{ f(x) + \mu^T g(x) \mid x \in S,\ \mu^T g(x) \le 0 \} \quad (11)$$ $$\ge \min \{ f(x) + \mu^T g(x) \mid x \in S \} = w(\mu). \quad (12)$$ In (10) the feasible set is enlarged (since $\mu \ge 0$ and $g(x) \le 0$ imply $\mu^T g(x) \le 0$); in (11) a nonpositive term is added to the objective; in (12) the constraint $\mu^T g(x) \le 0$ is dropped, enlarging the feasible set again.

51 WEAK DUALITY The case of equality constraints is treated similarly: in that case, multipliers are unrestricted in sign. We obtain a very general but weak relationship, since weak duality only yields a lower bound. The following theorem gives a sufficient condition for global optimality. THEOREM. If there is a pair $(x^*, \mu^*)$, where $x^* \in S$ and $\mu^* \ge 0$, satisfying the conditions 1. $f(x^*) + (\mu^*)^T g(x^*) = \min_{x \in S} \{ f(x) + (\mu^*)^T g(x) \}$, 2. $(\mu^*)^T g(x^*) = 0$, 3. $g(x^*) \le 0$, then $x^*$ is a global optimum for the primal problem (P).

52 STRONG DUALITY The last theorem is, in a sense, stronger than necessary. Luckily, there are cases in which we get necessary and sufficient conditions. Indeed, under suitable conditions (essentially convexity), a stronger property holds, known as strong duality: $$\nu(D) = w(\mu^*) = f(x^*) = \nu(P).$$ The convexity assumption does not hold, in particular, in the case of a discrete set S and in the case of general equality constraints (unless they are affine functions). To be precise, convexity is not enough: we may have a duality gap unless additional conditions, such as the Slater constraint qualification, hold.

53 STRONG DUALITY: COMPUTATIONAL APPROACH In principle, strong duality provides us with an alternative approach to solve the primal problem. This makes sense when we may get rid of complicating constraints by dualization. However, we need a computational scheme:
1. Assign an initial value $\mu^{(0)} \ge 0$; set $k \leftarrow 0$.
2. Solve the relaxed problem with multipliers $\mu^{(k)}$.
3. Given the solution $\hat{x}^{(k)}$ of the relaxed problem, compute a search direction $s^{(k)}$ and a step length $\alpha^{(k)}$, and update the multipliers (making sure they stay non-negative): $$\mu^{(k+1)} = \max \left\{ 0,\ \mu^{(k)} + \alpha^{(k)} s^{(k)} \right\}.$$ Then set $k \leftarrow k + 1$, and go to step 2.
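A minimal sketch of this scheme in Python, using $g(\hat{x})$ as the search direction (this choice is justified by the subgradient result on the next slide) and a constant step size; a diminishing step is the standard choice for nonsmooth dual functions, and the one-dimensional example is illustrative:

```python
import numpy as np

def dual_ascent(solve_relaxed, g, mu0, alpha=0.1, max_iter=500):
    """Projected (sub)gradient scheme on the dual: solve_relaxed(mu)
    returns a minimizer x of L(x, mu) over S, and g(x) then yields
    a subgradient of the dual function w at mu."""
    mu = np.asarray(mu0, dtype=float)
    for _ in range(max_iter):
        x = solve_relaxed(mu)
        s = g(x)                              # search direction s^(k)
        mu = np.maximum(0.0, mu + alpha * s)  # ascent step + projection
    return mu, x

# Example: min x^2 s.t. 1 - x <= 0; the relaxed minimizer is x = mu/2.
mu, x = dual_ascent(lambda mu: np.array([mu[0] / 2.0]),
                    lambda x: np.array([1.0 - x[0]]),
                    mu0=[0.0])
print(mu, x)   # approaches mu* = 2, x* = 1
```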

54 STRONG DUALITY: COMPUTATIONAL APPROACH In order to find a search direction, one would be tempted to compute a gradient of the dual function. Unfortunately, the dual function need not be everywhere differentiable, but the following facts help us. THEOREM. The dual function $w(\mu)$ is a concave function. THEOREM. Let $\hat{x}$ be an optimal solution of the relaxed problem for a multiplier vector $\hat{\mu}$. Then $g(\hat{x})$ is a subgradient of the dual function at $\hat{\mu}$.

55 DUAL DECOMPOSITION To see how duality can help in a simple setting, let us consider a problem like $$\max \sum_{i=1}^n f_i(x_i) \quad (13)$$ $$\text{s.t.} \quad \sum_{i=1}^n g_i(x_i) \le b, \quad (14)$$ $$x_i \in S_i, \quad i = 1, \ldots, n. \quad (15)$$ Let us interpret the decision variables $x_i$, $i = 1, \ldots, n$, as activities yielding a profit $f_i(x_i)$ and consuming a resource amount $g_i(x_i)$. The objective function (13) is total profit, and (14) is a budget constraint on the resource. Note that the objective function is measured in monetary terms, whereas b is measured in resource units. If we could get rid of the budget constraint, the problem could be decomposed.

56 DUAL DECOMPOSITION Let us dualize the budget constraint by introducing the multiplier $\mu \ge 0$ and writing the Lagrangian function: $$L(x, \mu) = \sum_{i=1}^n f_i(x_i) + \mu \left( b - \sum_{i=1}^n g_i(x_i) \right) = \sum_{i=1}^n \left[ f_i(x_i) - \mu g_i(x_i) \right] + \mu b.$$ Note that here we must adjust the approach to account for the maximization sense of the problem. The Lagrangian function should be maximized with respect to the primal variables, resulting in a set of independent subproblems: $$\max_{x_i \in S_i} \; \pi_i(x_i) \equiv f_i(x_i) - \mu g_i(x_i).$$

57 DUAL DECOMPOSITION Each subproblem requires maximizing a profit contribution minus a resource cost. The multiplier $\mu$ is a shadow price, measured in units of money per unit amount of resource. In this case, we should minimize the dual function with respect to $\mu$. Given the relaxed solutions $x_i^*$, we get a subgradient of the dual function: $$\sum_{i=1}^n g_i(x_i^*) - b.$$ This is positive when the budget is exceeded, in which case we should increase the resource price; the price must be reduced when the budget is not exceeded. The resulting demand-offer scheme can be depicted as follows, and a numerical sketch is given after the figure.

58 DUAL DECOMPOSITION [Figure: coordination scheme. A master level minimizes $w(\mu)$ and broadcasts the price $\mu$; each subproblem i maximizes $\pi_i(x_i)$ and returns its activity level $x_i$.]
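A minimal numerical sketch of this price-coordination scheme, with illustrative concave quadratic profits and linear resource usage $g_i(x_i) = x_i$, so that each relaxed subproblem has a closed-form solution:

```python
import numpy as np

# max sum_i (c_i x_i - 0.5 x_i^2)  s.t.  sum_i x_i <= b,  x_i >= 0.
# Each subproblem max_x [f_i(x) - mu x] is solved by x_i = max(0, c_i - mu).
c, b = np.array([4.0, 3.0, 2.0]), 4.0

mu = 0.0
for _ in range(500):
    x = np.maximum(0.0, c - mu)                # solve the n subproblems
    mu = max(0.0, mu + 0.05 * (x.sum() - b))   # adjust the resource price
print(mu, x)   # mu* = 5/3, x* = (7/3, 4/3, 1/3)
```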

59 DUAL DECOMPOSITION Dual decomposition may converge poorly in practice, but it might be a good approach for some specially structured large-scale problems. Sometimes, we are satisfied by a suitably good solution: if we may recover a good primal feasible solution from dual decomposition, we obtain a dual heuristic algorithm. Lagrangian methods can be integrated with penalty function methods, resulting in augmented Lagrangian schemes based, e.g., on the minimization of $$f(x) + \sum_{i \in E} \lambda_i h_i(x) + \sigma \sum_{i \in E} h_i^2(x)$$ for an equality-constrained problem.

60 LINEAR PROGRAMMING: SIMPLEX METHOD The simplex method is an extremely efficient and robust approach to solve linear programs in the form $$\min c^T x \quad \text{s.t.} \quad Ax = b, \quad x \ge 0,$$ where $x \in \mathbb{R}^n$, $c \in \mathbb{R}^n$, $A \in \mathbb{R}^{m \times n}$, and $b \in \mathbb{R}^m$. The idea relies on the fact that there is an optimal solution corresponding to a vertex of the polyhedron. Given a vertex, we can explore neighboring vertices to see if there is a better one and move there. Since the problem is convex, we end up in the global solution. The geometric intuition must be translated into algebraic terms: the problem requires expressing b by an optimal conic combination of the columns of A.

61 LINEAR PROGRAMMING: SIMPLEX METHOD There are n columns, but a subset B of m columns suffices to express b as follows: $$\sum_{j \in B} a_j x_j = b.$$ To be precise, we should make sure that this subset of columns is a basis, i.e., that they are linearly independent; let us cut a few corners and assume that this is the case. A solution of this system, in which $n - m$ variables are set to zero and only m are allowed to assume a nonzero value, is a basic solution. A basic feasible solution corresponds to an extreme point of the feasible set; hence, moving from one vertex to a neighboring one is obtained by bringing one column into the basis and eliminating another one. There are some computational issues to be tackled, related to degeneracy, redundant constraints, and efficient implementation. Alternative interior-point methods move in the interior of the polyhedron.

62 DUALITY IN LINEAR PROGRAMMING Duality in LP can be derived from first principles of convexity (separation theorems, Farkas' lemma, and the like); however, it may be preferable to cast it in the more general, nonlinear framework. Let us start with an LP problem $(P_1)$ in the following canonical form: $$(P_1) \quad \min c^T x \quad \text{s.t.} \quad Ax \ge b.$$ If we dualize the inequality constraints, we get the dual problem: $$\max_{\mu \ge 0} \min_x \left\{ c^T x + \mu^T (b - Ax) \right\} = \max_{\mu \ge 0} \left\{ \mu^T b + \min_x \left( c^T - \mu^T A \right) x \right\}.$$ Since x is unrestricted in sign, the inner minimization problem has a finite value if and only if $c^T - \mu^T A = 0$.

63 DUALITY IN LINEAR PROGRAMMING Since we want to maximize the dual function, we enforce the condition, and the dual problem $(D_1)$ turns out to be $$(D_1) \quad \max b^T \mu \quad \text{s.t.} \quad A^T \mu = c, \quad \mu \ge 0.$$ The dual problem is still an LP problem, resulting from: 1. the exchange of b with c, 2. the transposition of A, 3. a change in the sense of the objective. Using the same reasoning, we may build the dual of an arbitrary LP problem. Note, in particular, that dual variables associated with equality constraints will be unrestricted in sign.

64 DUALITY IN LINEAR PROGRAMMING Since there is no duality gap, if both the primal and dual problems have a finite optimum, we have $$b^T \mu^* = c^T x^*.$$ However, other cases are possible: the primal is unbounded below and the dual is infeasible; the dual is unbounded above and the primal is infeasible; both problems are infeasible. Information on dual variables is provided by the primal simplex method. When an LP problem is unbounded, the simplex algorithm returns a recession direction, i.e., a direction along which we may move and improve the objective without ever becoming infeasible.
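The absence of a duality gap is easy to verify numerically on a small instance (the data below are illustrative); a sketch using scipy.optimize.linprog:

```python
import numpy as np
from scipy.optimize import linprog

# Primal (canonical form):  min c^T x  s.t.  A x >= b,  x free.
c = np.array([2.0, 3.0])
A = np.array([[1.0, 1.0], [1.0, 3.0]])
b = np.array([4.0, 6.0])

primal = linprog(c, A_ub=-A, b_ub=-b, bounds=[(None, None)] * 2)

# Dual:  max b^T mu  s.t.  A^T mu = c,  mu >= 0  (minimize -b^T mu).
dual = linprog(-b, A_eq=A.T, b_eq=c, bounds=[(0, None)] * 2)

print(primal.fun, -dual.fun)   # equal optimal values: no duality gap
```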

65 PRIMAL-DUAL BARRIER METHOD FOR LP By combining several of the above principles, we obtain an interior point method for LP. Let us consider the primal problem $$\max c^T x \quad \text{s.t.} \quad Ax \le b, \quad x \ge 0, \qquad \text{i.e.,} \qquad \max c^T x \quad \text{s.t.} \quad Ax + w = b, \quad x, w \ge 0,$$ and its dual $$\min b^T y \quad \text{s.t.} \quad A^T y \ge c, \quad y \ge 0, \qquad \text{i.e.,} \qquad \min b^T y \quad \text{s.t.} \quad A^T y - z = c, \quad y, z \ge 0,$$ where w and z are slack variables.

66 PRIMAL-DUAL BARRIER METHOD FOR LP We can get rid of the non-negativity constraints by using an interior penalty function based on a logarithmic barrier: $$\max c^T x + \sigma \sum_j \log x_j + \sigma \sum_i \log w_i \quad \text{s.t.} \quad Ax + w = b.$$ The equality constraints can be dualized by Lagrange multipliers y, yielding the Lagrangian function $$L(x, w, y) = c^T x + \sigma \sum_j \log x_j + \sigma \sum_i \log w_i + y^T (b - Ax - w).$$

67 PRIMAL-DUAL BARRIER METHOD FOR LP We can now apply the first-order stationarity conditions on the Lagrangian: $$\frac{\partial L}{\partial x_j} = c_j + \sigma \frac{1}{x_j} - \sum_i y_i a_{ij} = 0, \ \forall j; \qquad \frac{\partial L}{\partial w_i} = \sigma \frac{1}{w_i} - y_i = 0, \ \forall i; \qquad \frac{\partial L}{\partial y_i} = b_i - \sum_j a_{ij} x_j - w_i = 0, \ \forall i.$$ These optimality equations may be rewritten in a compact matrix form: $$A^T y - \sigma X^{-1} e = c, \qquad y = \sigma W^{-1} e, \qquad Ax + w = b,$$ where $$X = \begin{bmatrix} x_1 & & \\ & \ddots & \\ & & x_n \end{bmatrix}, \qquad e = \begin{bmatrix} 1 \\ \vdots \\ 1 \end{bmatrix}.$$

68 PRIMAL-DUAL BARRIER METHOD FOR LP To make the meaning of the optimality conditions clearer, let us introduce the auxiliary vector $z = \sigma X^{-1} e$ and rearrange the conditions as: $$Ax + w = b, \qquad A^T y - z = c, \qquad XZe = \sigma e, \qquad YWe = \sigma e.$$ These equations have a nice interpretation (see Theorem 1) in terms of: 1. primal feasibility, 2. dual feasibility, 3. and (if $\sigma = 0$) complementary slackness.

69 PRIMAL-DUAL BARRIER METHOD FOR LP For $\sigma > 0$, we have a set of nonlinear equations: $$F(\xi) = 0, \qquad \text{where } \xi = \begin{bmatrix} x \\ y \\ w \\ z \end{bmatrix},$$ which may be tackled by Newton's method. In principle, by solving this system of nonlinear equations for different values of $\sigma$ we get a path $(x_\sigma, y_\sigma, w_\sigma, z_\sigma)$. This path is called the central path and, for $\sigma \to 0$, it leads to the optimal solution of the original LP.

70 PRIMAL-DUAL BARRIER METHOD FOR LP The method can be interpreted as an instance of more general homotopy-based strategies. Essentially, we keep a pair of primal and dual feasible solutions and we gradually enforce complementary slackness. There are different approaches to manage the interplay between Newton steps and the adjustment of the penalty parameter. The Newton steps in interior point methods can be more or less efficient, as they require solving large-scale systems of linear equations (sparsity of Cholesky factors is essential).
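A didactic sketch of one such scheme in Python, with dense linear algebra, a fixed centering factor, and a simple fraction-to-boundary step rule; the small LP instance is illustrative, and this is not meant as a production solver:

```python
import numpy as np

def pd_interior_point(A, b, c, tol=1e-8, max_iter=50):
    """Primal-dual path-following sketch for
        max c^T x  s.t.  A x + w = b,  x, w >= 0,
    driving  Ax+w=b,  A^T y - z = c,  XZe = mu e,  YWe = mu e
    toward mu = 0 with damped Newton steps."""
    m, n = A.shape
    x, w, y, z = np.ones(n), np.ones(m), np.ones(m), np.ones(n)
    for _ in range(max_iter):
        rho = b - A @ x - w                  # primal residual
        sig = c - A.T @ y + z                # dual residual
        gap = x @ z + y @ w                  # duality measure
        if max(np.abs(rho).max(), np.abs(sig).max(), gap) < tol:
            break
        mu = 0.1 * gap / (n + m)             # target on the central path
        # Assemble the Newton system for xi = (x, y, w, z):
        K = np.zeros((2 * (n + m), 2 * (n + m)))
        K[:m, :n] = A;                        K[:m, n + m:n + 2 * m] = np.eye(m)
        K[m:m + n, n:n + m] = A.T;            K[m:m + n, n + 2 * m:] = -np.eye(n)
        K[m + n:m + 2 * n, :n] = np.diag(z);  K[m + n:m + 2 * n, n + 2 * m:] = np.diag(x)
        K[m + 2 * n:, n:n + m] = np.diag(w);  K[m + 2 * n:, n + m:n + 2 * m] = np.diag(y)
        rhs = np.concatenate([rho, sig, mu - x * z, mu - y * w])
        d = np.linalg.solve(K, rhs)
        dx, dy = d[:n], d[n:n + m]
        dw, dz = d[n + m:n + 2 * m], d[n + 2 * m:]
        # Damped step keeping all variables strictly positive:
        v, dv = np.concatenate([x, y, w, z]), d
        neg = dv < 0
        theta = min(1.0, 0.9 / np.max(-dv[neg] / v[neg])) if neg.any() else 1.0
        x, y = x + theta * dx, y + theta * dy
        w, z = w + theta * dw, z + theta * dz
    return x, y

# Tiny example: max 3 x1 + 2 x2  s.t.  x1 + x2 <= 4,  x1 <= 2,  x >= 0.
A = np.array([[1.0, 1.0], [1.0, 0.0]])
x, y = pd_interior_point(A, b=np.array([4.0, 2.0]), c=np.array([3.0, 2.0]))
print(x, y)   # x -> (2, 2), dual prices y -> (2, 1)
```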

71 SIMPLEX VS. INTERIOR-POINT METHODS In the worst case, the simplex method has exponential complexity, whereas interior-point methods are polynomial. We may get equivalent but different solutions, as interior point methods tend to yield solutions in the middle of a face of the polyhedron, whereas the simplex method yields an extreme point (a crossover to a basic solution is offered after solution with interior point methods). In practice, neither method dominates, as behavior may depend on problem structure. The simplex method has better warm-start capabilities (useful for integer programming). Interior-point methods have been extended to a much wider class of convex optimization problems, including second-order cone and semidefinite programming.

72 CONIC PROGRAMMING LP is an important case of convex optimization. The next level in the hierarchy is a quadratic programming (QP) problem like $$\min \frac{1}{2} x^T Q x + h^T x \quad \text{s.t.} \quad Ax = b, \quad x \ge 0.$$ This is a convex optimization problem, provided that $Q \in S^n_+$. Interior-point methods are available for QP as well, but they are also available for a wider class of problems.

73 CONIC PROGRAMMING Both LPs and QPs can be reformulated within a wider class of conic problems. Let V and W be (finite-dimensional) linear spaces equipped with an inner product, let $K \subseteq V$ and $L \subseteq W$ be closed convex cones, and let $A : V \to W$ be a linear operator. A conic program is an optimization problem like $$\min \langle c, x \rangle \quad \text{s.t.} \quad b - A(x) \in L, \quad x \in K,$$ where $c \in V$ and $b \in W$.

74 SECOND ORDER CONE PROGRAMMING Second order cone programming (SOCP) problems are a generalization of LP and QP: $$\min f^T x \quad \text{s.t.} \quad \|A_i x + b_i\|_2 \le c_i^T x + d_i, \quad i = 1, \ldots, n; \qquad Fx = g.$$ A constraint like $\|Ax + b\|_2 \le c^T x + d$ requires that the affine mapping of a point $x \in \mathbb{R}^n$ to $(Ax + b, c^T x + d)$, where $A \in \mathbb{R}^{k \times n}$ and $b \in \mathbb{R}^k$, lies in the second-order (Lorentz) cone in $\mathbb{R}^{k+1}$.

75 SEMIDEFINITE PROGRAMMING In semidefinite programming, the decision variables are symmetric, positive semidefinite matrices $X \in S^n_+$. On the self-dual cone $S^n_+$ we define the inner product $$\langle X, Y \rangle \equiv \mathrm{tr}(XY) = \sum_{i,j=1}^n x_{ij} y_{ij} \equiv X \bullet Y$$ (in the more general case, the first matrix should be transposed). Then we may formulate a model like $$\max \; C \bullet X \quad (16)$$ $$\text{s.t.} \quad A_1 \bullet X = b_1, \ \ldots, \ A_m \bullet X = b_m, \quad X \succeq 0.$$

76 CONIC DUALITY SOCPs and SDPs have wide applicability, including some stochastic and robust optimization problems. Rather efficient interior-point methods are available for their solution, based on the following conic duality theorem. Consider the problems $$(P) \quad \max \langle c, x \rangle \quad \text{s.t.} \quad b - A(x) \in L, \quad x \in K$$ $$(D) \quad \min \langle b, y \rangle \quad \text{s.t.} \quad A^T(y) - c \in K^*, \quad y \in L^*,$$ where $A^T(\cdot)$ is the adjoint operator of $A(\cdot)$.

77 CONIC DUALITY The adjoint operator is a linear operator $A^T : W \to V$ such that $$\langle y, A(x) \rangle_W = \langle A^T(y), x \rangle_V$$ for every $x \in V$ and $y \in W$. When dealing with matrices, i.e., linear mappings between the usual vector spaces $\mathbb{R}^m$ and $\mathbb{R}^n$, the adjoint boils down to the familiar transpose of a matrix. THEOREM. If the primal problem (P) is feasible, has a finite value $\gamma$, and has an interior (Slater) point $\hat{x}$, then the dual problem (D) is also feasible and has the same value $\gamma$. The Slater constraint qualification condition is necessary, in general, to rule out some pathological cases resulting in duality gaps.

78 CONIC DUALITY: THE SDP CASE Strong duality applies to the SDP (16), since the cone $S^n_+$ of positive semidefinite matrices has an interior, the open cone $S^n_{++}$ of positive definite matrices. To be precise, there must be a symmetric positive definite matrix $\hat{X}$ such that the equality constraints hold, $A(\hat{X}) = b$, where $b = [b_1, \ldots, b_m]^T$ and $A(\cdot)$ collects the matrices $A_i$ and maps $S^n_+$ to $\mathbb{R}^m$. The dual of problem (16) is: $$\min b^T y \quad \text{s.t.} \quad \sum_{i=1}^m y_i A_i - C \succeq 0, \quad y \in \mathbb{R}^m.$$ To devise primal-dual interior point methods, a barrier function is needed. A possible choice is $-\log(\det(X))$.

79 CONIC DUALITY: A NUMERICAL EXAMPLE Consider a symmetric matrix A that is not defined in sign, i.e., it has both positive and negative eigenvalues. We want to find a (symmetric) matrix $X \succeq 0$ with minimal trace (sum of eigenvalues) such that $X - A \succeq 0$.

80 CONIC DUALITY: A NUMERICAL EXAMPLE Actually, we can solve the problem by diagonalizing $A = V \Lambda V^T$, where $\Lambda$ is a diagonal matrix consisting of the eigenvalues of A, and the columns of V are the corresponding (unit) eigenvectors. Then we collect the positive eigenvalues of A into $\Lambda_+$ and set $X = V \Lambda_+ V^T$. Using SDP, we may solve $$\min \; \mathrm{tr}(X) = I \bullet X \quad \text{s.t.} \quad X - A \succeq 0, \quad X \succeq 0,$$ or its dual (if we just care about the objective): $$\max \; A \bullet Y \quad \text{s.t.} \quad I - Y \succeq 0, \quad Y \succeq 0.$$
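The eigenvalue construction is easy to reproduce in NumPy; since the slide's matrix entries did not survive transcription, the A below is an arbitrary indefinite stand-in:

```python
import numpy as np

# Illustrative indefinite symmetric matrix (arbitrary stand-in data).
A = np.array([[ 1.0,  2.0,  0.0],
              [ 2.0, -1.0,  1.0],
              [ 0.0,  1.0, -2.0]])

lam, V = np.linalg.eigh(A)                     # A = V diag(lam) V^T
X = V @ np.diag(np.maximum(lam, 0.0)) @ V.T    # keep the positive eigenvalues

print(np.linalg.eigvalsh(X))        # X >= 0
print(np.linalg.eigvalsh(X - A))    # X - A >= 0
print(np.trace(X))                  # sum of the positive eigenvalues of A
```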

81 HOMEWORK For Ph.D. students who formally need credits: 1. Using conic duality, find the dual of the LP problem in standard form: $$\min c^T x \quad \text{s.t.} \quad Ax = b, \quad x \ge 0.$$


More information

Introduction to Mathematical Programming IE496. Final Review. Dr. Ted Ralphs

Introduction to Mathematical Programming IE496. Final Review. Dr. Ted Ralphs Introduction to Mathematical Programming IE496 Final Review Dr. Ted Ralphs IE496 Final Review 1 Course Wrap-up: Chapter 2 In the introduction, we discussed the general framework of mathematical modeling

More information

Simplex Algorithm in 1 Slide

Simplex Algorithm in 1 Slide Administrivia 1 Canonical form: Simplex Algorithm in 1 Slide If we do pivot in A r,s >0, where c s

More information

PRIMAL-DUAL INTERIOR POINT METHOD FOR LINEAR PROGRAMMING. 1. Introduction

PRIMAL-DUAL INTERIOR POINT METHOD FOR LINEAR PROGRAMMING. 1. Introduction PRIMAL-DUAL INTERIOR POINT METHOD FOR LINEAR PROGRAMMING KELLER VANDEBOGERT AND CHARLES LANNING 1. Introduction Interior point methods are, put simply, a technique of optimization where, given a problem

More information

Lagrangian Relaxation: An overview

Lagrangian Relaxation: An overview Discrete Math for Bioinformatics WS 11/12:, by A. Bockmayr/K. Reinert, 22. Januar 2013, 13:27 4001 Lagrangian Relaxation: An overview Sources for this lecture: D. Bertsimas and J. Tsitsiklis: Introduction

More information

Computational Methods. Constrained Optimization

Computational Methods. Constrained Optimization Computational Methods Constrained Optimization Manfred Huber 2010 1 Constrained Optimization Unconstrained Optimization finds a minimum of a function under the assumption that the parameters can take on

More information

Linear Programming. Larry Blume. Cornell University & The Santa Fe Institute & IHS

Linear Programming. Larry Blume. Cornell University & The Santa Fe Institute & IHS Linear Programming Larry Blume Cornell University & The Santa Fe Institute & IHS Linear Programs The general linear program is a constrained optimization problem where objectives and constraints are all

More information

Introduction to Convex Optimization. Prof. Daniel P. Palomar

Introduction to Convex Optimization. Prof. Daniel P. Palomar Introduction to Convex Optimization Prof. Daniel P. Palomar The Hong Kong University of Science and Technology (HKUST) MAFS6010R- Portfolio Optimization with R MSc in Financial Mathematics Fall 2018-19,

More information

Convex Optimization MLSS 2015

Convex Optimization MLSS 2015 Convex Optimization MLSS 2015 Constantine Caramanis The University of Texas at Austin The Optimization Problem minimize : f (x) subject to : x X. The Optimization Problem minimize : f (x) subject to :

More information

Linear and Integer Programming :Algorithms in the Real World. Related Optimization Problems. How important is optimization?

Linear and Integer Programming :Algorithms in the Real World. Related Optimization Problems. How important is optimization? Linear and Integer Programming 15-853:Algorithms in the Real World Linear and Integer Programming I Introduction Geometric Interpretation Simplex Method Linear or Integer programming maximize z = c T x

More information

Convexity I: Sets and Functions

Convexity I: Sets and Functions Convexity I: Sets and Functions Lecturer: Aarti Singh Co-instructor: Pradeep Ravikumar Convex Optimization 10-725/36-725 See supplements for reviews of basic real analysis basic multivariate calculus basic

More information

CME307/MS&E311 Optimization Theory Summary

CME307/MS&E311 Optimization Theory Summary CME307/MS&E311 Optimization Theory Summary Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. http://www.stanford.edu/~yyye http://www.stanford.edu/class/msande311/

More information

The Simplex Algorithm

The Simplex Algorithm The Simplex Algorithm Uri Feige November 2011 1 The simplex algorithm The simplex algorithm was designed by Danzig in 1947. This write-up presents the main ideas involved. It is a slight update (mostly

More information

A Short SVM (Support Vector Machine) Tutorial

A Short SVM (Support Vector Machine) Tutorial A Short SVM (Support Vector Machine) Tutorial j.p.lewis CGIT Lab / IMSC U. Southern California version 0.zz dec 004 This tutorial assumes you are familiar with linear algebra and equality-constrained optimization/lagrange

More information

Linear Programming in Small Dimensions

Linear Programming in Small Dimensions Linear Programming in Small Dimensions Lekcija 7 sergio.cabello@fmf.uni-lj.si FMF Univerza v Ljubljani Edited from slides by Antoine Vigneron Outline linear programming, motivation and definition one dimensional

More information

CMU-Q Lecture 9: Optimization II: Constrained,Unconstrained Optimization Convex optimization. Teacher: Gianni A. Di Caro

CMU-Q Lecture 9: Optimization II: Constrained,Unconstrained Optimization Convex optimization. Teacher: Gianni A. Di Caro CMU-Q 15-381 Lecture 9: Optimization II: Constrained,Unconstrained Optimization Convex optimization Teacher: Gianni A. Di Caro GLOBAL FUNCTION OPTIMIZATION Find the global maximum of the function f x (and

More information

Lecture 5: Properties of convex sets

Lecture 5: Properties of convex sets Lecture 5: Properties of convex sets Rajat Mittal IIT Kanpur This week we will see properties of convex sets. These properties make convex sets special and are the reason why convex optimization problems

More information

Programs. Introduction

Programs. Introduction 16 Interior Point I: Linear Programs Lab Objective: For decades after its invention, the Simplex algorithm was the only competitive method for linear programming. The past 30 years, however, have seen

More information

Lecture notes on the simplex method September We will present an algorithm to solve linear programs of the form. maximize.

Lecture notes on the simplex method September We will present an algorithm to solve linear programs of the form. maximize. Cornell University, Fall 2017 CS 6820: Algorithms Lecture notes on the simplex method September 2017 1 The Simplex Method We will present an algorithm to solve linear programs of the form maximize subject

More information

Lecture 2: August 31

Lecture 2: August 31 10-725/36-725: Convex Optimization Fall 2016 Lecture 2: August 31 Lecturer: Lecturer: Ryan Tibshirani Scribes: Scribes: Lidan Mu, Simon Du, Binxuan Huang 2.1 Review A convex optimization problem is of

More information

Convex sets and convex functions

Convex sets and convex functions Convex sets and convex functions Convex optimization problems Convex sets and their examples Separating and supporting hyperplanes Projections on convex sets Convex functions, conjugate functions ECE 602,

More information

Linear methods for supervised learning

Linear methods for supervised learning Linear methods for supervised learning LDA Logistic regression Naïve Bayes PLA Maximum margin hyperplanes Soft-margin hyperplanes Least squares resgression Ridge regression Nonlinear feature maps Sometimes

More information

Lecture 5: Duality Theory

Lecture 5: Duality Theory Lecture 5: Duality Theory Rajat Mittal IIT Kanpur The objective of this lecture note will be to learn duality theory of linear programming. We are planning to answer following questions. What are hyperplane

More information

Linear Programming Duality and Algorithms

Linear Programming Duality and Algorithms COMPSCI 330: Design and Analysis of Algorithms 4/5/2016 and 4/7/2016 Linear Programming Duality and Algorithms Lecturer: Debmalya Panigrahi Scribe: Tianqi Song 1 Overview In this lecture, we will cover

More information

This lecture: Convex optimization Convex sets Convex functions Convex optimization problems Why convex optimization? Why so early in the course?

This lecture: Convex optimization Convex sets Convex functions Convex optimization problems Why convex optimization? Why so early in the course? Lec4 Page 1 Lec4p1, ORF363/COS323 This lecture: Convex optimization Convex sets Convex functions Convex optimization problems Why convex optimization? Why so early in the course? Instructor: Amir Ali Ahmadi

More information

Chapter 15 Introduction to Linear Programming

Chapter 15 Introduction to Linear Programming Chapter 15 Introduction to Linear Programming An Introduction to Optimization Spring, 2015 Wei-Ta Chu 1 Brief History of Linear Programming The goal of linear programming is to determine the values of

More information

CS 372: Computational Geometry Lecture 10 Linear Programming in Fixed Dimension

CS 372: Computational Geometry Lecture 10 Linear Programming in Fixed Dimension CS 372: Computational Geometry Lecture 10 Linear Programming in Fixed Dimension Antoine Vigneron King Abdullah University of Science and Technology November 7, 2012 Antoine Vigneron (KAUST) CS 372 Lecture

More information

Aspects of Convex, Nonconvex, and Geometric Optimization (Lecture 1) Suvrit Sra Massachusetts Institute of Technology

Aspects of Convex, Nonconvex, and Geometric Optimization (Lecture 1) Suvrit Sra Massachusetts Institute of Technology Aspects of Convex, Nonconvex, and Geometric Optimization (Lecture 1) Suvrit Sra Massachusetts Institute of Technology Hausdorff Institute for Mathematics (HIM) Trimester: Mathematics of Signal Processing

More information

60 2 Convex sets. {x a T x b} {x ã T x b}

60 2 Convex sets. {x a T x b} {x ã T x b} 60 2 Convex sets Exercises Definition of convexity 21 Let C R n be a convex set, with x 1,, x k C, and let θ 1,, θ k R satisfy θ i 0, θ 1 + + θ k = 1 Show that θ 1x 1 + + θ k x k C (The definition of convexity

More information

Convex Optimization. Erick Delage, and Ashutosh Saxena. October 20, (a) (b) (c)

Convex Optimization. Erick Delage, and Ashutosh Saxena. October 20, (a) (b) (c) Convex Optimization (for CS229) Erick Delage, and Ashutosh Saxena October 20, 2006 1 Convex Sets Definition: A set G R n is convex if every pair of point (x, y) G, the segment beteen x and y is in A. More

More information

MA4254: Discrete Optimization. Defeng Sun. Department of Mathematics National University of Singapore Office: S Telephone:

MA4254: Discrete Optimization. Defeng Sun. Department of Mathematics National University of Singapore Office: S Telephone: MA4254: Discrete Optimization Defeng Sun Department of Mathematics National University of Singapore Office: S14-04-25 Telephone: 6516 3343 Aims/Objectives: Discrete optimization deals with problems of

More information

Lecture 12: Feasible direction methods

Lecture 12: Feasible direction methods Lecture 12 Lecture 12: Feasible direction methods Kin Cheong Sou December 2, 2013 TMA947 Lecture 12 Lecture 12: Feasible direction methods 1 / 1 Feasible-direction methods, I Intro Consider the problem

More information

2. Convex sets. x 1. x 2. affine set: contains the line through any two distinct points in the set

2. Convex sets. x 1. x 2. affine set: contains the line through any two distinct points in the set 2. Convex sets Convex Optimization Boyd & Vandenberghe affine and convex sets some important examples operations that preserve convexity generalized inequalities separating and supporting hyperplanes dual

More information

3. The Simplex algorithmn The Simplex algorithmn 3.1 Forms of linear programs

3. The Simplex algorithmn The Simplex algorithmn 3.1 Forms of linear programs 11 3.1 Forms of linear programs... 12 3.2 Basic feasible solutions... 13 3.3 The geometry of linear programs... 14 3.4 Local search among basic feasible solutions... 15 3.5 Organization in tableaus...

More information

COMS 4771 Support Vector Machines. Nakul Verma

COMS 4771 Support Vector Machines. Nakul Verma COMS 4771 Support Vector Machines Nakul Verma Last time Decision boundaries for classification Linear decision boundary (linear classification) The Perceptron algorithm Mistake bound for the perceptron

More information

MTAEA Convexity and Quasiconvexity

MTAEA Convexity and Quasiconvexity School of Economics, Australian National University February 19, 2010 Convex Combinations and Convex Sets. Definition. Given any finite collection of points x 1,..., x m R n, a point z R n is said to be

More information

POLYHEDRAL GEOMETRY. Convex functions and sets. Mathematical Programming Niels Lauritzen Recall that a subset C R n is convex if

POLYHEDRAL GEOMETRY. Convex functions and sets. Mathematical Programming Niels Lauritzen Recall that a subset C R n is convex if POLYHEDRAL GEOMETRY Mathematical Programming Niels Lauritzen 7.9.2007 Convex functions and sets Recall that a subset C R n is convex if {λx + (1 λ)y 0 λ 1} C for every x, y C and 0 λ 1. A function f :

More information

Affine function. suppose f : R n R m is affine (f(x) =Ax + b with A R m n, b R m ) the image of a convex set under f is convex

Affine function. suppose f : R n R m is affine (f(x) =Ax + b with A R m n, b R m ) the image of a convex set under f is convex Affine function suppose f : R n R m is affine (f(x) =Ax + b with A R m n, b R m ) the image of a convex set under f is convex S R n convex = f(s) ={f(x) x S} convex the inverse image f 1 (C) of a convex

More information

2. Convex sets. affine and convex sets. some important examples. operations that preserve convexity. generalized inequalities

2. Convex sets. affine and convex sets. some important examples. operations that preserve convexity. generalized inequalities 2. Convex sets Convex Optimization Boyd & Vandenberghe affine and convex sets some important examples operations that preserve convexity generalized inequalities separating and supporting hyperplanes dual

More information

Constrained optimization

Constrained optimization Constrained optimization A general constrained optimization problem has the form where The Lagrangian function is given by Primal and dual optimization problems Primal: Dual: Weak duality: Strong duality:

More information

AMS : Combinatorial Optimization Homework Problems - Week V

AMS : Combinatorial Optimization Homework Problems - Week V AMS 553.766: Combinatorial Optimization Homework Problems - Week V For the following problems, A R m n will be m n matrices, and b R m. An affine subspace is the set of solutions to a a system of linear

More information

Solving the Master Linear Program in Column Generation Algorithms for Airline Crew Scheduling using a Subgradient Method.

Solving the Master Linear Program in Column Generation Algorithms for Airline Crew Scheduling using a Subgradient Method. Solving the Master Linear Program in Column Generation Algorithms for Airline Crew Scheduling using a Subgradient Method Per Sjögren November 28, 2009 Abstract A subgradient method for solving large linear

More information

Discrete Optimization 2010 Lecture 5 Min-Cost Flows & Total Unimodularity

Discrete Optimization 2010 Lecture 5 Min-Cost Flows & Total Unimodularity Discrete Optimization 2010 Lecture 5 Min-Cost Flows & Total Unimodularity Marc Uetz University of Twente m.uetz@utwente.nl Lecture 5: sheet 1 / 26 Marc Uetz Discrete Optimization Outline 1 Min-Cost Flows

More information

Convex Optimization Lecture 2

Convex Optimization Lecture 2 Convex Optimization Lecture 2 Today: Convex Analysis Center-of-mass Algorithm 1 Convex Analysis Convex Sets Definition: A set C R n is convex if for all x, y C and all 0 λ 1, λx + (1 λ)y C Operations that

More information

Introduction to Machine Learning

Introduction to Machine Learning Introduction to Machine Learning Maximum Margin Methods Varun Chandola Computer Science & Engineering State University of New York at Buffalo Buffalo, NY, USA chandola@buffalo.edu Chandola@UB CSE 474/574

More information

Optimization Methods. Final Examination. 1. There are 5 problems each w i t h 20 p o i n ts for a maximum of 100 points.

Optimization Methods. Final Examination. 1. There are 5 problems each w i t h 20 p o i n ts for a maximum of 100 points. 5.93 Optimization Methods Final Examination Instructions:. There are 5 problems each w i t h 2 p o i n ts for a maximum of points. 2. You are allowed to use class notes, your homeworks, solutions to homework

More information

Lecture 2. Topology of Sets in R n. August 27, 2008

Lecture 2. Topology of Sets in R n. August 27, 2008 Lecture 2 Topology of Sets in R n August 27, 2008 Outline Vectors, Matrices, Norms, Convergence Open and Closed Sets Special Sets: Subspace, Affine Set, Cone, Convex Set Special Convex Sets: Hyperplane,

More information

Artificial Intelligence

Artificial Intelligence Artificial Intelligence Combinatorial Optimization G. Guérard Department of Nouvelles Energies Ecole Supérieur d Ingénieurs Léonard de Vinci Lecture 1 GG A.I. 1/34 Outline 1 Motivation 2 Geometric resolution

More information

Convex Optimization and Machine Learning

Convex Optimization and Machine Learning Convex Optimization and Machine Learning Mengliu Zhao Machine Learning Reading Group School of Computing Science Simon Fraser University March 12, 2014 Mengliu Zhao SFU-MLRG March 12, 2014 1 / 25 Introduction

More information

Constrained Optimization COS 323

Constrained Optimization COS 323 Constrained Optimization COS 323 Last time Introduction to optimization objective function, variables, [constraints] 1-dimensional methods Golden section, discussion of error Newton s method Multi-dimensional

More information