Projection-Based Methods in Optimization


1 Projection-Based Methods in Optimization Charles Byrne (Charles_Byrne@uml.edu), Department of Mathematical Sciences, University of Massachusetts Lowell, Lowell, MA 01854, USA. June 3, 2013

2 Talk Available on Web Site This slide presentation and the accompanying article are available on my web site, http://faculty.uml.edu/cbyrne/cbyrne.html; click on Talks.

3 Using Prior Knowledge Figure: Minimum Two-Norm and Minimum Weighted Two-Norm Reconstruction.

4 Fourier Transform as Data Figure: Far-field Measurements. The distance from x to P is approximately D − x cos θ.

5 Far-field Measurements Each point x in [−L, L] sends out the signal f(x) exp(iωt). The known frequency ω is the same for each x. We must estimate the function f(x), for |x| ≤ L. What a point P in the far field receives from each x is approximately f(x) exp(ixω cos θ), so our measurement at P provides an approximation of ∫_{−L}^{L} f(x) exp(ixω cos θ) dx. For c the propagation speed and λ the wavelength, if we have cos θ = nπc/(ωL) = nλ/(2L), then we have the nth Fourier coefficient of the function f(x).

6 The Limited-Data Problem We can have cos θ = nπc/(ωL) = nλ/(2L) only if |n| ≤ 2L/λ, so we can measure only a limited number of the Fourier coefficients of f(x). Note that the upper bound on |n| is the length of the interval [−L, L] in units of wavelength.

7 Can We Get More Data? Clearly, we can take our measurements at any point P on the circle, not just at those satisfying the equation cos θ = nπc/(ωL) = nλ/(2L). These measurements provide additional information about f(x), but they won't be additional Fourier coefficients for the Fourier series of f(x) on [−L, L]. How can we use these additional measurements to improve our estimate of f(x)?

8 Over-Sampled Data Suppose that we over-sample; let us take measurements at points P such that the associated angle θ satisfies cos θ = nπc/(ωKL), where K > 1 is a positive integer, instead of cos θ = nπc/(ωL). Now we have Fourier coefficients for the function g(x) that is f(x) for |x| ≤ L, and is zero on the remainder of the interval [−KL, KL].

9 Using Support Information Given our original limited data, we can calculate the orthogonal projection of the zero function onto the subset of all functions on [−L, L] consistent with this data; this is the minimum-norm estimate of f(x), also called the discrete Fourier transform (DFT) estimate, shown in the second graph below. If we use the over-sampled data as Fourier coefficients for g(x) on the interval [−KL, KL], we find that we haven't improved our estimate of f(x) for |x| ≤ L. Instead, we calculate the orthogonal projection of the zero function onto the subset of all functions on [−L, L] that are consistent with the over-sampled data. This is sometimes called the modified DFT (MDFT) estimate. The top graph below shows the MDFT for a case of K = 30.

10 Two Minimum-Norm Estimates Figure: The non-iterative band-limited extrapolation method (MDFT) (top) and the DFT (bottom); 30 times over-sampled.

11 Over-Sampled Fourier-Transform Data For the simulation in the figure above, f(x) = 0 for |x| > L. The top graph is the minimum-norm estimator with respect to the Hilbert space L^2(−L, L), called the modified DFT (MDFT); the bottom graph is the DFT, the minimum-norm estimator with respect to the Hilbert space L^2(−30L, 30L), shown only for [−L, L]. The MDFT is a non-iterative variant of Gerchberg-Papoulis band-limited extrapolation.

12 Using Orthogonal Projection In both of the examples above we see minimum two-norm solutions consistent with the data. These reconstructions involve the orthogonal projection of the zero vector onto the set of solutions consistent with the data. The improvements illustrate the advantage gained by the selection of an appropriate ambient space within which to perform the projection. The constraints of data consistency define a subset onto which we project, and the distance to zero is the function being minimized, subject to the constraints. This leads to the more general problem of optimizing a function, subject to constraints.

13 The Basic Problem The basic problem we consider here is to minimize a real-valued function f : X → R over a subset C ⊆ X, where X is an arbitrary set. With ι_C(x) = 0, for x ∈ C, and ι_C(x) = +∞, otherwise, we can rewrite the problem as minimizing f(x) + ι_C(x) over all x ∈ X.

14 The Limited-Data Problem We want to reconstruct or estimate a function of several continuous variables. We have limited noisy measured data and some prior information that serve to constrain that function. We discretize the function, so that the object is described as an unknown vector x in R^J.

15 Constraints We want our reconstruction to be at least approximately consistent with the measured data. We may know that the entries of x are non-negative. When x is a vectorized image, we may have prior knowledge of its general appearance (a head slice, for example). If x is an extrapolated band-limited time series, we may have prior knowledge of the extent of the band.

16 Representing Constraints Prior knowledge of general properties of x can be incorporated through the choice of the ambient space. Other constraints, such as the measured data, tell us that x lies in a subset C of the ambient space.

17 An Example: Linear Functional Data Suppose that the measured data vector b is linear in x, with Ax = b under-determined. The minimum two-norm solution minimizes ∑_{j=1}^J x_j^2, subject to Ax = b. Let the vector p with entries p_j > 0 be a prior estimate of the magnitudes |x_j|. The minimum weighted two-norm solution minimizes ∑_{j=1}^J x_j^2 / p_j, subject to Ax = b.
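
To make the two estimates on this slide concrete, here is a small numpy sketch (not from the talk; the matrix size and the prior weights p_j are invented for illustration). The minimum two-norm solution is A^T(AA^T)^{−1}b, and, with P = diag(p), the minimum weighted two-norm solution is P A^T(A P A^T)^{−1} b.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 8))   # under-determined: 3 equations, 8 unknowns
b = rng.standard_normal(3)

# Minimum two-norm solution: x = A^T (A A^T)^{-1} b
x_min_norm = A.T @ np.linalg.solve(A @ A.T, b)

# Minimum weighted two-norm solution: minimize sum_j x_j^2 / p_j subject to Ax = b.
# With P = diag(p), the minimizer is x = P A^T (A P A^T)^{-1} b.
p = np.linspace(1.0, 4.0, 8)      # hypothetical prior estimates of the magnitudes |x_j|
P = np.diag(p)
x_weighted = P @ A.T @ np.linalg.solve(A @ P @ A.T, b)

print(np.allclose(A @ x_min_norm, b), np.allclose(A @ x_weighted, b))  # True True
```

Entries with larger prior weight p_j are penalized less and so are allowed to be larger in the weighted reconstruction.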

18 Using Optimization When the constraint set C is relatively small, any member of C may provide an adequate solution to the problem. More likely, C is relatively large, and we may choose to determine x by minimizing a cost function over the set C.

19 An Example: Systems of Linear Equations Let the data b pertaining to x be linear, so that Ax = b for some matrix A. 1. When there are infinitely many solutions, we may select x to minimize a norm or other suitable distance measure; 2. When there are no solutions, we may select a least-squares or other approximate solution; 3. When the data b is noisy, we may select a regularized solution.

20 Barrier Functions: An Example The problem is to minimize the function f(x) = f(x_1, x_2) = x_1^2 + x_2^2, subject to x_1 + x_2 ≥ 1. For each positive integer k, the vector x^k with entries x_1^k = x_2^k = (1/4)(1 + √(1 + 4/k)) minimizes the function B_k(x) = x_1^2 + x_2^2 − (1/k) log(x_1 + x_2 − 1) = f(x) + (1/k) b(x). Notice that x_1^k + x_2^k > 1, so each x^k satisfies the constraint. As k → +∞, x^k converges to (1/2, 1/2), which solves the original problem.
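
As a quick numerical check of this barrier iteration (my own sketch, not part of the talk), we can minimize B_k directly for increasing k and watch the iterates approach (1/2, 1/2); scipy's Nelder-Mead minimizer stands in for an exact solve.

```python
import numpy as np
from scipy.optimize import minimize

def B(x, k):
    """Barrier objective B_k(x) = x1^2 + x2^2 - (1/k) log(x1 + x2 - 1)."""
    x1, x2 = x
    if x1 + x2 <= 1.0:          # outside the interior: the barrier is infinite
        return np.inf
    return x1**2 + x2**2 - (1.0 / k) * np.log(x1 + x2 - 1.0)

x = np.array([2.0, 2.0])        # start well inside the feasible region
for k in [1, 10, 100, 1000]:
    x = minimize(B, x, args=(k,), method="Nelder-Mead").x
    print(k, x)                 # the iterates x^k move toward (0.5, 0.5) from inside
```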

21 Penalty Functions: An Example The problem is to minimize the function f(x) = (x + 1)^2, subject to x ≥ 0. Our penalty function is p(x) = x^2, for x ≤ 0, and p(x) = 0, for x > 0. At the kth step we minimize f(x) + k p(x) to get x^k = −1/(k + 1), which converges to the right answer, x = 0, as k → ∞. The limit x = 0 satisfies the constraint, but the x^k do not; this is an exterior-point method.
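
The penalty example admits the same kind of check (again just an illustrative sketch): minimizing f(x) + k p(x) on a grid shows the minimizers −1/(k + 1) approaching the constraint x = 0 from outside the feasible set, in contrast with the interior iterates of the barrier method above.

```python
import numpy as np

def penalized(x, k):
    """f(x) + k p(x), with f(x) = (x + 1)^2 and p(x) = x^2 for x <= 0, 0 otherwise."""
    return (x + 1.0) ** 2 + k * np.where(x <= 0.0, x**2, 0.0)

grid = np.linspace(-2.0, 2.0, 400001)   # fine grid; the exact minimizer is -1/(k + 1)
for k in [1, 10, 100, 1000]:
    xk = grid[np.argmin(penalized(grid, k))]
    print(k, xk)                         # approaches 0 through infeasible (negative) values
```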

22 Constrained Optimization The Problem: to minimize a function f : X → (−∞, +∞], over a non-empty subset C of X, where X is an arbitrary set. Auxiliary-Function Methods: At the kth step of an auxiliary-function (AF) algorithm we minimize a function G_k(x) = f(x) + g_k(x) over x ∈ C to get x^k. Auxiliary-function algorithms are closely related to sequential unconstrained minimization (SUM) methods. Several SUM methods, such as barrier-function and penalty-function methods, can be reformulated as AF methods.

23 Properties of Auxiliary Functions Auxiliary functions g_k have the properties 1. g_k(x) ≥ 0, for all x ∈ C; 2. g_k(x^{k−1}) = 0. Therefore, we have the following theorem. Theorem: Let {x^k} be generated by an AF algorithm. Then the sequence {f(x^k)} is non-increasing. Proof: We have f(x^{k−1}) = G_k(x^{k−1}) ≥ G_k(x^k) ≥ f(x^k).

24 GOALS: The vector x^k minimizes G_k(x) = f(x) + g_k(x) over x ∈ C. 1. We would like for the sequence {x^k} to converge to some x ∈ C that solves the problem. This requires a topology on X. 2. Failing that, we would like the sequence {f(x^k)} to converge to d = inf{f(x) : x ∈ C}. 3. At the very least, we want the sequence {f(x^k)} to be non-increasing.

25 SUMMA An AF algorithm is said to be in the SUMMA class if G_k(x) − G_k(x^k) ≥ g_{k+1}(x) ≥ 0, for all x ∈ C. Theorem: For algorithms in the SUMMA class, the sequence {f(x^k)} is non-increasing, and we have {f(x^k)} → d = inf{f(x) : x ∈ C}.

26 SUMMA Proof: If f(x^k) ≥ d* > f(z) ≥ d for all k, then g_k(z) − g_{k+1}(z) ≥ g_k(z) − (G_k(z) − G_k(x^k)) = f(x^k) − f(z) + g_k(x^k) ≥ d* − f(z) > 0. But the decreasing non-negative sequence {g_k(z)} cannot have its successive differences bounded away from zero.

27 Examples of SUMMA A number of well known algorithms either are in the SUMMA class, or can be reformulated to be in the SUMMA class, including 1. Barrier-function methods; 2. Penalty-function methods; 3. Forward-backward splitting (the CQ algorithm, projected Landweber, projected gradient descent); 4. Alternating minimization with the 5-point property (simultaneous MART); 5. Certain cases of the EM algorithm; 6. Proximal minimization with Bregman distances; 7. Statistical majorization minimization.

28 Barrier-Function Methods A function b : X → [0, +∞] is a barrier function for C if b is finite on C, b(x) = +∞ for x not in C, and, for topologized X, b(x) → +∞ as x approaches the boundary of C. At the kth step of the iteration we minimize f(x) + (1/k) b(x) to get x^k. Equivalently, we minimize G_k(x) = f(x) + g_k(x), with g_k(x) = [(k − 1)f(x) + b(x)] − [(k − 1)f(x^{k−1}) + b(x^{k−1})]. Then G_k(x) − G_k(x^k) = g_{k+1}(x).

29 Penalty-Function Methods We select a non-negative function p : X → R with the property that p(x) = 0 if and only if x is in C, and then, for each positive integer k, we minimize f(x) + k p(x), or, equivalently, p(x) + (1/k) f(x), to get x^k. Most, but not all, of what we need concerning penalty-function methods can then be obtained from the discussion of barrier-function methods.

30 Bregman Distances Let f : Z ⊆ R^J → R be convex on its domain Z, and differentiable on the interior U of Z. For x ∈ Z and y ∈ U, the Bregman distance from y to x is D_f(x, y) = f(x) − f(y) − ⟨∇f(y), x − y⟩. Then, because of the convexity of f, D_f(x, y) ≥ 0 for all x and y. The Euclidean distance is the Bregman distance for f(x) = (1/2)‖x‖_2^2.

31 Proximal Minimization Algorithms (PMA) Let f : C ⊆ R^J → R be convex on C and differentiable on the interior U of C. At the kth step of a proximal minimization (prox min) algorithm (PMA), we minimize the function G_k(x) = f(x) + D_h(x, x^{k−1}) to get x^k. The function g_k(x) = D_h(x, x^{k−1}) is the Bregman distance associated with the Bregman function h. We assume that each x^k lies in U, whose closure is the set C. We have G_k(x) − G_k(x^k) = D_f(x, x^k) + D_h(x, x^k) ≥ D_h(x, x^k) = g_{k+1}(x).

32 Another Job for the PMA Suppose that f(x) = (1/2)‖Ax − b‖_2^2, with AA^T invertible. If the PMA sequence {x^k} converges to some x* and ∇h(x^0) is in the range of A^T, then x* minimizes h(x) over all x with Ax = b.

33 Obstacles in the PMA To obtain x^k in the PMA we must solve the equation ∇f(x^k) + ∇h(x^k) = ∇h(x^{k−1}). This is usually not easy. However, we can modify the PMA to overcome this obstacle. This modified PMA is an interior-point algorithm that we have called the IPA.

34 The IPA We still get x^k by minimizing G_k(x) = f(x) + D_h(x, x^{k−1}). With a(x) = h(x) + f(x), we find that x^k solves ∇a(x^k) = ∇a(x^{k−1}) − ∇f(x^{k−1}). Therefore, we look for a(x) so that 1. h(x) = a(x) − f(x) is convex; 2. obtaining x^k from ∇a(x^k) is now easy.

35 The IPA and Constraints In the PMA, h can be chosen to force x^k to be in U, the interior of the domain of h. In the IPA it is a(x) that we select, and incorporating the constraints within a(x) may not be easy.

36 The Projected Landweber Algorithm as IPA We want to minimize f(x) = (1/2)‖Ax − b‖_2^2 over x ∈ C. We select γ so that 0 < γ < 1/ρ(A^T A), and a(x) = (1/(2γ))‖x‖_2^2. We have D_f(x, y) = (1/2)‖Ax − Ay‖_2^2, and we minimize G_k(x) = f(x) + (1/(2γ))‖x − x^{k−1}‖_2^2 − (1/2)‖Ax − Ax^{k−1}‖_2^2 over x in C to get x^k = P_C(x^{k−1} − γA^T(Ax^{k−1} − b)).
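
A sketch of the projected Landweber iteration in numpy (illustrative only): C is taken to be the non-negative orthant, so P_C is entrywise clipping, and γ is chosen below 1/ρ(A^T A) using the largest eigenvalue of A^T A.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 10))
b = rng.standard_normal(20)

gamma = 0.9 / np.linalg.eigvalsh(A.T @ A).max()    # 0 < gamma < 1/rho(A^T A)
x = np.zeros(10)
for _ in range(500):
    # x^k = P_C(x^{k-1} - gamma A^T (A x^{k-1} - b)), with P_C = projection onto x >= 0
    x = np.clip(x - gamma * A.T @ (A @ x - b), 0.0, None)

print(x)   # an approximate non-negatively constrained least-squares solution
```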

37 An Example of the IPA: Projected Gradient Descent Let f be convex and differentiable on R^J, with ∇f L-Lipschitz, and 0 < γ ≤ 1/L. Let C ⊆ R^J be closed, nonempty, and convex. Let a(x) = (1/(2γ))‖x‖_2^2. At the kth step we minimize G_k(x) = f(x) + (1/(2γ))‖x − x^{k−1}‖_2^2 − D_f(x, x^{k−1}), over x ∈ C, obtaining x^k = P_C(x^{k−1} − γ∇f(x^{k−1})); P_C is the orthogonal projection onto C.

38 Projected Gradient Descent as SUMMA The auxiliary function g_k(x) can be written as g_k(x) = (1/(2γ))‖x − x^{k−1}‖_2^2 − D_f(x, x^{k−1}) = D_h(x, x^{k−1}), where h(x) is the convex differentiable function h(x) = a(x) − f(x) = (1/(2γ))‖x‖_2^2 − f(x). Then G_k(x) − G_k(x^k) = D_a(x, x^k) ≥ D_h(x, x^k) = g_{k+1}(x).

39 Majorization Minimization or Optimization Transfer Majorization minimization or optimization transfer is used in statistical optimization. For each fixed y, let g(x|y) ≥ f(x), for all x, and g(y|y) = f(y). At the kth step we minimize g(x|x^{k−1}) to get x^k. This statistical method is equivalent to minimizing f(x) + D(x, x^{k−1}) = f(x) + g_k(x), where g_k(x) = D(x, x^{k−1}), for some distance measure D(x, z) ≥ 0, with D(z, z) = 0.
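
A tiny MM illustration (my own, not from the talk): to minimize f(x) = ∑_i |x − a_i|, majorize each |x − a_i| by the quadratic (x − a_i)^2/(2|x^{k−1} − a_i|) + |x^{k−1} − a_i|/2, which lies above |x − a_i| and touches it at x = x^{k−1}. Minimizing the majorizer is then a weighted least-squares step, and the iterates move to the median of the data.

```python
import numpy as np

a = np.array([1.0, 2.0, 7.0, 9.0, 10.0])  # made-up data; the minimizer of sum_i |x - a_i| is the median
x = a.mean()                              # starting point
for _ in range(100):
    w = 1.0 / np.maximum(np.abs(x - a), 1e-12)   # weights 1/|x^{k-1} - a_i| from the quadratic majorizer
    x = np.sum(w * a) / np.sum(w)                # minimize the majorizer g(x | x^{k-1})

print(x, np.median(a))                           # both approximately 7.0
```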

40 The Method of Auslender and Teboulle The method of Auslender and Teboulle is a particular example of an MM method that is not in SUMMA. At the kth step of their method we minimize G_k(x) = f(x) + D(x, x^{k−1}) to get x^k. They assume that D has an associated induced proximal distance H(a, b) ≥ 0, finite for a and b in C, with H(a, a) = 0 and ⟨∇_1 D(b, a), c − b⟩ ≥ H(c, a) − H(c, b), for all c in C. Here ∇_1 D(x, y) denotes the gradient with respect to the first variable, x. We do have {f(x^k)} → d. If D = D_h, then H = D_h also; Bregman distances are self-proximal.

41 Projected Gradient Descent (PGD) Revisited At the kth step of the PGD we minimize G_k(x) = f(x) + (1/(2γ))‖x − x^{k−1}‖_2^2 − D_f(x, x^{k−1}), over x ∈ C. Equivalently, we minimize the function ι_C(x) + f(x) + (1/(2γ))‖x − x^{k−1}‖_2^2 − D_f(x, x^{k−1}), over all x in R^J, with ι_C(x) = 0 for x ∈ C and ι_C(x) = +∞ otherwise. Now the objective function is a sum of two functions, one non-differentiable. The forward-backward splitting method then applies.

42 Moreau's Proximity Operators Let f : R^J → R be convex. For each z ∈ R^J the minimum m_f(z) := min_x { f(x) + (1/2)‖x − z‖_2^2 } is attained at a unique point x = prox_f(z). Moreau's proximity operator prox_f extends the notion of orthogonal projection onto a closed convex set: if f(x) = ι_C(x) then prox_f(x) = P_C(x). Also x = prox_f(z) iff z − x ∈ ∂f(x) iff x = J_{∂f}(z), where J_{∂f}(z) is the resolvent of the set-valued operator ∂f.
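
Two standard proximity operators, written as a sketch to make the definition concrete: the prox of the indicator ι_C is just the projection P_C (here C is a box), and, as an extra example not on the slide, the prox of γ‖·‖_1 is entrywise soft-thresholding.

```python
import numpy as np

def prox_indicator_box(z, lo, hi):
    """prox of the indicator of the box [lo, hi]^J = orthogonal projection onto the box."""
    return np.clip(z, lo, hi)

def prox_l1(z, gamma):
    """prox of gamma * ||x||_1: entrywise soft-thresholding."""
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

z = np.array([-2.0, 0.3, 1.5])
print(prox_indicator_box(z, 0.0, 1.0))   # [0.   0.3  1. ]
print(prox_l1(z, 0.5))                   # [-1.5  0.   1. ]
```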

43 Forward-Backward Splitting Let f : R^J → R be convex, with f = f_1 + f_2, both convex, f_2 differentiable, and ∇f_2 L-Lipschitz continuous. The iterative step of the FBS algorithm is x^k = prox_{γf_1}(x^{k−1} − γ∇f_2(x^{k−1})), which can be obtained by minimizing G_k(x) = f(x) + (1/(2γ))‖x − x^{k−1}‖_2^2 − D_{f_2}(x, x^{k−1}). Convergence of the sequence {x^k} to a solution can be established if γ is chosen to lie within the interval (0, 1/L].
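
Here is a minimal FBS sketch for an invented composite problem: f_2(x) = (1/2)‖Ax − b‖_2^2 is the smooth part, with ∇f_2 Lipschitz of constant L = ρ(A^T A), and f_1(x) = λ‖x‖_1 is the non-smooth part, handled through its prox (soft-thresholding). The data A, b and the weight λ are made up for illustration.

```python
import numpy as np

def prox_l1(z, t):
    """prox of t * ||x||_1: entrywise soft-thresholding."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 50))
b = rng.standard_normal(30)
lam = 0.1                                  # weight on the non-smooth term f_1 = lam * ||x||_1

L = np.linalg.eigvalsh(A.T @ A).max()      # Lipschitz constant of grad f_2
gamma = 1.0 / L                            # step size in (0, 1/L]

x = np.zeros(50)
for _ in range(1000):
    grad = A.T @ (A @ x - b)               # forward (gradient) step on f_2
    x = prox_l1(x - gamma * grad, gamma * lam)   # backward (proximal) step on f_1

print(np.count_nonzero(x), 0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.abs(x).sum())
```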

44 Projected Gradient Descent as FBS To put the projected gradient descent method into the framework of the forward-backward splitting we let f_1(x) = ι_C(x), the indicator function of the set C, which is zero for x ∈ C and +∞ otherwise. Then we minimize the function f_1(x) + f_2(x) = ι_C(x) + f(x) over all x ∈ R^J.

45 The CQ Algorithm Let A be a real I by J matrix, and let C ⊆ R^J and Q ⊆ R^I both be closed convex sets. The split feasibility problem (SFP) is to find x in C such that Ax is in Q. The function f_2(x) = (1/2)‖P_Q Ax − Ax‖_2^2 is convex and differentiable, ∇f_2 is ρ(A^T A)-Lipschitz, and ∇f_2(x) = A^T(I − P_Q)Ax. We want to minimize the function f(x) = ι_C(x) + f_2(x), over all x. The FBS algorithm gives the iterative step of the CQ algorithm: with 0 < γ ≤ 1/L, x^k = P_C(x^{k−1} − γA^T(I − P_Q)Ax^{k−1}).
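
An illustrative CQ iteration for a toy split feasibility problem (my own example, not the talk's): C and Q are boxes, so P_C and P_Q are entrywise clips, and γ is set from the largest eigenvalue of A^T A.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((15, 10))

P_C = lambda x: np.clip(x, 0.0, 1.0)                # C = [0, 1]^J
P_Q = lambda y: np.clip(y, -0.5, 0.5)               # Q = [-0.5, 0.5]^I

gamma = 1.0 / np.linalg.eigvalsh(A.T @ A).max()     # 0 < gamma <= 1/rho(A^T A)
x = np.full(10, 0.5)
for _ in range(2000):
    y = A @ x
    # x^k = P_C(x^{k-1} - gamma A^T (I - P_Q) A x^{k-1})
    x = P_C(x - gamma * A.T @ (y - P_Q(y)))

print(np.abs(A @ x - P_Q(A @ x)).max())             # small when Ax is (nearly) in Q
```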

46 Alternating Minimization Suppose that P and Q are arbitrary non-empty sets and the function Θ(p, q) satisfies −∞ < Θ(p, q) ≤ +∞, for each p ∈ P and q ∈ Q. The general AM method proceeds in two steps: we begin with some q^0, and, having found q^{k−1}, we 1. minimize Θ(p, q^{k−1}) over p ∈ P to get p = p^k, and then 2. minimize Θ(p^k, q) over q ∈ Q to get q = q^k. The 5-point property of Csiszár and Tusnády is Θ(p, q) + Θ(p, q^{k−1}) ≥ Θ(p, q^k) + Θ(p^k, q^{k−1}).

47 AM as SUMMA When the 5-point property holds for AM, the sequence {Θ(p^k, q^k)} converges to d = inf_{p,q} Θ(p, q). For each p ∈ P, define q(p) to be some member of Q for which Θ(p, q(p)) ≤ Θ(p, q), for all q ∈ Q; then q(p^k) = q^k. Define f(p) = Θ(p, q(p)). At the kth step of AM we minimize G_k(p) = Θ(p, q^{k−1}) = f(p) + g_k(p), where g_k(p) = Θ(p, q^{k−1}) − Θ(p, q(p)). The 5-point property is then the SUMMA condition G_k(p) − G_k(p^k) ≥ g_{k+1}(p) ≥ 0.

48 Kullback-Leibler Distances For a > 0 and b > 0, the Kullback-Leibler distance from a to b is KL(a, b) = a log a − a log b + b − a, with KL(0, b) = b, and KL(a, 0) = +∞. The KL distance is extended to non-negative vectors entry-wise. Let y ∈ R^I be positive, and P an I by J matrix with non-negative entries. The simultaneous MART (SMART) algorithm minimizes KL(Px, y), and the EMML algorithm minimizes KL(y, Px), both over all non-negative x ∈ R^J. Both algorithms can be viewed as particular cases of alternating minimization.

49 The General EM Algorithm The general EM algorithm is essentially non-stochastic. Let Z be an arbitrary set. We want to maximize L(z) = ∫ b(x, z) dx, over z ∈ Z, where b(x, z) : R^J × Z → [0, +∞]. Given z^{k−1}, we take f(z) = −L(z) and minimize G_k(z) = f(z) + ∫ KL(b(x, z^{k−1}), b(x, z)) dx to get z^k. Since g_k(z) = ∫ KL(b(x, z^{k−1}), b(x, z)) dx ≥ 0, for all z ∈ Z, and g_k(z^{k−1}) = 0, we have an AF method.

50 KL Projections For a fixed x, minimizing the distance KL(z, x) over z in the hyperplane H_i = {z | (Pz)_i = y_i} generally cannot be done in closed form. However, assuming that ∑_{i=1}^I P_{ij} = 1, the weighted KL projections z^i = T_i(x) onto the hyperplanes H_i, obtained by minimizing ∑_{j=1}^J P_{ij} KL(z_j, x_j) over z in H_i, are given in closed form by z_j^i = T_i(x)_j = x_j y_i/(Px)_i, for j = 1, ..., J.

51 SMART and EMML Having found x^k, the next vector in the SMART sequence is x_j^{k+1} = ∏_{i=1}^I (T_i(x^k)_j)^{P_{ij}}, so x^{k+1} is a weighted geometric mean of the T_i(x^k). For the EMML algorithm the next vector in the EMML sequence is x_j^{k+1} = ∑_{i=1}^I P_{ij} T_i(x^k)_j, so x^{k+1} is a weighted arithmetic mean of the T_i(x^k).
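
The SMART and EMML updates side by side, as a sketch. The toy matrix P is normalized so that each column sums to one (the rescaling assumed later on the SMART-as-SUMMA slide), and the data y = Px_true are consistent by construction, so both limits satisfy Px = y.

```python
import numpy as np

rng = np.random.default_rng(4)
P = rng.random((8, 5))
P = P / P.sum(axis=0, keepdims=True)   # rescale so that sum_i P_ij = 1 for all j
x_true = rng.random(5) + 0.1
y = P @ x_true                         # consistent positive data

def T(x):
    """Weighted KL projections: row i holds T_i(x)_j = x_j * y_i / (Px)_i."""
    return x[None, :] * (y / (P @ x))[:, None]

x_smart = np.ones(5)
x_emml = np.ones(5)
for _ in range(500):
    x_smart = np.prod(T(x_smart) ** P, axis=0)   # weighted geometric mean of the T_i(x)
    x_emml = np.sum(P * T(x_emml), axis=0)       # weighted arithmetic mean of the T_i(x)

print(np.linalg.norm(P @ x_smart - y), np.linalg.norm(P @ x_emml - y))   # both near zero
```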

52 The MART The MART algorithm has the iterative step x_j^{k+1} = x_j^k (y_i/(Px^k)_i)^{P_{ij} m_i^{−1}}, where i = k(mod I) + 1 and m_i = max{P_{ij} | j = 1, 2, ..., J}. We can express the MART in terms of the weighted KL projections T_i(x^k): x_j^{k+1} = (x_j^k)^{1 − P_{ij} m_i^{−1}} (T_i(x^k)_j)^{P_{ij} m_i^{−1}}. We see then that the iterative step of the MART is a relaxed weighted KL projection onto H_i, and a weighted geometric mean of the current x_j^k and T_i(x^k)_j.
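
A sketch of the MART sweep, cycling through the rows i = k (mod I) + 1; the toy data are regenerated here (same construction as the sketch above) so the block stands alone.

```python
import numpy as np

rng = np.random.default_rng(4)
P = rng.random((8, 5))
P = P / P.sum(axis=0, keepdims=True)
y = P @ (rng.random(5) + 0.1)             # consistent positive data

m = P.max(axis=1)                         # m_i = max_j P_ij
x = np.ones(5)
for k in range(4000):
    i = k % P.shape[0]                    # i = k (mod I) + 1, written zero-based
    # relaxed weighted KL projection onto H_i:
    # x_j <- x_j * (y_i / (Px)_i)^(P_ij / m_i)
    x = x * (y[i] / (P[i] @ x)) ** (P[i] / m[i])

print(np.abs(P @ x - y).max())            # near zero: x (nearly) solves y = Px
```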

53 The EMART The iterative step of the EMART algorithm, expressed in terms of the weighted KL projections T_i(x^k), is x_j^{k+1} = (1 − P_{ij} m_i^{−1}) x_j^k + P_{ij} m_i^{−1} T_i(x^k)_j. We see then that the iterative step of the EMART is a relaxed weighted KL projection onto H_i, and a weighted arithmetic mean of the current x_j^k and T_i(x^k)_j.

54 MART and EMART Compared When there are non-negative solutions of the system y = Px, the MART sequence {x^k} converges to the solution that minimizes KL(x, x^0). The EMART sequence {x^k} also converges to a non-negative solution, but nothing further is known about this solution. One advantage that the EMART has over the MART is the substitution of multiplication for exponentiation.

55 SMART as SUMMA At the kth step of the SMART we minimize the function G_k(x) = KL(Px, y) + KL(x, x^{k−1}) − KL(Px, Px^{k−1}) to get x^k with entries x_j^k = x_j^{k−1} exp( ∑_{i=1}^I P_{ij} log(y_i/(Px^{k−1})_i) ). We assume that P and x have been rescaled so that ∑_{i=1}^I P_{ij} = 1, for all j. Then g_k(x) = KL(x, x^{k−1}) − KL(Px, Px^{k−1}) ≥ 0, and G_k(x) − G_k(x^k) = KL(x, x^k) ≥ g_{k+1}(x) ≥ 0.

56 SMART as IPA With f(x) = KL(Px, y), the associated Bregman distance is D_f(x, z) = KL(Px, Pz). With a(x) = ∑_{j=1}^J (x_j log(x_j) − x_j), we have D_a(x, z) = KL(x, z) ≥ KL(Px, Pz) = D_f(x, z). Therefore, h(x) = a(x) − f(x) is convex.

57 Convergence of the FBS For each k = 1, 2, ... let G_k(x) = f(x) + (1/(2γ))‖x − x^{k−1}‖_2^2 − D_{f_2}(x, x^{k−1}), where D_{f_2}(x, x^{k−1}) = f_2(x) − f_2(x^{k−1}) − ⟨∇f_2(x^{k−1}), x − x^{k−1}⟩. Here D_{f_2}(x, y) is the Bregman distance formed from the function f_2. The auxiliary function g_k(x) = (1/(2γ))‖x − x^{k−1}‖_2^2 − D_{f_2}(x, x^{k−1}) can be rewritten as g_k(x) = D_h(x, x^{k−1}), where h(x) = (1/(2γ))‖x‖_2^2 − f_2(x).

58 Proof (p.2): Therefore, g_k(x) ≥ 0 whenever h(x) is a convex function. We know that h(x) is convex if and only if ⟨∇h(x) − ∇h(y), x − y⟩ ≥ 0, for all x and y. This is equivalent to (1/γ)‖x − y‖_2^2 − ⟨∇f_2(x) − ∇f_2(y), x − y⟩ ≥ 0. Since ∇f_2 is L-Lipschitz, the inequality holds for 0 < γ ≤ 1/L.

59 Proof (p.3): Lemma: The x^k that minimizes G_k(x) over x is given by x^k = prox_{γf_1}(x^{k−1} − γ∇f_2(x^{k−1})). Proof: We know that x^k minimizes G_k(x) if and only if 0 ∈ ∇f_2(x^k) + (1/γ)(x^k − x^{k−1}) − ∇f_2(x^k) + ∇f_2(x^{k−1}) + ∂f_1(x^k), or, equivalently, (x^{k−1} − γ∇f_2(x^{k−1})) − x^k ∈ ∂(γf_1)(x^k). Consequently, x^k = prox_{γf_1}(x^{k−1} − γ∇f_2(x^{k−1})).

60 Proof (p.4): Theorem: The sequence {x^k} converges to a minimizer of the function f(x), whenever minimizers exist. Proof: We have G_k(x) − G_k(x^k) = (1/(2γ))‖x − x^k‖_2^2 + ( f_1(x) − f_1(x^k) − (1/γ)⟨(x^{k−1} − γ∇f_2(x^{k−1})) − x^k, x − x^k⟩ ). Because (1/(2γ))‖x − x^k‖_2^2 ≥ g_{k+1}(x) and (x^{k−1} − γ∇f_2(x^{k−1})) − x^k ∈ ∂(γf_1)(x^k), the term in parentheses is non-negative.

61 Proof (p.5): Therefore, G_k(x) − G_k(x^k) ≥ (1/(2γ))‖x − x^k‖_2^2 ≥ g_{k+1}(x), and the iteration fits into the SUMMA class. Now let x̂ minimize f(x) over all x. Then G_k(x̂) − G_k(x^k) = f(x̂) + g_k(x̂) − f(x^k) − g_k(x^k) ≤ f(x̂) + G_{k−1}(x̂) − G_{k−1}(x^{k−1}) − f(x^k) − g_k(x^k), so that (G_{k−1}(x̂) − G_{k−1}(x^{k−1})) − (G_k(x̂) − G_k(x^k)) ≥ f(x^k) − f(x̂) + g_k(x^k) ≥ 0.

62 Proof (p.6): Therefore, the sequence {G_k(x̂) − G_k(x^k)} is decreasing and the sequences {g_k(x^k)} and {f(x^k) − f(x̂)} converge to zero. From G_k(x̂) − G_k(x^k) ≥ (1/(2γ))‖x̂ − x^k‖_2^2, it follows that the sequence {x^k} is bounded. Therefore, we may select a subsequence {x^{k_n}} converging to some x**, with {x^{k_n−1}} converging to some x*, and therefore f(x*) = f(x**) = f(x̂). Replacing the generic x̂ with x*, we find that {G_k(x*) − G_k(x^k)} is decreasing to zero. We conclude that the sequence {‖x* − x^k‖_2^2} converges to zero, and so {x^k} converges to x*. This completes the proof of the theorem.

63 Summary 1. Auxiliary-function methods can be used to impose constraints, but also to provide closed-form iterations. 2. When the SUMMA condition holds, the sequence {f(x^k)} converges to the infimum of f(x) over C. 3. A wide variety of iterative methods fall into the SUMMA class. 4. The alternating minimization method is an auxiliary-function method and the 5-point property is identical to the SUMMA condition. 5. The SUMMA framework helps in proving convergence.

64 The End
