A Center-Cut Algorithm for Quickly Obtaining Feasible Solutions and Solving Convex MINLP Problems


Jan Kronqvist (a), David E. Bernal (b), Andreas Lundell (a), and Tapio Westerlund (a)

(a) Faculty of Science and Engineering, Åbo Akademi University, Turku, Finland
(b) Department of Chemical Engineering, Carnegie Mellon University, Pittsburgh PA, USA

February, 2018

Abstract

Here we present a center-cut algorithm for convex mixed-integer nonlinear programming (MINLP) that can either be used as a primal heuristic or as a deterministic solution technique. Like many other algorithms for convex MINLP, the center-cut algorithm constructs a linear approximation of the original problem. The main idea of the algorithm is to use the linear approximation differently in order to find feasible solutions within only a few iterations. The algorithm chooses trial solutions as the center of the current linear outer approximation of the nonlinear constraints, making the trial solutions more likely to satisfy the constraints. The ability to find feasible solutions within only a few iterations makes the algorithm well suited as a primal heuristic, and we prove that the algorithm finds the optimal solution within a finite number of iterations. Numerical results show that the algorithm obtains feasible solutions quickly and is able to obtain good solutions.

1 Introduction

Many optimization problems arising in engineering and science contain both some form of distinct decision making and nonlinear correlations. Mixed-integer nonlinear programming (MINLP) deals with such optimization problems by combining the modeling capabilities of mixed-integer linear programming (MILP) and nonlinear programming (NLP) into a powerful modeling framework. The integer variables make it possible to incorporate discrete decisions and logical relations in the optimization problems, and the combination of linear and nonlinear functions makes it possible to accurately describe a variety of different phenomena.
The ability to accurately model real-world problems has made MINLP an active research area, and there exists a vast number of applications in fields such as engineering, computational chemistry, and finance, e.g., see Biegler & Grossmann (2004) and Floudas (2000). MINLP problems are often classified as either convex or non-convex based on properties of the nonlinear functions (Lee & Leyffer, 2011). Here, we focus on convex MINLP problems, and the convex properties are exploited in the algorithm presented later on. Most deterministic methods for solving convex MINLP problems are based on some kind of decomposition technique, where the optimal solution is obtained by solving a sequence of tractable subproblems. Such methods are, e.g., extended cutting plane (ECP) (Westerlund & Petterson, 1995), extended supporting hyperplane (ESH) (Kronqvist et al., 2016), generalized Benders decomposition (GBD) (Geoffrion, 1972), outer approximation (OA) (Duran & Grossmann, 1986) and branch and bound (BB) techniques (Dakin, 1965). Even though several methods are available for solving convex MINLP problems, such problems can still be challenging to solve, as shown in the solver comparison by Kronqvist et al. (2016). Thus, further research in the field is still motivated. In recent years there has been a growing interest in so-called primal heuristics, i.e., algorithms intended to quickly obtain good feasible solutions to an optimization problem. Such algorithms are useful not only because they can provide a good feasible solution, but also because knowing a feasible solution can greatly improve the performance of solvers, e.g., Fischetti & Lodi (2011) claimed that primal heuristics were one of the most

crucial improvements for MILP in the last decade. Different primal heuristics have also been proposed for MINLP problems, e.g., undercover (Berthold & Gleixner, 2014) and feasibility pump (Bonami et al., 2009). A review of several primal heuristics for MINLP is given by Berthold (2014). Primal heuristics can be a valuable tool, especially for difficult MINLP problems, since they may be the only way to obtain a good solution, and for some applications, such as real-time optimization, it may be of utmost importance to quickly obtain a feasible solution. Knowing a feasible solution can also improve the performance of MINLP solvers, as shown in Berthold (2014) and Bernal et al. (2017). A good feasible solution can significantly reduce the search tree in branch and bound, and in solvers based on ECP or ESH, it provides an upper bound on the objective, enabling the use of the optimality gap as a stopping criterion. It is also possible to solve pseudoconvex MINLP problems as a sequence of feasibility problems, as in the GAECP algorithm (Westerlund & Pörn, 2002). In this paper, we describe a new center-cut algorithm for convex MINLP problems that can either be used as a primal heuristic or as a deterministic solution technique. The algorithm was first presented briefly in a conference paper from the ESCAPE27 conference (Kronqvist et al., 2017), and here we continue with more details and a rigorous proof that the algorithm will obtain the optimal solution in a finite number of iterations. Like OA, ECP, and ESH, the center-cut algorithm also constructs a polyhedral approximation of the feasible region defined by the nonlinear constraints. However, here we use the polyhedral approximation differently, in a way that will enable us to find feasible solutions within only a few iterations.
The main idea of the algorithm is to choose the trial solutions in the center of the polyhedral approximation, instead of choosing the trial solutions on the boundary of the polyhedral approximation as in ECP and ESH. A similar concept for solving NLP problems was proposed by Elzinga & Moore (1975). The algorithm should be well suited as a primal heuristic, but it can also be used as a stand-alone solution technique with guaranteed convergence.

2 Background

A convex MINLP problem can be defined compactly as

  min_{(x,y) ∈ N ∩ L ∩ Y} c_1^T x + c_2^T y,  (P-MINLP)

where the sets N, L and Y are given by

  N = {(x, y) ∈ R^n × R^m : g_k(x, y) ≤ 0, k = 1, ..., l},
  L = {(x, y) ∈ R^n × R^m : Ax + By ≤ b},
  Y = {y ∈ Z^m}.  (1)

In Eq. (1), A and B are matrices defining the linear constraints, including variable bounds. Throughout this paper we consider the following assumptions to be true:

Assumption 1. The nonlinear functions g_1, ..., g_l are convex and continuously differentiable.

Assumption 2. The intersection L ∩ Y defines a compact nonempty set, i.e., all variables must be bounded.

Assumption 3. By fixing the integer variables in the MINLP problem to a feasible integer combination ȳ, the resulting NLP problem satisfies Slater's condition, see Slater et al. (1959).

These assumptions are needed in order to rigorously prove that the algorithm converges to the optimal solution in a finite number of iterations. Similar conditions are also needed to guarantee convergence of OA and ESH, see Fletcher & Leyffer (1994) and Kronqvist et al. (2016). It should be possible to handle MINLP problems with nonsmooth convex functions with the algorithm, although such problems are not considered here. One of the key elements in ECP, ESH, and OA is to construct an iteratively improving polyhedral approximation of the set N.
The approximation is obtained by first-order Taylor series expansions of the nonlinear constraints generated at the trial solutions; at iteration i it is given by

  N̂_i = { (x, y) : g_k(x^j, y^j) + ∇g_k(x^j, y^j)^T [x − x^j; y − y^j] ≤ 0, j = 1, ..., i, k ∈ K_j },  (2)

where K_j contains the indices of all nonlinear constraints active at the trial solution (x^j, y^j), i.e., all nonlinear constraints such that g_k(x^j, y^j) ≥ 0. Due to convexity, we know that the polyhedral approximation will overestimate the set N and every point within N is also a point within N̂_i, i.e., N ⊆ N̂_i. The standard approach for using the polyhedral approximation is to simply replace the set N by N̂_i in problem (P-MINLP). The next trial solution can then be obtained by solving the following MILP problem

  (x^{i+1}, y^{i+1}) ∈ argmin_{(x,y) ∈ N̂_i ∩ L ∩ Y} c_1^T x + c_2^T y.  (3)

Both ECP and ESH choose the trial solutions by solving problem (3), and OA selects the integer combination by the same approach. However, if we choose the trial solutions by solving problem (3), then we will not obtain a feasible solution before the very last iteration, see e.g., Kronqvist et al. (2016). By this approach, the trial solutions tend to be selected as points on the boundary of the set N̂_i. In the center-cut algorithm we will use the polyhedral approximation differently; instead of choosing points on the boundary, we will select the trial solutions as points in the center of the polyhedral approximation. Since we know that the feasible set defined by the nonlinear constraints is contained somewhere in N̂_i, it seems natural to search for a feasible solution in the center of the set.

3 The center-cut algorithm

As previously mentioned, the main idea of the center-cut algorithm is to choose the trial solutions as the center of the polyhedral approximation of the feasible set defined by the nonlinear constraints. There are several definitions of the center of a set, and here we will use the Chebyshev center. The Chebyshev center is defined as the point furthest away from the boundary in all directions, which is also the center of the largest n-dimensional ball inscribed in the set (Boyd & Vandenberghe, 2004).
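The first-order cuts of Eq. (2) are easy to sketch in code. The snippet below is an illustrative sketch, not the authors' implementation; the constraint g and the expansion point are arbitrary example data. By convexity, the linearization underestimates g everywhere, so the cut keeps every feasible point while cutting off the infeasible trial point.

```python
import numpy as np

def linear_cut(g, grad_g, z_j):
    """Return (a, b) such that a @ z + b <= 0 is the first-order
    Taylor expansion of the convex constraint g(z) <= 0 at z_j, cf. Eq. (2)."""
    a = grad_g(z_j)            # gradient of g at the expansion point
    b = g(z_j) - a @ z_j       # constant term: g(z_j) - grad^T z_j
    return a, b

# Example data: g(x, y) = x^2 + y^2 - 25, a convex disk constraint.
g = lambda z: z[0]**2 + z[1]**2 - 25.0
grad_g = lambda z: np.array([2.0 * z[0], 2.0 * z[1]])

a, b = linear_cut(g, grad_g, np.array([4.0, 4.0]))   # infeasible trial point

# The trial point violates the cut, while feasible points still satisfy it.
assert a @ np.array([4.0, 4.0]) + b > 0
for z in [np.array([0.0, 0.0]), np.array([3.0, 4.0]), np.array([-5.0, 0.0])]:
    assert g(z) <= 1e-9 and a @ z + b <= 1e-9
```

Accumulating such cuts over the trial points is exactly what builds the set N̂_i.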
Since the set N̂_i is a polyhedral set defined by linear inequality constraints, we can find the Chebyshev center of the set simply by solving the following linear programming (LP) problem

  max_{x,y,r} r
  s.t. g_k(x^j, y^j) + ∇g_k(x^j, y^j)^T [x − x^j; y − y^j] + r ‖∇g_k(x^j, y^j)‖_2 ≤ 0,
       j = 1, ..., i, k ∈ K_j,
       x ∈ R^n, y ∈ R^m, r ∈ R,  (4)

where r is the radius of the inscribed ball. For more details on how to find the Chebyshev center of a polyhedral set, see Chapter 8.5 in Boyd & Vandenberghe (2004). To simplify the notation we introduce a new set B_i defined as

  B_i = { (x, y, r) : g_k(x^j, y^j) + ∇g_k(x^j, y^j)^T [x − x^j; y − y^j] + r ‖∇g_k(x^j, y^j)‖_2 ≤ 0, j = 1, ..., i, k ∈ K_j },  (5)

which contains all constraints defining the set N̂_i. In order to obtain a feasible solution to the MINLP problem, we also have to take the linear constraints and integer requirements into consideration. A new trial solution will, therefore, be chosen as the center of the largest ball inscribed in the set N̂_i, with the restriction that the center has to satisfy all linear constraints and integer requirements. The linear constraints and integer restrictions therefore only affect the location of the center, and not directly the radius of the ball. A new trial solution is, thus, obtained by solving the following MILP problem

  (x^{i+1}, y^{i+1}, r^{i+1}) ∈ argmax_{(x,y,r) ∈ B_i ∩ L ∩ Y} r.  (MILP-i)

Since we are maximizing the radius of the ball inscribed in N̂_i, this results in a trial solution minimizing the left-hand sides of the linearized constraints in Eq. (2). Once we have obtained a new trial solution (x^{i+1}, y^{i+1}) there are two possibilities: either it violates some of the nonlinear constraints or it is a feasible solution.
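Dropping the integrality requirement for a moment, the ball subproblem in Eq. (4) is the classical Chebyshev-center LP. A minimal sketch with SciPy, using a unit square as example polyhedron (illustrative data only, not from the paper):

```python
import numpy as np
from scipy.optimize import linprog

def chebyshev_center(A, b):
    """Chebyshev center of {z : A z <= b}: solve
    max r  s.t.  A_k z + r * ||A_k||_2 <= b_k,  r >= 0,
    the LP that (MILP-i) extends with integrality requirements."""
    norms = np.linalg.norm(A, axis=1)
    c = np.zeros(A.shape[1] + 1)
    c[-1] = -1.0                             # linprog minimizes, so minimize -r
    A_ub = np.hstack([A, norms[:, None]])    # variables are (z, r)
    res = linprog(c, A_ub=A_ub, b_ub=b,
                  bounds=[(None, None)] * A.shape[1] + [(0, None)])
    return res.x[:-1], res.x[-1]

# Unit square 0 <= z1, z2 <= 1 written as A z <= b.
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
b = np.array([1.0, 0.0, 1.0, 0.0])
center, radius = chebyshev_center(A, b)      # largest inscribed ball
```

For the unit square the largest inscribed ball is centered at (0.5, 0.5) with radius 0.5, which the LP recovers.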

In case the trial solution (x^{i+1}, y^{i+1}) violates some of the nonlinear constraints, we can improve the polyhedral approximation by generating cutting planes according to

  g_k(x^{i+1}, y^{i+1}) + ∇g_k(x^{i+1}, y^{i+1})^T [x − x^{i+1}; y − y^{i+1}] ≤ 0, ∀k ∈ K_{i+1},  (6)

where K_{i+1} is the index set of all active and violated constraints. The new cutting planes will exclude the solution (x^{i+1}, y^{i+1}) from the search space, see e.g., Westerlund & Pörn (2002). The new cutting planes are added to the polyhedral approximation, and we denote the new approximation as N̂_{i+1}. The polyhedral approximation of the set N is, thus, improved by the accumulation of cutting planes. In the next iteration, we solve subproblem (MILP-i) updated with the new cuts to obtain a new trial solution. Now, in case the trial solution (x^{i+1}, y^{i+1}) is feasible, it may still not be the best possible one with the integer combination given by y^{i+1}. Therefore, we fix the integer variables in the original MINLP problem to the values given by y^{i+1}, resulting in the following convex NLP problem

  (x^{i+1}, y^{i+1}) ∈ argmin_{(x,y) ∈ N ∩ L ∩ Y} c_1^T x + c_2^T y
  s.t. y = y^{i+1}.  (NLP-fixed)

By solving problem (NLP-fixed) we obtain the optimal solution for this specific integer combination. However, the obtained solution may still not be the optimal one to the original MINLP problem. In order to obtain better solutions, we will, therefore, generate an objective cut according to

  c_1^T x + c_2^T y ≤ c_1^T x^{i+1} + c_2^T y^{i+1},  (7)

where (x^{i+1}, y^{i+1}) is the solution obtained by solving subproblem (NLP-fixed). The cut given by Eq. (7) will exclude all solutions that have a worse objective function value than the obtained feasible solution, and will thus reduce the search space. To obtain better solutions we include the objective cut in the polyhedral approximation N̂_{i+1}. Subproblem (MILP-i), by which we choose the new trial solutions, will then contain the objective cut given by Eq.
(7) in the following form

  c_1^T x + c_2^T y + r ‖[c_1; c_2]‖_2 ≤ c_1^T x^{i+1} + c_2^T y^{i+1},  (8)

forcing the next inscribed ball to also take the objective cut into consideration. As long as we obtain solutions to subproblems (MILP-i) with r^i > 0, it is clear that the constraint given by Eq. (8) will force the trial solutions to have a strictly lower objective function value than the obtained feasible solution. Thus, the objective cut will force the algorithm to search for better solutions. Once an objective cut has been added to the polyhedral approximation N̂_i, it will no longer be an outer approximation of the set N. However, due to convexity, we know that the optimal solution will not be excluded from the search space. The search space can further be reduced by generating cutting planes for all nonlinear constraints active at the feasible solution (x^{i+1}, y^{i+1}) according to Eq. (6). In each iteration the radius of the ball inscribed in N̂_i is reduced, since the sets N̂_i shrink due to the added cuts. Later, we prove that the cuts added in each iteration force the radius to converge to zero. If the radius of the largest ball inscribed in N̂_i is zero, the set N̂_i has an empty interior, thus verifying that the optimal solution has been found. In case the original MINLP problem is infeasible, the radius will converge to zero without finding any feasible solution. The convergence properties are discussed in more detail in Section 5. The center-cut algorithm is summarized as pseudo-code in Algorithm 1. In the algorithm, we use the radius as an optimality measure, since a smaller radius will ensure better solutions. However, in order to guarantee that the best-found solution is the optimal solution, we must continue until the radius is reduced to zero. Note that, in case the original MINLP problem has a convex nonlinear objective function, we can simply replace the left-hand side of the objective cut in Eq. (7) by a linearization of the objective.
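In the (x, y, r) space of subproblem (MILP-i), the objective cut of Eq. (8) simply gets an extra r‖c‖_2 term. A small sketch with illustrative numbers (not from the paper) shows why any positive radius then forces a strictly better objective:

```python
import numpy as np

def objective_cut_row(c, incumbent_value):
    """Coefficients of the objective cut in the (z, r) space of (MILP-i):
    c^T z + r * ||c||_2 <= incumbent_value, cf. Eq. (8)."""
    return np.append(c, np.linalg.norm(c)), incumbent_value

c = np.array([3.0, -1.0])            # objective of a problem min 3x - y
row, rhs = objective_cut_row(c, incumbent_value=-10.0)

# A candidate matching the incumbent's objective value is only acceptable
# with r = 0; any r > 0 forces a strictly lower objective value.
z_same = np.array([0.0, 10.0])       # 3*0 - 10 = -10, equal to the incumbent
assert np.isclose(row @ np.append(z_same, 0.0), rhs)   # r = 0: on the cut
assert row @ np.append(z_same, 0.5) > rhs              # r > 0: cut off
```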
There is, therefore, no need to reformulate the problem to obtain a linear objective function. In the next section, we apply the center-cut algorithm to an illustrative example with two variables to give a geometric interpretation of the algorithm.
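Before moving on, the (NLP-fixed) polishing step can be made concrete: once the integer variables are fixed, what remains is a continuous convex NLP that any local NLP solver handles directly. A minimal sketch with SciPy's SLSQP on toy data (the numbers are illustrative assumptions, not from the paper):

```python
import numpy as np
from scipy.optimize import minimize

# Toy instance of (NLP-fixed): with the integer variable fixed at y = 4,
# minimize 3x - y subject to x^2 + y^2 <= 25 and 0 <= x <= 10.
y_fixed = 4.0
res = minimize(lambda x: 3.0 * x[0] - y_fixed,
               x0=np.array([2.0]),
               bounds=[(0.0, 10.0)],
               constraints=[{"type": "ineq",
                             "fun": lambda x: 25.0 - x[0]**2 - y_fixed**2}],
               method="SLSQP")
x_opt = res.x[0]   # best continuous completion for this integer value
```

Here the objective decreases toward x = 0, so the solver returns x_opt = 0 with objective value -4, the best solution for this particular integer combination.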

Algorithm 1 Pseudo-code of the center-cut algorithm

  Specify a tolerance r_min.
  1. Initialization.
     1.1 Set B_1 = R^{n+m} and set the iteration counter i = 1.
     1.2 Solve problem (MILP-i) to obtain (x^1, y^1) and r^1.
  2. While r^i > r_min:
     2.a If (x^i, y^i) satisfies all nonlinear constraints:
         Solve problem (NLP-fixed) to obtain the optimal solution with the given integer combination and store the solution as (x^i, y^i).
         Construct cutting planes for any active constraint according to Eq. (6) and the objective cut according to Eq. (7).
         Generate the set B_{i+1} by adding the new cuts to B_i.
     2.b If (x^i, y^i) does not satisfy all nonlinear constraints:
         Obtain cutting planes for all violated constraints according to Eq. (6).
         Generate the set B_{i+1} by adding all cutting planes to B_i.
     2.c Solve problem (MILP-i) to obtain (x^{i+1}, y^{i+1}, r^{i+1}), and set i = i + 1.
  3. Return the best found feasible solution (x^i, y^i).

4 Illustrative example

For MINLP problems with only two variables, the center-cut algorithm chooses the trial solutions by inscribing the largest possible circle in N̂_i, such that the center of the circle satisfies all linear constraints and integer requirements. To illustrate the basics of the center-cut algorithm, consider the following simple MINLP problem

  min 3x − y
  s.t. x^2 + y^2 ≤ 25,
       x^2 + (5 − y)^2 ≤ 36,
       (5 − x)^2 + y^2 ≤ 25,
       0 ≤ x ≤ 10, 0 ≤ y ≤ 10,
       x ∈ R, y ∈ Z.  (Ex 1)

The MINLP problem (Ex 1) is illustrated in Figure 1. We have applied the basic center-cut algorithm, as presented in Algorithm 1, to the illustrative example problem (Ex 1), and the iterations are shown in Figure 2. In the first iteration, we have no cutting planes approximating the set N; the radius is therefore not limited, and any solution satisfying the linear constraints and integer requirements can be chosen. In the second iteration, we obtain a solution where the circle's center satisfies all constraints, and the solution is improved by solving problem (NLP-fixed). In iteration 2 we generate an objective cut according to Eq.
(7) and a cutting plane for the second nonlinear constraint. The optimal solution is obtained in iteration 4, but we need an additional iteration to verify optimality. In iteration 5 we find that the largest inscribed circle has a radius of zero, thus proving that the optimal solution has been found. As a comparison, it takes 9 iterations with the basic ECP algorithm to find a feasible solution and 3 iterations with OA. In the next section, we describe why the radius of the inscribed ball can be used as an optimality measure, and we prove that the center-cut algorithm will find an optimal solution to the MINLP problem in a finite number of iterations.
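The cut loop and subproblem (MILP-i) on (Ex 1) can be sketched with SciPy's MILP interface. This is a simplified sketch, not the authors' Matlab/Gurobi implementation: the infeasible starting trial point and the cap R_CAP on the radius (needed while too few cuts exist to bound it) are assumptions, and the loop stops at the first feasible center instead of continuing toward optimality with objective cuts.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Nonlinear constraints g(z) <= 0 of (Ex 1), z = (x, y), and their gradients.
gs = [lambda z: z[0]**2 + z[1]**2 - 25.0,
      lambda z: z[0]**2 + (5.0 - z[1])**2 - 36.0,
      lambda z: (5.0 - z[0])**2 + z[1]**2 - 25.0]
grads = [lambda z: np.array([2.0 * z[0], 2.0 * z[1]]),
         lambda z: np.array([2.0 * z[0], -2.0 * (5.0 - z[1])]),
         lambda z: np.array([-2.0 * (5.0 - z[0]), 2.0 * z[1]])]

R_CAP = 20.0                    # assumed cap on r: unbounded before cuts exist
rows, rhs = [], []              # accumulated cuts in (x, y, r) space
center = np.array([8.0, 8.0])   # assumed (infeasible) starting trial point

for _ in range(10):
    violated = [(g, grad) for g, grad in zip(gs, grads) if g(center) > 1e-9]
    if not violated:
        break                   # center satisfies all nonlinear constraints
    for g, grad in violated:
        a = grad(center)        # cut: a^T z + ||a||_2 r <= a^T z_j - g(z_j)
        rows.append([a[0], a[1], np.linalg.norm(a)])
        rhs.append(a @ center - g(center))
    res = milp(c=np.array([0.0, 0.0, -1.0]),          # maximize r
               constraints=LinearConstraint(np.array(rows), -np.inf, np.array(rhs)),
               integrality=np.array([0, 1, 0]),       # y is the integer variable
               bounds=Bounds([0.0, 0.0, 0.0], [10.0, 10.0, R_CAP]))
    center, radius = res.x[:2], res.x[2]

# The ball's center now satisfies every nonlinear constraint of (Ex 1).
assert all(g(center) <= 1e-6 for g in gs)
```

In a full implementation, reaching a feasible center would trigger the (NLP-fixed) polish and an objective cut of the form of Eq. (8), after which the loop continues until the radius is reduced to zero.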

Figure 1: The figure to the left shows the feasible regions defined by the nonlinear constraints of problem (Ex 1). The second figure shows the feasible region defined by the constraints, contours of the objective function, and the optimal solution.

5 Proof of convergence

Here we focus on the convergence properties of the center-cut algorithm. We show that the radius of the inscribed ball converges to zero, and from there we can show that the algorithm will converge to the optimal solution of a convex MINLP problem. To prove that the radius of the inscribed balls will converge to zero, we need some properties presented in Lemma 1.

Lemma 1. In iteration i of the center-cut algorithm, the radius of the ball inscribed in N̂_i is bounded by the distance between the current center (x^i, y^i) and any previously obtained center (x^{i−l}, y^{i−l}) according to

  r^i ≤ ‖[x^i − x^{i−l}; y^i − y^{i−l}]‖_2,

where 0 < l < i.

Proof. At iteration i − l, we generate cuts according to Eq. (6), and if we obtain a feasible solution we also add an objective cut according to Eq. (7). These cuts will either exclude the center (x^{i−l}, y^{i−l}) from the set N̂_{i−l+1} or result in a cut which passes through it. The center (x^{i−l}, y^{i−l}) is, thus, either outside N̂_{i−l+1} or on the boundary of N̂_{i−l+1}. Therefore, the radius r^i cannot be greater than the distance from the current center (x^i, y^i) to the previously obtained center (x^{i−l}, y^{i−l}); otherwise parts of the ball would be outside of N̂_{i−l+1}.

By using the bounds on the radius given by Lemma 1, it is possible to prove that the radius converges to zero, as described by the following theorem.

Theorem 1. With the center-cut algorithm, the radius r^i of the inscribed balls converges to zero when i → ∞.

Proof. Assume that the algorithm does not stop and we obtain an infinite sequence of centers {(x^i, y^i)}_{i=1}^∞. Due to Assumption 2, we know that all centers in the sequence belong to a compact subset of R^{n+m}.
According to the Bolzano–Weierstrass theorem, the sequence must contain at least one convergent subsequence {(x^{i_j}, y^{i_j})}_{j=1}^∞. The convergent subsequence also forms a Cauchy sequence with the following property

  lim_{j→∞} ‖[x^{i_j} − x^{i_{j−1}}; y^{i_j} − y^{i_{j−1}}]‖_2 = 0.

Note that Lemma 1 is true for any two centers, and therefore we obtain lim_{i→∞} r^i = 0.

[Figure 2, four panels: Iteration 1, Iteration 2, Iteration 3, Iteration 4.]

Figure 2: The figure shows the first four iterations of the center-cut algorithm applied to problem (Ex 1). The figures show the feasible region defined by the nonlinear constraints and the region defined by the sets N̂_i. The circular dot represents the center of the inscribed circle and the dashed curves represent the circle. The solution obtained by solving subproblem (NLP-fixed) is shown by the square dot.

In order to prove that the algorithm obtains the optimal solution in a finite number of iterations, we need some further properties, presented in Lemma 2, regarding the geometry of the feasible region. The proof of Lemma 2 follows from Slater's condition, but for the sake of completeness, we have included the proof. Here we denote the optimal value of the objective function of the MINLP problem as z*.

Lemma 2. For any ϵ > 0, we can inscribe a ball with nonzero radius in the set

  Ñ = { (x, y) : c_1^T x + c_2^T y ≤ z* + ϵ, g_k(x, y) ≤ 0 ∀k },  (9)

such that the center of the ball satisfies all constraints of the MINLP problem.

Proof. Note that z* is given by z* = c_1^T x* + c_2^T y*, where (x*, y*) is an optimal solution to the MINLP problem. The optimal solution strictly satisfies the restriction on the objective function, c_1^T x* + c_2^T y* < z* + ϵ.

However, the optimal solution might be located on the boundary of the set Ñ, and therefore we cannot use it as the center for the ball. By Assumption 3, we know that the nonlinear constraints satisfy Slater's condition even if we fix the integer variables to y*, i.e.,

  ∃ x̄ : A x̄ + B y* ≤ b, g_k(x̄, y*) < 0 ∀k.

Next, we define a new point as

  x̂ = α x* + (1 − α) x̄,  (10)

where α ∈ [0, 1) is an interpolation parameter. Now, we want to choose α such that (x̂, y*) strictly satisfies all the constraints in the set Ñ. If c_1^T (x̄ − x*) < ϵ, it is sufficient to choose α = 0 to strictly satisfy the objective constraint in Ñ. Otherwise, α can be chosen as

  α = ( c_1^T (x̄ − x*) − ϵ/2 ) / c_1^T (x̄ − x*) < 1,

which results in c_1^T x̂ + c_2^T y* = z* + ϵ/2. Since x̂ was chosen as an interpolation between two points with the same integer combination, both of which satisfy all the constraints, it is clear that (x̂, y*) will satisfy all constraints. Furthermore, since (x̄, y*) strictly satisfies the nonlinear constraints and α < 1, we get g_k(x̂, y*) < 0 ∀k. The point (x̂, y*) is, thus, located within the interior of the set Ñ, and therefore it is possible to place a ball with a nonzero radius at (x̂, y*) such that the entire ball is contained in the set Ñ.

Now, we have all the intermediate results needed for proving that the optimal solution is obtained in a finite number of iterations.

Theorem 2. The center-cut algorithm obtains the optimal solution to problem (P-MINLP) in a finite number of iterations.

Proof. As before, we denote the optimal solution to the MINLP problem as (x*, y*) and the optimal objective value as z*. Next, we choose an ϵ > 0 such that Ñ ∩ L ∩ Y only contains optimal values for the integer variables y*, where Ñ is given by Eq. (9). From Lemma 2, we know that we can inscribe a ball with radius r > 0 in the set Ñ, such that the center satisfies all constraints. As long as the algorithm has not obtained the optimal solution, the entire set Ñ will be contained within N̂_i, i.e., Ñ ⊆ N̂_i.
This is true because all the cutting planes added according to Eq. (6) overestimate the feasible region defined by the nonlinear constraints. As long as Ñ ⊆ N̂_i, we know that the radius of the inscribed balls will be greater than or equal to r. From Theorem 1 we know that the radius of the inscribed balls converges to zero, and therefore there exists a finite integer p such that r^i < r for all i ≥ p. The only way to reduce the radius below r is to generate an objective cut according to Eq. (7) stricter than the objective cut in the set given by Eq. (9). Such an objective cut must be generated in an iteration p at a feasible solution (x^p, y^p) such that c_1^T x^p + c_2^T y^p < z* + ϵ. In the beginning, we chose ϵ such that the objective function can only be within ϵ of the optimum if the integer variables take on optimal values, i.e., y^p = y*. Furthermore, the continuous variables in iteration p will be chosen by solving subproblem (NLP-fixed) with the integer variables fixed as y*, and the subproblem will then return an optimal solution for the continuous variables, i.e., (x^p, y^p) = (x*, y*). The optimal solution to the MINLP problem was, thus, obtained in iteration p.

From the proof of Theorem 2, it follows that the optimal solution will be obtained once the radius of the inscribed ball is reduced below a certain value. Furthermore, if the radius is reduced to zero, it automatically verifies that the optimal solution has been obtained. In the algorithm, we therefore use the radius as an optimality measure and termination criterion. For rigorously verifying optimality, the radius needs to be reduced to zero; however, in practice it is often sufficient to stop once the radius is close to zero, i.e., smaller than a small tolerance. In this section, we have proven that the algorithm will find the optimal solution to any convex MINLP problem satisfying Assumptions 1, 2 and 3. The next section deals with some details regarding the implementation of the algorithm.
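As a quick numerical sanity check of the interpolation step in the proof of Lemma 2, the chosen α indeed places x̂ at objective value z* + ϵ/2. The vectors below are arbitrary illustrative data, not from the paper:

```python
import numpy as np

# Interpolation step of Lemma 2's proof:
# alpha = (c1^T (x_bar - x_star) - eps/2) / c1^T (x_bar - x_star),
# x_hat = alpha * x_star + (1 - alpha) * x_bar.
c1 = np.array([3.0, -1.0])
x_star = np.array([0.0, 2.0])    # plays the role of the optimal point
x_bar = np.array([1.0, 0.5])     # a strictly interior (Slater) point
eps = 0.1

d = c1 @ (x_bar - x_star)        # the "otherwise" branch assumes d >= eps
alpha = (d - eps / 2.0) / d
x_hat = alpha * x_star + (1.0 - alpha) * x_bar

assert d >= eps
assert 0.0 <= alpha < 1.0
assert np.isclose(c1 @ x_hat, c1 @ x_star + eps / 2.0)
```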

6 Implementing the algorithm

In the previous sections, we have described the basics of the center-cut algorithm. To test the practical performance of the center-cut algorithm, we implemented it in Matlab 2017a and used Ipopt (Wächter & Biegler, 2006) and Gurobi as subsolvers for the NLP and MILP subproblems, respectively. We have also used OPTI Toolbox (Currie & Wilson, 2012) to read the test problems. When implementing the center-cut algorithm, it is possible to incorporate some tricks and features from other algorithms and solvers; next, we describe some of these that can easily be exploited. First, when solving an MINLP problem with the center-cut algorithm it is not necessary to solve every single MILP subproblem to optimality. It is sufficient to obtain a feasible solution such that the inscribed ball has a radius strictly greater than zero. This is an important detail, since solvers such as Cplex or Gurobi are often able to quickly find several feasible solutions to an MILP problem, and often the majority of the solution time is spent on proving optimality. The MILP subproblems are also by far the most time-consuming part of the center-cut algorithm. By stopping the MILP solver after a specific number of found feasible solutions, it is often possible to significantly reduce the solution time while still obtaining good solutions to the mixed-integer subproblems. This can be done with Gurobi by using the solution limit parameter. In the implementation of the center-cut algorithm, we simply start with the solution limit parameter set to 2 and increase the solution limit parameter by one each time the radius of the inscribed ball falls below a given threshold and the solution was not reported as optimal. We also use a second test for increasing the solution limit, where the solution limit is increased if the radius is less than half the radius in the previous iteration and the solution was not optimal.
When choosing the solution limit by this technique, we start with a low solution limit and gradually increase it during the iterations to make sure we obtain good solutions to the subproblems. When using this technique one must be careful with the termination criterion, and the search should not be terminated unless the MILP subproblem was solved to optimality in the last iteration. A similar approach for speeding up the MILP subproblems and consequently speeding up the MINLP solution procedure is also used with the GAECP algorithm, see Westerlund & Pörn (2002). In some cases, it might also be possible to speed up the algorithm by solving additional NLP subproblems. Even if the trial solution (x i, y i ) obtained by solving subproblem (MILP-i) does not satisfy the nonlinear constraints, y i may still be a feasible integer combination. It might, therefore, be possible to obtain a feasible solution by fixing the integer variables to y i and solving subproblem (NLP-fixed). This situation is illustrated in iteration 3 in Figure 2, where it would have been possible to obtain a feasible solution by solving an NLP subproblem. By solving such NLP problems it may be possible to obtain feasible solutions more frequently, but the additional NLP problems may also take some time to solve. In the implementation of the center-cut algorithm, we try to fix the integer variables and solve subproblem (NLP-fixed) in every third iteration. A similar technique is used in both the GAMS solver AlphaECP (Lastusilta et al., 2009) and in the SHOT solver (Kronqvist et al., 2016). The NLP problems with fixed integers may be infeasible in some iterations, however, in this case, Ipopt returns a solution that minimizes the constraint violation with the specific integer combination. By adding cuts according to Eq. (6) at the infeasible solution returned by Ipopt, we are able to exclude the infeasible integer combination from the search space, for details on such cuts see Fletcher & Leyffer (1994). 
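The adaptive solution-limit rule described above can be sketched as a small helper function. This is an illustrative sketch of the logic only, not the authors' Matlab code, and the threshold radius_tol is a stand-in value, since the exact threshold is not stated here:

```python
def update_solution_limit(limit, radius, prev_radius, was_optimal,
                          radius_tol=1e-3):
    """Raise the MILP solution limit by one when a subproblem stopped early
    (not proven optimal) while the radius is already small, or while the
    radius dropped below half of the previous one; otherwise keep it."""
    if not was_optimal and (radius < radius_tol or radius < 0.5 * prev_radius):
        return limit + 1
    return limit

limit = 2   # initial solution limit, as described in the text
limit = update_solution_limit(limit, radius=0.6, prev_radius=1.0, was_optimal=False)
assert limit == 2   # neither test triggers: limit unchanged
limit = update_solution_limit(limit, radius=0.3, prev_radius=1.0, was_optimal=False)
assert limit == 3   # radius fell below half the previous one
```

Starting low and escalating like this keeps early MILP solves cheap while tightening the subproblems as the radius shrinks; as noted above, termination should still require that the last MILP subproblem was solved to optimality.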
In the implementation of the center-cut algorithm, we use this technique for generating cuts when the NLP solver cannot find a feasible solution. In the implementation, we have used the center-cut algorithm as described in Algorithm 1 together with the additional features described here. With the NLP solver Ipopt, we have used the default settings for all parameters. With Gurobi, we have used the solution limit strategy as described earlier, and for the other parameters we have used the default settings.

7 Numerical results

To test the practical performance of the center-cut algorithm, we considered some convex MINLP test problems taken from the library MINLPLib2 (GAMSWorld, 2017). For the tests, we have used a standard desktop computer with an Intel i7 processor and 16GB of RAM. First, we have chosen 8 test problems from MINLPLib2 that represent different types of MINLP problems, such as facility layout problems (Castillo et al., 2005), retrofit planning problems (Sawaya, 2006) and trim loss problems (Harjunkoski et al., 1998). The largest of these problems contains 1500 binary variables,

Results obtained by the center-cut implementation ("–" denotes a value not available):

Name of MINLP problem                          flay06h      gams01       ibs2         o7
Time/iterations to find a feasible solution    0.2 s / –    – s / –      – s / –      – s / 8
Time/iterations to find a second feas. sol.    *            1.9 s / 3    *            10 s / 10
Time/iterations to find a sol. within 5%       0.2 s / 2    52 s / –     – s / –      – s / 28
Time/iterations to find a sol. within 1%       0.2 s / 2    52 s / –     – s / –      – s / 30
Sub-optimality of best-found solution          0 %          0.5 %        1 %          0 %

Name of MINLP problem                          rsyn0815m04h stockcycle   tls6         tls7
Time/iterations to find a feasible solution    0.2 s / –    – s / –      – s / –      – s / 14
Time/iterations to find a second feas. sol.    1.3 s / –    – s / 2      13 s / 36    5 s / 26
Time/iterations to find a sol. within 5%       13 s / –     – s / –      – s / 159    *
Time/iterations to find a sol. within 1%       22 s / –     – s / 157    *            *
Sub-optimality of best-found solution          0.5 %        0.5 %        3 %          33 %

Results obtained by the feasibility pump in DICOPT:

Name of MINLP problem                          flay06h      gams01       ibs2         o7
Time/iterations to find a feasible solution    0.2 s / –    – s / 1      *            0.3 s / 4
Time/iterations to find a second feas. sol.    0.3 s / –    – s / 2      *            0.4 s / 5
Time/iterations to find a sol. within 5%       0.4 s / –    – s / 21     *            3.1 s / 19
Time/iterations to find a sol. within 1%       0.6 s / 5    37 s / –     *            – s / 23
Sub-optimality of best-found solution          0 %          0 %          –            0 %

Name of MINLP problem                          rsyn0815m04h stockcycle   tls6         tls7
Time/iterations to find a feasible solution    2.7 s / –    – s / –      – s / 48     *
Time/iterations to find a second feas. sol.    3.9 s / –    – s / 2      *            *
Time/iterations to find a sol. within 5%       5.7 s / –    – s / 51     *            *
Time/iterations to find a sol. within 1%       9.1 s / –    – s / 295    *            *
Sub-optimality of best-found solution          0 %          1 %          12 %         –

Table 1: The table shows the results obtained with the center-cut implementation and the feasibility pump in DICOPT. The sign * indicates that no such solution was obtained within the time limit of 1800 seconds.

continuous variables, and 1821 constraints. These specific problems were chosen since they are known to be difficult to solve.
We have applied the center-cut implementation to the test problems, and the results are shown in Table 1. The table shows the time and number of iterations needed to find a feasible solution, a second feasible solution, a solution within 5 % of the best-known solution and a solution within 1 % of the best-known solution, as well as the quality of the best-found solution. Besides the settings described in the previous section, we have used a time limit of 1800 seconds. To get a reference point for evaluating the performance, we have applied the feasibility pump available in the state-of-the-art solver DICOPT in GAMS (Grossmann et al., 2002) to the same problems. As previously mentioned, the feasibility pump is a primal heuristic intended for quickly finding good solutions. The results obtained with the feasibility pump are shown in the last part of Table 1. With the feasibility pump, we have used Conopt and Gurobi as subsolvers, and to make sure we can obtain a solution within 1 % of the optimal solution we have set fp_cutoffdecr = 0.01 and fp_timelimit = 1800, as well as fp_stalllimit and fp_sollimit. We used Conopt as the NLP subsolver since it gave the best performance with the feasibility pump. Note that this is not intended as a direct comparison, since comparing the feasibility pump in DICOPT with the Matlab implementation of the center-cut algorithm would not be fair: the Matlab implementation is quite simple and mainly intended as a proof of concept, to show the potential of the center-cut algorithm. Table 1 shows that the center-cut implementation is able to find a feasible solution to all of the 8 problems in less than 3 seconds. Furthermore, we are able to find good solutions to all of the problems except tls7, where the best-obtained solution is still far from the best-known solution. However, it should be noted that these are difficult problems, and tls7 is one of the few convex MINLP problems in MINLPLib2

that are still considered as unsolved. The feasibility pump struggles with some of the problems and is not able to find any solution for two of them. The feasibility pump is quicker at obtaining solutions to the problems gams01 and o7_ar2_1, but for the other problems, the center-cut implementation seems to be more efficient.

To further test the center-cut implementation, we applied it to 295 test problems from MINLPLib2. In this test set, we chose all convex problems from MINLPLib2 containing at least one discrete variable, and we removed the instances where the only nonlinearity was due to a quadratic objective function. Convex problems whose only nonlinear term is a quadratic objective can be solved efficiently directly with Gurobi; for such problems, the center-cut algorithm is not necessarily a good choice, and we therefore removed these test problems. The results obtained with the center-cut implementation are presented in Figure 3. To get a reference point for evaluating the results, we have also applied the feasibility pump to the test problems, and we have used a basic implementation of OA to show the time needed to obtain a feasible solution with OA. Figure 3 shows the number of problems for which the center-cut implementation is able to find a feasible solution as a function of time, and it shows that the center-cut implementation is able to find feasible solutions to these problems quickly.

Figure 3: The number of problems (out of 295) for which each method finds a solution, as a function of running time in seconds: a feasible solution for the center-cut (294/295), the feasibility pump (290/295) and outer approximation (286/295); a solution within 1 % of the best-known solution for the center-cut (290/295) and the feasibility pump (278/295).
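The "within 1 %" and "within 5 %" criteria used throughout this section are relative optimality gaps with respect to the best-known solution. A hypothetical helper for this check (the exact formula and zero-guard below are assumptions, not stated in the paper) could look like:

```python
def relative_gap(obj_value, best_known):
    """Relative deviation of obj_value from the best-known objective.

    Guards against division by zero when the best-known value is 0.
    """
    return abs(obj_value - best_known) / max(abs(best_known), 1e-10)

# A solution with objective 101.0 against a best-known value of 100.0
# has a gap of 0.01, i.e. it just meets the 1 % tolerance.
gap = relative_gap(101.0, 100.0)
```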
For 250 of the test problems, a feasible solution is found by running the center-cut implementation for less than 1 second. Furthermore, the implementation requires less than 10 seconds to find a feasible solution to 291 of the 295 test problems, and there is only one problem (tls12) where the implementation fails to find a feasible solution. Compared with the feasibility pump, the center-cut is able to find feasible solutions for more problems within 0.1 s, and the overall performance is similar. Within the given time limit, the center-cut is able to find feasible solutions to all but one of the test instances, while the feasibility pump is not able to find a feasible solution to five of the instances. The figure also shows that OA is significantly slower at obtaining feasible solutions. Finding a solution within 1 % of the best-known solution requires more time: by running the center-cut implementation for 10 seconds, it is possible to find a solution within the 1 % tolerance for 236 of the test problems, and within the given time limit we managed to find a solution within the tolerance for 288 of the test problems. The feasibility pump is a bit faster at obtaining solutions within the 1 % tolerance for the easier test problems, which is partially due to a more efficient implementation. For the instances requiring more than 3 seconds, the performance of the feasibility pump and the center-cut is quite similar. However, in the end, the feasibility pump fails to find a solution within the tolerance for 17 of the test problems, whereas the center-cut implementation only fails on 7 of the problems. The intention of the numerical results has not been to compare the feasibility pump in DICOPT with

the simple Matlab implementation of the center-cut algorithm, but to show the potential of the center-cut algorithm. Nevertheless, the results have shown that the performance of the center-cut implementation is on par with the feasibility pump in the state-of-the-art solver DICOPT. The numerical results indicate that the center-cut algorithm is well suited as a primal heuristic: we were able to quickly find feasible solutions to almost all of the 295 test problems, and we were also able to obtain solutions of good quality.

8 Conclusions

In this paper, we have given a detailed presentation of the center-cut algorithm for convex MINLP, and we have proven that the algorithm finds the optimal solution in a finite number of iterations. The algorithm uses a different approach to obtaining trial solutions, which should make it able to quickly obtain feasible solutions; this was verified by the numerical results. The ability to quickly obtain feasible solutions makes the algorithm well suited as a primal heuristic, but it can also be used as a deterministic solution technique. The center-cut algorithm does not directly provide a lower bound as ESH or ECP do, and it could therefore be efficient to combine these algorithms in a solver to obtain both a lower and an upper bound.

Acknowledgement

The authors are grateful for the support from GAMS Development Corporation, and furthermore, David Bernal would like to thank the Center for Advanced Process Decision-making (CAPD) for financial support.

References

Bernal, D. E., Vigerske, S., Trespalacios, F., & Grossmann, I. E. (2017). Improving the performance of DICOPT in convex MINLP problems using a feasibility pump.

Berthold, T. (2014). Heuristic algorithms in global MINLP solvers. Verlag Dr. Hut.

Berthold, T., & Gleixner, A. M. (2014). Undercover: a primal MINLP heuristic exploring a largest sub-MIP. Mathematical Programming, 144.

Biegler, L. T., & Grossmann, I. E. (2004).
Retrospective on optimization. Computers & Chemical Engineering, 28.

Bonami, P., Cornuéjols, G., Lodi, A., & Margot, F. (2009). A feasibility pump for mixed integer nonlinear programs. Mathematical Programming, 119.

Boyd, S., & Vandenberghe, L. (2004). Convex Optimization. Cambridge University Press.

Castillo, I., Westerlund, J., Emet, S., & Westerlund, T. (2005). Optimization of block layout design problems with unequal areas: A comparison of MILP and MINLP optimization methods. Computers & Chemical Engineering, 30.

Currie, J., & Wilson, D. I. (2012). OPTI: Lowering the barrier between open source optimizers and the industrial MATLAB user. In N. Sahinidis & J. Pinto (Eds.), Foundations of Computer-Aided Process Operations. Savannah, Georgia, USA.

Dakin, R. J. (1965). A tree-search algorithm for mixed integer programming problems. The Computer Journal, 8.

Duran, M. A., & Grossmann, I. E. (1986). An outer-approximation algorithm for a class of mixed-integer nonlinear programs. Mathematical Programming, 36.

Elzinga, J., & Moore, T. G. (1975). A central cutting plane algorithm for the convex programming problem. Mathematical Programming, 8.

Fischetti, M., & Lodi, A. (2011). Heuristics in mixed integer programming. Wiley Encyclopedia of Operations Research and Management Science.

Fletcher, R., & Leyffer, S. (1994). Solving mixed integer nonlinear programs by outer approximation. Mathematical Programming, 66.

Floudas, C. A. (2000). Deterministic Global Optimization: Theory, Methods and Applications. Vol. 37 of Nonconvex Optimization and Its Applications.

GAMSWorld (2017). Mixed-integer nonlinear programming library. [Online; accessed 27-December-2017].

Geoffrion, A. M. (1972). Generalized Benders decomposition. Journal of Optimization Theory and Applications, 10.

Grossmann, I. E., Viswanathan, J., Vecchietti, A., Raman, R., Kalvelagen, E. et al. (2002). GAMS/DICOPT: A discrete continuous optimization package. GAMS Corporation Inc.

Harjunkoski, I., Westerlund, T., Pörn, R., & Skrifvars, H. (1998). Different transformations for solving non-convex trim-loss problems by MINLP. European Journal of Operational Research, 105.

Kronqvist, J., Lundell, A., & Westerlund, T. (2016). The extended supporting hyperplane algorithm for convex mixed-integer nonlinear programming. Journal of Global Optimization, 64.

Kronqvist, J., Lundell, A., & Westerlund, T. (2017). A center-cut algorithm for solving convex mixed-integer nonlinear programming problems. In Computer Aided Chemical Engineering, vol. 40. Elsevier.

Lastusilta, T., Bussieck, M. R., & Westerlund, T. (2009). An experimental study of the GAMS/AlphaECP MINLP solver. Industrial & Engineering Chemistry Research, 48.

Lee, J., & Leyffer, S. (Eds.) (2011). Mixed Integer Nonlinear Programming. Vol. 154. Springer Science & Business Media.

Sawaya, N. (2006). Reformulations, relaxations and cutting planes for generalized disjunctive programming. Vol. 67.

Slater, M. et al. (1959). Lagrange multipliers revisited. Technical Report, Cowles Foundation for Research in Economics, Yale University.

Wächter, A., & Biegler, L. T. (2006).
On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming. Mathematical Programming, 106.

Westerlund, T., & Petterson, F. (1995). An extended cutting plane method for solving convex MINLP problems. Computers & Chemical Engineering, 19, S131–S136.

Westerlund, T., & Pörn, R. (2002). Solving pseudo-convex mixed integer optimization problems by cutting plane techniques. Optimization and Engineering, 3.


Implementing a B&C algorithm for Mixed-Integer Bilevel Linear Programming

Implementing a B&C algorithm for Mixed-Integer Bilevel Linear Programming Implementing a B&C algorithm for Mixed-Integer Bilevel Linear Programming Matteo Fischetti, University of Padova 8th Cargese-Porquerolles Workshop on Combinatorial Optimization, August 2017 1 Bilevel Optimization

More information

Solving Large-Scale Nonlinear Programming Problems by Constraint Partitioning

Solving Large-Scale Nonlinear Programming Problems by Constraint Partitioning Solving Large-Scale Nonlinear Programming Problems by Constraint Partitioning Benjamin W. Wah and Yixin Chen Department of Electrical and Computer Engineering and the Coordinated Science Laboratory, University

More information

Fundamentals of Integer Programming

Fundamentals of Integer Programming Fundamentals of Integer Programming Di Yuan Department of Information Technology, Uppsala University January 2018 Outline Definition of integer programming Formulating some classical problems with integer

More information

Advanced Use of GAMS Solver Links

Advanced Use of GAMS Solver Links Advanced Use of GAMS Solver Links Michael Bussieck, Steven Dirkse, Stefan Vigerske GAMS Development 8th January 2013, ICS Conference, Santa Fe Standard GAMS solve Solve william minimizing cost using mip;

More information

The goal of this paper is to develop models and methods that use complementary

The goal of this paper is to develop models and methods that use complementary for a Class of Optimization Problems Vipul Jain Ignacio E. Grossmann Department of Chemical Engineering, Carnegie Mellon University, Pittsburgh, Pennsylvania, 15213, USA Vipul_Jain@i2.com grossmann@cmu.edu

More information

Advanced Operations Research Techniques IE316. Quiz 1 Review. Dr. Ted Ralphs

Advanced Operations Research Techniques IE316. Quiz 1 Review. Dr. Ted Ralphs Advanced Operations Research Techniques IE316 Quiz 1 Review Dr. Ted Ralphs IE316 Quiz 1 Review 1 Reading for The Quiz Material covered in detail in lecture. 1.1, 1.4, 2.1-2.6, 3.1-3.3, 3.5 Background material

More information

Lecture 3. Corner Polyhedron, Intersection Cuts, Maximal Lattice-Free Convex Sets. Tepper School of Business Carnegie Mellon University, Pittsburgh

Lecture 3. Corner Polyhedron, Intersection Cuts, Maximal Lattice-Free Convex Sets. Tepper School of Business Carnegie Mellon University, Pittsburgh Lecture 3 Corner Polyhedron, Intersection Cuts, Maximal Lattice-Free Convex Sets Gérard Cornuéjols Tepper School of Business Carnegie Mellon University, Pittsburgh January 2016 Mixed Integer Linear Programming

More information

Exploiting Degeneracy in MIP

Exploiting Degeneracy in MIP Exploiting Degeneracy in MIP Tobias Achterberg 9 January 2018 Aussois Performance Impact in Gurobi 7.5+ 35% 32.0% 30% 25% 20% 15% 14.6% 10% 5.7% 7.9% 6.6% 5% 0% 2.9% 1.2% 0.1% 2.6% 2.6% Time limit: 10000

More information

Linear and Integer Programming :Algorithms in the Real World. Related Optimization Problems. How important is optimization?

Linear and Integer Programming :Algorithms in the Real World. Related Optimization Problems. How important is optimization? Linear and Integer Programming 15-853:Algorithms in the Real World Linear and Integer Programming I Introduction Geometric Interpretation Simplex Method Linear or Integer programming maximize z = c T x

More information

15. Cutting plane and ellipsoid methods

15. Cutting plane and ellipsoid methods EE 546, Univ of Washington, Spring 2012 15. Cutting plane and ellipsoid methods localization methods cutting-plane oracle examples of cutting plane methods ellipsoid method convergence proof inequality

More information

However, this is not always true! For example, this fails if both A and B are closed and unbounded (find an example).

However, this is not always true! For example, this fails if both A and B are closed and unbounded (find an example). 98 CHAPTER 3. PROPERTIES OF CONVEX SETS: A GLIMPSE 3.2 Separation Theorems It seems intuitively rather obvious that if A and B are two nonempty disjoint convex sets in A 2, then there is a line, H, separating

More information

Advanced Operations Research Prof. G. Srinivasan Department of Management Studies Indian Institute of Technology, Madras

Advanced Operations Research Prof. G. Srinivasan Department of Management Studies Indian Institute of Technology, Madras Advanced Operations Research Prof. G. Srinivasan Department of Management Studies Indian Institute of Technology, Madras Lecture 16 Cutting Plane Algorithm We shall continue the discussion on integer programming,

More information

3 No-Wait Job Shops with Variable Processing Times

3 No-Wait Job Shops with Variable Processing Times 3 No-Wait Job Shops with Variable Processing Times In this chapter we assume that, on top of the classical no-wait job shop setting, we are given a set of processing times for each operation. We may select

More information

The Geometry of Carpentry and Joinery

The Geometry of Carpentry and Joinery The Geometry of Carpentry and Joinery Pat Morin and Jason Morrison School of Computer Science, Carleton University, 115 Colonel By Drive Ottawa, Ontario, CANADA K1S 5B6 Abstract In this paper we propose

More information

Visibility: Finding the Staircase Kernel in Orthogonal Polygons

Visibility: Finding the Staircase Kernel in Orthogonal Polygons Visibility: Finding the Staircase Kernel in Orthogonal Polygons 8 Visibility: Finding the Staircase Kernel in Orthogonal Polygons Tzvetalin S. Vassilev, Nipissing University, Canada Stefan Pape, Nipissing

More information

Lagrangian Relaxation: An overview

Lagrangian Relaxation: An overview Discrete Math for Bioinformatics WS 11/12:, by A. Bockmayr/K. Reinert, 22. Januar 2013, 13:27 4001 Lagrangian Relaxation: An overview Sources for this lecture: D. Bertsimas and J. Tsitsiklis: Introduction

More information

An FPTAS for minimizing the product of two non-negative linear cost functions

An FPTAS for minimizing the product of two non-negative linear cost functions Math. Program., Ser. A DOI 10.1007/s10107-009-0287-4 SHORT COMMUNICATION An FPTAS for minimizing the product of two non-negative linear cost functions Vineet Goyal Latife Genc-Kaya R. Ravi Received: 2

More information

Mathematical and Algorithmic Foundations Linear Programming and Matchings

Mathematical and Algorithmic Foundations Linear Programming and Matchings Adavnced Algorithms Lectures Mathematical and Algorithmic Foundations Linear Programming and Matchings Paul G. Spirakis Department of Computer Science University of Patras and Liverpool Paul G. Spirakis

More information

4 Integer Linear Programming (ILP)

4 Integer Linear Programming (ILP) TDA6/DIT37 DISCRETE OPTIMIZATION 17 PERIOD 3 WEEK III 4 Integer Linear Programg (ILP) 14 An integer linear program, ILP for short, has the same form as a linear program (LP). The only difference is that

More information

Primal Heuristics for Branch-and-Price Algorithms

Primal Heuristics for Branch-and-Price Algorithms Primal Heuristics for Branch-and-Price Algorithms Marco Lübbecke and Christian Puchert Abstract In this paper, we present several primal heuristics which we implemented in the branch-and-price solver GCG

More information

Week 5. Convex Optimization

Week 5. Convex Optimization Week 5. Convex Optimization Lecturer: Prof. Santosh Vempala Scribe: Xin Wang, Zihao Li Feb. 9 and, 206 Week 5. Convex Optimization. The convex optimization formulation A general optimization problem is

More information

Conic Duality. yyye

Conic Duality.  yyye Conic Linear Optimization and Appl. MS&E314 Lecture Note #02 1 Conic Duality Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. http://www.stanford.edu/

More information

Nonlinear Programming

Nonlinear Programming Nonlinear Programming SECOND EDITION Dimitri P. Bertsekas Massachusetts Institute of Technology WWW site for book Information and Orders http://world.std.com/~athenasc/index.html Athena Scientific, Belmont,

More information

Recent Work. Methods for solving large-scale scheduling and combinatorial optimization problems. Outline. Outline

Recent Work. Methods for solving large-scale scheduling and combinatorial optimization problems. Outline. Outline Seminar, NTNU, Trondheim, 3.1.2001 Methods for solving large-scale scheduling and combinatorial optimization s Iiro Harjunkoski (in collaboration with Ignacio E. Grossmann) Department of Chemical Engineering

More information

AIMMS Language Reference - AIMMS Outer Approximation Algorithm for MINLP

AIMMS Language Reference - AIMMS Outer Approximation Algorithm for MINLP AIMMS Language Reference - AIMMS Outer Approximation Algorithm for MINLP This file contains only one chapter of the book. For a free download of the complete book in pdf format, please visit www.aimms.com

More information

Chapter 4 Concepts from Geometry

Chapter 4 Concepts from Geometry Chapter 4 Concepts from Geometry An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Line Segments The line segment between two points and in R n is the set of points on the straight line joining

More information

An Extension of the Multicut L-Shaped Method. INEN Large-Scale Stochastic Optimization Semester project. Svyatoslav Trukhanov

An Extension of the Multicut L-Shaped Method. INEN Large-Scale Stochastic Optimization Semester project. Svyatoslav Trukhanov An Extension of the Multicut L-Shaped Method INEN 698 - Large-Scale Stochastic Optimization Semester project Svyatoslav Trukhanov December 13, 2005 1 Contents 1 Introduction and Literature Review 3 2 Formal

More information

Chapter 15 Introduction to Linear Programming

Chapter 15 Introduction to Linear Programming Chapter 15 Introduction to Linear Programming An Introduction to Optimization Spring, 2015 Wei-Ta Chu 1 Brief History of Linear Programming The goal of linear programming is to determine the values of

More information

The MINLP approach to structural optimization

The MINLP approach to structural optimization Proceedings of the 6th WSEAS International Conference on Applied Computer Science, Tenerife, Canary Islands, Spain, December 16-18, 2006 49 The MINLP approach to structural optimization STOJAN KRAVANJA

More information

A robust optimization based approach to the general solution of mp-milp problems

A robust optimization based approach to the general solution of mp-milp problems 21 st European Symposium on Computer Aided Process Engineering ESCAPE 21 E.N. Pistikopoulos, M.C. Georgiadis and A. Kokossis (Editors) 2011 Elsevier B.V. All rights reserved. A robust optimization based

More information

On the selection of Benders cuts

On the selection of Benders cuts Mathematical Programming manuscript No. (will be inserted by the editor) On the selection of Benders cuts Matteo Fischetti Domenico Salvagnin Arrigo Zanette Received: date / Revised 23 February 2010 /Accepted:

More information

LECTURE 13: SOLUTION METHODS FOR CONSTRAINED OPTIMIZATION. 1. Primal approach 2. Penalty and barrier methods 3. Dual approach 4. Primal-dual approach

LECTURE 13: SOLUTION METHODS FOR CONSTRAINED OPTIMIZATION. 1. Primal approach 2. Penalty and barrier methods 3. Dual approach 4. Primal-dual approach LECTURE 13: SOLUTION METHODS FOR CONSTRAINED OPTIMIZATION 1. Primal approach 2. Penalty and barrier methods 3. Dual approach 4. Primal-dual approach Basic approaches I. Primal Approach - Feasible Direction

More information

15-451/651: Design & Analysis of Algorithms October 11, 2018 Lecture #13: Linear Programming I last changed: October 9, 2018

15-451/651: Design & Analysis of Algorithms October 11, 2018 Lecture #13: Linear Programming I last changed: October 9, 2018 15-451/651: Design & Analysis of Algorithms October 11, 2018 Lecture #13: Linear Programming I last changed: October 9, 2018 In this lecture, we describe a very general problem called linear programming

More information

PLEASE SCROLL DOWN FOR ARTICLE. Full terms and conditions of use:

PLEASE SCROLL DOWN FOR ARTICLE. Full terms and conditions of use: This article was downloaded by: [North Carolina State University] On: 26 March 2010 Access details: Access Details: [subscription number 917267962] Publisher Taylor & Francis Informa Ltd Registered in

More information

Financial Optimization ISE 347/447. Lecture 13. Dr. Ted Ralphs

Financial Optimization ISE 347/447. Lecture 13. Dr. Ted Ralphs Financial Optimization ISE 347/447 Lecture 13 Dr. Ted Ralphs ISE 347/447 Lecture 13 1 Reading for This Lecture C&T Chapter 11 ISE 347/447 Lecture 13 2 Integer Linear Optimization An integer linear optimization

More information