Decomposition of loosely coupled integer programs: A multiobjective perspective

Size: px
Start display at page:

Download "Decomposition of loosely coupled integer programs: A multiobjective perspective"

Transcription

1 Decomposition of loosely coupled integer programs: A multiobjective perspective Merve Bodur, Shabbir Ahmed, Natashia Boland, and George L. Nemhauser H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA, USA August 23, 2016 Abstract We consider integer programming (IP) problems consisting of (possibly a large number of) subsystems and a small number of coupling constraints that link variables from different subsystems. Such problems are called loosely coupled or nearly decomposable. Motivated by recent developments in multiobjective programming (MOP), we develop a MOP-based decomposition algorithm to solve loosely coupled IPs. More specifically, we reformulate the problem so that it can be decomposed into a (resource-directive) master problem and a set of MOP subproblems. The proposed algorithm iteratively generates columns for the master problem. However, unlike traditional column generation methods, the master problem is an IP and considers a differently structured (and usually smaller) set of columns. Columns are added to the master problem IP until its solution provides an optimal solution to the original problem. One advantage of the approach is that the solution of the master problem and subproblems can be done with standard IP solvers, exploiting the sophisticated techniques they embed; there is no need for a tailored branch-and-price. We provide preliminary computational results, demonstrating the potential benefits of our approach. Keywords: Integer programming, resource-directive decomposition, multiobjective integer programming, column generation addresses: sahmed@isye.gatech.edu (Shabbir Ahmed), merve.bodur@gatech.edu (Merve Bodur), natashia.boland@isye.gatech.edu (Natashia Boland), george.nemhauser@isye.gatech.edu (George L. Nemhauser) 1

2 1 Introduction We consider integer programs of the form min i M(c i ) x i (1a) s.t. x i X i, i M (1b) A i x i b, (1c) i M where the input data is defined as follows: b R m, m Z +, M := {1,..., M} is the index set of blocks and there are M Z + blocks; for each i M, c i R n i, n i Z + \ {0} and A i R m n i ; and ( ) denotes the transpose operator. We assume that X i Z n i, and is nonempty and bounded, for all i M. For each block i M, problem (1) has n i variables, x i R n i, and constraints (1b), which include the integrality constraints. Problem (1) also has m coupling constraints, (1c), linking different blocks together. As in many practical applications, such as scheduling and production problems, the linking constraints correspond to the limits on a set of resources shared among the blocks; we refer the vector b as the resource vector. When m = 0, the model (1) is fully decomposable by block, thus can be solved by solving M smaller (integer) problems. We are interested in problems where the blocks are loosely coupled, i.e., the number of linking constraints, m, is small. For loosely coupled IPs, one potentially beneficial tool is Lagrangian relaxation [13] which is primarily used to obtain lower bounds on the optimal objective value. The key advantage is that when the coupling constraints are dualized, the problem decomposes by blocks. Also, stronger relaxation bounds can be obtained via Lagrangian relaxation, compared to the linear programming (LP) relaxation of the problem. Some of the main issues with the use of Lagrangian methods are the existence of a duality gap and the difficulty in recovering primal feasible solutions. A common practice, used to address these issues, is to perform a branch-and-bound algorithm where the LP relaxation is replaced by Lagrangian relaxation. An alternative approach is Dantzig-Wolfe reformulation [25, 26], which decomposes the problem into a master problem, with possibly an exponential number of columns, and M subproblems. The master problem s LP relaxation is usually solved by column generation [8], enhanced, for example, with the use of stabilization strategies, to address issues such as tailing-off and degeneracy [21]. To enforce integrality constraints in the master problem, a branch-and-price algorithm [1] is used. Branch-and-price algorithms usually require careful design of the branching rule, to ensure 2

3 compatibility of the branching constraints with the subproblem structure [24]. They also require a tailored implementation of branch-and-bound, in which many of the sophisticated techniques available in current IP solvers must either be forgone, or specially reconstructed. In this paper, motivated by recent developments in multiobjective programming (MOP), we develop a MOP-based decomposition algorithm to solve loosely coupled IPs. More specifically, we first use the idea of resource-directive decomposition to derive a reformulation of the problem. This reformulation allows us to establish a relationship between an optimal solution of the original IP and nondominated points in related, multiobjective, problems. The reformulation can be decomposed into a master problem and a set of subproblems, one for each block, i M. The subproblem for i consists of a multiobjective integer program with m + 1 objectives and n i variables. The master problem consists of columns corresponding to nondominated points of the MOP subproblems, and thus is possibly of exponential size. Therefore, we do not generate the entire nondominated frontier for each subproblem, but, instead, develop a new column generation algorithm. The algorithm does not require complete solution of the MOP subproblem; it generates a new nondominated point for a MOP subproblem only as required by the master problem. Prior to completion of the algorithm, the master problem includes, for each MOP subproblem, two types of columns: columns corresponding to nondominated points of the MOP subproblem and columns that represent regions in the objective space of the MOP subproblem in which as yet undiscovered nondominated points may lie. At any stage in the algorithm, the union of the nondominated points and regions for a MOP subproblem that currently appear in the master problem is a relaxation of the nondominated frontier of the MOP subproblem. The solution to the master problem hence provides a lower bound on the value of the original problem. Moreover, we strengthen this lower bound problem with the addition of cutting planes that are obtained as a byproduct of the column generation process. In addition, if we only use the columns corresponding to the nondominated points, the solution, if feasible, is a feasible solution to the original problem and hence provides an upper bound on the value of the original problem. Our preliminary computational experiments, which compare the performance of our algorithm to a standard IP solver, demonstrate the potential benefits of our approach. The remainder of this paper is organized as follows. Section 2 presents some preliminary material on resource-directive decomposition and multiobjective optimization. Section 3 describes our proposed reformulation, and how it can be decomposed. Section 4 describes the proposed solution methodology, giving the overview of the MOP-based decomposition algorithm, and the details and enhancements of the algorithm. Numerical illustrations are provided in Section 5 and conclusions are given in Section 6. 3

4 2 Preliminaries 2.1 Resource-directive decomposition We first parameterize problem (1) according to how the resource vector, b, is partitioned between the blocks. Specifically, introducing resource variables u i R m for each block i M explicitly, we rewrite the problem (1) as min (c i ) x i i M s.t. x i X i, A i x i u i, u i b, i M i M i M which can be written equivalently as a resource-directive master problem RDMP : min i M f i (u i ) (2a) s.t. u i b, i M (2b) where for each i M, f i : R m R is the value function of the subproblem: RDSP(i, u) : f i (u) = min (c i ) x s.t. x X i, A i x u. The general idea of such a decomposition is very old. See, for example, [12, 18] and references therein, notably [19], in which dynamic programming is used to find optimal values of the resource variables for linear programs. Our idea is also based on the resource-directive decomposition framework but is motivated by recent developments in multiobjective integer programming. Therefore, we first briefly outline some of the concepts and terminology of multiobjective optimization (see, e.g., [5, 7, 9] for a comprehensive review), and then observe the connection between the resource-directive decomposition and multiobjective optimization. 4

5 2.2 Multiobjective optimization In this section, we review some basic concepts in multiobjective optimization, mostly following the presentation of [9]. A multiobjective optimization problem with feasible set X R n and J Z +, J 2 objective functions (or criteria) g j : X R, j = 1,..., J can be written as min g(x) := {g 1(x),..., g J (x)}, (4) x X where g = (g 1,..., g J ) : X R J is the objective function vector, which maps the feasible set defined in the decision space to the objective space. The codomain of g, i.e., R J, is called the criterion space. The image of the feasible set is denoted by Z := g(x ) := {z R J X s.t. z = g(x)} and usually referred to as the feasible set in criterion space. : x In multiobjective optimization, the optimal value is a set, rather than a single point, often called the nondominated (Pareto-optimal or efficient) frontier. It is defined to be the set of vectors, z R J, in the criterion space, having the property that (i) z is the criterion-space image of some feasible solution, i.e., z = g(x) for some x X, or z Z, in which case we say z is feasible, and (ii) there does not exist any other feasible solution, z Z, which dominates z, i.e., for which z j z j for all j = 1,..., J and z j < z j for at least one index j {1,..., J}. An element of the nondominated frontier (NDF) is known as a nondominated point (NDP). In other words, an NDP is a feasible objective vector for which none of its components can be improved without making at least one of its other components worse. The union of preimages of NDPs is called the efficient set, whose elements are the efficient solutions. So, a feasible solution x X is efficient (or Pareto optimal) if its image z = g(x) is an NDP, i.e., nondominated. On the other hand, x X is called weakly efficient if there is no x X such that g j (x ) < g j (x) for all j = 1,..., J, and the point z = g(x) is called weakly nondominated. Note that a nondominated point is a weakly nondominated point, but not vice versa. A useful construct is the so-called ideal point, denoted by z I, whose components are obtained by minimizing individual objective functions over the feasible set of the problem, so z I j := min{z j : z Z} for all j = 1,..., J. We call the point whose components are obtained by maximizing individual objectives the supernal point, z S, where zj S := max{z j : z Z}. The NDF is contained in the hypercube defined by z I and z S : z I z z S for all z an NDP. An NDP can be found in a variety of ways. One of the most commonly used is the weightedsum method [28], which solves an optimization problem with a single objective obtained as a 5

6 positive (convex) combination of the objective functions of the multiobjective problem: min x X J λ j g j (x), (5) j=1 where λ j > 0 for all j = 1,..., J. For any positive weight vector λ, any optimal solution of (5) is an efficient solution for (4), i.e., its image is nondominated. Such a solution and its image are called a supported efficient solution and a supported nondominated point, respectively. Thus, an efficient solution x is supported if there exists a positive vector λ for which x is an optimal solution of (5), otherwise x is called unsupported. We also note that if the weight vector is nonnegative, rather than positive, then an optimal solution of the scalarized problem is only guaranteed to be weakly efficient. Another way to find an NDP is to optimize with respect to each objective function in turn, in an hierarchical manner. More specifically, we can solve lexicographic optimization problems, by minimizing one objective at a time, sequentially, and using optimal objective values of solved problems as constraints in the next ones. For example, for the order 1,..., J, first determine ẑ 1 := min{g 1 (x) : x X }, and then for each j = 2,..., J, in turn, sequentially solve ẑ j := min{g j (x) : x X and g j (x) ẑ j, j = 1,..., j 1}. Then the vector ẑ := (ẑ 1,..., ẑ J ) is clearly an NDP. We represent the above lexicographical optimization problem to find ẑ as: lex min x X (g 1(x),..., g J (x)). (6) Note that parentheses in (6) signify that the objective functions are ordered, whereas curly brackets in (4) denote that the objective functions are given as an unordered set. Methods for solving multiobjective integer programs, so as to generate the complete NDF, have developed rapidly in recent years; we refer the interested reader to [10] for an overview and [2, 3, 4, 6, 11, 14, 16, 20, 22, 23, 27, 29] for a representative list of recent papers. There is also interest in approximating the NDF, as, for example, is discussed in [15], which treats multiobjective integer programming as a special case. 6

7 3 Problem reformulation from a multiobjective perspective Towards our reformulation of problem (1), we start with a lemma about multiobjective optimization. We omit its proof, as it is brief and straightforward. Lemma 3.1. For a given δ R J, if z R J is an NDP of min {g 1 (x),..., g J (x)} (7a) s.t. x X, (7b) g j (x) δ j, j J, (7c) where J {1,..., J}, then z is also an NDP of (4). Next, we define the following multiobjective problems, which will be extensively used and referred to as the MOP subproblems in the remainder of the paper: MOP(i) : min {A i 1x,..., A i mx, (c i ) x} s.t. x X i, for i M, where A i k denotes the kth row of matrix A i for each k = 1,..., m. For the problems that we are interested in, we assume that, for each block i M, we can solve the single-objective IPs needed to find an NDP of MOP(i), with relative ease. Indeed, our algorithm never seeks to completely solve a MOP subproblem; its key subroutine is the search for a (new) NDP of a MOP subproblem. Due to Lemma 3.1, in the rest of the paper, if needed, we modify the MOP subproblems by including some upper bound constraints of type (7c). In other words, we also assume that the following constraints are included in MOP(i): A i jx δ i j, j = 1,..., m, (c i ) x δ i m+1, where δ i (R {+ }) m+1 is a given upper bound vector. Unless specifically mentioned, we have δ i = {+ } m+1. We denote the feasible set of MOP(i) in criterion space by Z i, for all i M. Then, we have the following observation for the resource-directive master problem, RDMP, given in (2). 7

8 Proposition 3.2. Provided that RDMP has an optimal solution, there exists an optimal solution {û i } i M to RDMP with the property that, for each i M, the point (û i, f i (û i )) is an NDP of MOP(i). Proof. Let {u i } i M be an arbitrary optimal solution of RDMP, and, for each i M, let ˆx i be an optimal solution of the i th subproblem RDSP(i, u). Without loss of generality, we may take ˆx i to be an optimal solution such that A iˆx i is not dominated by A i x i for any other optimal solution x i ( for example, take ˆx i to be an optimal solution of lex min x X i :A i x u i((ci ) x, A i 1 x,..., Ai mx) ). Let û i := A iˆx i for all i M. Then, f i (û i ) = (c i ) ˆx i = f i (u i ), and (û i, f i (û i )) is the criterion space image of ˆx i, thus a feasible solution to MOP(i). Also, by construction of ˆx i and Lemma 3.1, it must be that (û i, f i (û i )) is an NDP for MOP(i). Lastly, we observe that {û i } i M is feasible for RDMP since i M û i = i M A iˆx i i M and has the same objective value as that of {u i } i M as which completes the proof. i M f i (û i ) = i M(c i ) ˆx i = i M u i b, f i (u i ), Proposition 3.2 immediately implies that the resource-directive master problem (2) can be stated as: min i M f i (u i ) (9a) s.t. u i b, (9b) i M (u i, f i (u i )) N i, i M, (9c) where N i denotes the NDF of MOP(i) and constraints (9c) force the point (u i, f i (u i )) to be a nondominated point of MOP(i), for any block i M. Note that the alternative formulation (9) is not equivalent to RDMP, in the sense that some feasible (even optimal) solutions of RDMP might not be feasible for (9). However, Proposition 3.2 guarantees that at least one optimal solution to RDMP is also optimal to (9). Here, decomposable structure in the constraints permits us to observe that there is an optimal 8

9 solution composed of one efficient solution for each of a set of multiobjective problems. The power of establishing a relationship between optimal solutions of an IP and NDPs of a related multiobjective problem is shown, in a different context, in [17]: decomposable structure in the objective function of a nonlinear combinatorial optimization (binary IP) problem relates an optimal solution to a NDP of a multiobjective problem in which the decomposed objective function elements form the multiple objective functions. In [17], this relationship is exploited to yield fully polynomial time approximation schemes for several important problem classes. Here, we are able to exploit the relationship in the design of a column generation IP algorithm. A useful observation in the design of such an algorithm is that MOP(i) does not depend on {u i } i M. Specifically, the set N i, for each i M, can be enumerated, at least, in principle, without the need for any particular values for the variables in problem (2), leading to the reformulation below. For each block i M, the set N i is finite, as X i is assumed to be a bounded set in Z n i. We let N i = {z i,1, z i,2,..., z i, N i }. Also, for any vector z and a given index l, we use z [l] to denote the projection of z onto the first l components, i.e., z [l] = (z 1,..., z l ). Defining binary decision variable λ i k = 1 if the kth NDP is chosen for block i M in an optimal solution, and 0 otherwise, the model (9) can be written as (IP-M) : ν IP := min i M s.t. N i k=1 N i i M k=1 N i λ i k k=1 λ i k zi,k m+1 (10a) λ i k zi,k [m] b, (10b) = 1, i M, (10c) λ i {0, 1} N i, i M. (10d) Constraints (10c) and (10d) together replace (9c), i.e., enforce that for each block exactly one NDP of the corresponding multiobjective subproblem is chosen. Then, by construction of the MOP subproblems, and the definition of value functions, f i ( ), the objective (10a) and the resource constraints (10b) are equivalent to (9a) and (9b), respectively. A naive exact solution method for (10) would be to first fully solve the subproblem MOP(i) to obtain the set N i for each i M, and then solve the model (10). An important advantage of such an approach is that it allows the MOP subproblems to be solved asynchronously. Also, any 9

10 MOP algorithm can be used to generate the NDFs. Another advantage, is that efficient solutions, i.e., feasible solutions in terms of the x variables, of the MOP subproblems need not be revealed. In other words, MOP algorithms can be used as black boxes to provide the objective function (including resource usage) values. Keeping the solutions private is an important role in some applications. The biggest challenge in solving (IP-M) is the potentially large size of the NDFs. With pure integer data and a fixed number of objectives, the number of NDPs is at worst pseudopolynomial in the size of the MOP. However, in general, it can be exponentially large, and is certainly, in the worst case, exponential in m, which may prevent the solution of (IP-M) by a standard IP algorithm. However, most of the variables in (IP-M) will take value zero in an optimal solution. This suggests a possibility of using concepts from column generation [8]. Motivated by recent developments in MOP algorithms, we next propose a new MOP-based column generation algorithm, which we call the MOP-based decomposition algorithm, to solve (IP-M). 4 A MOP-based decomposition algorithm In this section, we describe our solution methodology. We first introduce upper and lower bounding problems. Then, we provide an overview of our proposed algorithm. Finally, we explain the details of the algorithm, and present some enhancements. 4.1 Upper and lower bounding problems There are two, main, classes of multiobjective optimization algorithms available to generate the entire set of NDPs: decision space search algorithms and criterion space search algorithms. Like most criterion space search algorithms, our MOP-based decomposition algorithm maintains, for each block, a set, P, of NDPs found so far, and a set of polyhedral regions, {Q 1,..., Q q }, in the criterion space that the remaining undiscovered NDPs belong to, where the number of regions, q, varies throughout the algorithm. In other words, the algorithm ensures that the NDF lies within q Ω := P Q r. We call such a set a disjunctive relaxation of the NDF. At each iteration of the algorithm, newly generated NDPs are added to P and the regions are refined. At the completion of the algorithm, the union of regions is empty, and Ω = P gives the complete NDF. In Figure 1, we illustrate two, r=1 10

11 different, disjunctive relaxations of the NDF of an integer program with two objectives (g 1 (x) and g 2 (x)), obtained from two (consecutive) intermediate steps of the so-called balanced box method in [2], where circle points, square points, shaded regions and dots correspond to discovered NDPs, undiscovered NDPs, regions and integer points, respectively. g 2 (x) g 2 (x) g 1 (x) g 1 (x) (a) P = 6, q = 3 (b) P = 7, q = 4 Figure 1: Decomposition of the criterion search space of a biobjective integer program into a) six NDPs and three rectangular regions b) seven NDPs and four rectangular regions Now, suppose that each MOP(i) has been (partially) solved by such a criterion space search algorithm, to obtain a set of NDPs, P i N i, and a set of polyhedra, Λ i = {Q i 1,..., Qi q i }, in R m+1, with the property that N i Ω i := P i q i r=1 Q i r. (11) Without loss of generality, we assume that N i = {z i,1, z i,2,..., z i, N i } is ordered such that its first P i elements are the ones of P i, for all i M. Then, the problem defined by (UB-M) : ν UB := min i M s.t. P i k=1 P i i M k=1 P i λ i k k=1 λ i k zi,k m+1 (12a) λ i k zi,k [m] b, (12b) = 1, i M, (12c) λ i {0, 1} Pi, i M, (12d) 11

12 gives an upper bound on the optimal value ν IP of (IP-M), which becomes exact when Ω i = P i for all i M. We note that if all data in (UB-M) is nonnegative and integer, then P i is O( m j=1 b j), equivalently O(B m ) where B := max j=1,...,m b j. In this case, it is easy to see that (UB-M) can be solved in O(MB m ) time by dynamic programming, which implies that (UB-M) can be solved in pseudopolynomial time for small m. A traditional way of obtaining a lower bound for an optimization problem is to consider a relaxation of the feasible set. If Ω i N i, for all i M, then we have that the problem { min i M w i : i M u i b and (u i, w i ) Ω } i, i M gives a lower bound on ν IP, as it is a relaxation of (9). Specifically, using relaxations Ω i, i M, which are defined in (11), we obtain the following lower bounding problem: (LB-M) : ν LB := min i M s.t. i M P i ( P i k=1 ( P i k=1 qi λ i k zi,k m+1 + r=1 qi λ i k zi,k [m] + r=1 u i r w i r ) ) b, (13a) (13b) q i λ i k + µ i r = 1, i M, (13c) k=1 r=1 µ i r = 1 (u i r, w i r) Q i r, i M, r = 1,..., q i, (13d) µ i r = 0 (u i r, w i r) = (0, 0), i M, r = 1,..., q i, (13e) λ i {0, 1} Pi, µ i {0, 1} q i, i M, (13f) u i r R m, w i r R, i M, r = 1,..., q i. (13g) For block i M, binary variable λ i k represents the selection of the kth NDP as before, while binary variable µ i r denotes the selection of r th region, Q i r. Continuous variables u i r and w i r correspond to the resource and cost components of a point chosen from the region Q i r, respectively. Constraints (13c) state that, for each block, either a previously generated NDP or a region is chosen in the solution. In the latter case, the logical constraints (13d) ensure that the chosen point belongs to the region. Otherwise, due to constraints (13e), no point from the region appears in the solution. Depending on the point selection for a block, the terms in parentheses in (13a) and (13b) signify the corresponding cost and the resource consumption, respectively. 12

13 The logical constraints (13d) and (13e) can be represented by a set of linear inequalities, as the regions are assumed to be polyhedral and can be bounded, since the MOP feasible sets are assumed bounded. Specifically, since X i is assumed bounded, for each i M and each region r = 1,..., q i, there are vectors ξ i r, ξi r R m+1 so that ξ i r z ξi r for any z Q i r N i. For example, ξ i r may be taken to be the ideal point of MOP(i) and ξi r taken to be its supernal point, for all r. Then, if each region is given as a polyhedron, Q i r = {z R m+1 : Q i rz p i r}, where Q i r and p i r are a matrix and vector of appropriate dimension, for each i and r, the logical constraints (13d) and (13e) can be modeled linearly as Q i r(u i r, w i r) p i rµ i r, i M, r = 1,..., q i, (14a) ξ i r µi r (u i r, w i r) ξ i rµ i r, i M, r = 1,..., q i. (14b) Note that the inclusion of lower bounds on the w i r variables ensures that (LB-M) is bounded. Lastly, we note that if each region is represented by its ideal point, then (LB-M) can also be solved in pseudopolynomial time by dynamic programming, similar to (UB-M). In the next proposition, we show that (LB-M) is indeed a valid lower bounding problem. Proposition 4.1. ν LB ν IP. Proof. Given an optimal solution λ of (10), let k i be the index such that λ i k i = 1 for i M. Now, we will construct a feasible solution (ˆλ, ˆµ, û, ŵ) to (13). For any i M, if k i P i, let ˆλ i k i = 1, otherwise let ˆµ i r i = 1, where r i {1,..., q i } is an arbitrary index of a region that includes the nondominated point z i,k i. (Such a region exists, by (11).) In the latter case, we also let û i,r i = z i,k i [m] and ŵr i i = z i,k i m+1. We set all other variable values to zero. Then, (13b)-(13g) hold and the objective function (13a) takes value ν IP at (ˆλ, ˆµ, û, ŵ). 4.2 Overview of the MOP-based decomposition algorithm In this section, we give a complete algorithm to solve integer programs of the form (1). (LB-M) is initialized, and then iteratively improved with the addition of new columns. The flow chart of our algorithm, which we call the MOP-based decomposition algorithm, is provided in Figure 2. The box with dashed borders, i.e., the solution of (UB-M), is optional. If we choose to always skip that process, then the algorithm only finds a feasible solution when it finds an optimal solution. The algorithm is divided into three steps. In the first step, we generate an initial set of NDPs, P i, for each block i M. In the second step, we use this information to construct a disjunctive 13

14 Step 1 Step 2 START Find an initial set of NDPs for each MOP subproblem {P i } i M Create regions s.t. {Ω i } i M (11) holds Form (LB-M) Step 3 (ˆλ, ˆµ, û, ŵ) cut off? no yes (ˆλ, ˆµ, û, ŵ) Solve (LB-M) Find NDPs within region, refine Qîˆr î, ˆr Pick block, region i, r with ˆµ i r = 1 no Stopping criterion? Solve (UB-M) no yes ˆµ = 0? STOP yes Figure 2: Flow chart of the MOP-based decomposition algorithm. (ˆλ, ˆµ, û, ŵ) denotes an optimal solution of the (LB-M) problem. relaxation of the NDF of each MOP(i) subproblem, identifying a set of polyhedral regions, Q i r, r = 1,..., q i, so that Ω i, defined as in (11), contains N i. This means that, in the next step, it suffices to focus on the regions in order to generate new columns for the lower and upper bound problems. In the third step of the algorithm, after forming and solving the current lower bound master problem, (LB-M), we iteratively generate more NDPs from existing regions, and then use those NDPs to decompose the regions even further, by eliminating the areas that they dominate, which are now known to include no NDPs. More specifically, at every iteration of Step 3, we solve the current lower bound problem, (LB-M), (and optionally the current upper bound problem, (UB-M)), and unless a stopping criterion is satisfied, we select a region and refine it by generating new NDPs and creating smaller regions from it. This tightens the disjunctive relaxation of the NDF for the MOP with the selected region. We make the decision of which region(s) to refine based on the optimal solution returned for the current (LB-M) problem: any region, Q i r, that is used in the (LB- 14

15 M) solution, i.e., with µ i r = 1 in this solution, is a candidate. The algorithm seeks to refine such a region, tightening the disjunctive relaxation of a MOP subproblem s NDF. This tightening may include the addition of cutting planes, and can be interpreted as the solution to a kind of separation problem: if (û i r, ŵr) i is the point found by (LB-M) in a selected region, Q i r, then the tightening aims to separate (û i r, ŵr) i from the NDF of MOP(i). 4.3 Details and enhancements of the MOP-based decomposition algorithm In this section, we provide details of our MOP-based decomposition algorithm, and discuss some possible enhancements. The main structure of the algorithm is given in Algorithm 1. We next Algorithm 1 MOP-based decomposition. ɛ gap denotes the given threshold for the absolute optimality gap. Step 1. Solve the LP relaxation of (IP-M) via (a modified form of) column generation Step 2. Using the NDPs generated in the previous step, construct a disjunctive relaxation of the NDF of each MOP(i), creating regions so that (11) is satisfied, and form (LB-M). Step 3. do (i) Solve (LB-M) (ii) Solve (UB-M) [optional, e.g., at every τ iterations] (iii) do a) Select a block for which (LB-M) has selected a region b) Refine this region, after generating new NDPs and/or valid inequalities, ensuring that (11) remains valid until the current optimal solution of (LB-M) has been cut off (iv) Refine all regions that include any of the newly generated NDPs, ensuring (11) until ν UB ν LB ɛ gap or another stopping criterion has been reached elaborate on each step of Algorithm 1. Step 1: In order to create an initial set of NDPs, we solve the LP relaxation of (IP-M). This can be done by a modified form of column generation. We start with a set of columns that is guaranteed to yield a feasible LP solution. (In our implementation, we use dummy columns: we initialize the LP with one column for each MOP, consisting of its ideal point, but with a high objective coefficient in (IP-M). This ensures that if the original problem is feasible, then the initial LP will be feasible.) Then, we solve the LP relaxation of (IP-M) via column generation. In particular, we use a modified version of column generation, which we refer to as Pareto column generation, where we ensure that only NDPs of MOP subproblems are generated as columns. Let γ and α be the 15

16 dual multipliers of the resource linking constraints (12b) and the convexity constraints (12c), respectively. Then, the dual of the LP relaxation of (IP-M), restricted to the set of columns (NDPs) generated so far, which are stored in P i, i M, is ν RMP := max b γ + 1 α (15a) z i,k m+1 γ z i,k [m] α i 0, k = 1,..., P i, i M, (15b) γ 0, γ R m, α R M (15c) (15d) where 1 and 0 denote the vectors of ones and zeros of appropriate dimension, respectively. At every Pareto column generation iteration, we solve the LP relaxation of (IP-M), restricted to the set of columns in P i, i M, to get optimal dual multipliers (ˆγ, ˆα). Then, for each block i M, we solve the pricing problem r i := min z z m+1 ˆγ z [m] ˆα i min x (c i ) x + m ˆγ j A i jx ˆα i j=1 (16a) s.t. z Z i s.t. x X i (16b) in order to find a column with the most negative reduced cost. Let z i be an optimal solution of the i th pricing problem. If ˆγ < 0, meaning that ˆγ j < 0 for all j = 1,..., m, then z i is a supported NDP of MOP(i). In that case, if its reduced cost r i < 0, then we add z i to P i. Otherwise, (if ˆγ j = 0 for some j), z i may not be an NDP, since it is only guaranteed to be a weakly NDP. In such a case, we solve the following integer program min z {1 z : z Z i and z z i } (17) to get an optimal solution z i which is guaranteed to be an NDP, as all of the objective coefficients in (17) are positive. If the reduced cost of z i is negative, i.e., z i m+1 ˆγ z i [m] ˆα i < 0, then we add z i to P i. Note that the MIPs (16) and (17) are assumed to be tractable, as they correspond to a weighted-sum version of MOP(i) with δ i = {+ } m+1 and δ i = z i, respectively. After the addition of new columns, the LP relaxation of the restricted (IP-M) is re-solved. The column generation algorithm stops when all pricing problems have nonnegative optimal objective value, which means that no more columns with negative reduced cost are found. When the algorithm stops, the LP relaxation of (IP-M) has been solved to optimality. 16

17 Observe that only supported NDPs can be generated as columns in the LP relaxation of (IP-M): to obtain unsupported NDPs, which may be required in a solution to the original IP, it is necessary to restrict search in the MOP subproblems to regions between supported NDPs. Such regions are initialized in Step 2, and explored and further refined, as needed, in Step 3, of the algorithm. To complete our discussion of Step 1, we note that, as an enhancement, the pricing problems are also used to obtain valid inequalities to tighten the lower bounding problem, (LB-M), which will be formed in the next step. Consider an arbitrary set of regions, {Q i } i,r, to be used in the lower bounding problem. For instance, for i M, since the solution to the pricing problem, (16), has value r i, it must be that z m+1 ˆγ z [m] r i + ˆα i (18) holds for any z Z i. As N i Z i, (18) is satisfied by all points in N i. However, there might be points in some region, Q i r, at which (18) is violated. Therefore we add a constraint to enforce the logic that if a region r {1,..., q i } is selected in a feasible solution of (LB-M) then w i r ˆγ u i r r i + ˆα i, (19) has to be satisfied for all (u i r, wr) i Q i r N i. This logic can be modeled linearly, and added to the (LB-M) formulation, as wr i ˆγ u i r ( r i + ˆα i )µ i r. (20) The binary variable, µ i r, is required in the right-hand side, since it may be that r i + ˆα i is positive: in this case, if µ i r is omitted from (20), then (u i r, wr) i = (0, 0) would be cut off, forcing µ i r = 1, (by (14b)), and thus forcing the region, Q i r, to be selected; the possibility that the region, Q i r, is not selected in the (LB-M) solution, and hence that (u i r, wr) i = (0, 0) is feasible, must be retained. We refer to the inequality, (20), as a pricing cut, and, for convenience, we denote the polyhedron defined by such cuts, for block i and region r, to be Cr. i Then, the set of constraints defined by the pricing cuts, (u i r, wr, i µ i r) Cr, i i M, r = 1,..., q i, (21) are valid for (LB-M). Since the regions are not yet known at this step, we only keep the cut information for each block i M and use it in the next step. More specifically, at every Pareto column generation iteration, if the pricing problem for i M is solved, we record its optimal value, r i, together with the optimal dual multipliers, (ˆγ, ˆα), at that iteration. Then, once the regions have been determined for i M in the next step, we construct the set Cr i by adding all the recorded cuts for i M, for all r = 1,..., q i. Then, we add the pricing cuts, of the form (21), to (LB-M). If 17

18 a region lies in the relative interior of the halfspace defined by a cut, then this cut is redundant for the region. A useful feature of the pricing cuts, especially those added in the final round of the Pareto column generation procedure, are that they guarantee that the value of (LB-M), ν LB, is at least as good a lower bound on the original IP value as the value of the LP relaxation of (IP-M), which is given by ν RMP when r i 0 for all i M. Proposition 4.2. Suppose that for some {P i } i M, (ˆγ, ˆα) solves (15) and r i 0 for all i M, where r i is calculated in (16). (So (ˆγ, ˆα) are optimal LP dual multipliers for the LP relaxation of (IP-M) and ν RMP = b T ˆγ + 1 T ˆα is its value.) If (LB-M) is formed from {P i } i M and any collection of regions, {Q i r} i M r=1,...,q i, and the pricing cuts, (20), are included in (LB-M), for every region, then ν LB ν RMP. Proof. Given the conditions of the proposition, we will construct a feasible solution to the LP dual of the LP relaxation of (LB-M). We use γ 0 to denote the LP dual multipliers for constraint (13b), α (unrestricted in sign) to denote the vector of LP dual multipliers for the constraint (13c), and β i r 0 to denote the LP dual multipliers for the pricing cut (20), for each i M, r = 1,..., q i. We claim that setting γ := ˆγ, α := ˆα, β := 1, and setting all other LP dual variables to zero, yields a solution feasible to the LP dual of the LP relaxation of (LB-M). To verify this, we consider the LP dual constraints (after removing the terms including LP dual variables set to zero), corresponding to each (LB-M) variable, in turn. The LP dual constraint corresponding to λ i k is: γ T z i,k [m] + αi z i,k m+1, which is satisfied for γ = ˆγ and α i = ˆα i since (ˆγ, ˆα) satisfies (15b). The LP dual constraint corresponding to µ i r is α i ( r i + ˆα i )β i r 0, which is satisfied for α = ˆα and β i r = 1, since r i 0. The LP dual constraint corresponding to u i r,j (unrestricted in sign) is γ j ˆγ j β i r = 0, which is satisfied for γ j = ˆγ j and β i r = 1. Finally, the LP dual constraint corresponding to w i r (unrestricted in sign) is simply β i r = 1, and so is satisfied. Since γ := ˆγ, α := ˆα, β := 1 yields a feasible solution to the LP dual to the LP relaxation of (LB-M), its LP dual objective value, which is ˆγ T b + ˆα T 1 = ν RMP provides a lower bound on the value of (LB-M). 18

19 Step 2: Now that we have a set of known NDPs, P i, for each i M, we use this information to decompose the part of the criterion space of the MOP(i) subproblem not dominated by any point in P i into a set of polyhedral regions. The idea is to refine the criterion search space by eliminating the parts which are known to include no NDPs. Specifically, we will use the fact that if z N i, then the pointed cone obtained by adding the nonnegative orthant to z does not include any NDPs besides z, that is, ( {z } + R m+1 ) + N i = {z }. Therefore, we can eliminate {z } + R m+1 + from the search space. Then, for the MOP(i) subproblem, the criterion space that still needs to be considered for the remaining NDPs is R m+1 \ P i k=1 ( {z i,k } + R m+1 + P ) i = k=1 ( m+1 j=1 {z R m+1 : z j < z i,k j } Given this observation, Algorithm 2 provides one simple way to create polyhedral regions, so that (11) is satisfied. ). Algorithm 2 Decompose the criterion space of MOP(i), for i M, into a set Λ i of polyhedral regions, based on the set P i of known NDPs 1. Initialize the list of regions Λ i = { {z R m+1 : z z S (i)} } 2. for all z P i 3. for all Q Λ i 4. if z Q m+1 5. Let Λ i := (Λ i \ {Q}) {{z Q : z j zj ɛ}}. j=1 Here ɛ is a small positive number that is used to express strict inequalities, as is customary in multiobjective optimization. Note that, as we consider only pure integer problems with integer data, we can use ɛ = 1. The algorithm starts with the initial criterion search region {z R m+1 : z z S (i)} where z S (i) denotes the supernal point of MOP(i), goes through all of the known NDPs one by one, and at every step replaces a region on the list with m + 1 smaller regions if the region includes the NDP under consideration. It provides a correct decomposition of the search space because only the already determined NDPs together with the points (weakly) dominated by them are excluded. In other words, any other, as-yet-unknown, NDP belongs to at least one of the regions obtained at the end of the algorithm. 19

20 As an example, consider a case with three objectives where two NDPs, namely (4, 4, 4) and (1, 8, 1), are discovered for MOP(i), which has supernal point (50, 50, 50). The regions created during Algorithm 2 (where ɛ = 1) are shown in Figure 3 in a tree. A region is represented by a triplet in brackets, where each component represents an upper bound on the corresponding objective, e.g., δ 1, δ 2, δ 3 defines the polyhedral region {z R 3 : z j δ j, j = 1, 2, 3 }. Level l of the tree corresponds to the l th iteration of the algorithm and therefore to the list of regions in that iteration. The boxed regions are the ones returned at the end of the algorithm. The shaded boxes represent redundant regions, which are contained in other regions, and can safely be omitted. We discuss these further next. 50, 50, 50 z 1 < 4 z 2 < 4 z 3 < 4 3, 50, 50 50, 3, 50 50, 50, 3 z 1 < 1 z 2 < 8 z 3 < 1 z 1 < 1 z 2 < 8 z 3 < 1 0, 50, 50 3, 7, 50 3, 50, 0 0, 50, 3 50, 7, 3 50, 50, 0 Figure 3: Algorithm 2 applied to MOP(i) with supernal point (50, 50, 50), provided ɛ = 1 and P i = {(4, 4, 4), (1, 8, 1)} We note that the decomposition technique used in Algorithm 2 is the same as the so-called full m-split in [6] when all the initial lower and upper bounds are taken as negative infinity and the supernal point, respectively. In [6], the authors only generate hypercubes as the regions, which are called boxes. They point out that the decomposition algorithm based on full m-split typically creates nested, and hence, redundant, boxes. The authors analyze conditions under which redundant boxes occur, and suggest to detect and remove them from the decomposition immediately in every iteration. In the case illustrated in Figure 3, as (3, 50, 0) (50, 50, 0), the region represented by 3, 50, 0 is redundant, as is the region represented by 0, 50, 3, since (0, 50, 3) (0, 50, 50). Note that the tree node corresponding to the region represented by 50, 3, 50 has no descendants, since (1, 8, 1) (50, 3, 50). In our implementation, we detect and remove redundant regions at every iteration. Note that, in the case of a single coupling constraint, (i.e., m = 1), redundancy does not occur, as any NDP belongs to exactly one of the regions on the list at any point in Algorithm 2. In this special case, the redundancy check is skipped. 20

21 Lastly, at the end of Step 2, after decomposing the criterion space of each MOP(i) subproblem into NDPs and polyhedral regions, we form the lower bounding problem (LB-M). Also, we add the pricing cuts, of the form (21), saved in the previous step, to (LB-M). Step 3: In the main part of the algorithm, starting with the lower (and upper bound) problem, (LB-M) (and (UB-M)), built as explained above, we iteratively tighten it by generating more NDPs, refining the regions based on these new NDPs and adding more pricing cuts. The key idea is to make sure at every iteration that the lower bounding problem makes progress by refining some regions, which are selected based on the current solution of the (LB-M), in such a way that the solution of (LB-M) is cut off. Also, the NDPs discovered during the refinement process help the upper bounding problem to make progress. At every iteration, we first solve (LB-M). If µ = 0 in the optimal solution, then the existing regions do not include any points that can improve the current lower bound. In this case, the lower and upper bounding problems would be equivalent and optimal, so we can stop. Since solving (UB-M) is optional, we can solve (UB-M) at every τ > 0 iterations and stop if its optimal value is close enough to the value of (LB-M), i.e., the gap is less than a prespecified threshold ɛ gap > 0. Other than the optimality gap, we can have stopping criteria such as time, iteration or memory limit. In our implementation, we skip the optional step and use a time limit (in addition to µ = 0) as the stopping criterion. Consider an iteration where we do not satisfy any of the stopping conditions. Let (ˆλ, ˆµ, û, ŵ) be an optimal solution of (LB-M). Also, let î M and ˆr {1,..., q i } be such that ˆµîˆr = 1, e.g., in our implementation we select the smallest block index. For part (iii) of Step 3, we choose to refine only the region Qîˆr, which we explain next, rather than iteratively selecting different regions to be refined. We denote the underlying solution of (LB-M) in the criterion space of MOP(î) by ẑ, that is, ẑ := (ûîˆr, ŵîˆr ) Qîˆr. In order to make sure that (LB-M) will make progress in the next iteration, the goal is to cut off its current solution. Specifically, we either prove that ẑ is an NDP of MOP(î), in which case we add it to Pî, or refine Qîˆr in such a way that ẑ does not belong to it anymore. Thus, to satisfy (11), we create a family of subregions of Qîˆr, say Γ = {T 1,..., T Γ }, and a subset of the NDF of MOP(î), say S, so that all NDPs of MOP(î) that are in Qîˆr lie in S or in some set in Γ, i.e., Qîˆr N î S T Γ T, (22) 21

22 and the family of subregions cuts off ẑ, i.e., ẑ T. (23) T Γ Then, we make the updates Pî = Pî S and Λî = (Λî \ {Qîˆr }) T T Γ to improve (LB-M) and (UB-M). That is, we add the NDPs in S to (UB-M), while we replace Qîˆr in (LB-M) by the NDPs in S and the subsets in Γ. Moreover, we inherit all pricing cuts of the form (21) corresponding to Qîˆr, i.e., Cîˆr, for all T Γ and possibly generate new cuts for them as explained in detail next. Algorithm 3 Given a region Qîˆr, a point ẑ Qîˆr and a vector β Rm+1 +, refine the region in such a way that (22) and (23) are satisfied 1. Initialize L = {Qîˆr }, Γ = and S =. 2. while L (or any early stopping criterion has been satisfied) 3. Pick T L and remove T from L. 4. Let ν T := min {β z : z T Zî}. 5. if ν T < 6. Save cut β z ν T for T. 7. Let z(t ) arg min {β z : z T N î}, add z(t ) to S. 8. if ν T > β ẑ 9. for j = 1,..., m Add T {z : z j z j (T ) ɛ} to Γ, and inherit saved cuts for T. 11. else 12. for j = 1,..., m if z j (T ) ẑ j 14. Add T {z : z j z j (T ) ɛ} to Γ, and inherit saved cuts for T. 15. else 16. Add T {z : z j z j (T ) ɛ} to L, and inherit saved cuts for T. In Algorithm 3, we propose a general procedure that constructs the desired family Γ of subsets of Qîˆr, together with the set S. The set L represents a list of subsets of Qîˆr that have yet to be resolved, in the sense that they contain ẑ. At every iteration, we remove one element, T Qîˆr, from L and possibly add new elements to Γ and/or L. We use a given vector β R m+1 + to search 22

23 the region for unknown NDPs. Unless infeasible, the minimization problem at Step 4 provides a valid inequality given at Step 6 for the NDPs in T, hence saved to be added to (LB-M) in the form of (21) later on. Moreover, if β > 0, then its optimal solution, say z Step4, is guaranteed to be an NDP, thus can be directly added to S (i.e., we let z(t ) = z Step4 at Step 7). Otherwise, we make an additional step where we search for an actual NDP, z(t ), on the hyperplane β z = ν T. Such an NDP exists whenever the problem at Step 4 is feasible, and can be found by lexicographic minimization of linear objectives over T Zî with additional upper bound constraints, e.g., by solving lex min (z 1,..., z m+1 ) s.t. z T Zî and z z Step4. (24) We decompose T into m + 1 smaller sets based on the newly found NDP. If the found valid inequality cuts off ẑ, then we add the subsets obtained to Γ (see Figure 4(a)). Otherwise, we add the ones including ẑ to L to be processed later, and add the rest to Γ (see Figure 4(b)). We note that due to the construction of the T sets, we always preserve the structure of MOP(î). Therefore, it is assumed that the solution of (24) can be found with relative ease. T Γ z(t ) ẑ S Γ (a) The case where the condition at Step 8 is satisfied. T Γ z(t ) ẑ S L (b) The case where the condition at Step 8 is not satisfied. Figure 4: Illustration of an iteration of Algorithm 3 for two cases. On the left figures, the shaded area, the straight line, the dashed line represent the region to be refined (T ), a previously found cut (e.g., one in the description of Cîˆr ), and the objective function used to search for new NDPs (β z), respectively. Lemma 4.3. Algorithm 3 converges finitely. Proof. Observe that the generated cuts for a set of the form T in the above procedure are valid, 23

24 in the sense that they cannot remove NDPs from T, other than z(t ), which is added to S. Also, observe that each time a set T is removed from L, either no more sets are added to L, or the sets that are added exclude z(t ), an NDP. Since there is a finite number of NDPs, the algorithm terminates in a finite number of iterations. Note that each of the sets added to Γ excludes ẑ. Hence, the (LB-M) problem is guaranteed to make progress as its optimal solution has been cut off. Theorem 4.4. Algorithm 1 with Step 3 (iii) (b) performed using Algorithm 3 converges finitely. Proof. Each MOP subproblem has finitely many NDPs since its feasible region is bounded. At each iteration of Algorithm 1, when we refine a region by Algorithm 3, we either discard the region, or discover a new NDP from that region. Each NDP can be discovered at most once: in Algorithm 3 and in Algorithm 1, Step 3(iv) any NDPs discovered are removed from all regions by the refinement operation. Each NDP creates at most m + 1 subregions for each region it is contained in, so a finite number of regions are created in Steps 3(iii)(b) and 3(iv) of Algorithm 1. Therefore, Lemma 4.3 implies the finite convergence of Algorithm 1. There are different choices for the nonnegative vector, β, used to search a selected region for unknown NDPs. Below, we state a few options together with possible motivations. 1. Any positive vector: In the case that β > 0, if feasible, any optimal solution of the optimization problem at Step 4 of Algorithm 3 is guaranteed to be an NDP. Therefore, solving an additional lexicographic minimization problem at Step 7 would not be needed. In our computational experiments, we perform tests with β = Duals from (LB-M): After solving the (LB-M) problem, we fix all integer variables to their optimal values, and re-solve (LB-M) as an LP. Then, we let β = ( γ LB, 1) where γ LB is the vector of optimal dual multipliers corresponding to the resource linking constraints, (13b), in this LP. In that case, ν T < β ẑ can not happen as ẑ has zero reduced cost, and any other point has nonnegative reduced cost. This might lead Algorithm 3 to terminate faster as it would be less likely to get into the case at Step 11, and thus the case at Step 16, where a new unresolved set is created. In the final version of Algorithm 1, we implement this option. Moreover, whenever such a β vector has a zero component, we choose to use β = 1 instead of solving an additional lexicographic minimization problem. We also note that Algorithm 3 immediately stops if it finds a cut (i.e., the case at Step 8) at the first iteration. Therefore, in order to increase efficiency, it is possible to try all indices î M and 24

Integer Programming Theory

Integer Programming Theory Integer Programming Theory Laura Galli October 24, 2016 In the following we assume all functions are linear, hence we often drop the term linear. In discrete optimization, we seek to find a solution x

More information

3 No-Wait Job Shops with Variable Processing Times

3 No-Wait Job Shops with Variable Processing Times 3 No-Wait Job Shops with Variable Processing Times In this chapter we assume that, on top of the classical no-wait job shop setting, we are given a set of processing times for each operation. We may select

More information

Solving lexicographic multiobjective MIPs with Branch-Cut-Price

Solving lexicographic multiobjective MIPs with Branch-Cut-Price Solving lexicographic multiobjective MIPs with Branch-Cut-Price Marta Eso (The Hotchkiss School) Laszlo Ladanyi (IBM T.J. Watson Research Center) David Jensen (IBM T.J. Watson Research Center) McMaster

More information

Part 4. Decomposition Algorithms Dantzig-Wolf Decomposition Algorithm

Part 4. Decomposition Algorithms Dantzig-Wolf Decomposition Algorithm In the name of God Part 4. 4.1. Dantzig-Wolf Decomposition Algorithm Spring 2010 Instructor: Dr. Masoud Yaghini Introduction Introduction Real world linear programs having thousands of rows and columns.

More information

Selected Topics in Column Generation

Selected Topics in Column Generation Selected Topics in Column Generation February 1, 2007 Choosing a solver for the Master Solve in the dual space(kelly s method) by applying a cutting plane algorithm In the bundle method(lemarechal), a

More information

Mathematical and Algorithmic Foundations Linear Programming and Matchings

Mathematical and Algorithmic Foundations Linear Programming and Matchings Adavnced Algorithms Lectures Mathematical and Algorithmic Foundations Linear Programming and Matchings Paul G. Spirakis Department of Computer Science University of Patras and Liverpool Paul G. Spirakis

More information

Chapter 15 Introduction to Linear Programming
