The Size Robust Multiple Knapsack Problem


MASTER THESIS ICA

The Size Robust Multiple Knapsack Problem
Branch and Price for the Separate and Combined Recovery Decomposition Model

Author: D.D. Tönissen
Supervisors: dr. ir. J.M. van den Akker, dr. J.A. Hoogeveen

October 23, 2013

Abstract

In this thesis we investigate the size robust multiple knapsack problem. The differences with the standard knapsack problem are that there is more than one knapsack and that the knapsack sizes can decrease with a certain probability. To deal with this, we allow recovery by removing items, and our goal is to find a solution whose expected value is maximal. We solve this problem with two decomposition approaches: the combined and the separate recovery decomposition model. We show that the speed-up developed for the demand robust shortest path problem for the combined model [18] also works for the size robust multiple knapsack problem, and that this speed-up can be adapted for the separate recovery decomposition model. Together with other algorithmic optimizations, this allows us to solve the LP-relaxation more than ten times faster than with the naive approach.

The separate recovery decomposition model appeared to be faster for the size robust knapsack problem [2], because the separate model has an easier pricing problem. However, when the number of knapsacks increases, the number of columns and constraints grows faster in the separate model, which indicates that the combined model could become better when we have more knapsacks. Moreover, the LP-relaxation of the combined model is stronger than the LP-relaxation of the separate model. Our experiments show that the separate model outperforms the combined model in solving the LP-relaxation, but that the combined model outperforms the separate model when we solve the ILP and have more than four knapsacks.

We also introduce two greedy approaches. The first approach removes the scenarios from the decomposition and moves them to the pricing problem without increasing the difficulty of the pricing problem; this approach is expected to strongly decrease the solution time. The second greedy approach simplifies the pricing problem of the combined model. It gives a small decrease in the solution time, but yields a lower solution value for 10% of the instances.

Keywords. Size Robust Multiple Knapsacks; Recoverable Robustness; Branch and Price; Column Generation; Separate Recovery Decomposition; Combined Recovery Decomposition.

Contents

1 Introduction and Basic Techniques
   1.1 Introduction
   1.2 Linear Programming Techniques
       1.2.1 Linear Programming Basics
       1.2.2 Relaxations
       1.2.3 Duality
       1.2.4 Column Generation
   1.3 Branch and Price
   1.4 Recoverable Robustness
   1.5 Decomposition Models for Recoverable Robustness
   1.6 Dynamic Programming

2 The Size Robust Multiple Knapsack Problem
   2.1 Recoverable Robust Problems
       2.1.1 Size Robust Knapsack Problem
       2.1.2 Demand Robust Shortest Path Problem
   2.2 The Multiple Knapsack Problem
   2.3 Models for the Size Robust Multiple Knapsack Problem
       2.3.1 Separate Recovery Decomposition Model
       2.3.2 Combined Recovery Decomposition Model
       2.3.3 Comparing the Separate and Combined Decomposition Model
             Proof LP-relaxations

3 Generating Test Data
   3.1 Previous Research
       Generating Instances for the Knapsack Problem
       Generating Instances for the Size Robust Knapsack Problem
       Generating Instances for the Multiple Knapsack Problem
   3.2 Generating Test Data for the Demand Robust Multiple Knapsack
       Generating the Items and the Knapsack Sizes
       Generating Instances

4 Column Generation for the Combined Recovery Decomposition Model
   4.1 Generating Columns
       Scenario Based Speed-up
       Optimizing the BestF k Method
       Knapsack Based Speed-up
   4.2 Stabilized Column Generation
   4.3 Further Investigation of the Best k Methods
       Scenarios and the Methods
       Knapsacks and the Methods
       Items and the Methods
   4.4 Conclusions

5 Branch and Price for the Combined Recovery Decomposition Model
   5.1 Branch and Price
       Previous Work
       The Branching Tree
       Branching Strategy
       Traversing the Tree
   5.2 Optimizing Branch and Price
       Sorting the Knapsacks
       Sorting the Items
       Flexible Knapsacks
       Traversal Strategies
       Removing Duplicate Columns
   5.3 Properties of the ILP
       Scenarios
       Knapsacks
       Items
       Combining Items, Knapsacks and Scenarios
   5.4 Conclusion and Further Research

6 Greedy Recovery
   6.1 Greedy Method
   6.2 The Easy Method
       LP-Relaxation
       Pricing Problem Easy Method
   6.3 Optimizing the Column Generation
   6.4 Solution Value and Time Difference
       Scenarios
       Knapsacks
       Items
   6.5 Solving the Instances with the Easy Method
       Scenarios
       Knapsacks
       Items
   6.6 Difficult Instances
   6.7 Conclusion

7 Comparing the Combined and Separate Decomposition Model
   7.1 Optimizing Column Generation
   7.2 Comparing the LP-relaxation
       Difficult Instances
   7.3 Branch and Price Optimization
   7.4 Comparing the Combined and Separate Decomposition Model
       7.4.1 Scenarios
       7.4.2 Knapsacks
       7.4.3 Items
       7.4.4 Difficult Instance Set
   7.5 Conclusion

8 Conclusion and Further Research

A Dynamic Programming Algorithms
   A.1 Iterative Dynamic Programming Algorithm
   A.2 Recurrence Dynamic Programming Algorithm
   A.3 Iterative Initial/Recovery Dynamic Programming Algorithm
   A.4 Recursive Initial/Recovery Dynamic Programming Algorithm

B Implementation
   B.1 The Problem and the Basic Data Structures
   B.2 Generating the Master Problem
   B.3 Column Generation
   B.4 Branch and Price

Chapter 1
Introduction and Basic Techniques

1.1 Introduction

When we want to solve real-life optimization or planning problems, we often have to deal with uncertainty. One approach is to assume all data to be known and to solve the resulting problem. However, such a solution can cause a variety of problems; for example, it may no longer be feasible once the data change, and even if it can be made feasible, it is most likely not optimal anymore. A relatively new way to deal with this uncertainty is recoverable robustness [5], which combines robust optimization and stochastic programming. Robust optimization is often used for high-risk problems or basic services, and its solutions have to remain feasible for all possible disturbances. In stochastic programming we generate a solution based on the current data, and when something changes we take a recourse decision; in this case we optimize the cost of the initial solution in combination with the expected cost of the recourse actions we may have to take. Recoverable robustness is much like stochastic programming, as the optimization works in the same way; however, the recovery actions are restricted to a fast and simple algorithm which guarantees feasible solutions. Because the algorithms are simple and predefined, recoverable robustness is very suitable for combinatorial optimization.

Two decomposition approaches are presented in the thesis of Bouman [2]. The separate recovery decomposition generates separate subproblems for the initial and recovery parts of a solution, while the combined recovery decomposition generates single subproblems which contain an initial solution as well as the recovery solution for every disturbance. These two models were extensively tested on the size robust knapsack problem, a knapsack problem where the knapsack size is subject to uncertainty; for this problem good results were achieved with the separate decomposition model. It was expected that the combined recovery decomposition would yield good results for the demand robust shortest path problem, a problem where we want to buy the shortest or cheapest path from source to sink. The location of the sink is uncertain, and when the location of the sink becomes known, the costs of the paths increase. That this is indeed an excellent problem for the combined recovery decomposition model was shown in my experimentation project [18].

The same experimentation project also showed that optimizing column generation, by trying out different methods and using a speed-up where we add additional columns based on theoretical insights into the problem instead of only using columns from the pricing problem, leads to large decreases in the solution time.

The size robust multiple knapsack problem, which we study in this thesis, is more difficult than the problems studied so far. Instead of one knapsack we have multiple knapsacks, and their sizes are all subject to uncertainty. This is a difficult problem, as the standard knapsack problem with several knapsacks is already NP-hard in the strong sense. However, by making use of the separate recovery decomposition model and the combined recovery decomposition model, we can find the LP-relaxation by column generation, where we have a different column for every knapsack and scenario combination. The pricing problem then consists of finding the best filling for just one knapsack, where the revenue of the items is computed with the help of the duals of the constraints. We solve the pricing problem with dynamic programming, and the final integer solution is found with Branch and Price. We optimize the column generation and use theoretical insights to generate additional columns for this problem. We expect to gain large decreases in the solution time for the combined model and hope to see a similar result for the separate model. Obtaining a result similar to the one found for the demand robust shortest path problem would be a strong indication that this optimization step is an important part of any implementation of these models.

Furthermore, we will explore the differences between the models. The separate and combined recovery decomposition models have different characteristics. The pricing problem of the separate model is much easier than that of the combined model. However, when we have more knapsacks, the number of columns and constraints grows faster in the separate model than in the combined model, which indicates that the combined model could become better when we have more knapsacks. Moreover, the LP-relaxation of the combined model is stronger than that of the separate model. We investigate the following research questions:

- Which model is the best, and under which circumstances?
- How do properties of the problem, such as the number of disturbances, knapsacks and items, influence the solution time of both models?
- How many and which columns should we add per iteration, and what influence does this have on the solution time?
- Does the speed-up for the demand robust shortest path problem also work for the size robust multiple knapsack problem?
- Can we adapt this speed-up in such a way that it also works for the separate decomposition model?
- What effect do greedy approaches have on the solution time and value?

This chapter continues by explaining the basic techniques and principles used in this thesis. We first explain the basic principles behind linear programming, column generation, and branch and price. Then we introduce the reader to recoverable robustness and the separate and combined recovery decomposition models. The chapter ends with an introduction to dynamic programming. In the next chapter we briefly summarize some of the previous research done on this subject, followed by a full explanation of the size robust multiple knapsack problem. The chapter then continues by defining the separate and combined recovery decomposition for this problem

and ends with a theoretical comparison between the two models. In Chapter 3 we describe the theoretical background, our contribution, and the implementation of the generation of good instances for our problem. In Chapters 4 and 5 we optimize the combined recovery decomposition model and investigate its properties. Chapter 6 investigates greedy approaches to the problem, and the last chapter optimizes and explores the properties of the separate recovery decomposition model and compares them with the combined model.

1.2 Linear Programming Techniques

1.2.1 Linear Programming Basics

A linear programming problem is a problem where we want to optimize a linear objective function subject to linear constraints. We can write the objective function with a vector of constants c and a vector of variables x, and the linear constraints as Ax = b, where A is a matrix and b a vector of constants. This gives the following general form for a linear program:

\max \text{ (or } \min\text{)} \quad c^T x
\text{s.t.} \quad Ax = b, \quad x \geq 0

We use the knapsack problem to show an example of a linear programming model and to demonstrate Lagrangian relaxation; this example was also used in [8]. In the knapsack problem we are given a knapsack with a certain size B. Furthermore, we have n items, where each item j has a revenue c_j and a weight a_j. We can put items in the knapsack as long as the total weight of the items is smaller than or equal to the knapsack size, and our goal is to maximize the revenue of the items we take with us. We formulate this problem as follows:

\max \sum_{j=1}^{n} c_j x_j
\text{s.t.} \quad \sum_{j=1}^{n} a_j x_j \leq B
x_j \in \{0, 1\} \quad \forall j \in \{1, \dots, n\}

In this problem every variable x_j is 0 or 1, which we call an integrality constraint. We have a Mixed Integer Program (MIP) when only some of the variables have integrality constraints and an Integer Linear Program (ILP) when all variables have integrality constraints.
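As a concrete illustration, the following minimal sketch models exactly this knapsack ILP; the PuLP library and the instance numbers are our own choice, not part of the thesis.

    # Knapsack ILP sketch with hypothetical data (c, a, B are made up).
    from pulp import LpProblem, LpMaximize, LpVariable, lpSum, value

    c = [10, 7, 5, 3]   # revenues c_j
    a = [4, 3, 2, 1]    # weights a_j
    B = 6               # knapsack size

    prob = LpProblem("knapsack", LpMaximize)
    x = [LpVariable(f"x{j}", cat="Binary") for j in range(len(c))]
    prob += lpSum(c[j] * x[j] for j in range(len(c)))       # objective
    prob += lpSum(a[j] * x[j] for j in range(len(c))) <= B  # capacity constraint
    prob.solve()
    print(value(prob.objective), [value(v) for v in x])

Replacing cat="Binary" by a continuous variable with bounds 0 and 1 gives the LP-relaxation discussed next.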

1.2.2 Relaxations

If we relax the constraint x_j ∈ {0, 1} of the previous knapsack problem to 0 ≤ x_j ≤ 1, we have an LP or linear relaxation. Other popular relaxations are the Lagrangian relaxation and the surrogate relaxation. The Lagrangian relaxation gives a stronger bound, which means that for a maximization problem

solution value of the ILP ≤ Lagrangian relaxation ≤ linear relaxation.

With the Lagrangian relaxation we are allowed to violate our constraints at a cost: for each of the constraints we relax, we define a Lagrangian multiplier and add the difference between the right-hand and the left-hand side, weighted by the Lagrangian multiplier, to the objective. If we relax the only constraint of the knapsack problem, we get the following problem:

\max \sum_{j=1}^{n} c_j x_j + \lambda \left( B - \sum_{j=1}^{n} a_j x_j \right)
\text{s.t.} \quad x_j \in \{0, 1\} \quad \forall j \in \{1, \dots, n\}

We want to find as tight an upper bound as possible. Every value of λ ≥ 0 gives an upper bound; finding the value of λ with the tightest upper bound is called the Lagrangian dual problem:

\min_{\lambda \geq 0} L(\lambda)

Let Ω contain all possible subsets of items, let S ∈ Ω be a subset of items, and define a(S) and c(S) as the total weight and revenue of such a subset. Then we have as objective function:

L(\lambda) = \max_{S \in \Omega} \{ c(S) + \lambda (B - a(S)) \}

Since L(λ) is the maximum of a finite set of linear functions in λ, it is piecewise linear, continuous and convex. Because L(λ) is convex, we know that any local minimum is also a global minimum. We renumber the items in such a way that c_1/a_1 ≥ c_2/a_2 ≥ ... ≥ c_n/a_n. The set of selected items is then the same for any λ ∈ [c_{k+1}/a_{k+1}, c_k/a_k]: in this interval it is optimal to take the items 1, ..., k with us, and we get

L(\lambda) = \sum_{j=1}^{k} c_j + \lambda \left( B - \sum_{j=1}^{k} a_j \right)

We can observe that the value of L(λ) increases with λ when \sum_{j=1}^{k} a_j < B and decreases when \sum_{j=1}^{k} a_j > B. We thus want to find the k for which

\sum_{j=1}^{k-1} a_j < B \leq \sum_{j=1}^{k} a_j

When \sum_{j=1}^{k} a_j = B, we have found the optimal solution and have directly solved the ILP.
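The observation above translates directly into code. The sketch below (our own illustration, with made-up data) computes the optimal multiplier λ* from the ratio ordering and evaluates L(λ*) = λ*B + Σ_j max(c_j − λ*a_j, 0):

    # Lagrangian dual bound for a 0/1 knapsack via the ratio ordering.
    def lagrangian_dual_bound(c, a, B):
        # Sort item indices by revenue/weight ratio, descending.
        items = sorted(range(len(c)), key=lambda j: c[j] / a[j], reverse=True)
        total_w, lam = 0, 0.0
        for j in items:
            total_w += a[j]
            if total_w >= B:          # first item that (over)fills the knapsack:
                lam = c[j] / a[j]     # the slope of L changes sign at this ratio
                break
        # L(lam) = lam*B plus every positive adjusted revenue c_j - lam*a_j
        return lam * B + sum(max(c[j] - lam * a[j], 0.0) for j in range(len(c)))

    print(lagrangian_dual_bound([10, 7, 5, 3], [4, 3, 2, 1], 6))  # 15.5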

The surrogate relaxation can be better than the Lagrangian relaxation in some cases. For the surrogate relaxation we combine all constraints into a single constraint. We cannot demonstrate this with the knapsack problem, as it has only one constraint, so we use another example. Assume we have the following linear program:

\min 5x_1 + 3x_2 + 2x_3
\text{s.t.} \quad 3x_1 + 2x_2 + x_3 \geq 7
x_1 + x_2 \geq 2
x_2 + x_3 \geq 5
x_1, x_2, x_3 \geq 0

The surrogate relaxation then is:

\min 5x_1 + 3x_2 + 2x_3
\text{s.t.} \quad \omega_1 (3x_1 + 2x_2 + x_3) + \omega_2 (x_1 + x_2) + \omega_3 (x_2 + x_3) \geq 7\omega_1 + 2\omega_2 + 5\omega_3
x_1, x_2, x_3 \geq 0

Here ω_1, ω_2 and ω_3 are the duals of the first, second and third constraint of the original LP formulation. We explain what duals are and how they work in the next section.

1.2.3 Duality

The linear program so far we call the primal problem; the dual problem can be derived by finding the best lower bound for a minimization problem, or the best upper bound for a maximization problem. The dual is important as it gives insight into the primal solution. Using the previous example we can make different lower bound estimates. For example, since 5x_1 + 3x_2 + 2x_3 ≥ 3x_1 + 2x_2 + x_3 ≥ 7, the value of the LP is at least 7. By the same argument, adding the first two constraints gives 5x_1 + 3x_2 + 2x_3 ≥ (3x_1 + 2x_2 + x_3) + (x_1 + x_2) ≥ 7 + 2 = 9. The dual problem is to find the highest lower bound estimate, which we obtain by generalizing this method:

5x_1 + 3x_2 + 2x_3 \geq \omega_1 (3x_1 + 2x_2 + x_3) + \omega_2 (x_1 + x_2) + \omega_3 (x_2 + x_3) \geq 7\omega_1 + 2\omega_2 + 5\omega_3

Because this is not valid by definition, we need to impose constraints. The first, 3ω_1 + ω_2 ≤ 5, ensures that we do not obtain more than 5x_1; the second and third ensure that we do not get more than 3x_2 and 2x_3. Furthermore, ω_1, ω_2 and ω_3 need to be nonnegative, since otherwise the constraints would change direction and we would not derive a lower bound. We now have a linear objective and linear constraints, which makes this an LP problem. The dual problem therefore is:

\max 7\omega_1 + 2\omega_2 + 5\omega_3
\text{s.t.} \quad 3\omega_1 + \omega_2 \leq 5
2\omega_1 + \omega_2 + \omega_3 \leq 3
\omega_1 + \omega_3 \leq 2
\omega_1, \omega_2, \omega_3 \geq 0
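As a quick check of this construction (our own snippet, using the PuLP library, which is not part of the thesis), we can solve the primal and the dual above and observe that their optimal objectives coincide, as the strong duality theorem below predicts:

    # Solve the primal and dual example LPs and compare objective values.
    from pulp import LpProblem, LpMinimize, LpMaximize, LpVariable, value

    primal = LpProblem("primal", LpMinimize)
    x1, x2, x3 = (LpVariable(n, lowBound=0) for n in ("x1", "x2", "x3"))
    primal += 5 * x1 + 3 * x2 + 2 * x3
    primal += 3 * x1 + 2 * x2 + x3 >= 7
    primal += x1 + x2 >= 2
    primal += x2 + x3 >= 5
    primal.solve()

    dual = LpProblem("dual", LpMaximize)
    w1, w2, w3 = (LpVariable(n, lowBound=0) for n in ("w1", "w2", "w3"))
    dual += 7 * w1 + 2 * w2 + 5 * w3
    dual += 3 * w1 + w2 <= 5
    dual += 2 * w1 + w2 + w3 <= 3
    dual += w1 + w3 <= 2
    dual.solve()

    print(value(primal.objective), value(dual.objective))  # equal values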

It is not necessary to derive the dual problem by hand every time. In general, the primal and dual pair is:

\min c^T x \qquad\qquad \max \omega^T b
\text{s.t. } Ax \geq b \qquad \text{s.t. } A^T \omega \leq c
x \geq 0 \qquad\qquad\quad \omega \geq 0

Furthermore, we can use the following correspondence rules between the primal and the dual:

constraint i of type ≥ b_i  ⟷  dual variable ω_i ≥ 0
constraint i of type ≤ b_i  ⟷  dual variable ω_i ≤ 0
constraint i of type = b_i  ⟷  dual variable ω_i unbounded
variable x_j ≥ 0  ⟷  dual constraint j of type ≤ c_j
variable x_j ≤ 0  ⟷  dual constraint j of type ≥ c_j
variable x_j unbounded  ⟷  dual constraint j of type = c_j

There are two important theorems about duality:

Weak duality. If x is a feasible solution of the primal (minimization) problem and ω a feasible solution of the dual, then c^T x ≥ b^T ω. From this we can conclude two things: when one of the two problems is unbounded, the other problem has no feasible solution; and when x and ω are both feasible solutions with c^T x = b^T ω, then x is an optimal solution of the primal problem and ω of the dual problem.

Strong duality. If the primal problem has an optimal solution, then so has the dual, and their solution values are the same.

Because in the optimal solution c^T x = b^T ω, we can easily derive the net gain of increasing b_i by ε. The solution value becomes b^T ω − b_i ω_i + (b_i + ε) ω_i = b^T ω + ε ω_i (assuming that the solution basis stays the same). The solution value therefore increases by ε ω_i; thus ω_i is a measure of how much the solution value increases per unit increase of the right-hand side of constraint i.

1.2.4 Column Generation

A lot of problems are too large to consider all variables directly. Because most of the variables will take the value 0 in the optimal solution, it is possible to find an optimal solution without using all variables. For each variable that is not yet in the problem, we can calculate whether it is beneficial to add it. This we do with the duals of the constraints, which we call shadow prices: a shadow price is the unit price you are willing to pay for a little more of a resource. With the shadow prices we can calculate the reduced cost of a variable, which is the net gain per unit. For a maximization problem we want to add a variable when its reduced cost is positive, and for a minimization problem when it is negative. We can calculate the reduced costs as c − A^T ω, where ω is the dual or shadow price vector.

The problem in which we search for improving variables using the shadow prices is called the pricing problem, and the problem in which we solve the linear program and obtain a solution we call the master problem. When we can no longer add any improving columns, we have solved the problem to optimality. Column generation makes it possible to build decomposition algorithms, as it allows us to use subsolutions as columns and add them to the problem; the master problem combines these subsolutions into a solution.

We show how column generation works with the multiple knapsack problem. The multiple knapsack problem is an extension of the standard knapsack problem: instead of one knapsack we have m > 1 knapsacks, and instead of finding the best filling of one knapsack from the n items we have to find the best filling of m knapsacks. Every knapsack i has a size b_i > 0, and every item j has a revenue c_j > 0 and a weight a_j > 0. This gives us the following model:

\max \sum_{i=1}^{m} \sum_{j=1}^{n} c_j x_{ij}
\text{s.t.} \quad \sum_{i=1}^{m} x_{ij} \leq 1 \quad \forall j \in \{1, \dots, n\}
\sum_{j=1}^{n} a_j x_{ij} \leq b_i \quad \forall i \in \{1, \dots, m\}
x_{ij} \in \{0, 1\} \quad \forall i \in \{1, \dots, m\}, j \in \{1, \dots, n\}

When we solve it with column generation, we decompose the problem in such a way that every column represents a filling of one knapsack. The pricing problem therefore is finding the best filling of one knapsack, and the master problem combines the different columns into a solution of the multiple knapsack problem. We define K(b_i) as the set of feasible knapsack fillings for knapsack i with size at most b_i. The revenue of filling k of knapsack i is denoted by C_{ik} = \sum_{j \in k} c_j, and the binary coefficient a_{ijk} indicates whether item j is part of filling k of knapsack i. We introduce the decision variable x_{ik} for the columns: x_{ik} is 1 when filling k ∈ K(b_i) is selected for knapsack i and 0 otherwise.

\max \sum_{i=1}^{m} \sum_{k \in K(b_i)} C_{ik} x_{ik}
\text{s.t.} \quad \sum_{k \in K(b_i)} x_{ik} = 1 \quad \forall i \in \{1, \dots, m\} \quad (1.1)
\sum_{i=1}^{m} \sum_{k \in K(b_i)} a_{ijk} x_{ik} \leq 1 \quad \forall j \in \{1, \dots, n\} \quad (1.2)
x_{ik} \in \{0, 1\} \quad \forall i \in \{1, \dots, m\}, k \in K(b_i) \quad (1.3)

Constraint (1.1) ensures that exactly one filling is selected for every knapsack, and (1.2) makes sure that every item is in at most one selected filling.

We relax x_{ik} ∈ {0, 1} to x_{ik} ≥ 0. The value of the maximization increases if and only if the reduced cost of an entering column is positive. Let λ_i and µ_j be the dual variables of constraints (1.1) and (1.2). The reduced cost of a column, c_red(x_{ik}), is then given by:

c_{red}(x_{ik}) = \sum_{j \in k} c_j - \lambda_i - \sum_{j=1}^{n} a_{ijk} \mu_j = \sum_{j=1}^{n} a_{ijk} (c_j - \mu_j) - \lambda_i

This gives as pricing problem a knapsack problem where the revenue of item j equals c_j − µ_j.

1.3 Branch and Price

Branch and Price is an extension of Branch and Bound, so let us first explain how Branch and Bound works. We can solve a problem by enumerating all candidate solutions, but that takes much time. With Branch and Bound we try to discard sets of fruitless candidate solutions, by using lower and upper bounds, before they are enumerated. The algorithm uses two steps to achieve this: the branching step and the bounding step. In the branching step we split the set S of all candidate solutions into at least two sets, based on the structure of the solutions. In the bounding step we try to prove that one of those sets cannot contain the optimal solution. By doing this we obtain a tree structure with the set S as root.

We can demonstrate this principle with the knapsack problem. We start in the root of the tree with all possible solutions, and our splitting rule is whether or not we take an item with us. Our upper bound (UB) we get from the LP-relaxation of the problem. For the knapsack problem the LP-relaxation can easily be found by solving the problem in a greedy way: sort all items by descending ratio c_j/a_j (and renumber them), and add the items in that order to the knapsack until it is full. If the last added item fits completely, we already have an ILP solution and can stop directly, as we know it is optimal; if we take the last item fractionally, we have to branch. Finding a lower bound (LB) is easy, as we only have to set the fractional value in the solution to 0. The highest lower bound found so far we call HLB. When the root solution is not integral, we get two nodes on level one of the tree: in one node every solution has item 1 in the knapsack, and in the other node we do not take that item with us. When the upper bound of one of the two subsets is lower than the HLB, we know that there is an integer solution with a higher value than the best fractional solution of this set of candidate solutions, and we can therefore discard this set. If one of the two subsets has an integer solution, or LB = UB, we do not have to branch that set anymore; in the other cases we branch the solution set further and repeat. Whenever LB > HLB, we update the value of HLB. When there are no nodes left to branch, we stop and return HLB as the solution value. A sketch of the greedy LP bound is given below.

Branch and Price works on the same principle, but it uses column generation to find the upper bound. The candidate solutions are split as before, and the generated columns are split accordingly: every column generation process starts with the columns inherited from the parent that satisfy the branching rules, and we only generate new columns that satisfy those rules.
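The following sketch (our own illustration with made-up data, not thesis code) computes the greedy LP-relaxation bound described above, returning both the fractional upper bound and the integral lower bound obtained by dropping the fractional item:

    # Greedy fractional-knapsack bound used in the Branch and Bound example.
    def knapsack_lp_bound(c, a, B):
        items = sorted(range(len(c)), key=lambda j: c[j] / a[j], reverse=True)
        ub, cap, lb = 0.0, B, None
        for j in items:
            if a[j] <= cap:            # item fits completely
                ub += c[j]
                cap -= a[j]
            else:                      # critical item: take it fractionally
                lb = ub                # lower bound: leave the fractional item out
                ub += c[j] * cap / a[j]
                break
        if lb is None:                 # everything fits: the LP bound is integral
            lb = ub
        return ub, lb

    print(knapsack_lp_bound([10, 7, 5, 3], [4, 3, 2, 1], 6))  # (15.5, 13.0)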

Branch and Bound and Branch and Price are often used with a depth-first or best-first traversal strategy. In depth first we traverse the tree by expanding the first child and going into depth before backtracking and visiting the other children. Best first always expands the node with the best upper bound. Another well-known traversal strategy is breadth first, which visits all children of a node before visiting any of their children.

1.4 Recoverable Robustness

When we have to deal with uncertainty in combinatorial optimization, we can make use of scenarios, each of which describes a disturbance. There are different ways to represent those scenarios; the most common are the discrete scenario set, the interval scenario set, and the Γ-scenario set [3]. In a discrete scenario set every scenario is explicitly given with its cost function; an interval set gives an implicit description of all possible scenarios; and the Γ-scenario set works like the interval set, but with at most γ disturbances at the same time. In this thesis we only work with discrete scenario sets. There are two major approaches that use scenario sets:

(2-stage) Stochastic programming finds a solution which is feasible for almost all scenarios and minimizes some stochastic function such as the expected cost. This method is only applicable if we have a probability distribution or can estimate one. Furthermore, we assume some flexibility, as the solution may be infeasible for some scenarios, in which case we need a recourse action; it also assumes that the cost in the unlikely worst-case scenario is reasonable. These properties exclude this method for high-risk situations and basic services.

Robust optimization gives a high level of security but has a risk-averse attitude. A solution is robust when it remains feasible under all considered scenarios; such a solution may be difficult to find and may not even exist. Recovery actions afterwards are not possible, which means that the found solution may incur high costs that are not representative for most scenarios.

Recoverable robustness [5] combines these two methods: we have to find a solution which is feasible for all scenarios, but we are allowed to change our solution with a restricted recovery algorithm. In the initial stage we find a solution which we use for all scenarios, and in the recovery stage we apply our recovery algorithm. The recovery algorithms fall into two groups: the first group limits the actions of the recovery, and the second the computational power of the recovery. Suppose we have the following optimization problem, where x are the decision variables, f is the objective function and X is the set of feasible solutions:

\min \{ f(x) \mid x \in X \}

We model disturbances by the set of scenarios S, where Y^s is the set of feasible solutions for scenario s. We denote the decision variables for scenario s by y^s. The set of restricted recovery algorithms is denoted by \mathcal{A}, where A(x, s) ∈ \mathcal{A} computes a feasible solution y^s from a given initial solution x in case of scenario s.

The recoverable robust optimization problem is now:

\min f(x) + \sum_{s \in S} g(y^s, s)
\text{s.t.} \quad x \in X
A \in \mathcal{A}
y^s = A(x, s) \quad \forall s \in S

Here \sum_{s \in S} g(y^s, s) denotes the cost associated with the recovery variables y^s. Depending on the problem we can choose g differently: if we are only interested in a feasible solution, we can choose the null function; we can define g as the revenue of the worst scenario, which equals optimizing the worst case; or we can use the probability-weighted revenue of the scenarios and thus optimize the expected revenue.

1.5 Decomposition Models for Recoverable Robustness

In most cases the set Y^s is far too large, and most solutions in that set will be ignored. Therefore we would like to combine column generation with recoverable robustness. Bouman [2] introduced two decomposition models in his master thesis which make this possible. He first considers a modification of the recoverable robust optimization problem, in which the problem is restricted to one single admissible recovery algorithm. In addition, two families of feasible solution sets are defined: the feasible scenario solution sets,

\forall s \in S : Y^s = \{ y^s \mid \exists x \in X : A(x, s) = y^s \}

and the feasible recovery sets,

\forall s \in S : R^s = \{ (x, y^s) \mid A(x, s) = y^s \}

This gives the Feasible Region Recoverable Robust Minimization Problem (FRRRP_min):

\min f(x) + \sum_{s \in S} g(y^s, s)
\text{s.t.} \quad x \in X
\forall s \in S : y^s \in Y^s
\forall s \in S : (x, y^s) \in R^s

When we enumerate the full set X and each set Y^s and take the best combination that is in R^s, we get the best solution. The problem is that the number of combinations, which is |X| \cdot \prod_{s \in S} |Y^s|, is generally very large. We therefore only want to look at subsets of X and Y^s and find the best solution with a column generation approach. These subsets we define as X' ⊆ X and, for all s ∈ S, Y'^s ⊆ Y^s.

For the separate recovery decomposition model, we generate columns for the set X' and for each set Y'^s separately, with the help of the pricing problems. The model used for the separate model is:

\min f(x) + \sum_{s \in S} g(y^s, s)
\text{s.t.} \quad x \in X'
\forall s \in S : y^s \in Y'^s
\forall s \in S : (x, y^s) \in R^s

The only difference with the FRRRP_min is that the sets X and Y^s are replaced by X' and Y'^s. This model has a master problem, which is solved; then one or more columns are added by the pricing problems, and the master problem is re-solved. We keep generating columns with the pricing problems until the solution can no longer be improved. Assuming that the pricing problem is defined in such a way that it can find the optimal solution, we have then found an optimal solution.

The combined model uses a different way to decompose the FRRRP_min into subproblems: the recovery is directly included in the columns, so the recovery moves into the subproblems as well. For this model a restricted recovery set R'^s ⊆ R^s is used for each s ∈ S, and the following model is defined:

\min f(x) + \sum_{s \in S} g(y^s, s)
\text{s.t.} \quad \forall s \in S : (x^s, y^s) \in R'^s
\forall s \in S : x = x^s

Notice that although every column combines an initial solution with a recovery solution, all initial solutions x^s need to be the same for all scenarios. We now have one pricing problem per scenario, combining the initial problem with that scenario, in which we have to find combinations that improve the master problem. If an algorithm can be found that constructs both an initial and a recovery solution, we can use it for the pricing problem. A possible drawback of this framework is that there are similarities between the scenarios, which may lead to redundant work when solving the pricing problems.

1.6 Dynamic Programming

Dynamic programming is an approach for optimization problems which combines solutions of subproblems to reach an overall solution. We can use this approach for all problems where Bellman's principle of optimality is applicable, that is, where an optimal solution can be composed of optimal solutions of subproblems. The problem is then often stated as a recursive formulation. We demonstrate this with the knapsack problem. We define D(j, w) as the best value of a knapsack filling that is a subset of the items {1, 2, ..., j} with total weight exactly w; every D(j, w) is thus a subsolution which we use to find the final solution value. For the knapsack problem we can express the following recursive formulation:

D(j, 0) = 0 \quad \text{for } j \in \{1, 2, \dots, n\}
D(0, w) = -\infty \quad \text{for } w > 0
D(j, w) = \max \{ D(j-1, w),\; D(j-1, w - a_j) + c_j \}

With this recursion we calculate the solution value of the problem. If we want to know how this solution is built up, we should store our decisions in an array; this is called memoization, which is an important part of dynamic programming. For the knapsack problem we can use an array C(j, w), which is 1 if we included item j in the knapsack and 0 otherwise. Backtracking can then be done with:

Algorithm 1 Find the solution by backtracking through the array
  w ← B
  for j = n downto 1 do
    if C(j, w) = 1 then
      Put item j in the knapsack
      w ← w − a_j
    else
      Do not take item j with you.
    end if
  end for

With dynamic programming, weakly NP-complete problems such as the knapsack problem can be solved in pseudo-polynomial time; in the knapsack case the complexity is O(nB), which is not polynomial, since B is not polynomial in the length of the input of the problem.
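For concreteness, here is a small runnable version (our own sketch, not the thesis implementation) of this dynamic program together with the backtracking of Algorithm 1:

    # Knapsack DP with decision array C and backtracking; returns the best
    # value and the list of chosen items (1-based, as in Algorithm 1).
    def knapsack_dp(c, a, B):
        n = len(c)
        NEG = float("-inf")
        # D[j][w]: best value over subsets of items 1..j with weight exactly w
        D = [[NEG] * (B + 1) for _ in range(n + 1)]
        C = [[0] * (B + 1) for _ in range(n + 1)]  # C[j][w] = 1 if item j taken
        for j in range(n + 1):
            D[j][0] = 0
        for j in range(1, n + 1):
            for w in range(1, B + 1):
                D[j][w] = D[j - 1][w]
                if w >= a[j - 1] and D[j - 1][w - a[j - 1]] + c[j - 1] > D[j][w]:
                    D[j][w] = D[j - 1][w - a[j - 1]] + c[j - 1]
                    C[j][w] = 1
        w = max(range(B + 1), key=lambda v: D[n][v])  # best reachable weight
        value, solution = D[n][w], []
        for j in range(n, 0, -1):                     # Algorithm 1: backtrack
            if C[j][w] == 1:
                solution.append(j)
                w -= a[j - 1]
        return value, solution

    print(knapsack_dp([10, 7, 5, 3], [4, 3, 2, 1], 6))  # (15, [3, 1])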

Chapter 2
The Size Robust Multiple Knapsack Problem

We start with a summary of the previous research done on the separate and combined recovery decomposition models, followed by the research on the standard multiple knapsack problem. Then we explain the size robust multiple knapsack problem and model it according to the combined and separate recovery decompositions. We end with a theoretical comparison between the two models.

2.1 Recoverable Robust Problems

2.1.1 Size Robust Knapsack Problem

Bouman [2] used the separate and combined decomposition models in his thesis for the size robust knapsack problem. The size robust knapsack problem is an adaptation of the regular knapsack problem: as in the normal knapsack problem we are given n items, where item j has revenue c_j and weight a_j, the knapsack size is b, and every item can be selected only once. In the size robust knapsack problem, however, the knapsack size is subject to uncertainty: the knapsack size b can decrease in a scenario, and thus ∀ s ∈ S : b^s < b. It is assumed that the knapsack keeps its original size with probability p_0 and that scenario s occurs with probability p_s. Bouman studied the situation in which recovery has to be performed by removing items, and solved instances of this problem with different kinds of algorithms, among which:

- Branch and Price on the separate recovery decomposition (12 configurations)
- Branch and Price on the combined recovery decomposition (12 configurations)
- Branch and Bound (12 configurations)
- Exact Dynamic Programming
- Different Local Search algorithms

The 12 configurations are all combinations of sorting the items by ascending or descending ratio, revenue or weight, where in the branch and price or branch and bound tree an item is first either included or excluded. The separate decomposition model and some of the local search algorithms performed well. The exact dynamic programming performed very badly, while the combined model had a poor performance; the performance of the Branch and Bound was in between that of the separate and the combined model. We expect that the separate recovery decomposition model performed better than the combined model for this problem because the separate model has an easier pricing problem: the pricing problem of the separate model is a simple knapsack problem, while the pricing problem of the combined model is a double knapsack problem, in which both an initial and a recovery filling have to be found.

2.1.2 Demand Robust Shortest Path Problem

The demand robust shortest path problem is an extension of the shortest path problem. The goal is to find the shortest or cheapest path from the source to the sink, given a graph G(V, E). This graph has edges e ∈ E with costs c_e and a source v_source ∈ V. The location of the sink is unknown; therefore we define multiple scenarios s ∈ S. Each scenario defines a sink v^s_sink and contains a factor f^s > 1 by which the costs of the recovery edges are scaled. The objective is to find the solution with the lowest cost in the worst-case scenario.

The combined recovery model for this problem was introduced in [2], and extensive research was done in my experimentation project [18]. Because we use the combined recovery decomposition, a pricing problem is solved for each scenario. In every iteration we can choose which pricing problems we solve. We called these different choices methods and gave them the following names:

- Interleaved: solves the pricing problems in ascending order of their scenario index. If the reduced cost of the pricing problem is negative, the column is added and the master problem is re-solved. When we are unable to add a column for any scenario, the column generation is stopped and we have an optimal solution.
- Best: solves all pricing problems in every iteration, but only adds the column with the lowest reduced cost to the master problem. If the lowest reduced cost is positive, the column generation is stopped and we have an optimal solution.
- All: solves the pricing problems for all scenarios and adds all columns with negative reduced cost to the master problem. When there is no column with a negative reduced cost, the column generation is stopped and we have an optimal solution.

We tested the methods for the LP-relaxation, and the All method performed best. Additionally, we developed a speed-up where, for every column with negative reduced cost found by the pricing problems, we generate the best corresponding columns for the other scenarios. This can be done with Dijkstra's algorithm, as we only have to find the shortest path from source to sink and use the edges of that path which are not in the initial solution as recovery; a sketch is given below. In combination with this speed-up, the Best method was the fastest, and the solution time decreased by a factor of 1.3 up to almost 27 for the relatively small graphs we tested; higher speed-ups are expected for larger graphs.

The integer linear problem we solved with Branch and Price, where we used the LP-relaxation as lower bound. We tested some Branch and Price configurations and decided to use best first for our experiments, where we first branch on the edges whose fractional value is closest to 0.5.
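The thesis only states that the extra columns can be generated with Dijkstra's algorithm; the following sketch (our own reconstruction, with assumed data structures) shows one way to do this for a scenario s: edges already bought in the initial solution become free, all other edges cost f^s · c_e, and the recovery column consists of the path edges outside the initial solution.

    import heapq

    # edges: list of (u, v, cost); initial_edges: set of (u, v) tuples.
    def best_recovery_column(edges, source, sink_s, f_s, initial_edges):
        def bought(e):
            return e in initial_edges or (e[1], e[0]) in initial_edges
        adj = {}
        for u, v, cost in edges:   # scenario costs: bought edges are free
            w = 0.0 if bought((u, v)) else f_s * cost
            adj.setdefault(u, []).append((v, w))
            adj.setdefault(v, []).append((u, w))
        # Standard Dijkstra from the source to the scenario sink.
        dist, prev = {source: 0.0}, {}
        heap = [(0.0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if u == sink_s:
                break
            if d > dist.get(u, float("inf")):
                continue
            for v, w in adj.get(u, []):
                if d + w < dist.get(v, float("inf")):
                    dist[v] = d + w
                    prev[v] = u
                    heapq.heappush(heap, (d + w, v))
        # Collect the path edges that are not part of the initial solution.
        recovery, node = [], sink_s
        while node != source:
            u = prev[node]
            if not bought((u, node)):
                recovery.append((u, node))
            node = u
        return recovery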

The speed-up has a much stronger effect there than for the LP-relaxation and decreases the solution time of the instances by a factor of 3 up to almost 300. Keeping exactly the same graph but changing the costs of the edges could result in solution time differences of a factor of 100; doing the same for the scaling factor f^s has a less drastic effect, but we still see differences of a factor of 30. There is a clear boundary on the problem size: beyond a certain number of edges the time to solve the problem grows to days, and we run into memory problems. To get a better idea of the influence of the edges and scenarios, we used a very large test set of instances in which we varied the number of edges and scenarios. We came to the conclusion that the influence of the number of edges is exponential, and that the scenarios have a polynomial influence, which we can approximate by a power of |S|.

2.2 The Multiple Knapsack Problem

The multiple knapsack problem is an extension of the regular knapsack problem: instead of one knapsack we have m > 1 knapsacks, and instead of finding the best filling of one knapsack from the n items we have to find the best filling of m knapsacks. Every knapsack i has a size b_i > 0, and every item has a revenue c_j > 0 and a weight a_j > 0. This gives us the following model:

\max \sum_{i=1}^{m} \sum_{j=1}^{n} c_j x_{ij}
\text{s.t.} \quad \sum_{i=1}^{m} x_{ij} \leq 1 \quad \forall j \in \{1, \dots, n\} \quad (2.1)
\sum_{j=1}^{n} a_j x_{ij} \leq b_i \quad \forall i \in \{1, \dots, m\} \quad (2.2)
x_{ij} \in \{0, 1\} \quad \forall i \in \{1, \dots, m\}, j \in \{1, \dots, n\} \quad (2.3)

The multiple knapsack problem is NP-hard in the strong sense [11], which means that, unless P = NP, there is no pseudo-polynomial time algorithm that solves it. In the literature this problem has been studied extensively. To find an exact solution, a branch and bound or bound and bound algorithm is generally used. Hung and Fisk [9] used a branch and bound algorithm with a depth-first strategy. They branch at each level by assigning an item to each knapsack in turn or excluding it from all knapsacks; the branching factor of the tree is therefore m + 1. As relaxations they used the Lagrangian and the surrogate relaxation. For the Lagrangian relaxation they branch on the item that has been inserted into the most knapsacks in the relaxation; for the surrogate relaxation the knapsacks are sorted by size and the largest knapsack is branched first.

Martello and Toth [12] proposed a bound and bound algorithm. The upper bounds in this algorithm are found with the surrogate relaxation, while the lower bounds are found by solving m individual knapsack problems.

Furthermore, they use the greedy solution to guide the branching process and split into two nodes, one assigning the next item of the greedy solution to its knapsack and the other excluding it. Their extension is used when U > L: in a normal branch and bound such a node is never pruned, but bound and bound uses a heuristic to validate this bound, which attempts to prove whether the upper bound can be achieved somewhere in the current subtree. If so, we have found the value of the optimal subsolution under the current node, we can backtrack, and L gets the value of U.

Martello and Toth [13] and Pisinger [15] discuss relaxations and upper bounds in more detail. They state that the surrogate or Lagrangian relaxation is generally used for the upper bound in any kind of branch and bound tree for the multiple knapsack problem. It is possible to find the linear relaxation of the multiple knapsack problem in O(n) time, which is considerably faster than the previous upper bounds; however, computational experiments have shown that the linear relaxation is generally too weak to cut off branches. Pisinger [15] uses a branch and bound algorithm with the surrogate relaxation as upper bound, and as lower bound the solution found by splitting the surrogate solution over the m knapsacks and solving a series of subset-sum problems. It generally has good results and solves some instances with very many items in less than a second.

A bin-oriented branch and bound algorithm was introduced by Fukunaga and Korf in [1] and [7]. It integrates the same bound-and-bound mechanisms as Pisinger [15] with a bin-oriented approach, and it further uses path symmetry and path dominance for pruning nodes. This algorithm outperforms the previous algorithms for low n/m ratios and is competitive for higher ratios.

Instances of the multiple knapsack problem can be made more difficult by increasing the number of items, the number of knapsacks, or the numerical precision. Moreover, [7] identified that instances with n/m slightly above two are the most difficult, and that instances with high n/m are relatively easy for bound-and-bound based multiple knapsack solvers. Furthermore, we can generate more difficult instance classes by generating the items and the weights in different ways [16]; see the next chapter for more detail about generating difficult instances.

Another interesting question is whether we can use reduction algorithms to reduce the size of the multiple knapsack problem. Ingargiola and Korsh [10] designed a reduction which works on dominance: item A is dominated by item B when for any solution which includes A but excludes B, there is a better solution that excludes A but includes B. This algorithm works in exponential time, but it can be useful for very difficult problems.

2.3 Models for the Size Robust Multiple Knapsack Problem

The size robust multiple knapsack problem is an adaptation of the standard multiple knapsack problem. As in the normal multiple knapsack problem, we are given m knapsacks, each with size b_i, and n items, where item j has revenue c_j and weight a_j and can be selected only once. However, the sizes of the knapsacks are subject to uncertainty: in every scenario a few or all knapsack sizes decrease, and thus ∀ s ∈ S : b^s_i ≤ b_i. All knapsack sizes remain the same with probability p_0, and scenario s occurs with probability p_s. We only allow recovery by removing items.
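To make the objective concrete, here is a tiny hypothetical example (the numbers are ours, not from the thesis) of the expected value that we maximize: the revenue of the initial solution weighted by p_0, plus the revenue remaining after recovery in each scenario weighted by p_s.

    # Expected value of a solution for a made-up single-scenario instance.
    c = {1: 10, 2: 7, 3: 5}           # item revenues
    p0 = 0.6                           # probability that all sizes stay the same
    p = {"s1": 0.4}                    # scenario probabilities (p0 + sum = 1)
    initial = {1, 2, 3}                # items packed over all knapsacks
    kept = {"s1": {1, 3}}              # recovery in s1: item 2 is removed

    expected = p0 * sum(c[j] for j in initial) \
             + sum(p[s] * sum(c[j] for j in kept[s]) for s in p)
    print(expected)                    # 0.6*22 + 0.4*15 = 19.2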

2.3.1 Separate Recovery Decomposition Model

We explained the basics behind the separate recovery decomposition model in Chapter 1. This model decomposes the problem in such a way that we can use column generation: for the initial situation and for every scenario we have a separate optimization problem with which we can generate columns. The model for the size robust multiple knapsack problem is an extension of the separate model for the size robust knapsack problem [2]; the difference is that instead of one knapsack filling for either the initial situation or a scenario, we seek a multiple knapsack filling. This makes the pricing problem difficult, because the multiple knapsack problem is NP-hard in the strong sense [11], which means that we cannot use a pseudo-polynomial time algorithm to find the answer. Therefore we apply one more decomposition step, which splits the multiple knapsack problem into different knapsack problems that we solve separately.

We define K(b_i) as the set of feasible knapsack fillings with size at most b_i. The revenue of filling k of initial knapsack i is denoted by C_{ik} = \sum_{j \in k} c_j; in the same way we denote the revenue in recovery situation s by C^s_{ik} = \sum_{j \in k} c_j for k ∈ K(b^s_i). We define two types of decision variables:

x_{ik} = 1 if filling k ∈ K(b_i) is selected for knapsack i, and 0 otherwise
y^s_{ik} = 1 if filling k ∈ K(b^s_i) is selected for knapsack i in scenario s, and 0 otherwise

The coefficients indicating the chosen items are defined as follows:

a_{ijk} = 1 if item j is in filling k ∈ K(b_i) of knapsack i, and 0 otherwise
a^s_{ijk} = 1 if item j is in filling k ∈ K(b^s_i) of knapsack i, and 0 otherwise

The separate recovery decomposition model for the size robust multiple knapsack problem is now:

\max p_0 \sum_{i=1}^{m} \sum_{k \in K(b_i)} C_{ik} x_{ik} + \sum_{s \in S} p_s \sum_{i=1}^{m} \sum_{k \in K(b^s_i)} C^s_{ik} y^s_{ik}
\text{s.t.} \quad \sum_{k \in K(b_i)} x_{ik} = 1 \quad \forall i \in \{1, \dots, m\} \quad (2.4)
\sum_{k \in K(b^s_i)} y^s_{ik} = 1 \quad \forall i \in \{1, \dots, m\}, s \in S \quad (2.5)
\sum_{k \in K(b_i)} a_{ijk} x_{ik} - \sum_{k \in K(b^s_i)} a^s_{ijk} y^s_{ik} \geq 0 \quad \forall i \in \{1, \dots, m\}, j \in \{1, \dots, n\}, s \in S \quad (2.6)
\sum_{i=1}^{m} \sum_{k \in K(b_i)} a_{ijk} x_{ik} \leq 1 \quad \forall j \in \{1, \dots, n\} \quad (2.7)
x_{ik} \in \{0, 1\} \quad \forall i \in \{1, \dots, m\}, k \in K(b_i) \quad (2.8)
y^s_{ik} \in \{0, 1\} \quad \forall i \in \{1, \dots, m\}, s \in S, k \in K(b^s_i) \quad (2.9)

Constraint (2.4) ensures that exactly one filling is selected for every knapsack in the original situation, and constraint (2.5) that exactly one knapsack filling is selected for every recovery situation. Constraint (2.6) ensures that recovery is done by removing items, and constraint (2.7) makes sure that every item is in at most one selected initial knapsack. We relax the integrality constraints (2.8) and (2.9) and use Branch and Price to find the integral solution. The value of the maximization increases if and only if the reduced cost of an entering column is positive. Let λ_i, µ_{is}, π_{ijs} and ρ_j be the dual variables of constraints (2.4), (2.5), (2.6) and (2.7). The reduced cost of an initial column, c_red(x_{ik}), can now be given by:

c_{red}(x_{ik}) = p_0 \sum_{j \in k} c_j - \lambda_i - \sum_{s \in S} \sum_{j=1}^{n} a_{ijk} \pi_{ijs} - \sum_{j=1}^{n} a_{ijk} \rho_j = \sum_{j=1}^{n} a_{ijk} \left( p_0 c_j - \sum_{s \in S} \pi_{ijs} - \rho_j \right) - \lambda_i

This gives as pricing problem a knapsack problem where the revenue of item j equals p_0 c_j − \sum_{s \in S} π_{ijs} − ρ_j. Similarly, the reduced costs of the recovery columns are given by:

c_{red}(y^s_{ik}) = p_s \sum_{j \in k} c_j - \mu_{is} + \sum_{j=1}^{n} a^s_{ijk} \pi_{ijs} = \sum_{j=1}^{n} a^s_{ijk} (p_s c_j + \pi_{ijs}) - \mu_{is}

The recovery columns also give a knapsack problem as pricing problem, but now the revenue of item j equals p_s c_j + π_{ijs}.

Let r_j denote the revenue of item j in the pricing problem, and let D(j, w) be the best value of a knapsack filling for one of the pricing problems, where the items are a subset of {1, 2, ..., j} and the sum of the weights of the chosen items equals w. Let n' be the number of items left after preprocessing, which consists of removing all items with r_j < 0 or a_j > b_i; after preprocessing we renumber the items such that every item has a different index between 1 and n'. We define the following recursion to find the best knapsack filling:

D(j, 0) = 0 \quad \text{for } j \in \{1, 2, \dots, n'\}
D(0, w) = -\infty \quad \text{for } w > 0
D(j, w) = \max \{ D(j-1, w),\; D(j-1, w - a_j) + r_j \}

We only need to decide between the two options for item j when w − a_j ≥ 0; when that is not the case, we only have to calculate D(j−1, w). In the worst case the recursion needs O(2^{n'}) time, but there is also an iterative algorithm, which always computes at least as many entries as the recursive algorithm, with complexity O(n' b_i). We can therefore say that the complexity of this algorithm is O(min(n' b_i, 2^{n'})).
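As an illustration of how these pieces fit together, the sketch below (our own code; the dual layouts lam, pi, rho and the helper name are assumptions, and knapsack_dp is the dynamic program sketched in Section 1.6) prices an initial column for knapsack i:

    # Pricing an initial column for knapsack i in the separate model.
    # Duals: lam[i] for (2.4), pi[i][j][s] for (2.6), rho[j] for (2.7).
    def price_initial_column(i, c, a, b, p0, S, lam, pi, rho):
        n = len(c)
        # Adjusted revenue of item j: p0*c_j - sum_s pi_ijs - rho_j
        r = [p0 * c[j] - sum(pi[i][j][s] for s in S) - rho[j] for j in range(n)]
        # Preprocessing: drop items with negative revenue or that do not fit.
        keep = [j for j in range(n) if r[j] >= 0 and a[j] <= b[i]]
        value, chosen = knapsack_dp([r[j] for j in keep],
                                    [a[j] for j in keep], b[i])
        reduced_cost = value - lam[i]
        if reduced_cost > 0:   # improving column: add it to the master problem
            return [keep[t - 1] for t in chosen], reduced_cost  # t is 1-based
        return None

The recovery columns are priced in the same way, with revenues p_s c_j + π_{ijs}, capacity b^s_i, and the dual µ_{is} in place of λ_i.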


More information

Branch-price-and-cut for vehicle routing. Guy Desaulniers

Branch-price-and-cut for vehicle routing. Guy Desaulniers Guy Desaulniers Professor, Polytechnique Montréal, Canada Director, GERAD, Canada VeRoLog PhD School 2018 Cagliari, Italy, June 2, 2018 Outline 1 VRPTW definition 2 Mathematical formulations Arc-flow formulation

More information

56:272 Integer Programming & Network Flows Final Examination -- December 14, 1998

56:272 Integer Programming & Network Flows Final Examination -- December 14, 1998 56:272 Integer Programming & Network Flows Final Examination -- December 14, 1998 Part A: Answer any four of the five problems. (15 points each) 1. Transportation problem 2. Integer LP Model Formulation

More information

LP-Modelling. dr.ir. C.A.J. Hurkens Technische Universiteit Eindhoven. January 30, 2008

LP-Modelling. dr.ir. C.A.J. Hurkens Technische Universiteit Eindhoven. January 30, 2008 LP-Modelling dr.ir. C.A.J. Hurkens Technische Universiteit Eindhoven January 30, 2008 1 Linear and Integer Programming After a brief check with the backgrounds of the participants it seems that the following

More information

CLASS: II YEAR / IV SEMESTER CSE CS 6402-DESIGN AND ANALYSIS OF ALGORITHM UNIT I INTRODUCTION

CLASS: II YEAR / IV SEMESTER CSE CS 6402-DESIGN AND ANALYSIS OF ALGORITHM UNIT I INTRODUCTION CLASS: II YEAR / IV SEMESTER CSE CS 6402-DESIGN AND ANALYSIS OF ALGORITHM UNIT I INTRODUCTION 1. What is performance measurement? 2. What is an algorithm? 3. How the algorithm is good? 4. What are the

More information

Introduction to Mathematical Programming IE496. Final Review. Dr. Ted Ralphs

Introduction to Mathematical Programming IE496. Final Review. Dr. Ted Ralphs Introduction to Mathematical Programming IE496 Final Review Dr. Ted Ralphs IE496 Final Review 1 Course Wrap-up: Chapter 2 In the introduction, we discussed the general framework of mathematical modeling

More information

Surrogate Gradient Algorithm for Lagrangian Relaxation 1,2

Surrogate Gradient Algorithm for Lagrangian Relaxation 1,2 Surrogate Gradient Algorithm for Lagrangian Relaxation 1,2 X. Zhao 3, P. B. Luh 4, and J. Wang 5 Communicated by W.B. Gong and D. D. Yao 1 This paper is dedicated to Professor Yu-Chi Ho for his 65th birthday.

More information

Framework for Design of Dynamic Programming Algorithms

Framework for Design of Dynamic Programming Algorithms CSE 441T/541T Advanced Algorithms September 22, 2010 Framework for Design of Dynamic Programming Algorithms Dynamic programming algorithms for combinatorial optimization generalize the strategy we studied

More information

Linear Programming. Course review MS-E2140. v. 1.1

Linear Programming. Course review MS-E2140. v. 1.1 Linear Programming MS-E2140 Course review v. 1.1 Course structure Modeling techniques Linear programming theory and the Simplex method Duality theory Dual Simplex algorithm and sensitivity analysis Integer

More information

Decomposition approaches for recoverable robust optimization problems

Decomposition approaches for recoverable robust optimization problems Decomposition approaches for recoverable robust optimization problems J.M. van den Akker P.C. Bouman J.A. Hoogeveen D.D. Tönissen Technical Report September 2014 Department of

More information

CS 580: Algorithm Design and Analysis. Jeremiah Blocki Purdue University Spring 2018

CS 580: Algorithm Design and Analysis. Jeremiah Blocki Purdue University Spring 2018 CS 580: Algorithm Design and Analysis Jeremiah Blocki Purdue University Spring 2018 Chapter 11 Approximation Algorithms Slides by Kevin Wayne. Copyright @ 2005 Pearson-Addison Wesley. All rights reserved.

More information

Coping with the Limitations of Algorithm Power Exact Solution Strategies Backtracking Backtracking : A Scenario

Coping with the Limitations of Algorithm Power Exact Solution Strategies Backtracking Backtracking : A Scenario Coping with the Limitations of Algorithm Power Tackling Difficult Combinatorial Problems There are two principal approaches to tackling difficult combinatorial problems (NP-hard problems): Use a strategy

More information

A Row-and-Column Generation Method to a Batch Machine Scheduling Problem

A Row-and-Column Generation Method to a Batch Machine Scheduling Problem The Ninth International Symposium on Operations Research and Its Applications (ISORA 10) Chengdu-Jiuzhaigou, China, August 19 23, 2010 Copyright 2010 ORSC & APORC, pp. 301 308 A Row-and-Column Generation

More information

A Branch-and-Bound Algorithm for the Knapsack Problem with Conflict Graph

A Branch-and-Bound Algorithm for the Knapsack Problem with Conflict Graph A Branch-and-Bound Algorithm for the Knapsack Problem with Conflict Graph Andrea Bettinelli, Valentina Cacchiani, Enrico Malaguti DEI, Università di Bologna, Viale Risorgimento 2, 40136 Bologna, Italy

More information

EXERCISES SHORTEST PATHS: APPLICATIONS, OPTIMIZATION, VARIATIONS, AND SOLVING THE CONSTRAINED SHORTEST PATH PROBLEM. 1 Applications and Modelling

EXERCISES SHORTEST PATHS: APPLICATIONS, OPTIMIZATION, VARIATIONS, AND SOLVING THE CONSTRAINED SHORTEST PATH PROBLEM. 1 Applications and Modelling SHORTEST PATHS: APPLICATIONS, OPTIMIZATION, VARIATIONS, AND SOLVING THE CONSTRAINED SHORTEST PATH PROBLEM EXERCISES Prepared by Natashia Boland 1 and Irina Dumitrescu 2 1 Applications and Modelling 1.1

More information

Lecture 14: Linear Programming II

Lecture 14: Linear Programming II A Theorist s Toolkit (CMU 18-859T, Fall 013) Lecture 14: Linear Programming II October 3, 013 Lecturer: Ryan O Donnell Scribe: Stylianos Despotakis 1 Introduction At a big conference in Wisconsin in 1948

More information

Department of Computer Applications. MCA 312: Design and Analysis of Algorithms. [Part I : Medium Answer Type Questions] UNIT I

Department of Computer Applications. MCA 312: Design and Analysis of Algorithms. [Part I : Medium Answer Type Questions] UNIT I MCA 312: Design and Analysis of Algorithms [Part I : Medium Answer Type Questions] UNIT I 1) What is an Algorithm? What is the need to study Algorithms? 2) Define: a) Time Efficiency b) Space Efficiency

More information

Towards a Memory-Efficient Knapsack DP Algorithm

Towards a Memory-Efficient Knapsack DP Algorithm Towards a Memory-Efficient Knapsack DP Algorithm Sanjay Rajopadhye The 0/1 knapsack problem (0/1KP) is a classic problem that arises in computer science. The Wikipedia entry http://en.wikipedia.org/wiki/knapsack_problem

More information

GENERAL ASSIGNMENT PROBLEM via Branch and Price JOHN AND LEI

GENERAL ASSIGNMENT PROBLEM via Branch and Price JOHN AND LEI GENERAL ASSIGNMENT PROBLEM via Branch and Price JOHN AND LEI Outline Review the column generation in Generalized Assignment Problem (GAP) GAP Examples in Branch and Price 2 Assignment Problem The assignment

More information

Backtracking. Chapter 5

Backtracking. Chapter 5 1 Backtracking Chapter 5 2 Objectives Describe the backtrack programming technique Determine when the backtracking technique is an appropriate approach to solving a problem Define a state space tree for

More information

Integer Programming. Xi Chen. Department of Management Science and Engineering International Business School Beijing Foreign Studies University

Integer Programming. Xi Chen. Department of Management Science and Engineering International Business School Beijing Foreign Studies University Integer Programming Xi Chen Department of Management Science and Engineering International Business School Beijing Foreign Studies University Xi Chen (chenxi0109@bfsu.edu.cn) Integer Programming 1 / 42

More information

6.854 Advanced Algorithms. Scribes: Jay Kumar Sundararajan. Duality

6.854 Advanced Algorithms. Scribes: Jay Kumar Sundararajan. Duality 6.854 Advanced Algorithms Scribes: Jay Kumar Sundararajan Lecturer: David Karger Duality This lecture covers weak and strong duality, and also explains the rules for finding the dual of a linear program,

More information

An Extension of the Multicut L-Shaped Method. INEN Large-Scale Stochastic Optimization Semester project. Svyatoslav Trukhanov

An Extension of the Multicut L-Shaped Method. INEN Large-Scale Stochastic Optimization Semester project. Svyatoslav Trukhanov An Extension of the Multicut L-Shaped Method INEN 698 - Large-Scale Stochastic Optimization Semester project Svyatoslav Trukhanov December 13, 2005 1 Contents 1 Introduction and Literature Review 3 2 Formal

More information

Local search heuristic for multiple knapsack problem

Local search heuristic for multiple knapsack problem International Journal of Intelligent Information Systems 2015; 4(2): 35-39 Published online February 14, 2015 (http://www.sciencepublishinggroup.com/j/ijiis) doi: 10.11648/j.ijiis.20150402.11 ISSN: 2328-7675

More information

Modeling and Solving Location Routing and Scheduling Problems

Modeling and Solving Location Routing and Scheduling Problems Modeling and Solving Location Routing and Scheduling Problems Z. Akca R.T. Berger T.K Ralphs October 13, 2008 Abstract This paper studies location routing and scheduling problems, a class of problems in

More information

11. APPROXIMATION ALGORITHMS

11. APPROXIMATION ALGORITHMS 11. APPROXIMATION ALGORITHMS load balancing center selection pricing method: vertex cover LP rounding: vertex cover generalized load balancing knapsack problem Lecture slides by Kevin Wayne Copyright 2005

More information

Parallel Auction Algorithm for Linear Assignment Problem

Parallel Auction Algorithm for Linear Assignment Problem Parallel Auction Algorithm for Linear Assignment Problem Xin Jin 1 Introduction The (linear) assignment problem is one of classic combinatorial optimization problems, first appearing in the studies on

More information

An Introduction to Dual Ascent Heuristics

An Introduction to Dual Ascent Heuristics An Introduction to Dual Ascent Heuristics Introduction A substantial proportion of Combinatorial Optimisation Problems (COPs) are essentially pure or mixed integer linear programming. COPs are in general

More information

Column Generation Method for an Agent Scheduling Problem

Column Generation Method for an Agent Scheduling Problem Column Generation Method for an Agent Scheduling Problem Balázs Dezső Alpár Jüttner Péter Kovács Dept. of Algorithms and Their Applications, and Dept. of Operations Research Eötvös Loránd University, Budapest,

More information

On the Robustness of Distributed Computing Networks

On the Robustness of Distributed Computing Networks 1 On the Robustness of Distributed Computing Networks Jianan Zhang, Hyang-Won Lee, and Eytan Modiano Lab for Information and Decision Systems, Massachusetts Institute of Technology, USA Dept. of Software,

More information

2 The Service Provision Problem The formulation given here can also be found in Tomasgard et al. [6]. That paper also details the background of the mo

2 The Service Provision Problem The formulation given here can also be found in Tomasgard et al. [6]. That paper also details the background of the mo Two-Stage Service Provision by Branch and Bound Shane Dye Department ofmanagement University of Canterbury Christchurch, New Zealand s.dye@mang.canterbury.ac.nz Asgeir Tomasgard SINTEF, Trondheim, Norway

More information

College of Computer & Information Science Fall 2007 Northeastern University 14 September 2007

College of Computer & Information Science Fall 2007 Northeastern University 14 September 2007 College of Computer & Information Science Fall 2007 Northeastern University 14 September 2007 CS G399: Algorithmic Power Tools I Scribe: Eric Robinson Lecture Outline: Linear Programming: Vertex Definitions

More information

FINAL EXAM SOLUTIONS

FINAL EXAM SOLUTIONS COMP/MATH 3804 Design and Analysis of Algorithms I Fall 2015 FINAL EXAM SOLUTIONS Question 1 (12%). Modify Euclid s algorithm as follows. function Newclid(a,b) if a

More information

Conflict Graphs for Combinatorial Optimization Problems

Conflict Graphs for Combinatorial Optimization Problems Conflict Graphs for Combinatorial Optimization Problems Ulrich Pferschy joint work with Andreas Darmann and Joachim Schauer University of Graz, Austria Introduction Combinatorial Optimization Problem CO

More information

On the Robustness of Distributed Computing Networks

On the Robustness of Distributed Computing Networks 1 On the Robustness of Distributed Computing Networks Jianan Zhang, Hyang-Won Lee, and Eytan Modiano Lab for Information and Decision Systems, Massachusetts Institute of Technology, USA Dept. of Software,

More information

Search Algorithms. IE 496 Lecture 17

Search Algorithms. IE 496 Lecture 17 Search Algorithms IE 496 Lecture 17 Reading for This Lecture Primary Horowitz and Sahni, Chapter 8 Basic Search Algorithms Search Algorithms Search algorithms are fundamental techniques applied to solve

More information

56:272 Integer Programming & Network Flows Final Exam -- December 16, 1997

56:272 Integer Programming & Network Flows Final Exam -- December 16, 1997 56:272 Integer Programming & Network Flows Final Exam -- December 16, 1997 Answer #1 and any five of the remaining six problems! possible score 1. Multiple Choice 25 2. Traveling Salesman Problem 15 3.

More information

4 Integer Linear Programming (ILP)

4 Integer Linear Programming (ILP) TDA6/DIT37 DISCRETE OPTIMIZATION 17 PERIOD 3 WEEK III 4 Integer Linear Programg (ILP) 14 An integer linear program, ILP for short, has the same form as a linear program (LP). The only difference is that

More information

General Methods and Search Algorithms

General Methods and Search Algorithms DM811 HEURISTICS AND LOCAL SEARCH ALGORITHMS FOR COMBINATORIAL OPTIMZATION Lecture 3 General Methods and Search Algorithms Marco Chiarandini 2 Methods and Algorithms A Method is a general framework for

More information

Combinatorial Optimization

Combinatorial Optimization Combinatorial Optimization Frank de Zeeuw EPFL 2012 Today Introduction Graph problems - What combinatorial things will we be optimizing? Algorithms - What kind of solution are we looking for? Linear Programming

More information

Design and Analysis of Algorithms

Design and Analysis of Algorithms CSE 101, Winter 018 D/Q Greed SP s DP LP, Flow B&B, Backtrack Metaheuristics P, NP Design and Analysis of Algorithms Lecture 8: Greed Class URL: http://vlsicad.ucsd.edu/courses/cse101-w18/ Optimization

More information

Greedy Algorithms 1. For large values of d, brute force search is not feasible because there are 2 d

Greedy Algorithms 1. For large values of d, brute force search is not feasible because there are 2 d Greedy Algorithms 1 Simple Knapsack Problem Greedy Algorithms form an important class of algorithmic techniques. We illustrate the idea by applying it to a simplified version of the Knapsack Problem. Informally,

More information

Discrete Optimization. Lecture Notes 2

Discrete Optimization. Lecture Notes 2 Discrete Optimization. Lecture Notes 2 Disjunctive Constraints Defining variables and formulating linear constraints can be straightforward or more sophisticated, depending on the problem structure. The

More information

Problem set 2. Problem 1. Problem 2. Problem 3. CS261, Winter Instructor: Ashish Goel.

Problem set 2. Problem 1. Problem 2. Problem 3. CS261, Winter Instructor: Ashish Goel. CS261, Winter 2017. Instructor: Ashish Goel. Problem set 2 Electronic submission to Gradescope due 11:59pm Thursday 2/16. Form a group of 2-3 students that is, submit one homework with all of your names.

More information

CSE 417 Network Flows (pt 4) Min Cost Flows

CSE 417 Network Flows (pt 4) Min Cost Flows CSE 417 Network Flows (pt 4) Min Cost Flows Reminders > HW6 is due Monday Review of last three lectures > Defined the maximum flow problem find the feasible flow of maximum value flow is feasible if it

More information

Methods and Models for Combinatorial Optimization Heuristis for Combinatorial Optimization

Methods and Models for Combinatorial Optimization Heuristis for Combinatorial Optimization Methods and Models for Combinatorial Optimization Heuristis for Combinatorial Optimization L. De Giovanni 1 Introduction Solution methods for Combinatorial Optimization Problems (COPs) fall into two classes:

More information

Final Exam Spring 2003

Final Exam Spring 2003 .8 Final Exam Spring Name Instructions.. Please answer all questions in the exam books that are provided.. Please budget your time carefully. It is often a good idea to read the entire exam first, so that

More information

J Linear Programming Algorithms

J Linear Programming Algorithms Simplicibus itaque verbis gaudet Mathematica Veritas, cum etiam per se simplex sit Veritatis oratio. [And thus Mathematical Truth prefers simple words, because the language of Truth is itself simple.]

More information

MVE165/MMG631 Linear and integer optimization with applications Lecture 9 Discrete optimization: theory and algorithms

MVE165/MMG631 Linear and integer optimization with applications Lecture 9 Discrete optimization: theory and algorithms MVE165/MMG631 Linear and integer optimization with applications Lecture 9 Discrete optimization: theory and algorithms Ann-Brith Strömberg 2018 04 24 Lecture 9 Linear and integer optimization with applications

More information

UNIT 4 Branch and Bound

UNIT 4 Branch and Bound UNIT 4 Branch and Bound General method: Branch and Bound is another method to systematically search a solution space. Just like backtracking, we will use bounding functions to avoid generating subtrees

More information

Greedy Algorithms. CLRS Chapters Introduction to greedy algorithms. Design of data-compression (Huffman) codes

Greedy Algorithms. CLRS Chapters Introduction to greedy algorithms. Design of data-compression (Huffman) codes Greedy Algorithms CLRS Chapters 16.1 16.3 Introduction to greedy algorithms Activity-selection problem Design of data-compression (Huffman) codes (Minimum spanning tree problem) (Shortest-path problem)

More information

Theorem 2.9: nearest addition algorithm

Theorem 2.9: nearest addition algorithm There are severe limits on our ability to compute near-optimal tours It is NP-complete to decide whether a given undirected =(,)has a Hamiltonian cycle An approximation algorithm for the TSP can be used

More information

Algorithms Dr. Haim Levkowitz

Algorithms Dr. Haim Levkowitz 91.503 Algorithms Dr. Haim Levkowitz Fall 2007 Lecture 4 Tuesday, 25 Sep 2007 Design Patterns for Optimization Problems Greedy Algorithms 1 Greedy Algorithms 2 What is Greedy Algorithm? Similar to dynamic

More information

Lecture 7. s.t. e = (u,v) E x u + x v 1 (2) v V x v 0 (3)

Lecture 7. s.t. e = (u,v) E x u + x v 1 (2) v V x v 0 (3) COMPSCI 632: Approximation Algorithms September 18, 2017 Lecturer: Debmalya Panigrahi Lecture 7 Scribe: Xiang Wang 1 Overview In this lecture, we will use Primal-Dual method to design approximation algorithms

More information

Applied Algorithm Design Lecture 3

Applied Algorithm Design Lecture 3 Applied Algorithm Design Lecture 3 Pietro Michiardi Eurecom Pietro Michiardi (Eurecom) Applied Algorithm Design Lecture 3 1 / 75 PART I : GREEDY ALGORITHMS Pietro Michiardi (Eurecom) Applied Algorithm

More information

Some Advanced Topics in Linear Programming

Some Advanced Topics in Linear Programming Some Advanced Topics in Linear Programming Matthew J. Saltzman July 2, 995 Connections with Algebra and Geometry In this section, we will explore how some of the ideas in linear programming, duality theory,

More information

Lagrangean Methods bounding through penalty adjustment

Lagrangean Methods bounding through penalty adjustment Lagrangean Methods bounding through penalty adjustment thst@man.dtu.dk DTU-Management Technical University of Denmark 1 Outline Brief introduction How to perform Lagrangean relaxation Subgradient techniques

More information

3 SOLVING PROBLEMS BY SEARCHING

3 SOLVING PROBLEMS BY SEARCHING 48 3 SOLVING PROBLEMS BY SEARCHING A goal-based agent aims at solving problems by performing actions that lead to desirable states Let us first consider the uninformed situation in which the agent is not

More information

Advanced Operations Research Prof. G. Srinivasan Department of Management Studies Indian Institute of Technology, Madras

Advanced Operations Research Prof. G. Srinivasan Department of Management Studies Indian Institute of Technology, Madras Advanced Operations Research Prof. G. Srinivasan Department of Management Studies Indian Institute of Technology, Madras Lecture 16 Cutting Plane Algorithm We shall continue the discussion on integer programming,

More information

1 Non greedy algorithms (which we should have covered

1 Non greedy algorithms (which we should have covered 1 Non greedy algorithms (which we should have covered earlier) 1.1 Floyd Warshall algorithm This algorithm solves the all-pairs shortest paths problem, which is a problem where we want to find the shortest

More information

Graphs and Network Flows IE411. Lecture 21. Dr. Ted Ralphs

Graphs and Network Flows IE411. Lecture 21. Dr. Ted Ralphs Graphs and Network Flows IE411 Lecture 21 Dr. Ted Ralphs IE411 Lecture 21 1 Combinatorial Optimization and Network Flows In general, most combinatorial optimization and integer programming problems are

More information

MINIZSAT. A semi SAT-based pseudo-boolean solver. Master thesis by Rogier Poldner

MINIZSAT. A semi SAT-based pseudo-boolean solver. Master thesis by Rogier Poldner MINIZSAT A semi SAT-based pseudo-boolean solver Master thesis by Rogier Poldner MINIZSAT A semi SAT-based pseudo-boolean solver Master thesis by Rogier Poldner committee: Dr. H. van Maaren Dr. M.J.H.

More information

Heuristic Algorithms for the Fixed-Charge Multiple Knapsack Problem

Heuristic Algorithms for the Fixed-Charge Multiple Knapsack Problem The 7th International Symposium on Operations Research and Its Applications (ISORA 08) Lijiang, China, October 31 Novemver 3, 2008 Copyright 2008 ORSC & APORC, pp. 207 218 Heuristic Algorithms for the

More information

1. Lecture notes on bipartite matching February 4th,

1. Lecture notes on bipartite matching February 4th, 1. Lecture notes on bipartite matching February 4th, 2015 6 1.1.1 Hall s Theorem Hall s theorem gives a necessary and sufficient condition for a bipartite graph to have a matching which saturates (or matches)

More information

Name: Lirong TAN 1. (15 pts) (a) Define what is a shortest s-t path in a weighted, connected graph G.

Name: Lirong TAN 1. (15 pts) (a) Define what is a shortest s-t path in a weighted, connected graph G. 1. (15 pts) (a) Define what is a shortest s-t path in a weighted, connected graph G. A shortest s-t path is a path from vertex to vertex, whose sum of edge weights is minimized. (b) Give the pseudocode

More information