KBP: A New Pattern Reduction Heuristic for the Cutting Stock Problem
Constantine Goulimis¹, Alfredo Olivera
Greycon Ltd., 7 Calico House, Plantation Wharf, London SW11 3TN

The classic one-dimensional cutting stock problem exhibits a great deal of degeneracy, in that multiple solutions with the same waste level are possible. An industrially-relevant aspect is to find, within the minimum-waste solution population, those with as few patterns as possible. There have been many attempts at this problem over the years, but these have not satisfactorily resolved the issue. In this paper we present a new type of heuristic, which is computationally cheap and nicely complements previous ones. We present experimental results for two variants: the first obtains a 3.4% reduction in the total number of patterns (average time 0.15 seconds / instance) and the second 4.2% (average time 0.24 seconds / instance), for a testbed of 120 problems. A third variant was also examined, but proved inferior.

Key words: cutting stock problem, pattern reduction, setup minimisation, paper industry, plastic film industry

¹ Corresponding author, cng@greycon.com.
1. Background & Related Work

The many practical applications of the one-dimensional cutting stock problem (1D-CSP) have provided a rich source of challenges to the mathematical optimisation community. In this paper we look at one such aspect, namely the minimisation of cutting patterns within the universe of minimum-waste solutions. It is well known that the 1D-CSP is quite degenerate, i.e. multiple different solutions with the same waste often exist. This can be explained by geometrical re-arrangement: it is sometimes possible, for example, to swap two items belonging to different patterns, creating new patterns in the process. In some industrial settings, particularly in the plastic film industry, minimisation of patterns is quite important because the technical characteristics of the slitter winders that cut the material are such that changing patterns can cause production bottlenecks. The picture below shows a pattern change on a modern, 10 m wide, film slitter; here, the operators are changing the (red) backing rolls to match the width of the roll to be slit next. Depending on the machinery, the actual effort in producing a particular 1D-CSP solution is a multi-faceted problem; pattern minimisation is an important aspect, but not the only one. One could argue that the time cost of a pattern change is not fixed, but depends on the differences from the previous one. For example, the sequencing of a given set of patterns and the relative position of the rolls in each pattern gives rise to the knife change minimisation problem, which can be modelled as a generalised travelling salesman problem. Nonetheless, pattern count has become a common key performance indicator. From an optimisation perspective, the pattern minimisation problem is harder than the 1D-CSP. Practitioners have observed this for a long time. Complexity theory provides a clue: first of all, the problem has been shown to be NP-hard (McDiarmid, 1999).
However, given the advice in (Goulimis, Appeal to NP-Completeness Considered Harmful: Does the Fact That a Problem Is NP-Complete Tell Us Anything?, 2007), this does not necessarily imply very much. A more compelling argument is presented in (Aldridge, et al., 1996), which considers the special class of the 1D-CSP where the item size exceeds W / 3 (so each pattern contains at most two items). For this class, the first-fit-decreasing rule gives an optimal answer to the minimum-waste problem, so the waste minimisation problem is easy. However, the corresponding pattern minimisation problem for this class has been shown to be strongly NP-hard. Part of the difficulty of the pattern minimisation problem is that good lower bounds are difficult to find. Linear programming indicates that with d distinct sizes there will be ~d patterns in an optimal solution to the 1D-CSP. However, trivial examples can be constructed where d distinct sizes have a one-pattern minimum-waste solution. This process generalises so that for
any m, 1 ≤ m ≤ d, examples can be produced where the minimum-waste (zero in this case) solution has no more than m patterns. The figure below shows the construction for m = 2: two patterns p and q of master size W, covering items w_1, w_2, …, w_a and w_{a+1}, w_{a+2}, …, w_d respectively. Also, (only) one easy lower bound is known in the literature; this adds one instance of each item and divides the sum by the master size. For the above example, this bound is 2, regardless of p & q. This trivial lower bound is so weak in practice as to be almost useless, in stark contrast to the traditional 1D-CSP, where the integer round-up property (see (Scheithauer & Terno, 1993)) almost holds. (Alves, Macedo, & de Carvalho, 2009) provide stronger bounds based on a combination of column generation and constraint programming; needless to say, these are non-trivial to implement, even in the absence of additional constraints (see section 2 below). In terms of upper bounds, little is known. For the conventional formulation of the equality-constrained 1D-CSP (where a_ij is the number of items of size j in the i-th pattern, x_i is the number of times that pattern i will be used and q_j is the requirement for the j-th item):

Σ_i a_ij x_i = q_j, for all j   (1)

we conjecture that there exist minimum-waste solutions for d distinct sizes with no more than (d + 1) patterns. We are not aware of any work concerning the impact of additional pattern constraints (discussed in section 2 below) on the pattern count. Nor do we know what to expect in terms of pattern count when the equality in (1) becomes a two-sided constraint. We provide in section 4 some statistical results about the number of patterns as a function of d. Over the years, three broad approaches have been proposed to minimise the number of patterns. The first approach controls the number of patterns during the solution of the 1D-CSP. Within this class, one sub-approach solves a multi-objective optimisation problem, see e.g.
(Haessler, 1975), (Moretti & Neto, 2008), (Cerqueira & Yanasse, 2009), (Kallrath, Rebennack, & Kallrath, 2014), (Sykora, Potts, Hantanga, Goulimis, & Donnelly, 2015). Whilst powerful, these face two challenges: (a) determining robustly the trade-off of waste vs. pattern count and (b) implementing the various real-world pattern constraints may be difficult or impossible (see next section). The (Cui, Zhong, & Yao, 2015) paper involves a clever way to restrict the number of considered patterns to 5,000 and then uses a commercial solver to solve a problem with a binary variable accounting for the setup cost of each pattern. The objective function minimises a weighted composite of the width waste and the pattern setup cost. The algorithm is critically dependent on the correct patterns being included in the set of 5,000, an aspect facilitated by the fact that the quantity constraint (1) is replaced by a one-sided inequality (uncontrolled over-makes are allowed and therefore patterns with waste greater than the smallest item need not be considered). Despite this, the approach overall is a heuristic: the only instance for which a solution is shown has 7 patterns and 0.173% waste, but in fact solutions with 7 patterns and 0.016% waste are possible; such a solution contains one more item of size 500, within the same run length and pattern count as the solution in the paper. Another sub-approach involves having a constraint on the number of patterns, e.g. the heuristic of (Umetani, Yagiura, & Ibaraki, 2006); the difficulty here is that the user is asked to specify a desired pattern count, which in practice would lead to iterations.
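For concreteness, setup-cost formulations of the kind discussed above are commonly written as a mixed-binary extension of (1); the following sketch is generic and not taken from any of the cited papers (the binary y_i and the multiplicity bound U are our notation):

```latex
\begin{aligned}
\min \;& \sum_i y_i \\
\text{s.t. } & \sum_i a_{ij}\, x_i = q_j && \forall j \\
& x_i \le U\, y_i, \quad y_i \in \{0,1\}, \quad x_i \in \mathbb{Z}_{\ge 0} && \forall i
\end{aligned}
```

Here y_i = 1 exactly when pattern i is used, and U is any valid upper bound on pattern multiplicity; to stay within the minimum-waste population, a budget on total stock usage (or a weighted waste term in the objective) is added.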
The second approach uses an exact optimisation algorithm to solve a suitable integer programming formulation (which can range from a mixed-binary / integer formulation with a generic commercial solver to custom advanced algorithms, see (Vanderbeck, 2000) and (Belov & Scheithauer, 2003), but note that (Cui, Zhong, & Yao, 2015) has cast doubts on the validity of the results of the latter). Although this approach can deliver some big improvements, it remains computationally unattractive in general for
even modest (e.g. those whose solution has more than 25 patterns) real-world problems. The approaches of the two papers cited above are not only complex to implement, but also employed time limits of 2 hours and ½ hour respectively; even allowing for improvements in hardware, very few industrial customers would accept this. Even if this were not the case, the inclusion of the real-world constraints (some of which are listed below) is problematic. However, we have had some success in applying a similar approach to smaller problems (with up to a few thousand feasible patterns) or to subsets of an existing solution. The third approach involves taking an existing (typically minimum-waste) solution and then applying a series of fast transformations, each of which maintains the original order allocation and run length (and therefore the waste), but reduces the pattern count. We call these transformation heuristics. The first such heuristic, which we call the 2:1 rule, was described by (Johnston, 1986); this provides necessary and sufficient conditions for two patterns to be combined into one. This was followed by the staircase heuristic (Goulimis, Optimal solutions for the cutting stock problem, 1990), which looks for pattern triplets of the following form:

n × [A B]
m × [B C]
n × [D C]

where the labelled blocks each contain one or more of the required items. This form can be transformed to:

n × [A D]
(n + m) × [B C]

subject to the new pattern (consisting of A + D) being feasible. Thus we have a transformation of three patterns into two. These two heuristics, which both require trivial computational effort, can be applied exhaustively to any starting solution, until no further improvement can be found. Their usefulness in practice is immense: many formulations for the 1D-CSP use an integer variable for each pattern in a linear / integer programming framework.
Branching on these pattern variables tends to increase the solution's pattern count, as it is correlated with the depth of the node where the solution was found. We have seen examples where a solution with 28 patterns is reduced by the 2:1 rule to 26, but adding the staircase heuristic to the mix ends up with just 9 patterns. Over the years additional transformation heuristics have been published, see (Allwood, 1988), (Diegel, Walters, van Schalwyk, & Naidoo, 1994) and (Aldridge, et al., 1996), culminating in the KOMBI family (Foerster & Wäscher, 2000), which looks at triples and quadruples and applies a recursive procedure. All the transformation heuristics published so far have the same structure: they examine subsets of cardinality s, where s depends on the heuristic and 2 ≤ s ≤ 5. Each may be embedded in a parallelisable loop that examines potentially all C(n, s) = O(n^s) combinations for a starting solution with n patterns. This may pose a computational challenge when the number of initial patterns starts exceeding 50. For example, C(50, 5) ≈ 2 million quintuples, so even on a 6-core machine where each core can examine 100,000 quintuples / second, it will take 3+ seconds to examine them all. Once a reduction is found, its new pattern(s) present further opportunities, so additional passes, which do not need to examine previously rejected tuples, may take place. In addition to the transformation heuristics published in the literature, practitioners have added proprietary ones. The 8.6 (November 2015) release of the X-Trim commercial application contains 12 in total; two of those are somewhat different in that they do not reduce the overall pattern count, but target the number of singleton patterns (those that are produced just once). Shop floor operators particularly dislike singleton patterns; given two solutions with the same overall pattern count, the one with the fewest singletons will be preferable.
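The combinatorial growth described above is easy to verify; a minimal Python check (the throughput of 100,000 quintuples per core per second is the text's illustrative assumption):

```python
import math

def tuples_to_examine(n: int, s: int) -> int:
    """Number of s-subsets a tuple-based transformation heuristic
    may have to examine in an n-pattern starting solution."""
    return math.comb(n, s)

quintuples = tuples_to_examine(50, 5)   # 2,118,760 (~2 million)
cores, rate = 6, 100_000                # assumed throughput per core
seconds = quintuples / (cores * rate)   # ~3.5 seconds
```

This is why examining all quintuples becomes noticeable at around 50 patterns, whereas pairs and triples remain cheap.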
Transformation heuristics offer the following advantages over the other approaches:
- They do not disturb the waste and allocation aspects of the starting solution.
- They can easily accommodate real-world constraints.
- They generate meaningful improvements very quickly.

Although the published transformation heuristics are valuable in reducing the pattern count, we have reason to believe that they remain far from optimality. Given the difficulties of establishing good bounds and the lack of standardised benchmarks, it is hard to be precise about the magnitude of the gap. Nevertheless, we would, at least for the data set we will use later, estimate it at around 1/3 (i.e. the total minimum pattern count is 2/3 of what the published transformation heuristics can achieve). The transformation heuristic presented here, called KBP, works in a different manner and is capable of examining larger subsets, albeit with a special structure. Our aim in this paper is twofold: the KBP heuristic closes some of the suboptimality gap (with a very modest computational budget), and, more generally, we present a realistic testbed that can be used to support future work.

2. Constraints

There are very many different pattern constraints in the real world, because the machinery for cutting is by no means standardised. A complete enumeration of the constraints we have been asked over the years to satisfy would take many pages. In fact, we still regularly receive requests for new ones. The most common constraints are:

1. Minimum width: patterns of total size below a user-specified minimum may be unacceptable.
2. Knives: the maximum number of items in a pattern is constrained by the number of available slitting knives.
3. Small / big items: the user defines certain items as small / big and places constraints on the minimum / maximum number of instances of each class.
4. Occurrences: each item may have restrictions on the number of times it appears in each pattern. The most common of these is the multi-pack constraint, where the occurrences must be a multiple of a user-specified value (e.g.
the pattern can contain 0/3/6/… instances of an item, but 1 or 4 occurrences are not acceptable).
5. Distinct: patterns may not contain items with similar sizes; this is encountered in situations where knife placement or labelling is manual and operators cannot be relied on to distinguish very similar sizes.
6. Minimum pattern multiplicity: in some cases there is a minimum run length for each pattern. Perhaps counter-intuitively, examples exist where the solutions that adhere to this constraint have a higher pattern count than those that do not.

These constraints create difficulties in column generation and other decomposition algorithms for the main 1D-CSP: it is no longer sufficient to solve a pure knapsack as the auxiliary problem. The constraints cause exactly the same difficulties in pattern minimisation. Transformation heuristics offer a solution to this: as each new pattern is constructed, an oracle can be consulted about its feasibility.

3. The KBP Heuristic

Motivation

The KBP heuristic is also a transformation heuristic and so starts with an existing solution and attempts to transform it into an equivalent solution with fewer patterns. For expositional clarity we describe first of all a simplified version of the heuristic and then generalise it. The design of the KBP heuristic hinges on the existence of fast algorithms for solving the knapsack and bin-packing problems; it uses these to find a reduction in the pattern count. Let us suppose that a particular solution contains m (≥ 3) patterns that have a common multiplicity of one (i.e. each of the m patterns is produced exactly once). These m patterns will consume m master items. We will attempt to find a set of m − 1 patterns which satisfy exactly the same items, using a total of m master items; the waste will remain the same. This new set will have the structure that
one pattern will be produced twice and the m − 2 remaining patterns will be produced once. We index the items satisfied by these m patterns by i; the width of item i is w_i and the quantity supplied by this collection of m patterns is q_i. We observe that q_i ≥ 1 (by construction). The simplified KBP heuristic has two steps.

Step 1

The first step involves constructing a seed pattern that can be produced twice. Any heuristic for this will do, but we have chosen to do it with a bounded knapsack (for reasons that we will explain later):

max z = Σ_i w_i x_i
subject to:
Σ_i w_i x_i ≤ W
x_i ≤ ⌊q_i / 2⌋, x_i ∈ ℕ   (2)

This value-independent knapsack is solved in effect over the subset of the items that are produced at least twice. The resulting solution to the knapsack is therefore a pattern that can be produced twice.

Step 2

Assuming that we have been able to find in the first step at least one pattern that can be produced twice, the second step examines whether the remaining items can be put into m − 2 patterns. This involves solving a bin-packing problem on the remaining items, taking advantage of the fact that when the pattern multiplicity is one, the concepts of a bin and a pattern merge. If the solution to the bin-packing problem consumes m − 2 bins or less, then we are done. When the knapsack has multiple solutions (quite frequently in practice), we examine each one in turn, until we either exhaust the possibilities or we find a reduction.

Example

In this real-world example the master item has size 2725 and there are no additional constraints. The six patterns with multiplicity one are: [pattern details lost in transcription]. This solution is not reducible by any of the 12 existing transformation heuristics contained in the X-Trim commercial application (which include the 2:1 and staircase mentioned earlier). There are 129 possible patterns in this case. The trivial lower bound on the number of cutting patterns is ⌈4.1⌉ = 5.
The items that appear more than once, with widths w_i and quantities q_i, are: [table lost in transcription]. One solution to the knapsack of size 2725 from the above items is [details lost in transcription] (= 2725). This is our candidate seed pattern to be produced twice. Bin-packing the remaining items into four bins of size 2725 is possible, yielding this overall solution: [patterns lost in transcription]. This solution happens to achieve the trivial lower bound, so no further improvements are possible. The two solutions have no common patterns; this implies that this particular reduction would not have been found by any transformation heuristic that examines tuples
up to size five; this includes all published ones.

Generalisation

The algorithm described so far, depending as it does on the subset of the solution with multiplicity one, would appear to have limited applicability. We will now give a number of generalisations that increase its effectiveness greatly. The first generalisation allows the seed pattern to be produced p times, p ≥ 2. The only difference to the knapsack formulation in order to look for a pattern produced p times is that we need m ≥ p and constraint (2) becomes:

x_i ≤ ⌊q_i / p⌋, x_i ∈ ℕ

The remaining problem is still a bin-packing. A seed pattern produced p times implies a reduction to m − p + 1 patterns. Obviously, the likelihood of finding such a reduction decreases with increasing p. Given the desirability of high values of p, the KBP algorithm starts with the highest possible value of p (= max{q_i}) and progressively examines all lower values down to p = 2. In the above example there are clearly no solutions for p ≥ 4. For p = 3 the only solution to the knapsack has too much waste (or, equivalently, the bin-packing is not solvable within 6 − 3 = 3 bins). Here is an example (from instance 020) with p = 3; the initial solution has m = 13: [patterns lost in transcription]. The reduced solution has 11 patterns (9 new ones and two unchanged): [patterns lost in transcription]. We also notice here that, by accident, two of the bins (5 & 6) are identical, so a merge operation now creates one less pattern, for a total of 10. Even further, one of the new patterns also happens to exist in the rest of the solution (where it appears with a multiplicity of 4); the one real reduction therefore triggers 4 fewer patterns in total. The second generalisation allows suboptimal solutions to the knapsack, provided their waste, when the associated pattern is produced p times, is not more than the original sub-problem's.
Specifically, if the waste of the original solution is L, then we can restrict the width of the seed pattern to the range:

[W − L/p, W]   (3)

The classic dynamic programming approach for solving the knapsack yields these additional suboptimal patterns at no additional cost. Examples involving suboptimal patterns are exceedingly rare, but here is one with 11 starting patterns and a master size of 5350: [patterns lost in transcription]. The suboptimal seed pattern has width 5345; the reduced solution is: [patterns lost in transcription].
The effect of these two generalisations is important theoretically: we now have an algorithm that takes a solution subset consisting of m patterns with multiplicity one and will always find an equivalent solution with fewer patterns, if such a solution exists. This is encapsulated in the following lemma.

Lemma (Improvement Guarantee). When reducing m patterns with multiplicity one, the KBP algorithm will find a reduction if one exists.

Proof. The proof is based on the pigeonhole principle: any reduction will involve at least one pattern produced more than once. The exhaustive seed pattern generation (with width in the range given by (3)) will necessarily find it. The fact that the bin-packing is solved to optimality then ensures that we completely search the feasible space.

If we solve the bin-packing problem to optimality for all candidate seed patterns (as opposed to terminating when we first find a reduction), the algorithm is not only guaranteed to find a reduction, but it will find a maximal one, i.e. one with the maximum value of p. The reduced problem (what is left after a seed pattern with maximum multiplicity p* has been removed) consists of m − p* + 1 patterns with multiplicity one. The process can therefore repeat. This does not quite guarantee that the algorithm will end up with the least number of patterns; even if each step is maximal, the final solution need not be optimal (analogous to nearest-neighbour heuristics for the travelling salesman problem). The third generalisation is an obvious one, where the m initial patterns have (common) multiplicity k > 1. The seed pattern will then have multiplicity p·k. Here is an example (instance 025, m = 9): [patterns lost in transcription]. This is reducible to 8 patterns: [patterns lost in transcription]. This generalisation allows an arbitrary 1D-CSP solution to decompose into separate sub-problems, one for each distinct value of the pattern multiplicity k.
These sub-problems are not independent (the output of one may be used as the input to another), but can be examined in parallel with a first-past-the-post strategy. The number of these sub-problems is equal to the number of distinct multiplicity values, a number bounded by n and therefore much smaller than the C(n, s) of the other transformation heuristics. The fourth generalisation is more powerful, but we also lose the improvement guarantee. Consider the situation where, in the starting solution, in addition to the m patterns with multiplicity one, we borrow from the rest of the solution an extra pattern (called surplus) with multiplicity q ≥ 2. The total number of master items is therefore m + q. We search (using the knapsack) for a seed pattern with multiplicity q + p, p ≥ 1. Assuming we find such seed pattern(s), the remaining items have to fit into (m + q) − (q + p) = m − p bins. This is the same bin-packing problem as before. Synthesising the previous steps yields an algorithm for transforming any solution subset consisting of:

- m patterns with multiplicity k
- 1 pattern with multiplicity k·q

(where m ≥ 3, k ≥ 1 and q = 0, 2, 3, 4, …) into an equivalent solution with fewer patterns:

- m − p patterns with multiplicity k
- 1 pattern with multiplicity k·(q + p)

Here is an example (from instance 018) of a reduction involving a surplus pattern; the 9-pattern starting solution has k = 2: [patterns lost in transcription]. The reduced solution has 8 patterns (p = 1; also note that, fortuitously, the second and third patterns are identical and can be merged): [patterns lost in transcription]. The presence of the surplus pattern increases the complexity of the algorithm. We no longer have a single problem to solve for the patterns of multiplicity one; we have O(n), as we iterate over surplus patterns from the remaining patterns. This is still much better than the O(n^s), s ≥ 2, number of sub-problems encountered by the previous generation of transformation heuristics, although each iteration may be more expensive. The use of the surplus pattern also prevents the parallelisation of the original solution into separate problems, one for each multiplicity value. The improvement guarantee is lost because we are looking for solutions with the structure:

- m − p patterns with multiplicity one
- 1 pattern with multiplicity q + p

However this structure does not include all possibilities: reductions may exist where there is more than one pattern with multiplicity greater than one. So, this algorithm is optimal for q = 0 (no surplus pattern) in the sense of guaranteeing to find a reduction if one exists; it is a heuristic for other values. The fifth generalisation extends the idea of using one surplus pattern to two or more. The two-surplus-pattern scenario starts with a solution with the structure:

- m patterns with multiplicity one
- 1 pattern with multiplicity q_1
- 1 pattern with multiplicity q_2

where q_1 ≥ q_2 ≥ 2. There are m + 2 patterns and (q_1 + q_2 + m) bins. The seed pattern search is for a pattern that can be produced (q_1 + q_2 + p) times (p ≥ 1). If such a seed pattern can be found, then the same bin-packing problem needs to be solved with (q_1 + q_2 + m) − (q_1 + q_2 + p) = m − p bins.
The number of pattern reductions is:

Δ = (m + 2) − (1 + m − p) = 1 + p

So we achieve a reduction by at least two patterns for the two-surplus-pattern scenario, which involves solving O(n²) sub-problems.

Observations

Any new patterns created by this process can be evaluated against the constraints of the previous section. As mentioned in an example, it is worthwhile to have a merge operation after every reduction, to check whether the bin-packing solution contains duplicates and whether any newly-created pattern matches an existing one from the rest of the solution. Finally, taking an arbitrary solution, we can repeat the process, i.e. in pseudo-code:

repeat
    if kbp_found then
        update solution
        no_more := false
    else
        no_more := true
until no_more

In fact, the above pseudo-code is somewhat naïve, because if a reduction is found and the process starts again, we can avoid re-examining those subsets that have not changed. Nevertheless, the idea of looping until no further reductions are found is very fruitful in practice. Because each reduction creates a new pattern with higher multiplicity, it makes sense to structure the search starting from the patterns with the lowest multiplicity first.
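The outer loop above can be sketched in Python; `find_reduction` stands in for the KBP subset search (the name and signature are ours, for illustration only):

```python
def reduce_patterns(solution, find_reduction):
    """Apply a reduction oracle repeatedly until no further improvement.

    `find_reduction(solution)` is assumed to return an equivalent solution
    with strictly fewer patterns, or None when no reduction exists.  In KBP
    it would search the multiplicity subsets, lowest multiplicity first.
    """
    while True:
        reduced = find_reduction(solution)
        if reduced is None:      # no_more := true
            return solution
        solution = reduced       # update solution and loop again
```

Termination is guaranteed because each successful call strictly decreases the pattern count, which is bounded below.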
4. Computational Results

Implementation

There are specialist algorithms for solving the value-independent knapsack (see (Faaland, 1973) and (Sun & Wang, 1995)). However, the problems we have to solve are so small that it was deemed not worth the effort to implement them. Instead we have used the conventional dynamic programming recursion. In addition to being very simple, it has the benefit that all the solutions for the permissible range of maximum widths (as specified by (3)) can be extracted from the final table. In a pre-processing step, we divide all the sizes by their greatest common divisor. In the first example of the previous section, all sizes (including the master size) are a multiple of 5. This reduces the size of the dynamic programming table and therefore the time to solve the knapsack problem. In a second pre-processing step, for each sub-problem for multiplicity k, we divide the pattern multiplicities by k. For the bin-packing problem, we used the improved bin completion algorithm described in (Korf, 2003). This algorithm is optimal and has proven competitive in large-scale testing. It uses a branch-and-bound search where each new node corresponds to a full bin added to the partial solution. Two key elements of this algorithm are the efficient generation of non-dominated bins and the nogood dominance pruning strategy. The algorithm examines each subset with a common multiplicity in turn. For each subset, it attempts to find reductions without using a surplus pattern. Once these have been exhausted, it tries surplus patterns. If a reduction is found, it starts again. Otherwise, it moves to the next subset. For the test data we used, the time taken by the KBP heuristic was quite modest and therefore we have not attempted to parallelise the code.
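The seed-pattern search of constraint (2), together with the GCD pre-processing, can be sketched as follows (a minimal Python illustration, not the production code; function and variable names are ours):

```python
from functools import reduce
from math import gcd

def seed_pattern(widths, bounds, W):
    """Value-independent bounded knapsack by dynamic programming.

    Finds counts x_i <= bounds[i] (with bounds[i] = floor(q_i / p))
    maximising the used width sum(w_i * x_i), subject to it not exceeding
    the master size W.  Sizes are first divided by their greatest common
    divisor, as in the paper's pre-processing step.
    Returns (best_width, counts).
    """
    g = reduce(gcd, widths + [W])
    ws, Wg = [w // g for w in widths], W // g
    # reach[c] holds one counts vector achieving total scaled width c
    reach = [None] * (Wg + 1)
    reach[0] = [0] * len(ws)
    for i, w in enumerate(ws):
        for _ in range(bounds[i]):           # add at most bounds[i] copies
            for c in range(Wg - w, -1, -1):  # downward: one copy per pass
                if reach[c] is not None and reach[c + w] is None:
                    counts = reach[c][:]
                    counts[i] += 1
                    reach[c + w] = counts
    best = max(c for c in range(Wg + 1) if reach[c] is not None)
    return best * g, reach[best]
```

Because the whole `reach` table survives, every achievable width in the range [W − L/p, W] of (3) can be read off without re-solving, which is the property the text exploits.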
Test Data

We harvested 120 test instances (available for download at the ESICUP website, /Pattern%20Reduction%20Test%20Instances%201.1.zip) from industrial customers or from our own random generator (which aims to mimic the type of problems we encounter in practice). The number of patterns varies from 10 to 100 and the number of distinct sizes ranges from 14 to 86. The total number of patterns is 4,104. The only real-world constraints are (a) minimum pattern width and (b) number of knives. To avoid any confusion between logical and physical pattern counts, in each instance there are no items with a common size. The starting solutions have been pre-processed (with a limited time budget) by the 10 simplest of the 12 transformation heuristics in X-Trim, so the easy pattern reductions have already been eliminated.

Results

We conducted nine experiments, in two dimensions. In the first dimension we varied the time limit per instance, t ∈ {∞, 5, 1} seconds, on an Intel i7-4510U 2.0 GHz processor with 8 GB of RAM. We employ this time limit as a defence against instances that take a very long time to solve fully. For the second dimension we allowed {0, 1, 2} surplus patterns. Each experiment applied the KBP heuristic to the 120 instances.

No Surplus Pattern

In the table below, "Reducible instances" shows the number of instances for which a reduction was found. "Reductions" is the improvement in the total number of patterns, expressed as an absolute number and also as a percentage of the original (4,104). Finally, the table contains the total and average execution times in seconds.

Time limit (s)        | ∞ | 5 | 1
Reducible instances   | [values lost in transcription]
Reductions (patterns) |
Reductions (%)        |
Total time (s)        |
Average time (s)      |

To our surprise, the KBP heuristic found improvements in over ¼ of the 120 instances in all three experiments.
As far as the dependence on the time budget is concerned, the results for t = 1 are a bit unstable (the search gets truncated at somewhat random points). The average execution time reported above is quite misleading, because it was less than one second in all but three instances. For the t = ∞ case, these three instances consumed 87% of the total time. They exhibit the following trade-off between execution time and reductions (m_0 is the initial number of patterns): [table values lost in transcription]. (Korf, 2003), when solving the bin-packing problem, noticed the same behaviour: practically all problems solve very quickly, but a small number are difficult. Somewhat frustratingly, for these instances no reduction was found within the t ≤ 5 budgets. We have not been able to identify a unique shared characteristic for the difficult instances. In all cases, the solution of the bin-packing sub-problems took more than 95% of the execution time. In terms of the type of reduction found, most of the benefits are found for low values of the multiplicity k. The repeated application of the algorithm to the same initial subset of patterns is valuable, e.g. for instance 085, starting with 16 patterns with multiplicity one, three reductions are achieved. Chain effects are noticeable also: in the same instance, the 3 reductions for k = 1 increase the population of patterns with multiplicity k = 2 (by three), which are then reduced from 16 to 12 in two further reductions. Another way to view the results is to examine the statistical relationship between the number of sizes and the number of patterns, before (blue crosses) and after (red dots) the KBP heuristic (computational budget 1 sec per instance): [scatter plot lost in transcription]. With the unlimited budget (but actually 33 seconds) the statistical behaviour remains almost the same. As a side note, 25 years ago (Goulimis, Optimal solutions for the cutting stock problem, 1990) contained a similar analysis, over a different set of 500 (smaller) instances, using the 2:1 and staircase rules.
A linear regression was fitted to the resulting solutions; the results in this paper therefore provide some evidence of the evolution of this metric over time. Although a linear regression was fitted to the above data, this is somewhat misleading: the evidence suggests that bigger problems offer greater opportunities to the KBP heuristic. If we split the 120 instances into four quartiles (each of size 30) based on the number of initial patterns m0, then the total pattern count for each of these quartiles behaves like this:

[Table: per-quartile totals Σm0, Δm and Δ% for Q1 to Q4, plus the overall totals]

This increased benefit for larger problems is highly desirable in practice, as it is these problems that pose the greatest challenge to other approaches.

One Surplus Pattern

We illustrate the trajectory of the algorithm on instance 001, with an initial pattern count of 44. The algorithm first examines the 9 patterns with multiplicity one and finds a reduction. Of the resulting singleton patterns, the last is merged with a copy with multiplicity 3, leaving us with just 6 singleton patterns. These are not reducible any further. The algorithm then looks at the patterns with multiplicity two; using the surplus pattern, these can be reduced. No further improvements are possible, so the search terminates with 41 patterns. Notice how the seed pattern of the first reduction was used in the last reduction and was actually consumed by it.

The overall results for the 120 instances when allowing one surplus pattern are summarised below:

[Table: results for time limits ∞, 5 and 1 s: reducible instances; reductions (patterns and %); total and average time (s)]

The availability of the surplus pattern reduces the total pattern count by a further 0.6% to 1.0%. We will now look in some detail at the results for t = 5. The KBP algorithm reduced the total pattern count by 172 patterns via 123 reductions in 44 problems. The vast majority of the 123 reductions removed one pattern each, but there were 7 reductions that each removed 4 patterns (in all 7 cases a spurious merge contributed):

[Figure: Reduction Effectiveness (number of reductions vs. difference in total pattern count)]

The 44 problems with an improvement had the following distribution:

[Figure: distribution of problems by difference in total pattern count]

The additional flexibility offered by the use of the surplus pattern is evident in these results. Of the 123 reductions, 30 involved a surplus pattern; these 30 raised the total reduction from 138 to 172 patterns, i.e. 34 fewer patterns in total. This is either because they removed more than one pattern, or because they triggered further reductions when the process was re-applied. Comparing with the version without a surplus pattern, the results were uniformly equal or better (the greedy nature of the search precludes this being guaranteed to always be the case):

[Figure: No Surplus vs. One Surplus, difference in pattern count]

Using a surplus pattern increases the fraction of amenable instances from 30% to over 37%. The computational cost for this increase is large in relative terms (from 18 to 28 seconds), but remains trivial at an average of 0.24 seconds per problem. We initially contemplated eliminating the search for a seed pattern by re-using the surplus pattern in this role. However, for this data set, that would eliminate 20 of the additional 30 reductions, so we abandoned the idea. Remarkably, there are no reductions in this data set involving a suboptimal seed pattern. This may relate to the relatively low waste of the solutions. Eliminating such patterns from consideration would speed up the algorithm without any adverse effect on the pattern count (at least for this data). For around ¼ of the 120 problems no seed pattern can be generated (with or without a surplus pattern).
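Aggregations such as the quartile breakdown above are straightforward to reproduce. A sketch with invented toy data (the real instance data is in the downloadable test set):

```python
def quartile_totals(instances):
    """instances: list of (m0, delta) pairs, where m0 is the initial
    pattern count and delta the reduction achieved. Sort by m0, split
    into four equal-size quartiles, and total each column, reporting
    (sum m0, sum delta, delta as % of m0) per quartile."""
    inst = sorted(instances)
    q = len(inst) // 4
    rows = []
    for i in range(4):
        chunk = inst[i * q:(i + 1) * q]
        sum_m0 = sum(m for m, _ in chunk)
        sum_delta = sum(d for _, d in chunk)
        rows.append((sum_m0, sum_delta, 100.0 * sum_delta / sum_m0))
    return rows

rows = quartile_totals([(10, 0), (12, 1), (20, 1), (25, 2),
                        (40, 3), (55, 4), (70, 6), (90, 8)])
assert len(rows) == 4
assert rows[3] == (160, 14, 100.0 * 14 / 160)
```

With the paper's data, the top quartile (the 30 instances with the largest m0) shows the greatest relative benefit, consistent with the observation that larger problems offer more opportunities.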
This may be because the initial solutions had already been processed by the previous generation of transformation heuristics, leaving few further opportunities. For the unlimited budget, which took 125 seconds overall, one instance, 088, consumed 89 seconds (71% of the total time). Two reductions were found, one more than without surplus patterns. Such pathological behaviour justifies the use of a time budget.
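A time budget of this kind can be wrapped around any improvement loop. A minimal sketch (the actual control logic in X-Trim is not published):

```python
import time

def improve_with_budget(find_improvement, budget_seconds):
    """Apply find_improvement() (returns True if it improved the
    solution, False when no improvement exists) until it fails or the
    time budget expires; returns the number of improvements made."""
    deadline = time.monotonic() + budget_seconds
    count = 0
    while time.monotonic() < deadline and find_improvement():
        count += 1
    return count

# A toy improvement step that succeeds three times, then gives up:
steps = iter([True, True, True, False])
assert improve_with_budget(lambda: next(steps), 5.0) == 3
```

The deadline check bounds the damage a pathological instance can do, at the cost of possibly truncating the search mid-way, which is exactly the instability observed for t = 1.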
The statistical regression (for the unlimited budget) becomes:

[Figure: number of patterns vs. number of sizes, one surplus pattern, unlimited budget]

Two Surplus Patterns

To our surprise, the 120 instances do not contain a single case of such a reduction. We speculate that this is because, once the other reductions have been found, an example that reduces the pattern count by two in one step is quite unlikely. In fact, because of the extra computational effort, the results are worse when operating with a time budget of t = 1.

5. Discussion & Extensions

The impact of the KBP heuristic is greater than shown above because, once it has terminated with one or more reductions, we can still apply the other transformation heuristics and repeat the process until no heuristic yields any further improvement. The KBP heuristic in effect also acts as a mechanism for the others to escape a local minimum. However, we have chosen not to show these results, because they might confuse the issue and they depend on unpublished proprietary algorithms. In conclusion, the experiments show that, provided we do not spend too much time on the few very difficult problems, the KBP heuristic can obtain a non-trivial number of pattern reductions in a very short time. The algorithm finds savings in roughly 1/3 of the instances attempted, and the improvement increases with the size of the problem.

Since the two-surplus-pattern scenario has proved ineffective, one possible avenue for future research is to adapt it so that two seed patterns are generated. The total production of these two patterns should be (q₁ + q₂ + 1) in order for the remaining problem to be solved using bin packing. There are various ways to try to find such a pair of patterns. The advantage of such an approach is that the result is a reduction by one pattern; this lower target than the two-surplus-pattern scenario described earlier may well increase the chances of success. We suspect, however, that this would be in the realm of diminishing returns.
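Any extension along these lines still has to produce patterns that satisfy the physical constraints noted for the test set (minimum pattern width and number of knives). A hedged validity check for a candidate repacking, under assumed conventions (knife count modelled simply as the number of items per pattern), might look like:

```python
from collections import Counter

def valid_repacking(before, after, max_width, max_knives, min_width=0):
    """True if `after` cuts exactly the same multiset of item sizes as
    `before`, and every pattern in `after` fits on the machine: total
    width between min_width and max_width, at most max_knives items.
    (Sketch only: real slitter constraints are richer than this.)"""
    count = lambda pats: Counter(w for p in pats for w in p)
    if count(before) != count(after):
        return False
    return all(min_width <= sum(p) <= max_width and len(p) <= max_knives
               for p in after)

before = [(30, 45), (30, 45), (25, 50)]
after = [(30, 30, 25), (45, 45), (50,)]  # same sizes, repacked
assert valid_repacking(before, after, max_width=100, max_knives=4)
```

Such a check would gate any candidate pair of seed patterns before the remaining items are handed to the bin packing sub-problem.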
A topic not yet explored is the sequence in which to apply the transformation heuristics and how to avoid getting stuck in a local minimum. As mentioned earlier, there are already 12 such heuristics in our commercial implementation; the KBP heuristic becomes the 13th, admittedly with a different structure. The heuristics may clash: a no-surplus reduction involving three singleton patterns might also be found by the staircase rule. An obvious approach, given that the user may wish to interrupt the calculation, is to rank them in order of efficiency (defined as the ratio of reductions to effort) and apply the most efficient first. A multi-start / GRASP approach would also appear advisable. These (or other) extensions are worth trying, because we know that, particularly for the larger problems, there is considerable scope for further improvement. For example, instance 010, with 49 patterns, is known to be reducible to 29, but the KBP heuristic only gets part of the way there.

6. Acknowledgments

The authors would like to acknowledge constructive comments by Sophia Drossopoulou on an earlier draft.

7. References

Aldridge, C., Leese, R., Tuenter, H., Chapman, S. J., McDiarmid, C., Wilson, H., ... Zinober, A. (1996). Pattern Reduction in Paper Cutting. European Study Group with Industry (pp. 1-15). Oxford.

Allwood, J. M. (1988). Reducing the number of patterns in the 1-dimensional cutting stock problem. London: Imperial College.
Alves, C., Macedo, R., & de Carvalho, J. V. (2009). New lower bounds based on column generation and constraint programming for the pattern minimization problem. Computers & Operations Research.

Belov, G., & Scheithauer, G. (2003). The number of setups (different patterns) in one-dimensional stock cutting. Dresden: Department of Mathematics, Dresden University of Technology.

Cerqueira, G. R., & Yanasse, H. H. (2009). A pattern reduction procedure in a one-dimensional cutting stock problem by grouping items according to their demands. Journal of Computational Interdisciplinary Sciences.

Cui, Y., Zhong, C., & Yao, Y. (2015). Pattern-set generation algorithm for the one-dimensional cutting stock problem with setup cost. European Journal of Operational Research.

Diegel, A., Walters, E., van Schalwyk, S., & Naidoo, S. (1994). Setup combining in the trim loss problem: 3-to-2 & 2-to-1. Durban: University of Natal.

Faaland, B. (1973). Solution of the Value-Independent Knapsack Problem by Partitioning. Operations Research.

Foerster, H., & Wascher, G. (2000). Pattern reduction in one-dimensional cutting stock problems. International Journal of Production Research.

Goulimis, C. N. (1990). Optimal solutions for the cutting stock problem. European Journal of Operational Research.

Goulimis, C. N. (2007). Appeal to NP-Completeness Considered Harmful: Does the Fact That a Problem Is NP-Complete Tell Us Anything? Interfaces.

Haessler, R. W. (1975). Controlling Cutting Pattern Changes in One-Dimensional Trim Problems. Operations Research.

Johnston, R. E. (1986). Rounding algorithms for cutting stock problems. Asia Pacific Journal of Operations Research.

Kallrath, J., Rebennack, S., & Kallrath, J. (2014). Solving real-world cutting stock problems in the paper industry: Mathematical approaches, experience and challenges. European Journal of Operational Research.

Korf, R. E. (2003). An improved algorithm for optimal bin packing. Proceedings of the International Joint Conference on Artificial Intelligence. Acapulco.

McDiarmid, C. (1999). Pattern Minimisation in Cutting Stock Problems. Discrete Applied Mathematics.

Moretti, A. C., & Neto, L. D. (2008). Nonlinear Cutting Stock Problem to Minimize the Number of Different Patterns and Objects. Computational & Applied Mathematics.

Scheithauer, G., & Terno, J. (1993). Theoretical Investigations on the Modified Integer Round-Up Property for the One-Dimensional Cutting Stock Problem. Dresden: Technische Universitat Dresden.

Sun, C.-H., & Wang, S.-D. (1995). An efficient pruning algorithm for value independent knapsack problem using a DAG structure. Computers & Operations Research.

Sykora, A. M., Potts, C., Hantanga, C., Goulimis, C. N., & Donnelly, R. (2015). A Tabu Search Algorithm for a Two-Objective One-Dimensional Cutting Stock Problem. 12th ESICUP Meeting. Portsmouth.

Umetani, S., Yagiura, M., & Ibaraki, T. (2006). One-Dimensional Cutting Stock Problem with a Given Number of Setups: A Hybrid Approach of Metaheuristics and Linear Programming. Journal of Mathematical Modelling and Algorithms.

Vanderbeck, F. (2000). Exact Algorithm for Minimising the Number of Setups in the One-Dimensional Cutting Stock Problem. Operations Research.
On Computing Minimum Size Prime Implicants João P. Marques Silva Cadence European Laboratories / IST-INESC Lisbon, Portugal jpms@inesc.pt Abstract In this paper we describe a new model and algorithm for
More informationSearch Algorithms. IE 496 Lecture 17
Search Algorithms IE 496 Lecture 17 Reading for This Lecture Primary Horowitz and Sahni, Chapter 8 Basic Search Algorithms Search Algorithms Search algorithms are fundamental techniques applied to solve
More information6. Relational Algebra (Part II)
6. Relational Algebra (Part II) 6.1. Introduction In the previous chapter, we introduced relational algebra as a fundamental model of relational database manipulation. In particular, we defined and discussed
More information6. Algorithm Design Techniques
6. Algorithm Design Techniques 6. Algorithm Design Techniques 6.1 Greedy algorithms 6.2 Divide and conquer 6.3 Dynamic Programming 6.4 Randomized Algorithms 6.5 Backtracking Algorithms Malek Mouhoub, CS340
More informationApproximation Algorithms
Approximation Algorithms Subhash Suri June 5, 2018 1 Figure of Merit: Performance Ratio Suppose we are working on an optimization problem in which each potential solution has a positive cost, and we want
More informationA Visualization Program for Subset Sum Instances
A Visualization Program for Subset Sum Instances Thomas E. O Neil and Abhilasha Bhatia Computer Science Department University of North Dakota Grand Forks, ND 58202 oneil@cs.und.edu abhilasha.bhatia@my.und.edu
More informationAlgorithms. Lecture Notes 5
Algorithms. Lecture Notes 5 Dynamic Programming for Sequence Comparison The linear structure of the Sequence Comparison problem immediately suggests a dynamic programming approach. Naturally, our sub-instances
More informationA Hybrid Improvement Heuristic for the Bin Packing Problem
MIC 2001-4th Metaheuristics International Conference 63 A Hybrid Improvement Heuristic for the Bin Packing Problem Adriana C.F. Alvim Dario J. Aloise Fred Glover Celso C. Ribeiro Department of Computer
More informationArc-Flow Model for the Two-Dimensional Cutting Stock Problem
Arc-Flow Model for the Two-Dimensional Cutting Stock Problem Rita Macedo Cláudio Alves J. M. Valério de Carvalho Centro de Investigação Algoritmi, Universidade do Minho Escola de Engenharia, Universidade
More informationCS 580: Algorithm Design and Analysis. Jeremiah Blocki Purdue University Spring 2018
CS 580: Algorithm Design and Analysis Jeremiah Blocki Purdue University Spring 2018 Chapter 11 Approximation Algorithms Slides by Kevin Wayne. Copyright @ 2005 Pearson-Addison Wesley. All rights reserved.
More informationUnit 8: Coping with NP-Completeness. Complexity classes Reducibility and NP-completeness proofs Coping with NP-complete problems. Y.-W.
: Coping with NP-Completeness Course contents: Complexity classes Reducibility and NP-completeness proofs Coping with NP-complete problems Reading: Chapter 34 Chapter 35.1, 35.2 Y.-W. Chang 1 Complexity
More informationExploring Econometric Model Selection Using Sensitivity Analysis
Exploring Econometric Model Selection Using Sensitivity Analysis William Becker Paolo Paruolo Andrea Saltelli Nice, 2 nd July 2013 Outline What is the problem we are addressing? Past approaches Hoover
More information2386 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 6, JUNE 2006
2386 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 6, JUNE 2006 The Encoding Complexity of Network Coding Michael Langberg, Member, IEEE, Alexander Sprintson, Member, IEEE, and Jehoshua Bruck,
More informationReading 1 : Introduction
CS/Math 240: Introduction to Discrete Mathematics Fall 2015 Instructors: Beck Hasti and Gautam Prakriya Reading 1 : Introduction Welcome to CS 240, an introduction to discrete mathematics. This reading
More informationChapter 8. NP-complete problems
Chapter 8. NP-complete problems Search problems E cient algorithms We have developed algorithms for I I I I I finding shortest paths in graphs, minimum spanning trees in graphs, matchings in bipartite
More informationEvaluating Classifiers
Evaluating Classifiers Charles Elkan elkan@cs.ucsd.edu January 18, 2011 In a real-world application of supervised learning, we have a training set of examples with labels, and a test set of examples with
More information1. NUMBER SYSTEMS USED IN COMPUTING: THE BINARY NUMBER SYSTEM
1. NUMBER SYSTEMS USED IN COMPUTING: THE BINARY NUMBER SYSTEM 1.1 Introduction Given that digital logic and memory devices are based on two electrical states (on and off), it is natural to use a number
More informationSLS Methods: An Overview
HEURSTC OPTMZATON SLS Methods: An Overview adapted from slides for SLS:FA, Chapter 2 Outline 1. Constructive Heuristics (Revisited) 2. terative mprovement (Revisited) 3. Simple SLS Methods 4. Hybrid SLS
More informationAdvanced Operations Research Prof. G. Srinivasan Department of Management Studies Indian Institute of Technology, Madras
Advanced Operations Research Prof. G. Srinivasan Department of Management Studies Indian Institute of Technology, Madras Lecture - 35 Quadratic Programming In this lecture, we continue our discussion on
More information