Multi-Way Number Partitioning


Proceedings of the Twenty-First International Joint Conference on Artificial Intelligence (IJCAI-09)

Richard E. Korf
Computer Science Department
University of California, Los Angeles
Los Angeles, CA

Abstract

The number partitioning problem is to divide a given set of integers into a collection of subsets, so that the sums of the numbers in the subsets are as nearly equal as possible. While a very efficient algorithm exists for optimal two-way partitioning, it is not nearly as effective for multi-way partitioning. We develop two new linear-space algorithms for multi-way partitioning, and demonstrate their performance on three, four, and five-way partitioning. In each case, our algorithms outperform the previous state of the art by orders of magnitude, in one case by over six orders of magnitude. Empirical analysis of the running times of our algorithms strongly suggests that their asymptotic growth is less than that of previous algorithms. The key insight behind both our new algorithms is that if an optimal k-way partition includes a particular subset, then optimally partitioning the numbers not in that subset k-1 ways results in an optimal k-way partition.

1 Introduction: Number Partitioning

Given a set of integers, the number partitioning problem is to divide them into a collection of mutually exclusive and collectively exhaustive subsets so that the sums of the numbers in the subsets are as nearly equal as possible. For example, given the integers (4, 5, 6, 7, 8), if we divide them into the subsets (4, 5, 6) and (7, 8), the sum of the numbers in each subset is 15, and the difference between the subset sums is zero, which is optimal. For partitioning into more than two subsets, the objective function to be minimized is the difference between the maximum and minimum subset sums. We focus here on optimal solutions. All the algorithms described here use space that is only linear in the number of numbers.
Number partitioning is NP-complete, and one of the easiest NP-complete problems to describe. It is also a very simple scheduling problem, often referred to as multiprocessor scheduling [Garey and Johnson, 1979]. For example, given a set of jobs, each with an associated completion time, and two or more identical processors, assign each job to a processor to minimize the total time to complete all the jobs.

1.1 Easy-Hard-Easy Complexity Transition

An important feature of this problem is that its difficulty shows a classic easy-hard-easy transition as the problem size increases. Let n be the number of numbers, k the number of subsets, and m the maximum possible value. Problems with small n are easy, since there are only k^n partitions. As n grows, the number of partitions grows as k^n, but the number of different possible values for the difference of the subset sums grows only as n*m. Thus, for fixed m and k and large n, there are many more partitions than subset-sum differences, and hence most differences have many associated partitions. In particular, if the sum of all the numbers is divisible by k, there can be a subset-sum difference of zero, and otherwise the minimum possible subset-sum difference is one. A partition achieving this minimum is called a perfect partition, and once one is found, the search is terminated. For uniform random instances, as n grows large, the number of perfect partitions increases, making them easier to find, and the problem easier. The most difficult problems occur where the probability of a perfect partition is about one-half. Much has been written about this phase transition, e.g. [Mertens, 1998], but it is tangential to our work, as we are concerned exclusively with algorithms for finding optimal partitions.

2 Previous Work

Here we briefly review previous work on this problem. For simplicity, we focus on two-way partitioning, but all these algorithms extend to multi-way partitioning as well.
2.1 Greedy Heuristic

The obvious greedy heuristic for this problem is to sort the numbers in decreasing order, and then assign each number in turn to the subset with the smaller sum so far. For example, given the numbers (8, 7, 6, 5, 4), we would assign the 8 and 7 to different subsets, the 6 to the subset with the 7, the 5 to the subset with the 8, and finally the 4 to either subset, yielding for example the partition (8, 5, 4) and (7, 6), with a subset difference of 17 - 13 = 4. The average solution quality of this heuristic is on the order of the smallest number.

2.2 Complete Greedy Algorithm (CGA)

This heuristic is easily extended to a complete greedy algorithm (CGA) [Korf, 1995; 1998]. First we sort the numbers in decreasing order, and then search a binary tree, where each

level assigns a different number, and each branch point alternately assigns that number to one subset or the other. Each leaf of this tree corresponds to a complete partition. Several pruning rules improve the efficiency of CGA: 1) If a complete partition is found with a difference of zero or one, it is returned and the search is terminated. 2) If the current difference between the two partial subset sums is at least the sum of the numbers not yet assigned, then all remaining numbers are assigned to the subset with the smaller sum, terminating that branch. 3) If both partial subsets have the same sum, the next number is only assigned to one of them, to eliminate duplicate nodes. Finally, to minimize the time to find a perfect partition, we always assign the next number to the subset with the smaller sum first.

CGA is easily extended to partitioning into k subsets. Instead of a binary tree, we search a k-ary tree, where at each branch the corresponding number is assigned to one of the k subsets. The second pruning rule above is replaced by the following. Let t be the sum of all the numbers, s the current largest subset sum, and d the difference of the best complete partition found so far. If s - (t - s)/(k - 1) >= d, terminate this branch. The reason is that the best we could do would be to perfectly equalize the remaining k - 1 subsets, and if this would result in a partition no better than the best so far, there is no reason to continue searching that path. At each branch, we place the next number in the subsets in increasing order of their sums, to minimize the time to find a good solution.

2.3 Karmarkar-Karp Heuristic (KK)

A heuristic much better than greedy was called set differencing by its authors [Karmarkar and Karp, 1982], but is usually referred to as the KK heuristic. It places the two largest numbers in different subsets, without determining which subset each goes into. This is equivalent to replacing the two numbers with their difference.
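As a concrete sketch of the greedy heuristic and its completion into a two-way CGA described above, the following is our own minimal reconstruction (function names and details are ours, not the paper's implementation):

```python
def greedy_partition(nums, k=2):
    """Greedy heuristic: assign each number, largest first,
    to the subset with the smallest sum so far."""
    sums, subsets = [0] * k, [[] for _ in range(k)]
    for x in sorted(nums, reverse=True):
        i = sums.index(min(sums))          # smallest sum; ties broken by index
        sums[i] += x
        subsets[i].append(x)
    return max(sums) - min(sums), subsets

def cga_two_way(nums):
    """Complete greedy algorithm (CGA): depth-first search assigning each
    number, largest first, to one of two subsets, with the three pruning
    rules described above."""
    nums = sorted(nums, reverse=True)
    total = sum(nums)
    perfect = total % 2                    # best difference possible
    best = [total]

    def dfs(i, a, b):                      # a, b: the two partial subset sums
        if best[0] <= perfect:             # rule 1: perfect partition found
            return
        if i == len(nums):
            best[0] = min(best[0], abs(a - b))
            return
        remaining = total - a - b
        if abs(a - b) >= remaining:        # rule 2: rest goes in the smaller subset
            best[0] = min(best[0], abs(a - b) - remaining)
            return
        lo, hi = min(a, b), max(a, b)
        dfs(i + 1, lo + nums[i], hi)       # smaller subset first
        if a != b:                         # rule 3: skip symmetric branch on ties
            dfs(i + 1, hi + nums[i], lo)

    dfs(0, 0, 0)
    return best[0]
```

On (8, 7, 6, 5, 4) the greedy heuristic returns a difference of 4, as in the example above, while the complete search finds the perfect partition (8, 7) versus (6, 5, 4) with difference 0.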
For example, placing 8 in subset A and 7 in subset B is equivalent to placing their difference of 1 in subset A, since we can always subtract the same amount from both sets without affecting the solution. Swapping their positions is equivalent to placing the 1 in subset B. The KK heuristic repeatedly replaces the two largest numbers with their difference, inserting the new number in the sorted order, until there is only one number left, which is the final partition difference. In our example, this results in the series of sets (8,7,6,5,4), (6,5,4,1), (4,1,1), (3,1), (2). Some additional bookkeeping is required to extract the actual partition, which in this case is (7,5,4) and (8,6), with a partition difference of 16 - 14 = 2. The solution quality of this heuristic is the last remaining number, which is much smaller than the smallest original number, due to the repeated differencing.

2.4 Complete Karmarkar-Karp Algorithm (CKK)

We extended the KK heuristic to the complete Karmarkar-Karp algorithm (CKK) [Korf, 1998]. While the KK heuristic always places the two largest numbers in different subsets, the only other option is to assign them to the same subset. This is done by replacing the two largest numbers by their sum. CKK searches a binary tree where at each node the left branch replaces the two largest numbers by their difference, and the right branch replaces them by their sum. By searching from left to right, the first solution found is the KK solution. If we find a complete partition with a difference of zero or one, the search terminates. In addition, if the largest number is greater than or equal to the sum of the remaining numbers, we place all the remaining numbers in the opposite subset from the largest, since this is the best we can do. For two-way partitioning, CKK is slightly faster than CGA for problem instances without a perfect partition, but is much faster for problem instances with many perfect partitions, since it finds one much faster [Korf, 1998].
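A minimal two-way CKK, reconstructed from this description (our own sketch, not the paper's code; by branching on the difference first, the first leaf reached is the KK solution):

```python
import bisect

def ckk_two_way(nums):
    """Complete Karmarkar-Karp (CKK): the left branch replaces the two
    largest numbers by their difference (the KK move), the right branch
    by their sum."""
    total = sum(nums)
    perfect = total % 2
    best = [total]

    def search(lst):               # lst is kept sorted in increasing order
        if best[0] <= perfect:
            return
        if len(lst) == 1:
            best[0] = min(best[0], lst[0])
            return
        a, b = lst[-1], lst[-2]    # the two largest numbers
        rest = lst[:-2]
        if a >= b + sum(rest):     # the largest dominates: the completion is forced
            best[0] = min(best[0], a - b - sum(rest))
            return
        for x in (a - b, a + b):   # difference first, then sum
            nxt = rest[:]
            bisect.insort(nxt, x)
            search(nxt)

    search(sorted(nums))
    return best[0]
```

On (8, 7, 6, 5, 4) the KK path alone yields the difference of 2 from the example above, but the complete search goes on to find the perfect partition (8, 7) versus (6, 5, 4) with difference 0.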
The extension of the KK heuristic and the CKK algorithm to multi-way partitioning is more complex than for the greedy heuristic and CGA [Korf, 1998]. We describe here their extension to three-way partitioning. A state of the KK algorithm is described by a set of triples of partial subset sums, with each triple sorted in decreasing order, and the triples sorted in decreasing order of their largest sum. Initially, each number is in a separate triple, with zeros for the remaining elements. For example, the initial state of a three-way KK partition of the set (4,5,6,7,8) would be ((8,0,0),(7,0,0),(6,0,0),(5,0,0),(4,0,0)). At each step of the KK heuristic, if (a, b, c) and (x, y, z) are the triples with the largest sums, they are replaced with (a + z, b + y, c + x), which is then normalized by subtracting the smallest element of the triple from each element. This combination is chosen to minimize the largest values. In our example, this results in the set ((8,7,0),(6,0,0),(5,0,0),(4,0,0)). Combining the next two largest triples results in the set ((8,7,6),(5,0,0),(4,0,0)), which after normalization is represented by ((5,0,0),(4,0,0),(2,1,0)). Combining the next two results in ((5,4,0),(2,1,0)), and combining the last two produces (5,5,2), or (3,3,0) after normalizing, for a final partition difference of 3, corresponding to the partition (8), (7,4), and (6,5), which happens to be optimal in this case.

For the complete CKK algorithm, at each node we combine the two triples with the largest sums in every possible way rather than just one way, branching on each combination. For three-way partitioning, there are six ways to combine two triples, corresponding to the different permutations of three elements, and for k-way partitioning, there are k! branches at each node. For example, given the triples (a,b,c) and (x,y,z), their different possible combinations are (a+x, b+y, c+z), (a+x, b+z, c+y), (a+y, b+x, c+z), (a+y, b+z, c+x), (a+z, b+x, c+y), and (a+z, b+y, c+x).
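The three-way KK heuristic with normalized triples can be sketched directly from this description (our own reconstruction; triples are kept as sorted lists, so Python's list comparison orders them by largest sum):

```python
def kk_three_way(nums):
    """Three-way KK heuristic: combine the two triples with the largest
    sums as (a+z, b+y, c+x), normalize so the smallest element is zero,
    and repeat until one triple remains."""
    triples = sorted(([x, 0, 0] for x in nums), reverse=True)
    while len(triples) > 1:
        (a, b, c), (x, y, z) = triples[0], triples[1]
        t = sorted([a + z, b + y, c + x], reverse=True)
        t = [v - t[2] for v in t]          # normalize: subtract the smallest element
        triples = sorted([t] + triples[2:], reverse=True)
    return triples[0][0] - triples[0][2]   # final partition difference

kk_three_way([4, 5, 6, 7, 8])  # 3, matching the worked example above
```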
Since the smallest sum of each k-tuple is always zero after normalization, we only maintain tuples of k - 1 sums for k-way partitioning.

2.5 Pseudo-Polynomial-Time Algorithms

Technically, number partitioning is not strongly NP-complete, but can be solved in pseudo-polynomial time by dynamic programming. This requires memory proportional to n(k - 1)m^(k-1) for k-way partitioning of n numbers with a maximum value of m. As a result, these algorithms are not practical for multi-way partitioning. For example, three-way partitioning of 40 7-digit integers requires a petabyte (10^15 bytes) of storage.

2.6 Previous State of the Art

For two-way partitioning, CKK is the algorithm of choice. It is slightly faster than CGA without perfect partitions, and

much faster with many perfect partitions. In fact, if we restrict the numbers to 32-bit integers, arbitrarily large instances can be solved in less than a second. The success of this algorithm probably contributed to the complete absence of papers on algorithms for optimal number partitioning in the last decade.

For three-way partitioning, CKK also outperforms CGA. To solve large problems, however, the precision of the numbers must be restricted. For three-way partitioning of six-digit numbers, for example, CKK takes about a minute to solve the hardest random instances, which contain 30 numbers. We are not aware of any literature on optimal four-way partitioning.

For partitioning four or more ways, CGA is much more efficient than CKK. The reason is that CKK gets increasingly complex as k increases, increasing the constant time per node generation. Our implementation of CKK for three-way partitioning is specialized to three subsets, but this is impractical with more subsets, since the number of ways of combining tuples is k!. Specializing CGA for a particular value of k is easy, however. On problems without perfect partitions, our general implementation of CKK for four-way partitioning is ten times slower per node than our specialized CGA. We now present our new work.

3 Principle of Optimality

The key property of multi-way partitioning that underlies both our new algorithms is the following principle of optimality. In any optimal three-way partition, the numbers in any two of the subsets must be optimally partitioned two ways. More generally, if an optimal k-way partition includes a particular subset, then optimally partitioning the numbers not in that subset k - 1 ways will yield an optimal k-way partition.
To prove this, there are three cases to consider: 1) If the given subset has neither the smallest nor the largest sum, then it doesn't affect the partition difference, and the remaining numbers must be optimally partitioned k - 1 ways to minimize the partition difference. 2) If the given subset has the smallest sum, then an optimal (k - 1)-way partition of the remaining numbers minimizes the largest subset sum, thus minimizing the overall partition difference. 3) Similarly, if the given subset has the largest sum, then an optimal (k - 1)-way partition of the remaining numbers maximizes the smallest subset sum, thus minimizing the overall partition difference.

Note that it is not the case that in any optimal partition, any collection of subsets must be optimally partitioned. For example, in an optimal four-way partition, the two subsets with intermediate sums need not be optimally partitioned two ways, since they don't affect the partition difference.

4 Sequential Number Partitioning (SNP)

We now introduce the first of our two new algorithms, called sequential number partitioning (SNP), using three-way partitioning as our first example. We first choose one complete subset, and then optimally partition the remaining numbers two ways. Since we can't identify a priori a subset in an optimal partition, we generate each subset that could possibly be part of an optimal three-way partition, and for each such subset, we use CKK to optimally partition the remaining numbers two ways to get a complete three-way partition.

The standard way to generate subsets is to search an inclusion-exclusion binary tree, in which the leaf nodes represent all possible subsets. Each level of the tree corresponds to a particular number, and at each branch we either include the corresponding number in the subset, or exclude it. For example, the left subtree of the root contains all subsets that include the first number, and the right subtree of the root contains all subsets that exclude the first number.
We reduce the number of subsets considered with upper and lower bounds on their sum. To eliminate duplicate partitions that differ only by permuting the subsets, we require that the first subset have the smallest sum. If t is the sum of all the numbers, the first subset sum can then be no larger than t/3, or one of the remaining subsets would have to have a smaller sum.

We also enforce a lower bound on the first subset sum. For a given first subset, the best we could do would be to perfectly partition the remaining numbers two ways. Thus, the difference between half the sum of the numbers excluded from the first subset and the first subset sum is a lower bound on the three-way partition difference. If this difference is greater than or equal to the best three-way partition difference found so far, the corresponding first subset cannot be part of a better solution. In particular, if t is the sum of all the numbers, and d is the difference of the best three-way partition found so far, then the first subset sum must be at least (t - 2d)/3.

Given these lower and upper bounds on the first subset sum, we prune the inclusion-exclusion tree as follows: any branch where the sum of the numbers included so far exceeds the upper bound is pruned. Also, any branch where the sum of the included numbers plus all the numbers not yet assigned is less than the lower bound is pruned, since even including everything that remains could not reach the bound.

To efficiently search this inclusion-exclusion tree, we first sort the numbers in decreasing order, and decide whether to include or exclude the largest numbers first. The reason is that the largest numbers have the greatest impact on the sum so far and on the sum of the remaining numbers, resulting in more pruning near the root of the tree.

The overall algorithm for three-way partitioning works as follows: we first run the KK heuristic to get an approximate three-way partition, and use this to compute the lower bound on the first subset sum.
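The pruned inclusion-exclusion search for candidate first subsets can be sketched as follows (a hypothetical helper of our own; the caller supplies the bounds):

```python
def bounded_subsets(nums, lower, upper):
    """All subsets whose sum lies in [lower, upper], found by searching an
    inclusion-exclusion binary tree over the numbers in decreasing order,
    with the two pruning rules described above."""
    nums = sorted(nums, reverse=True)
    n = len(nums)
    suffix = [0] * (n + 1)                 # suffix[i] = sum of nums[i:]
    for i in range(n - 1, -1, -1):
        suffix[i] = suffix[i + 1] + nums[i]
    found, chosen = [], []

    def dfs(i, s):
        if s > upper:                      # included sum already too large
            return
        if s + suffix[i] < lower:          # even including everything left falls short
            return
        if i == n:
            found.append(list(chosen))
            return
        chosen.append(nums[i])             # include nums[i]
        dfs(i + 1, s + nums[i])
        chosen.pop()                       # exclude nums[i]
        dfs(i + 1, s)

    dfs(0, 0)
    return found
```

For (4, 5, 6, 7, 8), t = 30, so the upper bound is t/3 = 10; with a lower bound of 1 (any nonempty subset), the seven candidate first subsets are the five singletons plus (5, 4) and (6, 4).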
Next we search an inclusion-exclusion tree for subsets within the lower and upper bounds. For each such subset, we optimally partition the remaining numbers two ways using CKK. We then compute the corresponding three-way partition difference. If this solution is better than the best one found so far, we increase the lower bound on the first subset sum, and continue exploring all first subsets that could possibly lead to a better solution.

Extending this algorithm to four-way partitioning is straightforward. We first run KK to get an approximate four-way partition with a difference of d. We then select first subsets by searching an inclusion-exclusion tree with an upper bound of t/4, and a lower bound of (t - 3d)/4. For each such first subset, we call SNP to optimally partition the remaining elements three ways. This involves selecting a second subset by searching another inclusion-exclusion tree, with a sum greater than or equal to that of the first subset, since the subset sums are chosen in non-decreasing order. The extension to more than four subsets follows analogously.
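Putting the pieces together, a brute-force rendition of three-way SNP follows (our own simplification: exhaustive subset enumeration and a naive balanced two-way split stand in for the pruned tree search and CKK):

```python
from itertools import combinations

def snp_three_way(nums):
    """Three-way SNP, brute force: try every candidate first subset with
    sum at most t/3, optimally split the remaining numbers two ways, and
    keep the best three-way partition difference."""
    nums = list(nums)
    t = sum(nums)
    best = None
    for r in range(1, len(nums)):
        for first in combinations(range(len(nums)), r):
            s = sum(nums[i] for i in first)
            if 3 * s > t:                  # first subset must have the smallest sum
                continue
            rest = [nums[i] for i in range(len(nums)) if i not in first]
            rt = sum(rest)
            # most balanced two-way split of the rest: the largest subset
            # sum not exceeding half of rt
            half = max(sum(c) for rr in range(len(rest) + 1)
                       for c in combinations(rest, rr) if 2 * sum(c) <= rt)
            sums = (s, half, rt - half)
            d = max(sums) - min(sums)
            if best is None or d < best:
                best = d
    return best
```

snp_three_way([4, 5, 6, 7, 8]) returns 3, the optimal difference from the earlier example, e.g. via the first subset (8) and the balanced split (7, 4) versus (6, 5) of the rest.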

5 Recursive Number Partitioning (RNP)

The main drawback of SNP is that CKK is only used to determine the last two subsets, leaving the much less efficient inclusion-exclusion tree searches to determine all the previous subsets. Here we propose an alternative algorithm, which we call recursive number partitioning (RNP). We begin with the example of four-way partitioning.

We first run the KK heuristic to get an approximate four-way partition. Then we divide all the numbers into two subsets, each of which will later be partitioned two ways. This top-level partitioning is done in every way that could possibly lead to a four-way partition better than the best one found so far. For a given top-level partition, the best we could do would be to perfectly partition each of the two subsets. Thus, half of the larger subset sum minus half of the smaller subset sum is a lower bound on the difference of any four-way partition derived from a given top-level partition. This bound must be less than the current best four-way partition difference for a given top-level partition to lead to a better solution. Thus, the current best solution imposes an upper bound on the difference of the top-level partitions. The top-level partitioning is done using CKK with this upper bound, but returning all two-way partitions within the bound, rather than a single optimal partition. We use CKK instead of CGA because it finds better top-level partitions sooner.

For each such top-level partition, we use CKK to optimally partition the smaller of the two subsets. If the resulting partition, combined with a perfect partition of the larger subset, would lead to a better four-way partition, we optimally partition the larger subset as well. We compute the overall partition difference by subtracting the smallest of the four subset sums from the largest. If this four-way partition is better than the best one found so far, we update the upper bound on the initial top-level partitioning.
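A brute-force rendition of four-way RNP (again our own simplification: exhaustive top-level splits and naive balanced bisection stand in for CKK with the bound):

```python
from itertools import combinations

def balanced_split(sub):
    """Most balanced two-way split of sub, as (smaller sum, larger sum)."""
    st = sum(sub)
    lo = max(sum(c) for r in range(len(sub) + 1)
             for c in combinations(sub, r) if 2 * sum(c) <= st)
    return lo, st - lo

def rnp_four_way(nums):
    """Four-way RNP, brute force: try every top-level two-way split,
    optimally bisect each half, and keep the best overall difference."""
    nums = list(nums)
    n = len(nums)
    best = None
    for mask in range(1 << n):
        left = [nums[i] for i in range(n) if mask >> i & 1]
        right = [nums[i] for i in range(n) if not mask >> i & 1]
        a1, a2 = balanced_split(left)      # the four subset sums are
        b1, b2 = balanced_split(right)     # a1 <= a2 and b1 <= b2
        d = max(a2, b2) - min(a1, b1)
        if best is None or d < best:
            best = d
    return best
```

For example, rnp_four_way([1, 2, 3, 4, 5, 6, 7, 8]) finds the perfect partition (difference 0) via the top-level split (8, 1, 7, 2) versus (6, 3, 5, 4), each half bisecting into two subsets of sum 9.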
In either case, we continue generating top-level partitions within the bound, in the order generated by CKK, until all such partitions that could lead to a better four-way partition have been explored.

For three-way partitioning, SNP and RNP are identical, but they differ for four or more subsets. For five-way partitioning, for example, RNP first searches an inclusion-exclusion tree to select a first subset, and then calls RNP to optimally partition the remaining numbers four ways. For six-way partitioning, RNP first partitions all the numbers two ways, and then calls RNP, or equivalently SNP, to optimally partition each of the two subsets into three subsets each. In general, for an even number of subsets, RNP starts with two-way partitioning at the top level, and then recursively partitions each half. With an odd number of subsets, RNP searches an inclusion-exclusion tree for a first subset, then calls RNP to divide the remaining numbers two ways, and so on.

6 Experimental Results

We experimented with three, four, and five-way partitioning. Each timing shows the total time to solve 100 problem instances, in the form days:hours:minutes:seconds. The average time for a single instance is one-hundredth of these values. The numbers were uniformly distributed from zero to the maximum value. In our first set of experiments, maximum values were chosen to allow solving the hardest instances with the existing algorithms. All experiments were run on an IBM IntelliStation with a two-gigahertz AMD Opteron processor. All the algorithms run in space that is linear in the number of numbers n, since all searches are depth-first.

6.1 Three-Way Partitioning

For three-way partitioning, CKK is the best previous algorithm, and was used for comparison. We chose a maximum value of ten million, since these are the largest integers for which CKK could solve the hardest instances. Table 1 shows our results. The first column contains the problem size N, or number of numbers.
The second column gives the average optimal partition difference. The third column shows the running time for CKK to optimally solve all 100 problem instances, and the fourth column shows the running time for SNP to solve the same instances. RNP is the same as SNP for three-way partitioning. The last column shows the running time of CKK divided by the running time of SNP, which represents the speedup over the previous state of the art.

Table 1: Three-Way Partitioning of 7-Digit Integers (columns: N, average optimal difference, CKK time, SNP time, and the ratio CKK/SNP)

The first thing to notice is that the running times of both algorithms exhibit the easy-hard-easy transition characteristic of number partitioning. We don't know why the most difficult problems for CKK contain 35 numbers, while the most difficult problems for SNP contain only 33 numbers. The next thing to notice is that SNP is much faster than CKK. While CKK takes over an hour on average to solve the hardest problem instances, SNP solves the hardest instances in less than a second on average. For partitioning 35 numbers, SNP runs almost 25,000 times faster than CKK.

6.2 Four-Way Partitioning

Table 2 shows our results for four-way partitioning, in the same format as Table 1. For four-way partitioning, CGA is faster than CKK, and thus CGA is used for comparison. Here we used a maximum value of a hundred thousand, as these are the largest integers for which CGA could solve the hardest instances. We also ran both SNP and RNP on these problems.

The last column shows the ratio of the running times of RNP and CGA. In the empty positions, SNP and RNP took less than half a second to solve all 100 problem instances.

Table 2: Four-Way Partitioning of 5-Digit Integers (columns: N, average optimal difference, CGA, SNP, and RNP times, and the ratio RNP/CGA)

Again we see running times going from days to seconds on the hardest problems. RNP partitions 33 numbers over a million times faster than CGA, reducing the running time from almost 9 days to less than half a second for 100 problem instances. RNP is consistently faster than SNP. The running time of SNP keeps increasing on the harder problems, while that of RNP decreases beyond 29 numbers.

6.3 Five-Way Partitioning

Table 3 shows our results for five-way partitioning, in the same format as Tables 1 and 2. As for four-way partitioning, CGA is more efficient than CKK, and thus CGA is used for comparison. Here we used a maximum value of ten thousand, since these were the largest integers for which CGA could solve the hardest problems. We also ran both SNP and RNP on these problems. The last column shows the ratio of the running times of RNP and CGA.

Table 3: Five-Way Partitioning of 4-Digit Integers (columns: N, average optimal difference, CGA, SNP, and RNP times, and the ratio RNP/CGA)

Again we see orders-of-magnitude speedups, although not as large as for three and four-way partitioning. The reason may be the inability of CGA to solve problems with larger values. For partitioning 29 numbers, RNP runs over four thousand times faster than CGA. Here we see a larger difference between the performance of RNP and SNP. For example, RNP partitions 29 or 30 numbers almost five times faster than SNP.
6.4 Empirical Asymptotic Time Complexity

These dramatic performance improvements, and the fact that the ratios of the running times increase with increasing problem difficulty, suggest that both SNP and RNP are asymptotically faster than CKK and CGA. Due to the pruning rules employed, however, these algorithms are very difficult to analyze. The next best thing is to estimate their asymptotic complexity empirically. We do this by computing the ratios of the running times for pairs of successively larger problem instances. To do this, we chose problem instances without many perfect partitions, since as perfect partitions become more common, the running times of these algorithms decrease. Furthermore, without perfect partitions, the running times of these algorithms are insensitive to the size of the numbers. Thus, we chose integers uniformly distributed from zero to 100 million, since these are the largest integers we could handle using 32-bit arithmetic with up to 40 numbers.

Table 4 shows the results of partitioning these numbers three, four, and five ways. The left-most column shows the number of numbers. The next four columns show results for three-way partitioning, the middle four for four-way partitioning, and the last four for five-way partitioning. Each group of columns starts with the total running time for the best previous algorithm to solve 100 uniform random problem instances, which is CKK for three-way partitioning, and CGA for four and five-way partitioning. The next column in each group gives the running time for the given number of numbers divided by the running time for one less number, from the line immediately above it. This ratio is the geometric growth between the two levels. The next two columns in each group give the corresponding data for RNP.

For three-way partitioning, the geometric growth rate for CKK ranges between about 2.5 and 3. The corresponding growth rate for RNP stabilizes at about 1.8 for up to 35 numbers.
For more numbers, the growth rate decreases as the problems get easier, because perfect partitions become more common. This shows that RNP can optimally solve arbitrarily large three-way problems with up to 8-digit numbers. For four-way partitioning, the geometric growth of CGA ranges between about 3 and 4. The corresponding growth rate for RNP varies from about 1.8 to 1.9, oscillating between even and odd problem sizes. Again, we see clear evidence of an asymptotic improvement. If we compare three-way to four-way partitioning with RNP, four-way partitioning is only slightly more expensive for hard problems. For five-way partitioning, the geometric growth rate of CGA ranges between about 3 and 5. The corresponding growth rate for RNP ranges between about 2 and 2.4 for hard problems, with a pair of consecutive outliers, one at 2.98, whose geometric mean falls back in this range. These data strongly suggest that the asymptotic growth rate of RNP is lower than that of CKK or CGA. Since these values represent the base of the exponential complexity of the algorithms, any significant difference between them will eventually produce arbitrarily large constant-factor speedups.
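The geometric growth ratios reported in Table 4 are simply each running time divided by the time for one fewer number; for an algorithm whose running time grows as c*b^n, every such ratio equals b. A trivial sketch of the computation:

```python
def growth_ratios(times):
    """Ratio of each running time to its predecessor, i.e. the empirical
    base of the exponential growth between successive problem sizes."""
    return [t / s for s, t in zip(times, times[1:])]

# Synthetic check: running times growing as 1.8**n give a constant ratio of 1.8.
ratios = growth_ratios([1.8 ** n for n in range(20, 26)])
```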

Table 4: Asymptotic Growth of Three, Four, and Five-Way Partitioning of 8-Digit Integers (columns: N, then, for each of three, four, and five-way partitioning, the running time and geometric growth ratio of the best previous algorithm and of RNP)

7 Further Work

To optimize performance, the implementations of all these algorithms were specialized for partitioning into three, four, and five subsets. The next step is to partition numbers into more subsets, using general implementations of these algorithms, to see if the same trends continue. In addition, a challenging task will be to prove theoretically that our new algorithms are in fact asymptotically faster than CKK and CGA.

8 Conclusions

We have introduced two new algorithms for multi-way number partitioning, sequential number partitioning (SNP) and recursive number partitioning (RNP). Both algorithms rely on the insight that if an optimal k-way partition of a set of numbers includes a particular subset, then optimally partitioning the remaining numbers not in that subset k - 1 ways results in an optimal k-way partition. Both algorithms dramatically outperform the best previous algorithms for this problem, the complete greedy algorithm (CGA) and the complete Karmarkar-Karp algorithm (CKK), by orders of magnitude. For four-way partitioning of 33 5-digit numbers, for example, RNP runs over a million times faster than CGA. For three-way partitioning of eight-digit numbers, SNP, or equivalently RNP, can solve the hardest random problem instances in less than ten seconds on average. For partitioning into four or more sets, RNP outperforms SNP.
An empirical exploration of the performance of RNP on problem instances without perfect partitions strongly suggests that it is asymptotically faster than both CGA and CKK. In addition to the specific contributions to number partitioning, this work demonstrates that better search techniques can still achieve enormous speedups over the previous state of the art in simple combinatorial problems.

Acknowledgments

This research was supported by NSF grant No. IIS. Thanks to Satish Gupta and IBM for providing the machine these experiments were run on.

References

[Garey and Johnson, 1979] Michael R. Garey and David S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman, New York, NY, 1979.

[Karmarkar and Karp, 1982] Narendra Karmarkar and Richard M. Karp. The differencing method of set partitioning. Technical Report UCB/CSD 82/113, Computer Science Division, University of California, Berkeley, 1982.

[Korf, 1995] Richard E. Korf. From approximate to optimal solutions: A case study of number partitioning. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI-95), Montreal, Canada, August 1995.

[Korf, 1998] Richard E. Korf. A complete anytime algorithm for number partitioning. Artificial Intelligence, 106(2), December 1998.

[Mertens, 1998] Stephan Mertens. Phase transition in the number partitioning problem. Physical Review Letters, 81(20), November 1998.


More information

BackTracking Introduction

BackTracking Introduction Backtracking BackTracking Introduction Backtracking is used to solve problems in which a sequence of objects is chosen from a specified set so that the sequence satisfies some criterion. The classic example

More information

Framework for Design of Dynamic Programming Algorithms

Framework for Design of Dynamic Programming Algorithms CSE 441T/541T Advanced Algorithms September 22, 2010 Framework for Design of Dynamic Programming Algorithms Dynamic programming algorithms for combinatorial optimization generalize the strategy we studied

More information

B553 Lecture 12: Global Optimization

B553 Lecture 12: Global Optimization B553 Lecture 12: Global Optimization Kris Hauser February 20, 2012 Most of the techniques we have examined in prior lectures only deal with local optimization, so that we can only guarantee convergence

More information