Methods for Solving Subset Sum Problems
R.J.W. James, Department of Management, University of Canterbury, Private Bag 4800, Christchurch, New Zealand
R.H. Storer, Department of Industrial and Systems Engineering, Lehigh University, Mohler Laboratory, 200 West Packer Ave, Bethlehem, PA 18015, USA

Abstract

The subset sum problem is a simple and fundamental NP-hard problem that is found in many real-world applications. For the particular application motivating this paper (combination weighers), a solution to the subset sum problem is required that is within a small tolerance level and can be computed quickly. We propose five techniques for solving this problem. The first is an enumeration technique that is capable of solving small problems very efficiently. The next two techniques are based on an efficient number partitioning algorithm; they can solve small problems very efficiently when the solution uses approximately half the available elements (numbers), and outperform dynamic programming approaches proposed in the literature. The last two techniques use a direct approach of improving a solution. These techniques were found to perform very efficiently on large problems and to outperform heuristic techniques currently proposed in the literature.

1 Introduction

The subset sum problem is an important and fundamental combinatorial optimisation problem from which many real-world problems are derived. The application we consider is a problem faced by automated packing machines, which are commonly used in the food industry, although the techniques proposed here have wider applications. In this application a bag of food, say crisps or pretzels, needs to have a minimum weight c of the product in it (i.e. c is the weight printed on the label). Automated packing machines, called combination weighers, are typically used in such applications. These machines consist of a number (often around 32) of containers or buckets.
Each bucket has a built-in scale that can measure the weight of product it contains. Product is typically fed into each bucket by a vibratory feeder. The vibratory feeder is set to dispense a specified target weight. For example, each bucket may have a target weight of 1/6 of the total package weight c, and we would normally expect, in this instance, 6 containers to be used to fill a bag of product. Because vibratory feeders are not particularly accurate, especially when dispensing items like pretzels or crisps, it is normal for the actual weight in these containers to vary significantly under or over the target weight. The reason for having buckets in the first place is that it allows us to select a subset of buckets whose contents sum closely to the total package weight. The question to be answered is: which set of containers should be used to fill a bag in order to achieve a packing weight no less than the stated package size, while minimizing the amount of overfilling? The problem can be stated as follows:
\[
\begin{aligned}
\text{minimize} \quad & \sum_{i=1}^{n} w_i x_i \\
\text{subject to} \quad & \sum_{i=1}^{n} w_i x_i \ge c, \\
& x_i \in \{0,1\}, \quad i = 1, \dots, n
\end{aligned}
\]

where w_i is the weight of product in bucket i. As a further refinement to this formulation we also make the assumption that in practice we can only measure the weights to a given tolerance level, and hence if we can find a solution that is between c and c + ε then this is an acceptable solution and no further searching is required. Following standard practice, we assume that each of the containers has the same target weight and that the actual weights in these containers are normally distributed around the target. However, we will also experiment with different distributions to see how they affect the solution times of the different algorithms we propose.

The subset sum problem is NP-hard (Garey and Johnson, 1979) and therefore heuristic methods are most commonly used to solve it; however, several papers have proposed exact methods. Martello and Toth (1984a) use a combination of dynamic programming and branch and bound in solving this problem. This technique was able to solve problems of up to 28 items drawn from a uniform population on a range from 1 upwards. Pisinger (1999) proposed a restricted dynamic programming method based on balanced solutions. His technique was tested on various sized problems, with various ranges in the weights of the items. The technique was able to solve 3000-item problems that were drawn from a uniform population from 1 to 10^6. This approach was restricted to solving the problem for a single constraint value (c). Some of the earlier heuristics for the subset sum problem are described and compared in Martello and Toth (1985). Their conclusion was that the algorithm of Martello and Toth (1984b) was the most efficient scheme. This was essentially a greedy scheme whereby items are added to the set in decreasing weight order until no more can be added due to the constraint.
A successful generalisation of this scheme was to initially select the first element in the set and then run the greedy algorithm on the remaining elements. More recent research has examined the use of heuristics to solve this problem. Gens and Levner (1994) used a combination of a heuristic and dynamic programming, Fischetti (1990) used a local search based heuristic, while Ghosh and Chakravarti (1999) used a local search over a permutation search space together with a greedy heuristic. Keller et al. (2000) developed two linear-time construction heuristics which outperform those proposed by Martello and Toth (1984b), which are O(n^2). Przydatek (2002) developed a two-phase subset sum approximation algorithm. The first phase randomly generates a solution to the problem while the second phase improves the solution. This is repeated a given number of times and the best result found is returned. Przydatek (2002) compared this algorithm to the Martello and Toth (1984b) algorithm and found it to be superior.

The problem considered in this paper sits between the problems currently studied in the literature: it does not require an optimal solution, as the exact solution methods provide, since it only requires a solution within a given tolerance range. However, the heuristic methods currently proposed in the literature do not provide the level of accuracy required to give a solution within the tolerance level, or the minimum if no solution lies within the tolerance level.
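To make the formulation and its early-exit tolerance rule concrete, here is a minimal brute-force sketch in Python. It is illustrative only, not any of the five techniques proposed in the paper, and the bucket weights, requirement and tolerance below are invented:

```python
from itertools import combinations

def best_subset(weights, c, eps):
    """Search all subsets for the smallest total that is at least c.

    Mirrors the formulation above: minimize the packed weight subject
    to meeting the label weight c, stopping early as soon as a total
    within [c, c + eps] is found.
    """
    best_sum, best_idx = float("inf"), None
    n = len(weights)
    for r in range(1, n + 1):
        for idx in combinations(range(n), r):
            s = sum(weights[i] for i in idx)
            if c <= s < best_sum:
                best_sum, best_idx = s, idx
                if s <= c + eps:      # within tolerance: accept and stop
                    return best_sum, best_idx
    return best_sum, best_idx

# Eight hypothetical bucket weights (grams) and a 600 g label weight
weights = [102, 97, 105, 99, 101, 96, 104, 98]
print(best_subset(weights, c=600, eps=1))   # → (600, (0, 1, 2, 3, 4, 5))
```

This exhaustive search is exponential in the number of buckets, which is precisely why the techniques developed in the paper are needed.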
To the authors' knowledge, none of the literature to date on solving subset sum problems has used normally distributed data in testing the performance of the solution techniques. Previous testing has assumed numbers to be uniformly distributed integers between zero and some large number (or, similarly, real numbers between 0 and 1 with fixed precision), or the data has been generated according to some scheme that is known to be difficult to solve by branch and bound and dynamic programming (Chvatal, 1980). Our application leads us to experiment with other distributions and, in particular, distributions with smaller coefficients of variation than those typically assumed. As will be seen in the results, the distribution of item weights turns out to have a significant effect on algorithm performance.

2 Linkages to Number Partitioning

The subset sum problem is similar in nature to another well-known combinatorial optimisation problem, the number partitioning problem. The number partitioning problem aims to split a set of numbers in two so that the difference between the sums of the two sets is minimized. Thus number partitioning is a special case of subset sum where c is exactly half the total weight T = Σ w_i. The subset sum problem can be converted to number partitioning by adding a new dummy variable with weight equal to two times the required amount less the total of all the items (Korf, 1998). The dummy variable essentially forces one of the sets to be either larger or smaller in order to get (close to) the required amount in one of the sets. An important difference is that number partitioning can be viewed as a two-sided problem, in that the goal is to get as close as possible to the target sum regardless of whether the proposed solution is larger or smaller than the target. The subset sum problem is typically formulated to be one-sided, in that we want to get as close as possible to the target sum without going under the target.
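Korf's dummy-variable conversion can be checked numerically. A small sketch follows; the numbers are invented, and c is chosen above half the total so that the dummy weight stays nonnegative:

```python
def to_partition_instance(weights, c):
    """Append a dummy element of weight 2*c - sum(weights).

    After the reduction, the new instance has total 2*c, so a perfect
    two-way partition splits it into c and c; the side that excludes
    the dummy is then a subset of the original items summing to c.
    """
    return weights + [2 * c - sum(weights)]

w = [8, 5, 4, 3]                      # total T = 20
inst = to_partition_instance(w, 13)   # dummy weight = 26 - 20 = 6
print(sum(inst), inst[-1])            # → 26 6
# A perfect partition exists: {8, 5} versus {4, 3, 6}; the dummy-free
# side {8, 5} sums to the required c = 13 in the original problem.
```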
When the target sum is exactly half the total sum (as in number partitioning), the one-sided problem is as easy as the two-sided problem. This is true since, after partitioning, one set will be slightly below the target while the other will be slightly above; one simply chooses the set meeting the desired requirement (i.e. under or over the target). When the target sum is not exactly half the total and the problem is one-sided, additional complications arise. However, number partitioning based algorithms can be successfully adapted to the subset sum problem in certain cases, as we will see subsequently. Clearly the application motivating our study requires a one-sided approach, since failing to meet the printed package weight can result in substantial fines for a manufacturer.

Karmarkar and Karp (1982) proposed a very efficient O(n log n) and effective differencing heuristic, which we will refer to as the KK heuristic, for solving number partitioning problems. Essentially the heuristic sorts the items from largest to smallest and places them on a list. It then removes the two largest items on the list, one going into one set and the other into the opposite set. The difference between the two items is then merged back into the list so as to maintain the decreasing value order. This continues until one item remains on the list, whose value is the difference in the partition. Korf (1998) extended the concept of the KK heuristic by incorporating it into a branch and bound framework called the Complete Karmarkar-Karp (CKK) algorithm. At each branch there are two options: consecutive items on the list are either subtracted (placed in opposite sets) from each other or added together (placed in the same set). If at any node the value of the first item is greater than the sum of the remaining elements, then putting the largest item on one side and all remaining elements on the other will result in the minimum difference between the two partitions.
Thus no further branching is required from such a node as the optimal solution among all successors is at hand.
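The KK differencing step can be implemented in O(n log n) with a max-heap. Here is a minimal illustrative Python version of the standard heuristic (not the authors' code, and not Korf's full CKK search):

```python
import heapq

def kk_difference(nums):
    """Karmarkar-Karp differencing: repeatedly replace the two largest
    numbers by their difference (committing them to opposite sets).
    The last remaining value is the partition difference achieved."""
    heap = [-x for x in nums]              # max-heap via negation
    heapq.heapify(heap)
    while len(heap) > 1:
        a = -heapq.heappop(heap)           # largest
        b = -heapq.heappop(heap)           # second largest
        heapq.heappush(heap, -(a - b))     # merge the difference back in
    return -heap[0]

print(kk_difference([8, 7, 6, 5, 4]))      # → 2
```

Note the heuristic is not exact: {8, 7} versus {6, 5, 4} is a perfect split with difference 0, while differencing settles for 2. CKK recovers optimality by also branching on the "sum" (same-set) decision.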
Korf (1998) tested this procedure using integer numbers with 10 significant digits that were drawn from a uniform distribution from 0 to 10^10. It was found that the hardest problems to solve at this precision were those with around 35 to 36 items. What was unclear from Korf (1998) was the effect of the distribution and variance of the population from which the items were drawn; in particular, we are interested in how this affects the speed with which we can find a solution to the problem.

3 Seeded CKK

In order to overcome the problem of initially generating many infeasible solutions when applying Korf's algorithm (CKK) to the subset sum problem, we propose a seeded CKK variant that we have termed SCKK. Essentially it starts the branch and bound search with a known feasible solution, which could be any solution to the problem. In our experiments we create this feasible starting solution from the KK solution by moving one or more elements between the two sets. This solution is then improved by a one-pass heuristic that tries to swap taken items (items in the subset) with non-taken items of smaller value, in order to reduce the total value of the solution while still maintaining feasibility. Given a starting feasible solution, the next step is to determine the starting set of branch decisions (i.e. the path in the branch and bound tree) that will produce our predefined starting solution. This is done by traversing down the branch and bound tree in a recursive fashion. At each node in the tree, the starting-solution values of the next two items in the list are compared. If they are in the same set in our starting solution, then the same-side branch is taken; if they are on opposite sides, then a different-side branch is defined. When calculated items are involved, the value of the original parent item is used, i.e. the value of the largest original item involved in the addition or subtraction calculation.
Once the path to the root node representing our seed solution is defined, we then start the tree search using depth-first search with backtracking. The branch and bound simply switches from the original branching decision to the opposite one as we backtrack at each level along the depth-first path defined by the seed solution. When branching from previously unvisited nodes, we again follow the difference-first rule.

4 Enumeration

Enumeration can be an efficient method for solving small subset sum problems if we carefully choose the order of the variables evaluated and predefine the values of the variables in order to start the search in a promising area of the solution space. The approach we found to be the most effective is presented in Figure 1.
Generate a feasible solution and form two sets:
    taken = { i : x_i = 1 };  nottaken = { i : x_i = 0 }
Sort taken such that w_i <= w_(i+1) for all i in taken
Sort nottaken such that w_j <= w_(j+1) for all j in nottaken
B = sum{ w_j : j in taken }                               // best solution found to date
Enumerate_Taken(taken, nottaken, sum{ w_j : j in taken })

Enumerate_Taken(taken, nottaken, T)
    if (taken != {}) then
        i = first element in taken
        Enumerate_Taken(taken - i, nottaken, T)               // test first with item i taken
        if (c <= T - w_i + sum{ w_j : j in nottaken }) then   // can still reach a feasible solution
            Enumerate_Taken(taken - i, nottaken, T - w_i)     // test without item i taken
        end if
    else
        Enumerate_NotTaken(nottaken, T)
    end if

Enumerate_NotTaken(nottaken, T)
    if (nottaken = {}) or (T >= c) then
        if (T < B) then B = T
        if (T <= c + tolerance) then terminate the enumeration
    else
        i = first element in nottaken
        if (T + w_i < B) and (c <= T + sum{ w_j : j in nottaken }) then
            Enumerate_NotTaken(nottaken - i, T + w_i)         // test first with item i taken
            Enumerate_NotTaken(nottaken - i, T)               // test without item i taken
        end if
    end if

Figure 1. The Enumeration Algorithm

When the number of items required is small this procedure is very fast due to the bound in Enumerate_NotTaken; however, as the problem size increases the number of combinations grows exponentially, and enumeration starts to take an excessive amount of time to solve the problem.

5 Direct Search

Direct Search is a new method of solving subset sum problems. It is based on the premise that for a large set of items it is possible to fine-tune the solution in a systematic way that keeps the number of items taken constant and in the process eliminates large areas of the solution space, thus improving efficiency significantly. The performance of this search relies on the starting solution having the same number of items as a solution that is within the required tolerance.
If the difference between the number of items in the starting solution and the number of items in a desired solution is large, then the computational time required to find the final solution could be large. Thus this algorithm will be better suited to certain problem data sets, particularly those with smaller coefficients of variation. The starting solution is the same as the one used for the seeded CKK procedure. If a solution is not found with the current number of items taken, then the number of items is systematically changed (one added, one subtracted, two added, two subtracted, etc.) in order to try every possible combination. The number of taken items is always kept within a valid range determined from the number of items taken when the smallest items are used, giving the maximum number of items required for a valid solution, and the number of items taken when the largest items are used, giving the minimum number of items required for a valid solution. A summary of the direct search is shown in Figure 2.

Use a heuristic to find a feasible solution x to the problem
Calculate the minimum and maximum number of items that could be taken
Starting with the current solution x:
Repeat
    Increase the value of the solution by swapping taken items with
        non-taken items of higher value
    R = 1
    Repeat
        Try to improve the solution by swapping R taken items with R
            non-taken items of lower value, then swap other taken items
            with items of higher value
        If at any time a solution within the tolerance range is found,
            terminate the search
        R = R + 1
    Loop until R > number of items taken
    Change the original solution by adding or subtracting taken items so
        that the number of items taken covers all possible values from
        the minimum to the maximum
Loop until all numbers of items in the possible range have been tried

Figure 2. The Direct Search Algorithm

This process can take a long time if a solution within the tolerance level has a significantly different number of taken items than the starting solution, or if there is no solution within the tolerance for the problem being solved. The procedure outlined above is guaranteed to find the optimal solution, and for small problem sizes it is often necessary to adjust the number of items taken from that produced by the starting solution. However, when we have a large number of items to take there are usually many solutions that will satisfy our tolerance level, and it is more common for there to be a solution within the tolerance level with the initial number of items taken. In order to speed up the process for larger problems we can define a heuristic search, based on the processes above, to find a solution. Rather than investigating all swaps involving 1, 2, 3, etc. items, the heuristic search only considers swaps involving a single taken item and a lower-valued non-taken item.
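A single improvement pass of this kind can be sketched as follows. This is a simplified illustration of the swap move, not the authors' implementation, and the weights, requirement and index sets below are invented:

```python
def best_single_swap(weights, taken, c):
    """Find the swap of one taken item for a cheaper non-taken item
    that most reduces the packed weight while keeping the total >= c.

    `taken` is a set of indices into `weights`.
    Returns (i_out, j_in), or None if no feasible improving swap exists.
    """
    total = sum(weights[i] for i in taken)
    best, best_gain = None, 0
    for i in taken:
        for j in range(len(weights)):
            if j in taken:
                continue
            gain = weights[i] - weights[j]        # reduction in overfill
            if gain > best_gain and total - gain >= c:
                best, best_gain = (i, j), gain
    return best

w = [105, 104, 102, 101, 99, 98, 97, 96]
taken = {0, 1, 2, 3, 4, 5}                 # total = 609, requirement c = 600
print(best_single_swap(w, taken, 600))     # → (0, 7): swap 105 out, 96 in
```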
Once the best candidate move is identified, the move is made and the process repeats from the new solution. The heuristic terminates only when a solution is within the tolerance. The danger, of course, is that we may not find a solution within tolerance when one exists; for larger problems this seems unlikely.

6 Computational Experiments

The above solution algorithms and heuristics were coded in Microsoft Visual C++ 6.0, and all the following computational experiments were run on an AMD Athlon XP CPU running at 1.53 GHz with 1 GB RAM under the Windows XP operating system.

6.1 Small Problems

In order to test the various techniques discussed in this paper we solved 50 randomly generated problems with 32 containers. The required amounts (subset totals) tested were c = 8, 10 and 16. In order to test whether the results differed under different data distributions, we used both normal and uniform distributions. The normal distributions all had a mean of 1 with three different standard deviations: 0.1, 0.2 and 0.3. For the uniform distributions, the maximum and minimum were set to 1 ± 3 standard deviations. All problems were solved to a tolerance of 10^-8.

We then ran these problems using the algorithms described previously, with the exception of the Direct Search. We found that this technique took a long time to solve 32-item problems and was not at all competitive with the other techniques proposed. The reason for this was that the performance of the search depends on having a starting point with the same number of containers as a solution within the tolerance levels. This was not always possible for small problems; hence it took the search a great deal of time to find a solution that would meet the tolerance requirement. In some cases no solution was within the tolerance levels, and hence the direct search ended up effectively doing a complete enumeration of the solution space. The results for these computational experiments are outlined in Table 1.

[Table 1. Average Computational Times (in seconds) for 32 container problems. Columns: Enum, CKK and SCKK for each requirement (c = 8, 10, 16); rows: Normal(1, 0.1), Normal(1, 0.2) and Normal(1, 0.3) with their matched uniform distributions, plus averages. The numeric entries were not recovered in transcription.]

From Table 1 we can draw the following conclusions. CKK is the fastest technique to use when the required amount is 16, while enumeration is faster when the required amounts are smaller (10 and 8). This can be explained by the fact that the subset sum problem is most similar to number partitioning when the requirement is 16, and CKK is number partitioning based. Enumeration becomes orders of magnitude faster as the number of items necessary to meet the requirement is reduced. On the other hand, CKK and SCKK were slowest solving problems with a requirement value of 10, due to the structure of the problem becoming more difficult to solve with one larger dummy variable dominating the KK process. However, as the requirement reduces further, the bounds start to make the solution process easier for CKK and SCKK.
The distribution of the data can have an impact on the performance of the search. Generally the enumeration appears to perform better on uniform data when the requirement is 16, but prefers normal data when the requirement is 8 or 10. There does not appear to be a simple explanation for the CKK and SCKK results. Generally the enumeration's computational times become worse as the variance in the data increases, whereas the CKK's and SCKK's performance improves as the variance in the data increases. In most cases the CKK outperforms the SCKK. In many cases this is due to the extra overhead required to implement the SCKK; at other times the KK solution may simply be closer to a good solution than the solution with which we have seeded the search. As was indicated previously, we must be aware that this conclusion is valid only for the starting point generated by our feasibility-restoration heuristic; other starting points may produce better or worse results for any problem instance.

Further statistical analysis was carried out on the effect that the distribution had on the computational time for each technique. From this analysis it was found that in some instances there were statistically significant differences in the computational times when the problem data was generated from different distributions. As would be expected, we found that as the variance increased, significant differences between distributions became more likely. Of the 54 comparisons between the means of the CKK and SCKK using normal and uniform data, 24 were found to be statistically significantly different. In all but two of these cases the CKK and SCKK found it more difficult to solve problems drawn from a normal distribution than a uniform distribution. In both exceptions to this rule the variance of the data was 0.1. Of the 27 comparisons between the means of the enumeration using normal and uniform data, 16 were found to be statistically significantly different. In all but two of these cases the enumeration found it more difficult to solve problems drawn from a uniform distribution than a normal distribution. Once again, both of the exceptions to this rule occurred when the variance of the data was 0.1. From this we can conclude that, in general, the Karmarkar-Karp-based algorithms perform best on data drawn from uniform distributions, whereas the enumeration algorithm performs best on data drawn from a normal distribution.

6.2 Larger Problems

Larger problems are in some ways easier to solve than small problems, as the number of combinations of items that meet the given tolerance range increases. However, the techniques used to solve smaller problems, which by nature have to be very thorough and complete, may not be as successful in solving larger problems that have exponentially large search spaces. For these problems the search needs to be taken quickly to a promising solution, which is then manipulated in order to achieve a satisfactory result. In order to test the scalability of the solution techniques, we randomly generated thirty problems of 200 items each, with required amounts of 100 and 50. Normal distributions with mean 1 and standard deviations of 0.1, 0.2 and 0.3 were used to generate the item data.
All problems were solved to a tolerance of 10^-8. In pilot tests we attempted to solve these problems with enumeration and CKK, but the CPU times required to solve some of the problems were excessive; hence these techniques were abandoned for the larger problems. The SCKK algorithm also required excessive computation times for some instances with a standard deviation of 0.1. For this reason no results are reported for SCKK at 0.1; however, at higher variances solutions were obtained. Along with the techniques proposed in this paper, we also ran this experiment using a modified version of the Random Greedy Local Improvement (RGLI) algorithm proposed by Przydatek (2002). The two main changes made to this algorithm were that the objective was to minimise the weight while remaining above a certain constraint value, and that instead of giving a predefined number of trials for the algorithm to execute, the algorithm was terminated only when it found a solution within the given tolerance level. The latter modification means that if the tolerance level is too small for the size of the problem the algorithm could potentially run indefinitely; however, with large problems this is unlikely. The results for these experiments are outlined in Table 2, where the HDS rows represent the results for the Heuristic-Based Direct Search, DS the results for the Direct Search algorithm, SCKK the results for the Seeded Complete Karmarkar-Karp algorithm, and RGLI the results for the algorithm proposed by Przydatek (2002), modified as indicated previously. A dash (-) in the table indicates that solutions could not be generated within an acceptable time period.
[Table 2. Average and Standard Deviations of Computational Times (in seconds) for 200 container problems, for the HDS, DS, SCKK and RGLI algorithms under each combination of data standard deviation and required amount. The numeric entries were not recovered in transcription.]

From these results we can conclude the following. For large-sized problems the HDS, DS and RGLI algorithms are clearly superior to the SCKK algorithm. The HDS is clearly superior to DS when the requirement is 50 and the data standard deviation is 0.1; when the data standard deviation is higher the results of the two schemes are very similar. HDS outperforms RGLI by an order of magnitude, while DS outperforms RGLI by an order of magnitude except in the case when the requirement is 50 and the data standard deviation is 0.1. The SCKK performs best on problem data with a high variance; we note that previous experiments, in which uniform integer data were generated between 0 and a large number, reflect a case with an extremely high variance. For HDS and DS, the higher the problem data variance, the higher the variance in the solution times. For the RGLI, the higher the data variance, the longer the solution times, while the smaller the requirement, the smaller the solution times.

The distribution of the numbers, and particularly the coefficient of variation, has a significant effect on the performance of the various algorithms. The problem size has an even greater effect on relative algorithm performance. Indeed, it is quite interesting to note that the best techniques for solving large-sized problems are the worst techniques for solving smaller problems, and vice versa. This last observation casts serious doubt on the tendency of some researchers, especially those in the heuristics area, to advocate the use of techniques that have been proven on small problems as the best techniques for solving larger problems.

7 Conclusions

In this paper we have presented two new techniques for solving the subset sum problem.
The first was an enumeration approach and the second was an approach that directly manipulates the solution into a better one, which we have called Direct Search. We have also proposed a modification of the Complete Karmarkar-Karp algorithm proposed by Korf (1998) that uses a starting feasible solution to seed the branch and bound tree.
Computational experiments have shown that the best technique to use depends on the combination of the number of items in the problem, the requirement constraint and the amount of variance in the data sets. The enumeration technique proposed performs very well on problems that are small and have a small requirement. The Direct Search approach proposed in this paper has proven to perform extremely well on large-sized problems, and performs an order of magnitude faster than the Random Greedy Local Improvement technique proposed by Przydatek (2002). In terms of the application that inspired this research, the results indicate that it would be possible to design a packing machine with many more bins than is the norm at present, and that it would be practical to solve the subset sum problem to the required level of accuracy in real time. The advantage of these changes would be that the containers would hold smaller weights, and the net effect would be less overfilling of packages and therefore less wastage for the organisation.

References

Chvatal V. 1980. Hard knapsack problems. Operations Research 28.
Fischetti M. 1990. A new linear storage, polynomial-time approximation scheme for the subset-sum problem. Discrete Applied Mathematics 26.
Garey M.R., D.S. Johnson. 1979. Computers and Intractability: A Guide to the Theory of NP-Completeness. Freeman, San Francisco.
Gens G., E. Levner. 1994. A fast approximation algorithm for the subset-sum problem. INFOR 32.
Ghosh D., N. Chakravarti. 1999. A competitive local search heuristic for the subset sum problem. Computers and Operations Research 26.
Karmarkar N., R.M. Karp. 1982. The differencing method of set partitioning. Technical Report UCB/CSD 82/113, Computer Science Division, University of California, Berkeley, CA.
Korf R.E. 1998. A complete anytime algorithm for number partitioning. Artificial Intelligence 106.
Martello S., P. Toth. 1984a. A mixture of dynamic programming and branch-and-bound for the subset-sum problem. Management Science 30.
Martello S., P. Toth. 1984b. Worst-case analysis of greedy algorithms for the subset sum problem. Mathematical Programming 28.
Martello S., P. Toth. 1985. Approximation schemes for the subset-sum problem: survey and experimental analysis. European Journal of Operational Research 22.
Pisinger D. 1999. Linear time algorithms for knapsack problems with bounded weights. Journal of Algorithms 33: 1-14.
Przydatek B. 2002. A fast approximation algorithm for the subset-sum problem. International Transactions in Operational Research 9.
Construction of Minimum-Weight Spanners Mikkel Sigurd Martin Zachariasen University of Copenhagen Outline Motivation and Background Minimum-Weight Spanner Problem Greedy Spanner Algorithm Exact Algorithm:
More informationHeuristic Algorithms for the Fixed-Charge Multiple Knapsack Problem
The 7th International Symposium on Operations Research and Its Applications (ISORA 08) Lijiang, China, October 31 Novemver 3, 2008 Copyright 2008 ORSC & APORC, pp. 207 218 Heuristic Algorithms for the
More information3 No-Wait Job Shops with Variable Processing Times
3 No-Wait Job Shops with Variable Processing Times In this chapter we assume that, on top of the classical no-wait job shop setting, we are given a set of processing times for each operation. We may select
More informationImproved Bin Completion for Optimal Bin Packing and Number Partitioning
Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence Improved Bin Completion for Optimal Bin Packing and Number Partitioning Ethan L. Schreiber and Richard E. Korf
More informationCS 231: Algorithmic Problem Solving
CS 231: Algorithmic Problem Solving Naomi Nishimura Module 7 Date of this version: January 28, 2019 WARNING: Drafts of slides are made available prior to lecture for your convenience. After lecture, slides
More informationAn Improved Hybrid Genetic Algorithm for the Generalized Assignment Problem
An Improved Hybrid Genetic Algorithm for the Generalized Assignment Problem Harald Feltl and Günther R. Raidl Institute of Computer Graphics and Algorithms Vienna University of Technology, Vienna, Austria
More informationReduction and Exact Algorithms for the Disjunctively Constrained Knapsack Problem
Reduction and Exact Algorithms for the Disjunctively Constrained Knapsack Problem Aminto Senisuka Byungjun You Takeo Yamada Department of Computer Science The National Defense Academy, Yokosuka, Kanagawa
More informationON WEIGHTED RECTANGLE PACKING WITH LARGE RESOURCES*
ON WEIGHTED RECTANGLE PACKING WITH LARGE RESOURCES* Aleksei V. Fishkin, 1 Olga Gerber, 1 Klaus Jansen 1 1 University of Kiel Olshausenstr. 40, 24118 Kiel, Germany {avf,oge,kj}@informatik.uni-kiel.de Abstract
More informationRichard E. Korf. June 27, Abstract. divide them into two subsets, so that the sum of the numbers in
A Complete Anytime Algorithm for Number Partitioning Richard E. Korf Computer Science Department University of California, Los Angeles Los Angeles, Ca. 90095 korf@cs.ucla.edu June 27, 1997 Abstract Given
More informationThe Size Robust Multiple Knapsack Problem
MASTER THESIS ICA-3251535 The Size Robust Multiple Knapsack Problem Branch and Price for the Separate and Combined Recovery Decomposition Model Author: D.D. Tönissen, Supervisors: dr. ir. J.M. van den
More informationHEURISTIC ALGORITHMS FOR THE GENERALIZED MINIMUM SPANNING TREE PROBLEM
Proceedings of the International Conference on Theory and Applications of Mathematics and Informatics - ICTAMI 24, Thessaloniki, Greece HEURISTIC ALGORITHMS FOR THE GENERALIZED MINIMUM SPANNING TREE PROBLEM
More informationCurve Correction in Atomic Absorption
Curve Correction in Atomic Absorption Application Note Atomic Absorption Authors B. E. Limbek C. J. Rowe Introduction The Atomic Absorption technique ultimately produces an output measured in optical units
More informationNP-completeness of generalized multi-skolem sequences
Discrete Applied Mathematics 155 (2007) 2061 2068 www.elsevier.com/locate/dam NP-completeness of generalized multi-skolem sequences Gustav Nordh 1 Department of Computer and Information Science, Linköpings
More informationComplementary Graph Coloring
International Journal of Computer (IJC) ISSN 2307-4523 (Print & Online) Global Society of Scientific Research and Researchers http://ijcjournal.org/ Complementary Graph Coloring Mohamed Al-Ibrahim a*,
More informationUNIT 4 Branch and Bound
UNIT 4 Branch and Bound General method: Branch and Bound is another method to systematically search a solution space. Just like backtracking, we will use bounding functions to avoid generating subtrees
More informationAlgorithmic patterns
Algorithmic patterns Data structures and algorithms in Java Anastas Misev Parts used by kind permission: Bruno Preiss, Data Structures and Algorithms with Object-Oriented Design Patterns in Java David
More informationInteger Programming Theory
Integer Programming Theory Laura Galli October 24, 2016 In the following we assume all functions are linear, hence we often drop the term linear. In discrete optimization, we seek to find a solution x
More informationVNS-based heuristic with an exponential neighborhood for the server load balancing problem
Available online at www.sciencedirect.com Electronic Notes in Discrete Mathematics 47 (2015) 53 60 www.elsevier.com/locate/endm VNS-based heuristic with an exponential neighborhood for the server load
More informationScan Scheduling Specification and Analysis
Scan Scheduling Specification and Analysis Bruno Dutertre System Design Laboratory SRI International Menlo Park, CA 94025 May 24, 2000 This work was partially funded by DARPA/AFRL under BAE System subcontract
More informationSeminar on. A Coarse-Grain Parallel Formulation of Multilevel k-way Graph Partitioning Algorithm
Seminar on A Coarse-Grain Parallel Formulation of Multilevel k-way Graph Partitioning Algorithm Mohammad Iftakher Uddin & Mohammad Mahfuzur Rahman Matrikel Nr: 9003357 Matrikel Nr : 9003358 Masters of
More informationSet Cover with Almost Consecutive Ones Property
Set Cover with Almost Consecutive Ones Property 2004; Mecke, Wagner Entry author: Michael Dom INDEX TERMS: Covering Set problem, data reduction rules, enumerative algorithm. SYNONYMS: Hitting Set PROBLEM
More informationCOMP3121/3821/9101/ s1 Assignment 1
Sample solutions to assignment 1 1. (a) Describe an O(n log n) algorithm (in the sense of the worst case performance) that, given an array S of n integers and another integer x, determines whether or not
More informationAlgorithms for the Bin Packing Problem with Conflicts
Algorithms for the Bin Packing Problem with Conflicts Albert E. Fernandes Muritiba *, Manuel Iori, Enrico Malaguti*, Paolo Toth* *Dipartimento di Elettronica, Informatica e Sistemistica, Università degli
More informationSome Applications of Graph Bandwidth to Constraint Satisfaction Problems
Some Applications of Graph Bandwidth to Constraint Satisfaction Problems Ramin Zabih Computer Science Department Stanford University Stanford, California 94305 Abstract Bandwidth is a fundamental concept
More informationComputational Biology Lecture 12: Physical mapping by restriction mapping Saad Mneimneh
Computational iology Lecture : Physical mapping by restriction mapping Saad Mneimneh In the beginning of the course, we looked at genetic mapping, which is the problem of identify the relative order of
More informationRanking Clustered Data with Pairwise Comparisons
Ranking Clustered Data with Pairwise Comparisons Alisa Maas ajmaas@cs.wisc.edu 1. INTRODUCTION 1.1 Background Machine learning often relies heavily on being able to rank the relative fitness of instances
More informationOnline algorithms for clustering problems
University of Szeged Department of Computer Algorithms and Artificial Intelligence Online algorithms for clustering problems Summary of the Ph.D. thesis by Gabriella Divéki Supervisor Dr. Csanád Imreh
More informationOutline of the module
Evolutionary and Heuristic Optimisation (ITNPD8) Lecture 2: Heuristics and Metaheuristics Gabriela Ochoa http://www.cs.stir.ac.uk/~goc/ Computing Science and Mathematics, School of Natural Sciences University
More informationA hybrid improvement heuristic for the bin-packing problem and its application to the multiprocessor scheduling problem
A hybrid improvement heuristic for the bin-packing problem and its application to the multiprocessor scheduling problem Adriana C. F. Alvim 1, Celso C. Ribeiro 1 1 Departamento de Informática Pontifícia
More informationComplete Local Search with Memory
Complete Local Search with Memory Diptesh Ghosh Gerard Sierksma SOM-theme A Primary Processes within Firms Abstract Neighborhood search heuristics like local search and its variants are some of the most
More informationCSE 417 Branch & Bound (pt 4) Branch & Bound
CSE 417 Branch & Bound (pt 4) Branch & Bound Reminders > HW8 due today > HW9 will be posted tomorrow start early program will be slow, so debugging will be slow... Review of previous lectures > Complexity
More informationNEW HEURISTIC APPROACH TO MULTIOBJECTIVE SCHEDULING
European Congress on Computational Methods in Applied Sciences and Engineering ECCOMAS 2004 P. Neittaanmäki, T. Rossi, S. Korotov, E. Oñate, J. Périaux, and D. Knörzer (eds.) Jyväskylä, 24 28 July 2004
More informationVariable Neighborhood Search for Solving the Balanced Location Problem
TECHNISCHE UNIVERSITÄT WIEN Institut für Computergraphik und Algorithmen Variable Neighborhood Search for Solving the Balanced Location Problem Jozef Kratica, Markus Leitner, Ivana Ljubić Forschungsbericht
More informationSeptember 11, Unit 2 Day 1 Notes Measures of Central Tendency.notebook
Measures of Central Tendency: Mean, Median, Mode and Midrange A Measure of Central Tendency is a value that represents a typical or central entry of a data set. Four most commonly used measures of central
More informationEfficient AND/OR Search Algorithms for Exact MAP Inference Task over Graphical Models
Efficient AND/OR Search Algorithms for Exact MAP Inference Task over Graphical Models Akihiro Kishimoto IBM Research, Ireland Joint work with Radu Marinescu and Adi Botea Outline 1 Background 2 RBFAOO
More informationDr. Amotz Bar-Noy s Compendium of Algorithms Problems. Problems, Hints, and Solutions
Dr. Amotz Bar-Noy s Compendium of Algorithms Problems Problems, Hints, and Solutions Chapter 1 Searching and Sorting Problems 1 1.1 Array with One Missing 1.1.1 Problem Let A = A[1],..., A[n] be an array
More informationPractice Problems for the Final
ECE-250 Algorithms and Data Structures (Winter 2012) Practice Problems for the Final Disclaimer: Please do keep in mind that this problem set does not reflect the exact topics or the fractions of each
More informationBi-Objective Optimization for Scheduling in Heterogeneous Computing Systems
Bi-Objective Optimization for Scheduling in Heterogeneous Computing Systems Tony Maciejewski, Kyle Tarplee, Ryan Friese, and Howard Jay Siegel Department of Electrical and Computer Engineering Colorado
More informationComparing Implementations of Optimal Binary Search Trees
Introduction Comparing Implementations of Optimal Binary Search Trees Corianna Jacoby and Alex King Tufts University May 2017 In this paper we sought to put together a practical comparison of the optimality
More informationComp Online Algorithms
Comp 7720 - Online Algorithms Notes 4: Bin Packing Shahin Kamalli University of Manitoba - Fall 208 December, 208 Introduction Bin packing is one of the fundamental problems in theory of computer science.
More informationBacktracking. Chapter 5
1 Backtracking Chapter 5 2 Objectives Describe the backtrack programming technique Determine when the backtracking technique is an appropriate approach to solving a problem Define a state space tree for
More informationUnit 6 Chapter 15 EXAMPLES OF COMPLEXITY CALCULATION
DESIGN AND ANALYSIS OF ALGORITHMS Unit 6 Chapter 15 EXAMPLES OF COMPLEXITY CALCULATION http://milanvachhani.blogspot.in EXAMPLES FROM THE SORTING WORLD Sorting provides a good set of examples for analyzing
More informationCS-6402 DESIGN AND ANALYSIS OF ALGORITHMS
CS-6402 DESIGN AND ANALYSIS OF ALGORITHMS 2 marks UNIT-I 1. Define Algorithm. An algorithm is a sequence of unambiguous instructions for solving a problem in a finite amount of time. 2.Write a short note
More informationTable 1 below illustrates the construction for the case of 11 integers selected from 20.
Q: a) From the first 200 natural numbers 101 of them are arbitrarily chosen. Prove that among the numbers chosen there exists a pair such that one divides the other. b) Prove that if 100 numbers are chosen
More informationSchool of Computer and Information Science
School of Computer and Information Science CIS Research Placement Report Multiple threads in floating-point sort operations Name: Quang Do Date: 8/6/2012 Supervisor: Grant Wigley Abstract Despite the vast
More informationSolving Large Aircraft Landing Problems on Multiple Runways by Applying a Constraint Programming Approach
Solving Large Aircraft Landing Problems on Multiple Runways by Applying a Constraint Programming Approach Amir Salehipour School of Mathematical and Physical Sciences, The University of Newcastle, Australia
More informationA GENETIC ALGORITHM APPROACH TO OPTIMAL TOPOLOGICAL DESIGN OF ALL TERMINAL NETWORKS
A GENETIC ALGORITHM APPROACH TO OPTIMAL TOPOLOGICAL DESIGN OF ALL TERMINAL NETWORKS BERNA DENGIZ AND FULYA ALTIPARMAK Department of Industrial Engineering Gazi University, Ankara, TURKEY 06570 ALICE E.
More informationA parallel approach of Best Fit Decreasing algorithm
A parallel approach of Best Fit Decreasing algorithm DIMITRIS VARSAMIS Technological Educational Institute of Central Macedonia Serres Department of Informatics Engineering Terma Magnisias, 62124 Serres
More informationHEURISTIC OPTIMIZATION USING COMPUTER SIMULATION: A STUDY OF STAFFING LEVELS IN A PHARMACEUTICAL MANUFACTURING LABORATORY
Proceedings of the 1998 Winter Simulation Conference D.J. Medeiros, E.F. Watson, J.S. Carson and M.S. Manivannan, eds. HEURISTIC OPTIMIZATION USING COMPUTER SIMULATION: A STUDY OF STAFFING LEVELS IN A
More informationMetaheuristic Development Methodology. Fall 2009 Instructor: Dr. Masoud Yaghini
Metaheuristic Development Methodology Fall 2009 Instructor: Dr. Masoud Yaghini Phases and Steps Phases and Steps Phase 1: Understanding Problem Step 1: State the Problem Step 2: Review of Existing Solution
More informationInstituto Nacional de Pesquisas Espaciais - INPE/LAC Av. dos Astronautas, 1758 Jd. da Granja. CEP São José dos Campos S.P.
XXXIV THE MINIMIZATION OF TOOL SWITCHES PROBLEM AS A NETWORK FLOW PROBLEM WITH SIDE CONSTRAINTS Horacio Hideki Yanasse Instituto Nacional de Pesquisas Espaciais - INPE/LAC Av. dos Astronautas, 1758 Jd.
More informationAlgorithmic problem-solving: Lecture 2. Algorithmic problem-solving: Tractable vs Intractable problems. Based on Part V of the course textbook.
Algorithmic problem-solving: Lecture 2 Algorithmic problem-solving: Tractable vs Intractable problems Based on Part V of the course textbook. Algorithmic techniques Question: Given a computational task,
More informationAn Edge-Swap Heuristic for Finding Dense Spanning Trees
Theory and Applications of Graphs Volume 3 Issue 1 Article 1 2016 An Edge-Swap Heuristic for Finding Dense Spanning Trees Mustafa Ozen Bogazici University, mustafa.ozen@boun.edu.tr Hua Wang Georgia Southern
More informationTowards a Memory-Efficient Knapsack DP Algorithm
Towards a Memory-Efficient Knapsack DP Algorithm Sanjay Rajopadhye The 0/1 knapsack problem (0/1KP) is a classic problem that arises in computer science. The Wikipedia entry http://en.wikipedia.org/wiki/knapsack_problem
More informationAdvanced Algorithms Class Notes for Monday, October 23, 2012 Min Ye, Mingfu Shao, and Bernard Moret
Advanced Algorithms Class Notes for Monday, October 23, 2012 Min Ye, Mingfu Shao, and Bernard Moret Greedy Algorithms (continued) The best known application where the greedy algorithm is optimal is surely
More informationGeneral properties of staircase and convex dual feasible functions
General properties of staircase and convex dual feasible functions JÜRGEN RIETZ, CLÁUDIO ALVES, J. M. VALÉRIO de CARVALHO Centro de Investigação Algoritmi da Universidade do Minho, Escola de Engenharia
More informationAlgorithms for Integer Programming
Algorithms for Integer Programming Laura Galli November 9, 2016 Unlike linear programming problems, integer programming problems are very difficult to solve. In fact, no efficient general algorithm is
More informationOptimization Techniques for Design Space Exploration
0-0-7 Optimization Techniques for Design Space Exploration Zebo Peng Embedded Systems Laboratory (ESLAB) Linköping University Outline Optimization problems in ERT system design Heuristic techniques Simulated
More informationSearch Algorithms for Discrete Optimization Problems
Search Algorithms for Discrete Optimization Problems Ananth Grama, Anshul Gupta, George Karypis, and Vipin Kumar To accompany the text ``Introduction to Parallel Computing'', Addison Wesley, 2003. Topic
More information9/29/2016. Chapter 4 Trees. Introduction. Terminology. Terminology. Terminology. Terminology
Introduction Chapter 4 Trees for large input, even linear access time may be prohibitive we need data structures that exhibit average running times closer to O(log N) binary search tree 2 Terminology recursive
More informationExtending MATLAB and GA to Solve Job Shop Manufacturing Scheduling Problems
Extending MATLAB and GA to Solve Job Shop Manufacturing Scheduling Problems Hamidullah Khan Niazi 1, Sun Hou-Fang 2, Zhang Fa-Ping 3, Riaz Ahmed 4 ( 1, 4 National University of Sciences and Technology
More information3 SOLVING PROBLEMS BY SEARCHING
48 3 SOLVING PROBLEMS BY SEARCHING A goal-based agent aims at solving problems by performing actions that lead to desirable states Let us first consider the uninformed situation in which the agent is not
More informationSLS Methods: An Overview
HEURSTC OPTMZATON SLS Methods: An Overview adapted from slides for SLS:FA, Chapter 2 Outline 1. Constructive Heuristics (Revisited) 2. terative mprovement (Revisited) 3. Simple SLS Methods 4. Hybrid SLS
More informationT. Biedl and B. Genc. 1 Introduction
Complexity of Octagonal and Rectangular Cartograms T. Biedl and B. Genc 1 Introduction A cartogram is a type of map used to visualize data. In a map regions are displayed in their true shapes and with
More informationA Genetic Algorithm Applied to Graph Problems Involving Subsets of Vertices
A Genetic Algorithm Applied to Graph Problems Involving Subsets of Vertices Yaser Alkhalifah Roger L. Wainwright Department of Mathematical Department of Mathematical and Computer Sciences and Computer
More informationConstraint Satisfaction Problems
Constraint Satisfaction Problems CE417: Introduction to Artificial Intelligence Sharif University of Technology Spring 2013 Soleymani Course material: Artificial Intelligence: A Modern Approach, 3 rd Edition,
More informationOPTIMIZING THE TOTAL COMPLETION TIME IN PROCESS PLANNING USING THE RANDOM SIMULATION ALGORITHM
OPTIMIZING THE TOTAL COMPLETION TIME IN PROCESS PLANNING USING THE RANDOM SIMULATION ALGORITHM Baskar A. and Anthony Xavior M. School of Mechanical and Building Sciences, VIT University, Vellore, India
More informationHomework 2: Search and Optimization
Scott Chow ROB 537: Learning Based Control October 16, 2017 Homework 2: Search and Optimization 1 Introduction The Traveling Salesman Problem is a well-explored problem that has been shown to be NP-Complete.
More informationSearch Algorithms. IE 496 Lecture 17
Search Algorithms IE 496 Lecture 17 Reading for This Lecture Primary Horowitz and Sahni, Chapter 8 Basic Search Algorithms Search Algorithms Search algorithms are fundamental techniques applied to solve
More informationGRASP. Greedy Randomized Adaptive. Search Procedure
GRASP Greedy Randomized Adaptive Search Procedure Type of problems Combinatorial optimization problem: Finite ensemble E = {1,2,... n } Subset of feasible solutions F 2 Objective function f : 2 Minimisation
More informationOn Covering a Graph Optimally with Induced Subgraphs
On Covering a Graph Optimally with Induced Subgraphs Shripad Thite April 1, 006 Abstract We consider the problem of covering a graph with a given number of induced subgraphs so that the maximum number
More informationPractice Final Exam 2: Solutions
lgorithm Design Techniques Practice Final Exam 2: Solutions 1. The Simplex lgorithm. (a) Take the LP max x 1 + 2x 2 s.t. 2x 1 + x 2 3 x 1 x 2 2 x 1, x 2 0 and write it in dictionary form. Pivot: add x
More informationChapter 15 Introduction to Linear Programming
Chapter 15 Introduction to Linear Programming An Introduction to Optimization Spring, 2015 Wei-Ta Chu 1 Brief History of Linear Programming The goal of linear programming is to determine the values of
More informationNew Trials on Test Data Generation: Analysis of Test Data Space and Design of Improved Algorithm
New Trials on Test Data Generation: Analysis of Test Data Space and Design of Improved Algorithm So-Yeong Jeon 1 and Yong-Hyuk Kim 2,* 1 Department of Computer Science, Korea Advanced Institute of Science
More informationMonte Carlo Methods; Combinatorial Search
Monte Carlo Methods; Combinatorial Search Parallel and Distributed Computing Department of Computer Science and Engineering (DEI) Instituto Superior Técnico November 22, 2012 CPD (DEI / IST) Parallel and
More informationFundamentals of Operations Research. Prof. G. Srinivasan. Department of Management Studies. Indian Institute of Technology, Madras. Lecture No.
Fundamentals of Operations Research Prof. G. Srinivasan Department of Management Studies Indian Institute of Technology, Madras Lecture No. # 13 Transportation Problem, Methods for Initial Basic Feasible
More informationChapter 12: Indexing and Hashing. Basic Concepts
Chapter 12: Indexing and Hashing! Basic Concepts! Ordered Indices! B+-Tree Index Files! B-Tree Index Files! Static Hashing! Dynamic Hashing! Comparison of Ordered Indexing and Hashing! Index Definition
More informationTiming-Based Communication Refinement for CFSMs
Timing-Based Communication Refinement for CFSMs Heloise Hse and Irene Po {hwawen, ipo}@eecs.berkeley.edu EE249 Term Project Report December 10, 1998 Department of Electrical Engineering and Computer Sciences
More informationCSE 332: Data Structures & Parallelism Lecture 12: Comparison Sorting. Ruth Anderson Winter 2019
CSE 332: Data Structures & Parallelism Lecture 12: Comparison Sorting Ruth Anderson Winter 2019 Today Sorting Comparison sorting 2/08/2019 2 Introduction to sorting Stacks, queues, priority queues, and
More informationAdvanced Operations Research Prof. G. Srinivasan Department of Management Studies Indian Institute of Technology, Madras
Advanced Operations Research Prof. G. Srinivasan Department of Management Studies Indian Institute of Technology, Madras Lecture 18 All-Integer Dual Algorithm We continue the discussion on the all integer
More informationGAUSSIAN VARIABLE NEIGHBORHOOD SEARCH FOR THE FILE TRANSFER SCHEDULING PROBLEM
Yugoslav Journal of Operations Research 26 (2016), Number 2, 173 188 DOI: 10.2298/YJOR150124006D GAUSSIAN VARIABLE NEIGHBORHOOD SEARCH FOR THE FILE TRANSFER SCHEDULING PROBLEM Zorica DRAŽIĆ Faculty of
More informationFundamentals of Operations Research. Prof. G. Srinivasan. Department of Management Studies. Indian Institute of Technology Madras.
Fundamentals of Operations Research Prof. G. Srinivasan Department of Management Studies Indian Institute of Technology Madras Lecture No # 06 Simplex Algorithm Initialization and Iteration (Refer Slide
More informationDeterministic Parallel Graph Coloring with Symmetry Breaking
Deterministic Parallel Graph Coloring with Symmetry Breaking Per Normann, Johan Öfverstedt October 05 Abstract In this paper we propose a deterministic parallel graph coloring algorithm that enables Multi-Coloring
More information31.6 Powers of an element
31.6 Powers of an element Just as we often consider the multiples of a given element, modulo, we consider the sequence of powers of, modulo, where :,,,,. modulo Indexing from 0, the 0th value in this sequence
More informationChapter 12: Indexing and Hashing
Chapter 12: Indexing and Hashing Basic Concepts Ordered Indices B+-Tree Index Files B-Tree Index Files Static Hashing Dynamic Hashing Comparison of Ordered Indexing and Hashing Index Definition in SQL
More informationRun Times. Efficiency Issues. Run Times cont d. More on O( ) notation
Comp2711 S1 2006 Correctness Oheads 1 Efficiency Issues Comp2711 S1 2006 Correctness Oheads 2 Run Times An implementation may be correct with respect to the Specification Pre- and Post-condition, but nevertheless
More informationCombinatorial Search; Monte Carlo Methods
Combinatorial Search; Monte Carlo Methods Parallel and Distributed Computing Department of Computer Science and Engineering (DEI) Instituto Superior Técnico May 02, 2016 CPD (DEI / IST) Parallel and Distributed
More informationProceedings of the 2012 International Conference on Industrial Engineering and Operations Management Istanbul, Turkey, July 3 6, 2012
Proceedings of the 2012 International Conference on Industrial Engineering and Operations Management Istanbul, Turkey, July 3 6, 2012 Solving Assembly Line Balancing Problem in the State of Multiple- Alternative
More information