[8] V. Kumar, V. Nageshwara Rao, K. Ramesh, Parallel Depth First Search on the Ring Architecture, International Conference on Parallel Processing, pp.
[9] B. Kroger, O. Vornberger, A Parallel Branch and Bound Approach for Solving a Two Dimensional Cutting Stock Problem, Technical Report No. 25/1990, University of Osnabruck, Department of Mathematics and Computer Science
[10] E. L. Lawler, D. E. Wood, Branch and Bound Methods: A Survey, Operations Research 14, 1966, pp.
[11] F. C. H. Lin, R. M. Keller, The Gradient Model Load Balancing Method, IEEE Transactions on Software Engineering, Vol. 13, No. 1, January 1987
[12] R. Luling, B. Monien, Two Strategies for Solving the Vertex Cover Problem on a Transputer Network, 3rd Int. Workshop on Distributed Algorithms 1989, Lecture Notes in Computer Science 392, pp.
[13] R. Luling, B. Monien, F. Ramme, Load Balancing in Large Networks: A Comparative Study, to appear in: Proc. of the 3rd IEEE Symposium on Parallel and Distributed Processing, Dallas, 1991
[14] B. Monien, H. Sudborough, Embedding one Interconnection Network in Another, Computing Suppl. 7 (1990), pp.
[15] L. M. Ni, C. W. Xu, T. B. Gendreau, Drafting Algorithm - A Dynamic Process Migration Protocol for Distributed Systems, 5th Int. Conf. on Distributed Computing Systems 1985, pp.
[16] B. Monien, O. Vornberger, Parallel Processing of Combinatorial Search Trees, Proceedings of the International Workshop on Parallel Algorithms and Architectures, Math. Research Nr. 38, Akademie-Verlag Berlin, 1987, pp.
[17] B. Monien, E. Speckenmeyer, O. Vornberger, Upper Bounds for Covering Problems, Methods of Operations Research 43, 1981, pp.
[18] M. J. Quinn, Analysis and Implementation of Branch and Bound Algorithms on a Hypercube Multicomputer, IEEE Transactions on Computers, Vol. 39, No. 3, 1990
[19] S. L. Scott, G. S. Sohi, The Use of Feedback in Multiprocessors and Its Application to Tree Saturation Control, IEEE Transactions on Parallel and Distributed Systems, Vol. 1, No. 4, October 1990, pp.
[20] J. A. Stankovic, I. S. Sidhu, An Adaptive Bidding Algorithm for Processes, Clusters and Distributed Groups, 4th Int. Conf. on Distributed Computing Systems 1984, pp.
[21] R. E. Tarjan, A. E. Trojanowski, Finding a Maximum Independent Set, SIAM J. Computing, Vol. 6, No. 3, September 1977, pp.
[22] J. M. Troya, M. Ortega, A Study of Parallel Branch and Bound Algorithms with Best Bound First Search, Parallel Computing, pp.
[23] O. Vornberger, Implementing Branch and Bound in a Ring of Processors, Proceedings of CONPAR 86, Lecture Notes in Computer Science 237, Springer Verlag, 1986, pp.
[24] O. Vornberger, Load Balancing in a Network of Transputers, Distributed Algorithms 1987, Lecture Notes in Computer Science 312, Springer Verlag, pp.

(e. g. the Traveling Salesperson Problem), we made the same experiments and enlarged each message to bytes by sending some additional dummy messages. Since the number of migrated messages for our algorithm is very low in comparison to randomized load balancing methods, we could achieve nearly the same results on all De Bruijn networks. For example, we achieved a speedup of on a 64 processor network.

6 Conclusion

We presented a universal load balancing strategy for the solution of any branch & bound problem on a distributed system. The experiments show that this strategy works well on networks of up to 256 processors and on networks with a very large diameter, so it can be concluded that this strategy will also work on even larger networks. The most interesting fact is that these speedup values were achieved although the time of the whole parallel computation was very short, e. g. a speedup of on a 256 node network in seconds. We claim that most other best first branch & bound problems can be parallelized with nearly the same efficiency, since we demonstrated that this method achieves good speedup values also for larger message sizes. On the other hand, many efficient branch & bound algorithms gain their efficiency by a sophisticated branching method, which uses much longer computation times for each node in the solution tree. In this case the speedup of our algorithm cannot decrease, since the communication time is constant while the local computation time of the sequential and the parallel algorithm will increase, which in general leads to a better speedup.

The presented load balancing algorithm was shown to be best suited for distributed best first branch & bound algorithms. Since it is not tailored to this application, it can also be applied to other problems. In fact, it is already applied to a distributed Flat Concurrent Prolog interpreter [6]. The first results were encouraging, since the strategy presented here behaves in most cases superior to some other load balancing algorithms in use. In general, the algorithm can be applied to all kinds of load balancing problems. It is especially well suited to problems where not only idle times should be avoided, but all processors should also work on the same level, which is defined by a problem dependent weight function. It is also very robust against strongly varying load situations and escapes automatically from thrashing situations by using a feedback strategy.

References

[1] E. Altmann, T. A. Marsland, T. Breitkreutz, Accounting for Parallel Tree Search Overheads, Proceedings of the International Conference on Parallel Processing 1988, pp.
[2] C. Berge, The Theory of Graphs and its Applications, Methuen, London 1962
[3] R. Feldmann, B. Monien, P. Mysliwietz, O. Vornberger, Distributed Game Tree Search, in: Kanal, Gopalakrishnan, Kumar (eds.), Parallel Algorithms for Machine Intelligence and Pattern Recognition, North Holland / Elsevier Publ. Co.
[4] V. K. Janakiram, D. P. Agrawal, R. Mehrotra, A Randomized Parallel Branch and Bound Algorithm, Proceedings of the International Conference on Parallel Processing 1988, pp.
[5] R. M. Karp, Y. Zhang, A Randomized Parallel Branch and Bound Procedure, Proceedings of the ACM Symposium on Theory of Computing 1988, pp.
[6] M. Karcher, University of Paderborn, personal communication
[7] G. A. P. Kindervater, J. K. Lenstra, Parallel Computing in Combinatorial Optimization, Annals of Operations Research, pp.

Table 3: results for the vertex cover problem (rows: # branchings, time, idle.time, max.idle, time.max.idle, speedup, efficiency; columns: SEQ, DB(5), DB(6), DB(7), DB(8), RING(128)).

Table 4: results for the weighted vertex cover problem (same rows and columns as Table 3).

Tables 3 and 4 present the results for the vertex cover and the weighted vertex cover problem on different networks and problem instances of 150 nodes. These values indicate that there is a nearly linear speedup for all De Bruijn networks and both problems. They also indicate that the computation load needed to saturate a ring network has to be higher than for the De Bruijn network to achieve the same efficiency. For example, a computation time of seconds is sufficient to achieve a speedup of on a De Bruijn network of dimension seven for the vertex cover problem. For nearly the same computation time on a ring network the achieved speedup is only . A similar effect can be observed for the weighted vertex cover problem. In this case, the difference between both problems is not as large as for the vertex cover problem, since both computation times are higher, which results in a relatively high speedup also for the ring network. The effect of lower speedups for networks with larger diameter is explained by the observations in section 4.2.

All results were achieved using the adaptive feedback strategy for the estimation of the load balancing parameters. Since the De Bruijn networks used have a very low diameter, there was only a slight increase of the speedup using this technique in comparison to the load balancing algorithm using constant parameters. For example, the speedup for the vertex cover problem increased from 212 to 237 for the De Bruijn network of dimension 8. This is due to the fact that in networks with small diameter, relatively large weight differences between neighboring processors can be allowed without affecting the global load balancing too much. In this case, relatively large Δ-values can be used, which make no dynamic escape from thrashing situations necessary. In contrast, the controller process becomes very important as the diameter grows, because an efficient load balancing on these networks requires very low Δ-values. We could increase the efficiency for the weighted vertex cover problem on a 128 processor ring by up to 30 percent with this additional technique. Therefore, we can conclude that the feedback control strategy becomes very important as the network diameter increases, which is the case for ring networks and for networks with even larger processor numbers than the ones used here. In this case, the diameter will increase because of the constant node degree, which makes the use of a dynamic parameter estimation necessary.

In [13] it was found that the message size is very critical for the efficiency of randomized load balancing strategies, since these methods are based on the migration of huge numbers of subproblems. In the case of the vertex cover and the weighted vertex cover problem, one subproblem contains only a vertex cover, which is encoded as a bit string, and an integer bound. The whole message consists in our case of less than 100 bytes. To study the effect of larger messages, which may be required by other problems

Table 1: speedup results for the vertex cover problem (for each problem size n: computation time and efficiency on DB(5), DB(6), DB(7) and DB(8)).

Table 2: speedup results for the weighted vertex cover problem (for each problem size n: computation time and efficiency on DB(5), DB(6), DB(7) and DB(8)).

(Figure: speedup over the number of processors for the weighted vertex cover problem with n = 100, 125 and 150; the curves are annotated with the parallel computation times in seconds.)

5 Experimental Results

We performed the experiments on De Bruijn and ring networks of up to 256 processors. The De Bruijn network of dimension k, DB(k), consists of 2^k nodes. There is an edge between the nodes (x_1, ..., x_k) and (y_1, ..., y_k), with x_i, y_i ∈ {0, 1}, iff (x_2, ..., x_k) = (y_1, ..., y_{k-1}). This well known network is optimal for our purpose since it has logarithmic diameter and a maximum degree of four, which is a necessary condition since each transputer has four communication channels. In fact, all nodes have degree four, except two of degree three and two nodes that have a self loop which is not used for our application. The experiments were also done on a 128 processor ring network, since this network has an extremely large diameter. In section 2, it was shown that a short diameter is very important for the efficiency of our load balancing algorithm. Therefore, it can be concluded that if the strategy works well on this network, it will also work well on even larger De Bruijn networks than the ones we used.

For the definition of the local weight we use a problem dependent weight function. It was found that the weight function (1), as described in chapter 2, is sufficient for the vertex cover problem, since the bound of each subproblem belongs to the interval [bound of the optimal solution, |V|], where |V| is the number of nodes of the graph G. During the runtime of the branch & bound algorithm the bounds of nearly all subproblems are on the same level, which makes a balancing according to function (1) sufficient. In the case of the weighted vertex cover problem, the bounds are distributed over the interval [bound of the optimal solution, |V| · 10000], because the node weights are chosen from the set {1, ..., 10000}. In this case, it was necessary to weight the different heaps according to the quality of the subproblems. Therefore, we use function (3) as described in chapter 2. Using function (1) or (2), we were faced with search overheads between 10 and 20 percent, which could be significantly reduced using function (3).

The amount of work which is generated by a computation and can be mapped onto a distributed system is very important for the efficiency of a load balancing strategy that focuses purely on the minimization of idle times. This is because most load balancing problems arise at the beginning and at the end of a distributed computation, since the network is in general only lightly loaded in these phases. If these phases are long in comparison to the complete execution time, good speedup values are difficult to achieve. Since the aim of our load balancing strategy is to avoid idle times and to balance the quality of the subproblems throughout the network, the size of the underlying computation is also very important for our strategy. To study this effect, we solved the vertex cover and the weighted vertex cover problem for instances of different sizes, so that we can observe the behavior for different execution times.
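The shift-edge definition of DB(k) given at the beginning of this section can be made concrete with a short sketch (an illustration only, not the code used for the experiments):

    def de_bruijn_neighbors(k):
        """Adjacency of the binary De Bruijn network DB(k) on 2**k nodes.

        There is an edge between x = (x_1,...,x_k) and y = (y_1,...,y_k)
        iff (x_2,...,x_k) = (y_1,...,y_{k-1}), i.e. y is obtained from x by
        a left shift followed by appending a new bit.
        """
        n = 1 << k
        adj = {v: set() for v in range(n)}
        for x in range(n):
            for bit in (0, 1):
                y = ((x << 1) | bit) & (n - 1)   # left shift, append bit
                if x != y:                        # the two self loops are not used
                    adj[x].add(y)
                    adj[y].add(x)                 # edges are undirected
        return adj

    # Example: DB(3) has 8 nodes and maximum degree four.
    assert max(len(s) for s in de_bruijn_neighbors(3).values()) == 4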
To quantify the efficiency of the load balancing strategy, the following values were measured:

# branchings : average number of branching steps
time : average runtime of the algorithm (seconds)
idle.time : average accumulated idle time of one processor in the network (seconds)
max.idle : average maximum idle time of each processor (seconds)
time.max.idle : time stamp of the beginning of the maximum idle time
speedup : execution time on 1 processor / execution time on k processors
efficiency : speedup / number of used processors

All experiments were performed on sets of 50 randomly generated graphs with different average node degrees. In the case of the weighted vertex cover problem, the weights were uniformly distributed in the interval [1, 10000]. Tables 1 and 2 present the average efficiency and the computation time for problem instances in the range of 100 to 150 nodes per graph for both problems. One can observe that for all problem sizes a substantial speedup could be achieved. Especially for problems with an average parallel computation time of more than 1 minute, the efficiency is nearly constant for different network sizes. For larger computation times (> 100 seconds), efficiencies of more than 93 percent could be achieved for all network sizes of up to 256 processors. The figure presents the speedup curves for the weighted vertex cover problem, which demonstrate that even for very short computation times a very high speedup could be achieved.
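As a tiny worked example of the last two quantities (the numbers below are made up for illustration and are not measurements from the paper):

    def speedup_and_efficiency(t_sequential, t_parallel, processors):
        """speedup = T(1) / T(k), efficiency = speedup / k."""
        speedup = t_sequential / t_parallel
        efficiency = speedup / processors
        return speedup, efficiency

    # A run taking 4000 s sequentially and 16.8 s on 256 processors would give
    # a speedup of about 238 and an efficiency of about 0.93.
    s, e = speedup_and_efficiency(4000.0, 16.8, 256)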

    if (prob_t > (1/B_1) avg.prob_t) or (weights_t > (1/B_2) avg.weights_t) then
       Δ_up^{t+1}   := Δ_up^t + 0.01
       Δ_down^{t+1} := Δ_down^t + 0.01
       Δ^{t+1}      := Δ^t + 0.01
    else
       Δ_up^{t+1}   := max{Δ_up^t - 0.01, min_up}
       Δ_down^{t+1} := max{Δ_down^t - 0.01, min_down}
       Δ^{t+1}      := max{Δ^t - 0.01, min_Δ}

The factors B_1 and B_2 determine the behaviour of the feedback strategy. If 0 ≤ B_1, B_2 < 1, the controller reacts only to strongly increased load balancing activities, which is not flexible enough to avoid thrashing situations. Especially for B_1, B_2 → 0, the controller increases the parameters of the actor process only in the case of very high load balancing activities. As B_1 and B_2 increase, the controller process becomes more and more sensitive to high load balancing activities. We made experiments for different values of B_1 and B_2. We got the best results by choosing B_1 = B_2 = 2, but nearly the same speedup could be achieved for all other values in the range of 1 to 6. A typical example of this behavior is presented in the following figure, in which the number of transferred data items (dotted line) and the value of the parameter Δ (solid line) are monitored over a short time interval. One can observe that the control algorithm tends to decrease the parameters to the specified lower bound and reacts dynamically to an increased load balancing activity.

(Figure: value of Δ (solid line) and number of transferred subproblems, prob (dotted line), over time.)
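A minimal executable sketch of this feedback step, assuming the three actor parameters Δ_up, Δ_down and Δ are adjusted together as in the rule above (function and variable names are illustrative):

    STEP = 0.01   # additive increment / decrement per control step

    def controller_step(params, mins, prob_t, weights_t, avg_prob, avg_weights,
                        b1=2.0, b2=2.0):
        """One feedback step: params = (d_up, d_down, d), mins = their lower bounds."""
        high_activity = (prob_t > avg_prob / b1) or (weights_t > avg_weights / b2)
        if high_activity:
            # too much balancing traffic: relax all three thresholds
            return tuple(p + STEP for p in params)
        # otherwise tighten them again, but never below the lower bounds
        return tuple(max(p - STEP, m) for p, m in zip(params, mins))

    # usage: params = controller_step(params, (0.01, 0.01, 0.01), prob, wts, avg_p, avg_w)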

          w_l := w_l + w(x_1)
          w.new := w.new - w(x_1)
       send (INFORM, w.new) to all neighbors
       w.old := w.new

    7) on receipt of (INFORM, w.neighbor) from p_j
          w_j := w.neighbor
          send (REQUEST, w.new) to p_j

Rule 0 is performed to initialize the used variables. Rules 1 and 2 handle the broadcast of a new solution across the network. If the local load decreases by more than Δ_down percent, a REQUEST for work is sent to all neighbors (rule 3). This request is eventually answered by sending a work unit, which causes the next request. By this protocol a lightly loaded processor is supplied by its heavily loaded neighbors (rules 3, 4 and 5). In the case of an increasing local load, a processor sends some of its best subproblems to its neighbors (rule 6) in a randomized way, to avoid a concentration of "good" subproblems, and informs all its neighbors about the actual load situation. This can cause a workload request by some neighbors (rule 7).

4.2 Control Process

In the previous chapters we saw that, especially for networks with large diameter, small Δ-values are necessary to achieve a good workload balance. As mentioned before, thrashing effects are likely to occur especially when the system load changes dramatically, which is the case when a new solution of the branch & bound problem is found and broadcast through the network. In such a case, the local heap of each processor is reduced in an unpredictable way, which can cause great weight differences between neighboring processors. For small Δ-values this situation will result in heavy communication and migration activity, which will stop any computation of the branch & bound algorithm. On the other hand, it is not necessary that the heapweights are immediately balanced according to the given parameters after a new solution has been found. So it can be useful to increase the Δ-values for a short time in order to slow down the balancing activities and to continue the branch & bound algorithm. After some time the Δ-values are decreased to their original values. To control this we use a feedback strategy.

This type of control mechanism is well known in the design of engineering control systems [19]. It is a simple but extremely powerful method to prevent instability of the underlying system by keeping its outputs in a fixed range. For our purpose, it is necessary to keep the load balancing activities (e. g. the number of migrated subproblems) in a range that depends on the communication capabilities of the machine and on the amount of communication necessary to ensure a good load balancing result. Therefore we define lower bounds min_up, min_down and min_Δ for the parameters Δ_up, Δ_down and Δ of the actor process. We have to define lower bounds, since the load balancing activities increase as the values of the parameters become smaller. The exact values of the lower bounds depend on the communication capabilities of the used network, the size of a subproblem, the balancing quality necessary to achieve a high speedup, and the weight function. For our implementation of the vertex cover and the weighted vertex cover problem, a processor can compute five new subproblems per second. Each subproblem is encoded using a constant size data structure of nearly 100 bytes. Using these values and the communication capacity of 10 MBit per second for one communication channel, a lower bound of 0.01 for all three parameters is possible.

The load balancing decisions are the output of the actor process. The consequences of these decisions are used by the controller to calculate the new parameters for the actor.
Therefore, we measure the number of migrated subproblems and heapweights during a fixed small time interval, which is in our case 0.1 seconds. Let prob_t and weights_t be the number of subproblems and heapweights sent in step t of the control process, and avg.prob_t, avg.weights_t the average number of subproblems and heapweights sent in steps 0 to t. The controller computes Δ_up^{t+1}, Δ_down^{t+1} and Δ^{t+1} according to the following rule:

Initially one processor computes a suboptimal solution and broadcasts it through the network. In our case, we got this upper bound by a simple greedy algorithm. Each processor has a priority queue, which is organized as a heap. This data structure can be initialized in two ways. One is to put the root of the branch & bound tree into the heap of exactly one elected processor, whereas all other heaps are empty. This strategy has the drawback that it will take some time to supply workload to all processors in the network. A more efficient way to start the parallel algorithm is to distribute at least n nodes of the solution tree over the network. In the case of the vertex cover problem we select i nodes of the problem graph and build 2^i >= n starting problems, which are immediately available to the distributed algorithm. Simple strategies of this kind can also be found for other problems, so this does not affect the general applicability of our strategy.

Each processor p with neighboring processors {p_1, ..., p_k} manages variables w_1, ..., w_k. At each point in time, w_i equals the last weight value that was sent by processor p_i to p. Let (x_1, ..., x_m) always be the contents of the local heap, ordered such that bound(x_i) <= bound(x_{i+1}) for all i, and let w.new be the actual local heapweight as described in the introduction. Additionally, each processor stores a variable c which equals the cost of the best suboptimal solution found so far. The load balancing algorithm for processor p then works according to the following rules:

    0) w_1, ..., w_k := 0
       c := upper bound
       initialize heap
       w.new := w.old := w(p)

    1) if the computing process has found a new solution x
          if bound(x) < c
             c := bound(x)
             send (NEW.SOLUTION, c) to all neighbors
             update(w.new)

    2) on receipt of (NEW.SOLUTION, c') from p_j
          if c' < c
             c := c'
             send (NEW.SOLUTION, c) to all neighbors except p_j
             update(w.new)

    3) if w.new < w.old (1 - Δ_down)
          send (REQUEST, w.new) to all neighbors
          w.old := w.new

    4) on receipt of (REQUEST, w.neighbor) from p_j
          w_j := w.neighbor
          if w_j < w.new (1 - Δ) and w.new > min.weight
             send (WORK, x_1) to p_j
             w.new := w.new - w(x_1)
             w_j := w_j + w(x_1)

    5) on receipt of (WORK, x) from processor p_j
          if bound(x) < c
             w.new := w.new + w(x)
             send (REQUEST, w.new) to p_j
             insert x into the local heap

    6) if w.new > w.old (1 + Δ_up) and w.new > min.weight
          unmark all neighbors
          for j := 1 to k do
             choose a random unmarked neighbor p_l and mark p_l
             if w_l < w.new (1 - Δ)
                send (WORK, x_1) to processor p_l

For the networks used here, we could achieve good results with a local migration space. For larger networks, and especially for networks of larger diameter, it seems reasonable to extend the decision base. This can be done by introducing some kind of hierarchy, clustering processors together to form the nodes of a second hierarchy level on which an algorithm using a local decision base is performed. This means that the same algorithm will also work with an extended decision base on larger networks.

The basic principle of this algorithm is to balance the workload in such a way that the weights of a processor p and of its neighbors {p_1, ..., p_k} are on a nearly equal level. Since the load situations vary dynamically, the algorithm tries to achieve a situation in which the maximum relative weight difference

    max_{i in {1,...,k}} |w(p) - w(p_i)| / |w(p)|

is less than a fixed Δ. Since we use a transputer network for our experiments, a processor tries to achieve a balance with its four neighbors. Because each processor belongs to five overlapping balanced islands (an island is built by one processor and its four neighbors), this strategy leads to a balance throughout the whole network which depends on the network diameter and on the parameter Δ. Overlapping balanced islands thus lead to a global balance in dependence of the network diameter and Δ.

To achieve the good workload balance necessary for our strategy, a small Δ is favorable. On the other hand, thrashing effects, also known from the design of paging algorithms in operating systems, are likely to occur for small Δ-values. This means that processors spend nearly all their time on workload balancing and do not proceed with their computational work. An easy calculation explains why such small Δ-values are necessary for networks with large diameter. The maximal weight difference of two processors p_i, p_j in a network with diameter d caused by this strategy is bounded by

    min.w := w(p_i) (1 - Δ)^d <= w(p_j) <= w(p_i) (1 + Δ)^d =: max.w

The following table gives the values of min.w and max.w for w(p_i) = 1 and different Δ, which indicates the great importance of the network diameter to our load balancing strategy. (Table: min.w and max.w for different Δ and for diameters d = 2, 4 and 8.)

The behavior of the actor process is determined by the following parameters:

Δ_up, Δ_down : a load balancing activity is initiated if the local weight decreases by more than Δ_down percent or increases by more than Δ_up percent.
Δ : a processor p_i participates in a load balancing activity of its neighbor p_j iff their heapweights differ by more than Δ percent.
min.weight : load balancing activities are only performed by a processor if its local heapweight is larger than min.weight.
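The influence of the diameter on the admissible weight spread (the min.w / max.w formula above) can be reproduced with a few lines; the Δ value below is an arbitrary example, not an entry of the original table:

    def weight_bounds(delta, diameter, w=1.0):
        """min.w = w (1 - delta)^d and max.w = w (1 + delta)^d."""
        return w * (1 - delta) ** diameter, w * (1 + delta) ** diameter

    # e.g. delta = 0.1: the admissible spread widens quickly with the diameter d
    for d in (2, 4, 8):
        lo, hi = weight_bounds(0.1, d)
        print(d, round(lo, 3), round(hi, 3))   # (0.81, 1.21), (0.656, 1.464), (0.43, 2.144)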

directly. Therefore, a branching step is performed only on graphs in which every vertex has a degree of at least three. The cardinality of a minimal weight maximal matching of a graph is a natural lower bound for the vertex cover. A better lower bound for the cost of a vertex cover of a graph G which is already partly covered by the cover C is given by |C| + b_1 + b_2. Here, b_1 is the value of a minimal weight maximal cover by triangles of the uncovered graph, which is found by a greedy algorithm. We then compute a maximal matching on the graph which is covered neither by C nor by the covering by triangles. The weight of this matching is summed up as b_2.

4 Distributed Algorithm

In this section, we give a description of our load balancing strategy. The load balancing algorithm is performed on each processor in parallel to the sequential branch & bound algorithm and knows the local load situation. Two different processes build the overall load balancing process. One process, which is responsible for the balancing of subproblems, is called the "actor". The algorithm performed by this process is a dynamic distributed load balancing algorithm. The other process, called the "controller", controls the load balancing activities of the actor and is responsible for the dynamic estimation of some parameters of the actor process. These parameters are controlled by a feedback strategy. (Figure: the actor process maps the load situation to load balancing decisions; the controller process observes these decisions and feeds new parameters back to the actor.)

In the next part of this section we describe the actor process. Then we motivate why the parameters of this process have to be altered in response to its own activities. The design of the controller process is given in the second part of this section.

4.1 Actor Process

Dynamic distributed load balancing algorithms can be characterized by the knowledge used to decide when to distribute load units (decision base) and by the space in which a processor migrates such load (migration space). We distinguish between a global and a local decision base. In the first case a processor needs knowledge about the global system, or about nearly every processor in the network, to decide about a load migration. If a local decision base is used, a migration decision is based purely on the local situation and on that of the neighboring processors in the network. In the same way, we distinguish between a global and a local migration space [13]. The algorithm proposed here uses a local decision base and a local migration space. A local migration space seems to us to be the most natural way of load balancing with explicit protocols in large networks and under dynamically changing load situations. This is based on the results we gained by a comparison of different known and some new load balancing strategies, described in [13]. From these experiments, it seems to be better to decide only about the direction a packet should take in the network, instead of its exact destination processor, if the local load situation is likely to change very fast. In this case, information which is gathered in the network and sent along long paths is often out of date.
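A minimal sketch of such a purely local decision, assuming each processor stores only its own heapweight and the weights last reported by its direct neighbors (illustrative names; the precise conditions used by the actor are given by the rules of this section):

    def push_candidates(w_own, neighbor_weights, delta, min_weight):
        """Neighbors that should receive work, judged from local knowledge only.

        neighbor_weights maps a neighbor id to the heapweight it reported last;
        a neighbor qualifies if its weight lies more than a fraction delta below
        our own weight, and work is only given away above the min_weight threshold.
        """
        if w_own <= min_weight:
            return []
        return [p for p, w in neighbor_weights.items() if w < w_own * (1 - delta)]

    # e.g. with delta = 0.1 a neighbor reporting 80 receives work from a processor
    # with heapweight 100, while a neighbor reporting 95 does not.
    assert push_candidates(100.0, {"p1": 80.0, "p2": 95.0}, 0.1, 10.0) == ["p1"]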

processor. In this case the size of the network also plays an important role. Experience with the use of different weight functions will be described in section 5.

The question of distributed dynamic load balancing has been studied intensively in the past. Most of the work done in the field of distributed dynamic load balancing is unsuited for our problem, since these load balancing strategies focus purely on the minimization of processor idle time. For this they often introduce two or three levels of processor saturation. A processor can be in one of the three static states low (L), normal (N) or high (H), depending on its local load situation. According to this, two processors can both be in state H although their load situations are extremely different. This policy is well suited for load balancing in distributed operating systems and a lot of other applications, but it does not fulfill our needs, since we introduced a quality measure. Our aim is to keep all processors on the same quality level. Strategies of this type are described in [11, 15, 20]. A comparison of these techniques can also be found in [13].

The aim of our load balancing method is to keep the heapweights of neighboring processors on a nearly equal level. For this purpose, each processor knows the heapweights of its neighbors in the network. It sends some subproblems to a neighbor if its own heapweight is large relative to the heapweights of its neighbors, and it sends its heapweight as a work request if this weight is assumed to be small. This strategy guarantees that a processor and its neighbors have nearly the same heapweight and therefore leads to maximal processor saturation and to an approximately equal heapweight level in the whole network. If the load is defined in an appropriate way, this means that all processors work on subproblems with nearly equal bounds. Therefore the search overhead is minimized if the load difference between neighboring processors is small enough. To avoid thrashing situations, which are likely to occur if Δ is small and the load situation changes dramatically, we introduce a controller process that modifies Δ by feedback of the load balancing activity. This algorithm is described in chapter 4.

3 Sequential Branch & Bound

In this section, we give a short description of the branch and bound algorithms for the vertex cover and the weighted vertex cover problem. We assume familiarity with the general branch & bound principle; for an introduction see [10]. All subproblems are coded as tuples (C, b), where C is a subcover, i.e. C is a set of nodes that covers some edges of the graph G, and b is a lower bound for the cost of all solutions containing C. The branch and bound algorithms generate from a given subproblem (C, b) two new subproblems (C_1, b_1) and (C_2, b_2). The generation of C_1 and C_2 by the branch procedure is done by searching for a node v with maximal degree in the uncovered graph (the part of the original graph that has not yet been covered). The two new subproblems are built by C_1 := C ∪ {v} and C_2 := C ∪ {w ∈ V | {v, w} ∈ E}. The branch algorithm reduces the uncovered graph by applying the following rules:

- If there is an edge {v, v} in the uncovered graph, then v has to be added to the cover.
- If the uncovered graph contains a node v and the sum of the weights of all neighboring nodes of v does not exceed the weight of v, then all neighbors belong to the minimal vertex cover.
- If there are edges {v_1, v_2} and {v_2, v_3} in the uncovered graph G = (V, E) and v_2 has degree 2, then a new graph G' = (V', E') is defined by V' := V ∪ {v_0} \ {v_1, v_2, v_3}, where v_0 is a new node, and E' := {{u, v} | {u, v} ∩ {v_1, v_2, v_3} = ∅} ∪ {{u, v_0} | {u, x} ∈ E, x ∈ {v_1, v_3}}. The vertex cover problem is solved for the graph G'. If v_0 is an element of the minimal vertex cover of G', then v_1 and v_3 are in the minimal cover of G; otherwise v_2 is in the minimal cover of G. This rule can only be used in the case of the unweighted vertex cover problem.

Note that these rules can be applied as long as the uncovered graph has a node of degree at most two. This implies that a vertex cover for a graph containing only nodes of maximal degree two can be found

The idle time of each processor has to be minimized, which requires a workload distribution such that no processor runs out of work. The sequential algorithm explores in each step the subproblem which may lead to the best solution (the subproblem with minimal bound). This bound could only be known to the processors of a distributed system if it were computed by a distributed minimum computation, or if the heap were managed in some centralized way. Since this is in general too inefficient, it is likely that a distributed algorithm will produce search overhead, i.e. the solution tree computed by the parallel algorithm is larger than that of the sequential algorithm. The minimization of search overhead is a goal contrary to the minimization of idle times, since it is necessary to distribute the workload through the system to fulfill the first goal; on the other hand, one way to avoid search overhead is to concentrate the activities on only a few points of the network. To handle this tradeoff, we have to introduce an appropriate load balancing method. Every load balancing strategy will produce additional communication overhead, which is caused by the distribution of work packets through the network and by the communication protocols necessary to organize the balancing activities. The goal of minimizing the communication overhead is also contrary to the goals of minimizing idle times and avoiding search overhead.

In general, we can distinguish static and dynamic load balancing. In the case of static load balancing, which is more often referred to as the mapping problem, the task is to map a static process graph onto a fixed interconnection topology so as to minimize dilation, processor load differences and edge congestion. For an overview see [14]. If the load situation of a distributed system changes dynamically by generation and consumption of load units in an unpredictable way, it is necessary to use a dynamic load balancing strategy which reacts to these load changes. The decision about load remapping can either be centralized or distributed. Since centralized strategies are only useful for small distributed systems, we use a distributed dynamic load balancing strategy in this paper.

To describe a load balancing strategy, one first has to define the load of a processor. For our purpose, we define the load of a processor to be the result of a weight function w on the local heap elements of that processor. Let p_i be a processor, c the bound of the best solution found so far, and {x_1, ..., x_k} the bounds of the heap elements of processor p_i which can lead to a better solution (x_j < c). Then some possibilities for the weight function w : {p_1, ..., p_n} -> N are:

    w(p_i) = k                            (1)
    w(p_i) = min_j {x_j}                  (2)
    w(p_i) = Σ_{j=1}^{k} (c - x_j)^2      (3)
    w(p_i) = Σ_{j=1}^{k} e^{c - x_j}      (4)

Since the local load situation varies dynamically, the weight function must be computable in constant time or fulfill the requirement that w(x_1, ..., x_k) = w(x_1, ..., x_i) + w(x_{i+1}, ..., x_k), to support an efficient computation of the weight function in the case of load migrations. With the help of this weight function the quality of the subproblems of each processor can be defined. The choice of the weight function depends on the concrete problem to be solved and on the size of the used distributed system. One criterion for the weight function is the distribution of the subproblem bounds.
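A minimal sketch of the four weight functions, assuming the local heap is given as the list of bounds x_1, ..., x_k that are still below the best known solution c (illustrative code, not the implementation used in the paper):

    import math

    def weight(bounds, c, variant=1):
        """Weight of a local heap; bounds holds the x_j of the stored subproblems."""
        xs = [x for x in bounds if x < c]        # only subproblems that can still improve c
        if variant == 1:                          # (1) number of useful subproblems
            return len(xs)
        if variant == 2:                          # (2) bound of the best subproblem
            return min(xs) if xs else 0
        if variant == 3:                          # (3) sum of squared distances to c
            return sum((c - x) ** 2 for x in xs)
        if variant == 4:                          # (4) exponentially weighted distances
            return sum(math.exp(c - x) for x in xs)
        raise ValueError("variant must be 1..4")

    # (1), (3) and (4) are additive over a split of the heap, which keeps them cheap
    # to maintain under load migration; (2) can simply be read off the top of the heap.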
If it can be assumed that the bounds of most subproblems lie in a very small interval, it makes no sense to weight the single subproblems as done by the functions (3) and (4). In this case, the weight function (1) may be sufficient. If, on the other hand, the bounds are distributed over a large interval, it may be necessary to weight the "good" subproblems (subproblems with a low bound) more than other subproblems in order to reduce the search overhead. In this case functions (3) and (4) are useful. Another criterion is the average computation time for one subproblem. If this time is very large, it may be possible to balance the processor weights according to the bounds of the best subproblem of this

bound. This technique leads to very good computational performance, but it has to pay for this with large storage requirements.

The parallelization of branch & bound algorithms for all kinds of parallel machines has been studied by several authors. The implementation of branch & bound on a shared memory system is very simple, because each processor can access the global heap directly. The simulation of global memory on a distributed memory system for solving branch & bound problems was described in [12]. This technique is only useful in special cases or for small numbers of processors and leads in general to communication bottlenecks. Only if the computation time for one branching step is much larger than the communication time needed to transfer one load unit can bottlenecks be avoided.

Vornberger [24] presented a load balancing algorithm for the vertex cover problem that was motivated by the theoretical work of Karp and Zhang [5]. With some additional features it leads to a speedup of on a network of 32 transputers. We did some experiments with this technique and found it not to be useful for larger networks, because of the poor balancing in the ending phase of the algorithm (only few subproblems are left in the network, and these are concentrated on some processors).

Quinn [18] presents results for a round robin distribution of unexamined subproblems to the neighbors of a processor and studies the effects of different selection strategies for the subproblem that is sent during one migration phase. He compares simulation results that were computed on a 64 processor hypercube network with theoretically predicted speedup values and finds a very close relationship. He got the best results by distributing the second best subproblem to a randomly chosen processor after each iteration (a speedup of on 64 processors and on 32 processors). There exist several other results concerning parallel and distributed implementations of branch & bound algorithms; some of them are presented in [1, 4, 7, 9, 12, 13, 22, 23].

In this paper, we describe a distributed version of best-first branch & bound and show its efficiency by solving the vertex cover and the weighted vertex cover problem. These problems have applications in different areas; see [2] for an overview. For an undirected graph G = (V, E) a subset U ⊆ V of nodes is called a vertex cover iff every edge is covered by a node from U, i.e. {u, v} ∈ E implies u ∈ U or v ∈ U. For the weighted vertex cover problem on a node weighted graph, the task is to find a cover of minimal weight. In the case of the vertex cover problem all node weights equal one. These problems are known to be NP-complete; an efficient backtracking algorithm is described in [21] and also in [17].

Our parallelization of this algorithm is fully distributed. The communication between the processors is done solely by message passing on a fixed interconnection network; there is no global or shared memory. Every processor has its own local heap and performs a sequential branch & bound algorithm on this heap. If a processor computes a new solution of the problem, it is broadcast to all processors in the network. Since subproblems are generated and consumed dynamically in an unpredictable way, it is necessary to use a distributed dynamic load balancing algorithm to distribute the workload equally through the processor network. The efficiency of the whole branch & bound algorithm is strongly influenced by the load balancing strategy used.
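As a small illustration of these definitions (not taken from the paper's implementation), a check whether a node set is a vertex cover and its cost in the weighted case, with unit weights as the unweighted special case:

    def is_vertex_cover(edges, cover):
        """U is a vertex cover iff every edge {u, v} has u in U or v in U."""
        return all(u in cover or v in cover for u, v in edges)

    def cover_cost(cover, weight=None):
        """Weighted cover cost; with weight == None all node weights equal one."""
        return len(cover) if weight is None else sum(weight[v] for v in cover)

    # e.g. the path 1-2-3: the single node 2 covers both edges at cost 1.
    edges = [(1, 2), (2, 3)]
    assert is_vertex_cover(edges, {2}) and cover_cost({2}) == 1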
The main part of this paper will therefore focus on the design of distributed load balancing strategies which are applied to branch & bound algorithms. The rest of the paper is organized as follows: In section 2, we discuss some general questions concerning distributed load balancing. A short description of our branch and bound procedures solving the vertex cover problem is given in section 3. In section 4, we present a parallelization of branch & bound that uses distributed heap management and dynamic load balancing. Section 5 gives the experimental results on different transputer networks. Some conclusions close the paper.

2 Dynamic Load Balancing

In order to achieve a high speedup compared to the sequential branch & bound algorithm, the following two problems have to be solved:

International Parallel Processing Symposium (IPPS-92)

Load Balancing for Distributed Branch & Bound Algorithms

R. Luling, B. Monien
Department of Mathematics and Computer Science
University of Paderborn, Germany
rl@uni-paderborn.de, bm@uni-paderborn.de

Abstract

In this paper, we present a new load balancing algorithm and its application to distributed branch & bound algorithms. We demonstrate the efficiency of this scheme by solving some NP-complete problems on a network of up to 256 transputers. The parallelization of our branch & bound algorithm is fully distributed. Every processor performs the same algorithm, but on a different part of the solution tree. In this case, it is necessary to distribute subproblems among the processors to achieve a well balanced workload. We present a load balancing method which overcomes the problems of search overhead and idle times by an appropriate load model and avoids thrashing effects by a feedback control strategy. To show the performance of our strategy, we solved the vertex cover and the weighted vertex cover problem for graphs of up to 150 nodes, using highly efficient branch and bound algorithms. Although the computing times were very short on a 256 processor network, we were able to achieve a speedup of up to on a 256 processor De Bruijn network, compared to the sequential algorithm.

1 Introduction

Many problems from the areas of operations research and artificial intelligence can be defined as combinatorial optimization problems. Among these are the Traveling Salesperson Problem, the Vertex Coloring Problem, scheduling problems and also the Vertex Cover Problem. To solve such a problem, an integer solution vector has to be found that respects some finite set of constraints and minimizes/maximizes a given function. Without loss of generality, we focus on minimization problems in this paper. The solution space of such problems is always finite. A simple strategy to find the optimal solution is the brute force method, which computes the optimal solution by examining all elements of the solution space. Since the solution space grows exponentially in the size of the input, this method leads to extreme computation times. Although an efficient parallel execution of this strategy is extremely simple and achieves linear speedup, the computation time remains unacceptable.

Branch & bound [10] is a universal and well known algorithmic technique for solving problems of this type. It also performs a search through the solution space, but uses heuristic knowledge about the solution to cut off parts of the search tree. Usually branch & bound techniques are combined with the use of dominance relations [21], leading to a further reduction of the search tree. Though these techniques lead to a dramatic reduction of the number of nodes in the search tree, the size of the search space still grows exponentially with the size of the input. Therefore, it is an open research problem whether parallelism can be fully exploited in solving problems of this type.

The name branch & bound describes a large class of search techniques. Among these are depth-first branch & bound and best-first branch & bound. Depth-first branch & bound performs a depth first search through the solution space and cuts off a branch whenever its bound is worse than the best solution found up to that time. Best-first branch & bound always branches a subproblem with minimal

This work was partly supported by the German federal department of science and technology (BMFT), PARAWAN project ITR 9007 BO.
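For readers less familiar with the scheme, the following is a generic sketch of sequential best-first branch & bound with a heap of subproblems ordered by their lower bounds (an illustration of the principle only, not the algorithm of this paper; branch, lower_bound, is_complete and cost stand for problem-specific operations):

    import heapq

    def best_first_branch_and_bound(root, lower_bound, branch, is_complete, cost):
        """Generic best-first branch & bound for a minimization problem."""
        best, best_cost = None, float("inf")
        heap = [(lower_bound(root), 0, root)]        # ordered by bound; 0 breaks ties
        counter = 1
        while heap:
            bound, _, sub = heapq.heappop(heap)
            if bound >= best_cost:                   # cannot improve the incumbent
                continue
            if is_complete(sub):
                c = cost(sub)
                if c < best_cost:
                    best, best_cost = sub, c
                continue
            for child in branch(sub):                # branching step
                b = lower_bound(child)
                if b < best_cost:                    # bounding step
                    heapq.heappush(heap, (b, counter, child))
                    counter += 1
        return best, best_cost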


More information

Parallel Rewriting of Graphs through the. Pullback Approach. Michel Bauderon 1. Laboratoire Bordelais de Recherche en Informatique

Parallel Rewriting of Graphs through the. Pullback Approach. Michel Bauderon 1. Laboratoire Bordelais de Recherche en Informatique URL: http://www.elsevier.nl/locate/entcs/volume.html 8 pages Parallel Rewriting of Graphs through the Pullback Approach Michel Bauderon Laboratoire Bordelais de Recherche en Informatique Universite Bordeaux

More information

Henning Koch. Dept. of Computer Science. University of Darmstadt. Alexanderstr. 10. D Darmstadt. Germany. Keywords:

Henning Koch. Dept. of Computer Science. University of Darmstadt. Alexanderstr. 10. D Darmstadt. Germany. Keywords: Embedding Protocols for Scalable Replication Management 1 Henning Koch Dept. of Computer Science University of Darmstadt Alexanderstr. 10 D-64283 Darmstadt Germany koch@isa.informatik.th-darmstadt.de Keywords:

More information

Lecture 4: September 11, 2003

Lecture 4: September 11, 2003 Algorithmic Modeling and Complexity Fall 2003 Lecturer: J. van Leeuwen Lecture 4: September 11, 2003 Scribe: B. de Boer 4.1 Overview This lecture introduced Fixed Parameter Tractable (FPT) problems. An

More information

Complexity Results on Graphs with Few Cliques

Complexity Results on Graphs with Few Cliques Discrete Mathematics and Theoretical Computer Science DMTCS vol. 9, 2007, 127 136 Complexity Results on Graphs with Few Cliques Bill Rosgen 1 and Lorna Stewart 2 1 Institute for Quantum Computing and School

More information

A technique for adding range restrictions to. August 30, Abstract. In a generalized searching problem, a set S of n colored geometric objects

A technique for adding range restrictions to. August 30, Abstract. In a generalized searching problem, a set S of n colored geometric objects A technique for adding range restrictions to generalized searching problems Prosenjit Gupta Ravi Janardan y Michiel Smid z August 30, 1996 Abstract In a generalized searching problem, a set S of n colored

More information

Kalev Kask and Rina Dechter. Department of Information and Computer Science. University of California, Irvine, CA

Kalev Kask and Rina Dechter. Department of Information and Computer Science. University of California, Irvine, CA GSAT and Local Consistency 3 Kalev Kask and Rina Dechter Department of Information and Computer Science University of California, Irvine, CA 92717-3425 fkkask,dechterg@ics.uci.edu Abstract It has been

More information

Two Problems - Two Solutions: One System - ECLiPSe. Mark Wallace and Andre Veron. April 1993

Two Problems - Two Solutions: One System - ECLiPSe. Mark Wallace and Andre Veron. April 1993 Two Problems - Two Solutions: One System - ECLiPSe Mark Wallace and Andre Veron April 1993 1 Introduction The constraint logic programming system ECL i PS e [4] is the successor to the CHIP system [1].

More information

A New Theory of Deadlock-Free Adaptive. Routing in Wormhole Networks. Jose Duato. Abstract

A New Theory of Deadlock-Free Adaptive. Routing in Wormhole Networks. Jose Duato. Abstract A New Theory of Deadlock-Free Adaptive Routing in Wormhole Networks Jose Duato Abstract Second generation multicomputers use wormhole routing, allowing a very low channel set-up time and drastically reducing

More information

Parallel Combinatorial Search on Computer Cluster: Sam Loyd s Puzzle

Parallel Combinatorial Search on Computer Cluster: Sam Loyd s Puzzle Parallel Combinatorial Search on Computer Cluster: Sam Loyd s Puzzle Plamenka Borovska Abstract: The paper investigates the efficiency of parallel branch-and-bound search on multicomputer cluster for the

More information

On the other hand, the main disadvantage of the amortized approach is that it cannot be applied in real-time programs, where the worst-case bound on t

On the other hand, the main disadvantage of the amortized approach is that it cannot be applied in real-time programs, where the worst-case bound on t Randomized Meldable Priority Queues Anna Gambin and Adam Malinowski Instytut Informatyki, Uniwersytet Warszawski, Banacha 2, Warszawa 02-097, Poland, faniag,amalg@mimuw.edu.pl Abstract. We present a practical

More information

Dominating Set on Bipartite Graphs

Dominating Set on Bipartite Graphs Dominating Set on Bipartite Graphs Mathieu Liedloff Abstract Finding a dominating set of minimum cardinality is an NP-hard graph problem, even when the graph is bipartite. In this paper we are interested

More information

Control Schemes in a Generalized Utility for Parallel Branch-and-Bound Algorithms

Control Schemes in a Generalized Utility for Parallel Branch-and-Bound Algorithms Control Schemes in a Generalized Utility for Parallel Branch-and-Bound Algorithms Yuji Shinano, Kenichi Harada and Ryuichi Hirabayashi Department of Management Science Science University of Tokyo 1-3,

More information

A Recursive Coalescing Method for Bisecting Graphs

A Recursive Coalescing Method for Bisecting Graphs A Recursive Coalescing Method for Bisecting Graphs The Harvard community has made this article openly available. Please share how this access benefits you. Your story matters. Citation Accessed Citable

More information

Localization in Graphs. Richardson, TX Azriel Rosenfeld. Center for Automation Research. College Park, MD

Localization in Graphs. Richardson, TX Azriel Rosenfeld. Center for Automation Research. College Park, MD CAR-TR-728 CS-TR-3326 UMIACS-TR-94-92 Samir Khuller Department of Computer Science Institute for Advanced Computer Studies University of Maryland College Park, MD 20742-3255 Localization in Graphs Azriel

More information

Data Communication and Parallel Computing on Twisted Hypercubes

Data Communication and Parallel Computing on Twisted Hypercubes Data Communication and Parallel Computing on Twisted Hypercubes E. Abuelrub, Department of Computer Science, Zarqa Private University, Jordan Abstract- Massively parallel distributed-memory architectures

More information

q ii (t) =;X q ij (t) where p ij (t 1 t 2 ) is the probability thatwhen the model is in the state i in the moment t 1 the transition occurs to the sta

q ii (t) =;X q ij (t) where p ij (t 1 t 2 ) is the probability thatwhen the model is in the state i in the moment t 1 the transition occurs to the sta DISTRIBUTED GENERATION OF MARKOV CHAINS INFINITESIMAL GENERATORS WITH THE USE OF THE LOW LEVEL NETWORK INTERFACE BYLINA Jaros law, (PL), BYLINA Beata, (PL) Abstract. In this paper a distributed algorithm

More information

Data Partitioning. Figure 1-31: Communication Topologies. Regular Partitions

Data Partitioning. Figure 1-31: Communication Topologies. Regular Partitions Data In single-program multiple-data (SPMD) parallel programs, global data is partitioned, with a portion of the data assigned to each processing node. Issues relevant to choosing a partitioning strategy

More information

Parallel Branch & Bound

Parallel Branch & Bound Parallel Branch & Bound Bernard Gendron Université de Montréal gendron@iro.umontreal.ca Outline Mixed integer programming (MIP) and branch & bound (B&B) Linear programming (LP) based B&B Relaxation and

More information

Department of. Computer Science. Remapping Subpartitions of. Hyperspace Using Iterative. Genetic Search. Keith Mathias and Darrell Whitley

Department of. Computer Science. Remapping Subpartitions of. Hyperspace Using Iterative. Genetic Search. Keith Mathias and Darrell Whitley Department of Computer Science Remapping Subpartitions of Hyperspace Using Iterative Genetic Search Keith Mathias and Darrell Whitley Technical Report CS-4-11 January 7, 14 Colorado State University Remapping

More information

Lecture 2 - Graph Theory Fundamentals - Reachability and Exploration 1

Lecture 2 - Graph Theory Fundamentals - Reachability and Exploration 1 CME 305: Discrete Mathematics and Algorithms Instructor: Professor Aaron Sidford (sidford@stanford.edu) January 11, 2018 Lecture 2 - Graph Theory Fundamentals - Reachability and Exploration 1 In this lecture

More information

A CSP Search Algorithm with Reduced Branching Factor

A CSP Search Algorithm with Reduced Branching Factor A CSP Search Algorithm with Reduced Branching Factor Igor Razgon and Amnon Meisels Department of Computer Science, Ben-Gurion University of the Negev, Beer-Sheva, 84-105, Israel {irazgon,am}@cs.bgu.ac.il

More information

On Object Orientation as a Paradigm for General Purpose. Distributed Operating Systems

On Object Orientation as a Paradigm for General Purpose. Distributed Operating Systems On Object Orientation as a Paradigm for General Purpose Distributed Operating Systems Vinny Cahill, Sean Baker, Brendan Tangney, Chris Horn and Neville Harris Distributed Systems Group, Dept. of Computer

More information

A Linear Time Algorithm for the Minimum Spanning Tree Problem. on a Planar Graph. Tomomi MATSUI. (January 1994 )

A Linear Time Algorithm for the Minimum Spanning Tree Problem. on a Planar Graph. Tomomi MATSUI. (January 1994 ) A Linear Time Algorithm for the Minimum Spanning Tree Problem on a Planar Graph Tomomi MATSUI (January 994 ) Department of Mathematical Engineering and Information Physics Faculty of Engineering, University

More information

Blocking vs. Non-blocking Communication under. MPI on a Master-Worker Problem. Institut fur Physik. TU Chemnitz. D Chemnitz.

Blocking vs. Non-blocking Communication under. MPI on a Master-Worker Problem. Institut fur Physik. TU Chemnitz. D Chemnitz. Blocking vs. Non-blocking Communication under MPI on a Master-Worker Problem Andre Fachat, Karl Heinz Homann Institut fur Physik TU Chemnitz D-09107 Chemnitz Germany e-mail: fachat@physik.tu-chemnitz.de

More information

Vertex Cover Approximations on Random Graphs.

Vertex Cover Approximations on Random Graphs. Vertex Cover Approximations on Random Graphs. Eyjolfur Asgeirsson 1 and Cliff Stein 2 1 Reykjavik University, Reykjavik, Iceland. 2 Department of IEOR, Columbia University, New York, NY. Abstract. The

More information

Let v be a vertex primed by v i (s). Then the number f(v) of neighbours of v which have

Let v be a vertex primed by v i (s). Then the number f(v) of neighbours of v which have Let v be a vertex primed by v i (s). Then the number f(v) of neighbours of v which have been red in the sequence up to and including v i (s) is deg(v)? s(v), and by the induction hypothesis this sequence

More information

Eect of fan-out on the Performance of a. Single-message cancellation scheme. Atul Prakash (Contact Author) Gwo-baw Wu. Seema Jetli

Eect of fan-out on the Performance of a. Single-message cancellation scheme. Atul Prakash (Contact Author) Gwo-baw Wu. Seema Jetli Eect of fan-out on the Performance of a Single-message cancellation scheme Atul Prakash (Contact Author) Gwo-baw Wu Seema Jetli Department of Electrical Engineering and Computer Science University of Michigan,

More information

The Number of Connected Components in Graphs and Its. Applications. Ryuhei Uehara. Natural Science Faculty, Komazawa University.

The Number of Connected Components in Graphs and Its. Applications. Ryuhei Uehara. Natural Science Faculty, Komazawa University. The Number of Connected Components in Graphs and Its Applications Ryuhei Uehara uehara@komazawa-u.ac.jp Natural Science Faculty, Komazawa University Abstract For any given graph and an integer k, the number

More information

Parallel Interval Analysis for Chemical Process Modeling

Parallel Interval Analysis for Chemical Process Modeling arallel Interval Analysis for Chemical rocess Modeling Chao-Yang Gau and Mark A. Stadtherr Λ Department of Chemical Engineering University of Notre Dame Notre Dame, IN 46556 USA SIAM CSE 2000 Washington,

More information

Ecient Implementation of Sorting Algorithms on Asynchronous Distributed-Memory Machines

Ecient Implementation of Sorting Algorithms on Asynchronous Distributed-Memory Machines Ecient Implementation of Sorting Algorithms on Asynchronous Distributed-Memory Machines Zhou B. B., Brent R. P. and Tridgell A. y Computer Sciences Laboratory The Australian National University Canberra,

More information

perform. If more storage is required, more can be added without having to modify the processor (provided that the extra memory is still addressable).

perform. If more storage is required, more can be added without having to modify the processor (provided that the extra memory is still addressable). How to Make Zuse's Z3 a Universal Computer Raul Rojas January 14, 1998 Abstract The computing machine Z3, built by Konrad Zuse between 1938 and 1941, could only execute xed sequences of oating-point arithmetical

More information

Lecture 4 Wide Area Networks - Routing

Lecture 4 Wide Area Networks - Routing DATA AND COMPUTER COMMUNICATIONS Lecture 4 Wide Area Networks - Routing Mei Yang Based on Lecture slides by William Stallings 1 ROUTING IN PACKET SWITCHED NETWORK key design issue for (packet) switched

More information

Topic: Local Search: Max-Cut, Facility Location Date: 2/13/2007

Topic: Local Search: Max-Cut, Facility Location Date: 2/13/2007 CS880: Approximations Algorithms Scribe: Chi Man Liu Lecturer: Shuchi Chawla Topic: Local Search: Max-Cut, Facility Location Date: 2/3/2007 In previous lectures we saw how dynamic programming could be

More information

2 J. Karvo et al. / Blocking of dynamic multicast connections Figure 1. Point to point (top) vs. point to multipoint, or multicast connections (bottom

2 J. Karvo et al. / Blocking of dynamic multicast connections Figure 1. Point to point (top) vs. point to multipoint, or multicast connections (bottom Telecommunication Systems 0 (1998)?? 1 Blocking of dynamic multicast connections Jouni Karvo a;, Jorma Virtamo b, Samuli Aalto b and Olli Martikainen a a Helsinki University of Technology, Laboratory of

More information

[13] D. Karger, \Using randomized sparsication to approximate minimum cuts" Proc. 5th Annual

[13] D. Karger, \Using randomized sparsication to approximate minimum cuts Proc. 5th Annual [12] F. Harary, \Graph Theory", Addison-Wesley, Reading, MA, 1969. [13] D. Karger, \Using randomized sparsication to approximate minimum cuts" Proc. 5th Annual ACM-SIAM Symposium on Discrete Algorithms,

More information

Reinforcement Learning Scheme. for Network Routing. Michael Littman*, Justin Boyan. School of Computer Science. Pittsburgh, PA

Reinforcement Learning Scheme. for Network Routing. Michael Littman*, Justin Boyan. School of Computer Science. Pittsburgh, PA A Distributed Reinforcement Learning Scheme for Network Routing Michael Littman*, Justin Boyan Carnegie Mellon University School of Computer Science Pittsburgh, PA * also Cognitive Science Research Group,

More information

Implementations of Dijkstra's Algorithm. Based on Multi-Level Buckets. November Abstract

Implementations of Dijkstra's Algorithm. Based on Multi-Level Buckets. November Abstract Implementations of Dijkstra's Algorithm Based on Multi-Level Buckets Andrew V. Goldberg NEC Research Institute 4 Independence Way Princeton, NJ 08540 avg@research.nj.nec.com Craig Silverstein Computer

More information

For example, in the widest-shortest path heuristic [8], a path of maximum bandwidth needs to be constructed from a structure S that contains all short

For example, in the widest-shortest path heuristic [8], a path of maximum bandwidth needs to be constructed from a structure S that contains all short A Note on Practical Construction of Maximum Bandwidth Paths Navneet Malpani and Jianer Chen Λ Department of Computer Science Texas A&M University College Station, TX 77843-3112 Abstract Constructing maximum

More information

Reduction of Huge, Sparse Matrices over Finite Fields Via Created Catastrophes

Reduction of Huge, Sparse Matrices over Finite Fields Via Created Catastrophes Reduction of Huge, Sparse Matrices over Finite Fields Via Created Catastrophes Carl Pomerance and J. W. Smith CONTENTS 1. Introduction 2. Description of the Method 3. Outline of Experiments 4. Conclusion

More information

Combinatorial Search; Monte Carlo Methods

Combinatorial Search; Monte Carlo Methods Combinatorial Search; Monte Carlo Methods Parallel and Distributed Computing Department of Computer Science and Engineering (DEI) Instituto Superior Técnico May 02, 2016 CPD (DEI / IST) Parallel and Distributed

More information

Chordal Graphs: Theory and Algorithms

Chordal Graphs: Theory and Algorithms Chordal Graphs: Theory and Algorithms 1 Chordal graphs Chordal graph : Every cycle of four or more vertices has a chord in it, i.e. there is an edge between two non consecutive vertices of the cycle. Also

More information

Task Allocation for Minimizing Programs Completion Time in Multicomputer Systems

Task Allocation for Minimizing Programs Completion Time in Multicomputer Systems Task Allocation for Minimizing Programs Completion Time in Multicomputer Systems Gamal Attiya and Yskandar Hamam Groupe ESIEE Paris, Lab. A 2 SI Cité Descartes, BP 99, 93162 Noisy-Le-Grand, FRANCE {attiyag,hamamy}@esiee.fr

More information

Computing optimal total vertex covers for trees

Computing optimal total vertex covers for trees Computing optimal total vertex covers for trees Pak Ching Li Department of Computer Science University of Manitoba Winnipeg, Manitoba Canada R3T 2N2 Abstract. Let G = (V, E) be a simple, undirected, connected

More information

Extended Dataflow Model For Automated Parallel Execution Of Algorithms

Extended Dataflow Model For Automated Parallel Execution Of Algorithms Extended Dataflow Model For Automated Parallel Execution Of Algorithms Maik Schumann, Jörg Bargenda, Edgar Reetz and Gerhard Linß Department of Quality Assurance and Industrial Image Processing Ilmenau

More information

31.6 Powers of an element

31.6 Powers of an element 31.6 Powers of an element Just as we often consider the multiples of a given element, modulo, we consider the sequence of powers of, modulo, where :,,,,. modulo Indexing from 0, the 0th value in this sequence

More information

(Preliminary Version 2 ) Jai-Hoon Kim Nitin H. Vaidya. Department of Computer Science. Texas A&M University. College Station, TX

(Preliminary Version 2 ) Jai-Hoon Kim Nitin H. Vaidya. Department of Computer Science. Texas A&M University. College Station, TX Towards an Adaptive Distributed Shared Memory (Preliminary Version ) Jai-Hoon Kim Nitin H. Vaidya Department of Computer Science Texas A&M University College Station, TX 77843-3 E-mail: fjhkim,vaidyag@cs.tamu.edu

More information

Constraint-Directed Backtracking Algorithm. Wanlin Pang. Scott D. Goodwin. May 1996

Constraint-Directed Backtracking Algorithm. Wanlin Pang. Scott D. Goodwin. May 1996 Constraint-Directed Backtracking Algorithm for Constraint-Satisfaction Problems Wanlin Pang Scott D. Goodwin Technical Report CS-96-05 May 1996 cwanlin Pang and Scott D. Goodwin Department of Computer

More information

CMSC 451: Lecture 22 Approximation Algorithms: Vertex Cover and TSP Tuesday, Dec 5, 2017

CMSC 451: Lecture 22 Approximation Algorithms: Vertex Cover and TSP Tuesday, Dec 5, 2017 CMSC 451: Lecture 22 Approximation Algorithms: Vertex Cover and TSP Tuesday, Dec 5, 2017 Reading: Section 9.2 of DPV. Section 11.3 of KT presents a different approximation algorithm for Vertex Cover. Coping

More information

Chordal graphs and the characteristic polynomial

Chordal graphs and the characteristic polynomial Discrete Mathematics 262 (2003) 211 219 www.elsevier.com/locate/disc Chordal graphs and the characteristic polynomial Elizabeth W. McMahon ;1, Beth A. Shimkus 2, Jessica A. Wolfson 3 Department of Mathematics,

More information

Monte Carlo Methods; Combinatorial Search

Monte Carlo Methods; Combinatorial Search Monte Carlo Methods; Combinatorial Search Parallel and Distributed Computing Department of Computer Science and Engineering (DEI) Instituto Superior Técnico November 22, 2012 CPD (DEI / IST) Parallel and

More information

Search and Optimization

Search and Optimization Search and Optimization Search, Optimization and Game-Playing The goal is to find one or more optimal or sub-optimal solutions in a given search space. We can either be interested in finding any one solution

More information

perform well on paths including satellite links. It is important to verify how the two ATM data services perform on satellite links. TCP is the most p

perform well on paths including satellite links. It is important to verify how the two ATM data services perform on satellite links. TCP is the most p Performance of TCP/IP Using ATM ABR and UBR Services over Satellite Networks 1 Shiv Kalyanaraman, Raj Jain, Rohit Goyal, Sonia Fahmy Department of Computer and Information Science The Ohio State University

More information

V 1. volume. time. t 1

V 1. volume. time. t 1 On-line Trac Contract Renegotiation for Aggregated Trac R. Andreassen and M. Stoer a a Telenor AS, P.O.Box 83, 2007 Kjeller, Norway. Email: fragnar.andreassen, mechthild.stoerg@fou.telenor.no Consider

More information

The Minimum Spanning Tree Problem on a Planar Graph. Tomomi MATSUI. (February 1994 ) Department of Mathematical Engineering and Information Physics

The Minimum Spanning Tree Problem on a Planar Graph. Tomomi MATSUI. (February 1994 ) Department of Mathematical Engineering and Information Physics The Minimum Spanning Tree Problem on a Planar Graph Tomomi MATSUI (February 994 ) Department of Mathematical Engineering and Information Physics Faculty of Engineering, University of Tokyo Bunkyo-ku, Tokyo

More information

Exercise set #2 (29 pts)

Exercise set #2 (29 pts) (29 pts) The deadline for handing in your solutions is Nov 16th 2015 07:00. Return your solutions (one.pdf le and one.zip le containing Python code) via e- mail to Becs-114.4150@aalto.fi. Additionally,

More information

160 M. Nadjarbashi, S.M. Fakhraie and A. Kaviani Figure 2. LUTB structure. each block-level track can be arbitrarily connected to each of 16 4-LUT inp

160 M. Nadjarbashi, S.M. Fakhraie and A. Kaviani Figure 2. LUTB structure. each block-level track can be arbitrarily connected to each of 16 4-LUT inp Scientia Iranica, Vol. 11, No. 3, pp 159{164 c Sharif University of Technology, July 2004 On Routing Architecture for Hybrid FPGA M. Nadjarbashi, S.M. Fakhraie 1 and A. Kaviani 2 In this paper, the routing

More information

Image-Space-Parallel Direct Volume Rendering on a Cluster of PCs

Image-Space-Parallel Direct Volume Rendering on a Cluster of PCs Image-Space-Parallel Direct Volume Rendering on a Cluster of PCs B. Barla Cambazoglu and Cevdet Aykanat Bilkent University, Department of Computer Engineering, 06800, Ankara, Turkey {berkant,aykanat}@cs.bilkent.edu.tr

More information

Distributed minimum spanning tree problem

Distributed minimum spanning tree problem Distributed minimum spanning tree problem Juho-Kustaa Kangas 24th November 2012 Abstract Given a connected weighted undirected graph, the minimum spanning tree problem asks for a spanning subtree with

More information

A Simplied NP-complete MAXSAT Problem. Abstract. It is shown that the MAX2SAT problem is NP-complete even if every variable

A Simplied NP-complete MAXSAT Problem. Abstract. It is shown that the MAX2SAT problem is NP-complete even if every variable A Simplied NP-complete MAXSAT Problem Venkatesh Raman 1, B. Ravikumar 2 and S. Srinivasa Rao 1 1 The Institute of Mathematical Sciences, C. I. T. Campus, Chennai 600 113. India 2 Department of Computer

More information

A Nim game played on graphs II

A Nim game played on graphs II Theoretical Computer Science 304 (2003) 401 419 www.elsevier.com/locate/tcs A Nim game played on graphs II Masahiko Fukuyama Graduate School of Mathematical Sciences, University of Tokyo, 3-8-1 Komaba,

More information