
Parallel Optimisation in the SCOOP Library

Per Kristian Nilsen (1) and Nicolas Prcovic (2)

1 SINTEF Applied Mathematics, Box 124 Blindern, 0314 Oslo, NORWAY. E-mail: pkn@math.sintef.no
2 CERMICS-INRIA Sophia Antipolis, FRANCE. E-mail: Nicolas.Prcovic@sophia.inria.fr

Abstract. This paper shows how parallelism has been integrated into SCOOP, a C++ class library for solving optimisation problems. After a description of the modelling and optimisation parts of SCOOP, two new classes that permit parallel optimisation are presented: a class whose only purpose is to handle messages, and a class for managing optimiser and message handler objects. Two of the most interesting aspects of SCOOP, modularity and generality, are preserved by clearly separating problem representation, solution techniques and parallelisation scheme. This allows the user to easily model a problem and construct a parallel optimiser for solving it by combining existing SCOOP classes.

1 Introduction

SCOOP (SINTEF Constrained Optimisation Package) [2] is a generic, object-oriented C++ class library for modelling and solving optimisation problems. The library has been used for solving large-scale real-life problems in the fields of vehicle routing and forestry management [7]. Originally SCOOP contained a set of sequential optimisation algorithms, but it has recently been extended to accommodate parallel optimisation as well. Several parallel optimisation libraries already exist (e.g. [3, 4]), but each is dedicated to one specific optimisation technique. Bringing parallelism to SCOOP meant elaborating a high-level parallelism model and optimiser-independent classes.

SCOOP consists of two main parts: a problem specification or modelling part and an optimisation part. The modelling part of the library provides means for defining optimisation problems. In SCOOP a problem is represented by an encoding, which is an aggregation of variables, constraints and an objective function. This problem representation remains static during optimisation. The optimisation part consists of a set of optimisers together with classes that represent auxiliary entities needed by the optimisers. Section 2 gives a brief overview of encodings, section 3 describes the main features of the optimisation part of SCOOP, and section 4 describes the parallelism model, the corresponding classes and how parallel versions of the optimisers can be instantiated.

2 Encodings

Given a real-world problem, there are several ways to represent it mathematically. An encoding is a concrete mathematical description of the problem which an optimiser understands. A single problem may be described by several equivalent encoding instances from different encoding classes, which require different methods for efficient optimisation. An encoding consists of

- a variable set
- a constraint set
- an objective

A variable is a mathematical entity which has a domain and may take a value from that domain. Variables are represented by the ScoVariable base class. There are different types of variables in SCOOP, and they are all defined as subclasses of ScoVariable.

Constraints are defined on a set of variables. Given a solution, a constraint may examine the variables' values and tell whether it is satisfied or not.

Objective functions are able to evaluate solutions in two ways. They assign to each solution a real number which is the solution's objective value. However, for some objectives such a ranking scheme is not always meaningful, so objectives may also rank solutions by telling which of two solutions is considered the better, or whether the objective has no preference. Different kinds of objectives are defined as subclasses of the ScoObjective base class.

Some encodings can be very general, representing large problem classes. An example is an LP (Linear Programming) encoding, which would require real variables, linear constraints and a linear objective function. Other encodings can be very specific, thus allowing problem-specific features to be exploited for efficiency. An example is an encoding for TSPs (Travelling Salesman Problem). Currently SCOOP provides a small set of encodings. If the problem at hand is not supported by any of those, or if it would be beneficial to define a more problem-specific encoding, the programmer can define a new encoding type by subclassing the existing SCOOP classes.

Suppose we want to make a TSP encoding. One way to do it is to define a variable type which represents a TSP tour and an objective that minimises the length of such a tour. This is done by subclassing the SCOOP variable and objective classes. By representing the cities in a TSP by the integers 1 to n, a tour can be seen as a permutation of those numbers. Permutations are represented by the class ScoPermVar, and a TSP encoding will contain one such variable. Another attribute of the TSP encoding is a cost matrix that gives the cost of travelling between any two cities i and j. We represent the objective by the class ScoTSPCostSum, which has a member function objectivevalue that calculates the total cost of travelling a full circle through the cities in the order given by a route.
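As a rough illustration of what this objective computes, here is a minimal standalone sketch; it is not the actual SCOOP code, and everything except the wrap-around cost sum itself (the container types and the function name tourCost) is an illustrative assumption:

    #include <cstddef>
    #include <vector>

    // Total cost of visiting the cities in tour order and returning to
    // the start, given a cost matrix costs[i][j] (cities numbered from 0
    // here for simplicity).
    double tourCost(const std::vector<int>& tour,
                    const std::vector<std::vector<double>>& costs)
    {
        double total = 0.0;
        for (std::size_t i = 0; i < tour.size(); ++i) {
            int from = tour[i];
            int to   = tour[(i + 1) % tour.size()];  // close the cycle
            total += costs[from][to];
        }
        return total;
    }

Inside SCOOP, ScoTSPCostSum::objectivevalue would perform this computation on the encoding's ScoPermVar, using the encoding's cost matrix.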

The TSP encoding is summarised in Fig. 1.

[Fig. 1. ScoTSPEncoding. Class diagram: ScoPermVar derives from ScoVariable; ScoTSPCostSum, with a real-valued ObjectiveValue, derives from ScoObjective; ScoTSPEncoding derives from ScoEncoding and aggregates a cost matrix m, a ScoPermVar v and a ScoTSPCostSum c.]

3 Optimisers

An optimiser is a class that represents some optimisation algorithm. The optimiser is given an encoding to optimise, and produces trial solutions to that encoding during optimisation. The best solution found is externally available.

Optimisers may be created on two levels of generality. The least general optimisers are dedicated to solving problems of one particular encoding class; e.g. an LP solver is only able to solve a linear program. Other, more general optimisers are in principle able to solve any problem encoding. These optimisers represent the abstract aspects of optimisation methods, while the concrete details are filled in by auxiliary classes. In the current version of SCOOP we have focussed on a particular class of such general optimisers known as iterative improvement techniques (IIT), which are meta-heuristics for discrete optimisation. Examples are simulated annealing, genetic algorithms and tabu search.

Every optimiser is implemented as a subclass of the ScoOptimiser class, an abstract class defining the minimum interface of optimisers. It contains a virtual function optimise() which is where the optimisation is performed. The different optimisers have their own versions of the optimise() function, as well as additional data structures and functionality not found in the ScoOptimiser class.

Optimisers have a set of attributes in common:

- solution
- initial solution generator
- solution manipulators
- stop criterion
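A minimal sketch of what this abstract base might look like; only optimise() is named in the text above, so the member names and types here are assumptions for illustration, not the actual SCOOP header:

    class ScoSolution;
    class ScoInitialSolGen;
    class ScoStopCriterion;

    // Abstract optimiser interface (sketch).
    class ScoOptimiser {
    public:
        virtual ~ScoOptimiser() {}
        // Each concrete optimiser supplies its own optimisation loop.
        virtual void optimise() = 0;
    protected:
        ScoSolution*      solution;         // best/current solution
        ScoInitialSolGen* initial_sol_gen;  // generates a starting solution
        ScoStopCriterion* stop_criterion;   // decides when to stop
    };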

In the following we explain what these entities are and how they interact with the optimisers. The emphasis will be on IIT optimisers.

Solutions are represented by the ScoSolution class. The IIT optimisers operate on complete solutions, i.e. solutions where each variable is assigned a value. Therefore, SCOOP also provides initial solution generators, represented by the ScoInitialSolGen class, which generate complete solutions that are taken as starting points for the optimisers. An optimiser has an initial solution generator attached to it so that it can generate a starting solution itself if it is not provided with one.

The IIT optimisers decide which solutions should be manipulated and how, but never perform any actual manipulation or evaluation, since that requires knowledge of the encoding and solution structure. The manipulation is left to attached solution manipulator classes, which provide the detailed knowledge. The manipulators are the means by which new solutions are created, and thus they represent the problem-specific aspects of an optimisation problem. They come in various types which take different numbers of input solutions and produce different numbers of new solutions. By applying a suitable sequence of manipulators, the optimiser attempts to produce an optimal solution. There are two main groups of solution manipulator classes: neighbourhood operators and solution operators, represented by the base classes ScoNeighbourhoodOp and ScoSolutionOp, respectively. Examples of solution operators are crossover operators for genetic algorithms, while two-opt and relocate are examples of neighbourhood operators which can be used, e.g., by a simulated annealing optimiser for solving TSPs and VRPs (vehicle routing problems), respectively.

In addition to solution manipulators, the optimiser needs a stop criterion: a criterion the optimiser uses to determine when to stop optimising. It is represented by the ScoStopCriterion base class. Currently SCOOP provides three different stop criteria, represented by the following subclasses of ScoStopCriterion:

- ScoIterStopCriterion. Makes the optimiser stop after a specified number of iterations.
- ScoUserInterrupt. Becomes fulfilled when the user presses "Ctrl-C".
- ScoStopWhenEither. Consists of a number of sub-criteria. The criterion is satisfied when one or more of the sub-criteria is satisfied.

To sum up, the basic attributes of optimisers described in this section are represented by the following SCOOP classes:

- ScoSolution
- ScoInitialSolGen
- ScoStopCriterion
- ScoSolutionOp and ScoNeighbourhoodOp
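Putting these pieces together, a generic IIT optimise() loop would look roughly like the following sketch. This is not actual SCOOP code: the accessor names and the accept() rule are assumptions, and it is precisely the accept rule that distinguishes descent, simulated annealing, tabu search, and so on:

    // Schematic iterative improvement loop (sketch).
    void ScoSomeIITOptimiser::optimise()
    {
        // Start from a complete solution.
        ScoSolution* current = initial_sol_gen->generate();
        while (!stop_criterion->issatisfied()) {
            // An attached manipulator (here a neighbourhood operator)
            // derives a candidate solution from the current one.
            ScoSolution* candidate = neighbourhood_op->apply(*current);
            // The optimiser's own acceptance rule decides whether the
            // candidate replaces the current solution.
            if (accept(*candidate, *current))
                current = candidate;
        }
    }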

Each of these classes is the base of a class hierarchy and is fairly generic. At the lowest levels in the hierarchies are classes representing more specific entities. An example is the stop criterion hierarchy, with the base class ScoStopCriterion and the subclasses described above. As with the encoding part of SCOOP, the optimiser part hierarchies will in many cases be extended by the SCOOP user through subclassing, in order to solve specific optimisation problems efficiently. To illustrate how this is done, we continue the TSP example from the previous section.

3.1 Optimiser and Solution Operators for the TSP

We decide to use a genetic algorithm [5] to solve our TSP. For our purposes we can use the ScoGAOptimiser provided by SCOOP. However, we have to provide it with a suitable crossover operator and mutator. Since these solution operators represent the specific aspects of our problem, we have to define them ourselves.

We define a crossover operator which takes two permutations p1 and p2 and an integer k as arguments. The new permutation is identical to p1 up to and including index k; from there, the remaining numbers follow in the same order as in p2, viewed cyclically from the position i where p2(i) = p1(k). A sketch of this operator is given after the figures below.

In SCOOP there is a solution operator, defined by the class ScoOptimiserRunOp, that applies an optimiser to a given solution to produce a new solution. This solution operator can be used as a mutator. We will use the ScoDescent optimiser as operator for the ScoOptimiserRunOp. ScoDescent is a simple steepest descent algorithm. One of its attributes is a neighbourhood operator; we will use a common neighbourhood operator for TSPs, two-opt, which we define by the ScoTSPTwoOptNH class.

For both optimisers we define an initial solution generator by the class ScoTSPRandomSolution. It generates a solution which is a random path. Finally, we choose the ScoIterStopCriterion for both the genetic algorithm optimiser and the mutator. Our choices and definitions are summed up in figures 2-4.

[Fig. 2. Optimisers and stop criterion: ScoDescent and ScoGAOptimiser derive from ScoOptimiser; ScoIterStopCriterion derives from ScoStopCriterion.]

[Fig. 3. TSP crossover operator: ScoOptimiserRunOp and ScoCrossoverOp derive from ScoSolutionOp; the user-defined ScoTSPCrossoverOp derives from ScoCrossoverOp.]

[Fig. 4. Initial solution generator and two-opt operator: the user-defined ScoTSPRandomSolGen derives from ScoInitialSolGen; ScoImplicitNeighbourhood derives from ScoNeighbourhood, and the user-defined ScoTSPTwoOptNH derives from ScoImplicitNeighbourhood.]
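The crossover described above can be sketched in plain C++ as follows. This is a standalone illustration on std::vector<int> permutations of the cities 1..n; the real ScoTSPCrossoverOp would wrap such logic in the ScoSolutionOp interface, and the function name tspCrossover is an assumption:

    #include <cstddef>
    #include <vector>

    // Child is p1[0..k], then the cities not yet used, in the order they
    // appear in p2 when read cyclically from the position of p1[k].
    std::vector<int> tspCrossover(const std::vector<int>& p1,
                                  const std::vector<int>& p2,
                                  std::size_t k)
    {
        const std::size_t n = p1.size();
        std::vector<int> child(p1.begin(), p1.begin() + k + 1);
        std::vector<bool> used(n + 1, false);   // cities numbered 1..n
        for (int city : child) used[city] = true;

        // Find i with p2(i) == p1(k), then walk p2 cyclically from there.
        std::size_t i = 0;
        while (p2[i] != p1[k]) ++i;
        for (std::size_t step = 1; step < n; ++step) {
            int city = p2[(i + step) % n];
            if (!used[city]) {
                child.push_back(city);
                used[city] = true;
            }
        }
        return child;
    }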

The figures show the parts of the SCOOP class hierarchies we used. Figure 2 contains only SCOOP classes. In figures 3 and 4, the classes above the dotted line are SCOOP classes while the classes below are user-defined. The user-defined classes will henceforth be a part of SCOOP; in this way SCOOP is a constantly evolving library.

We can now set up a main program for solving TSPs using the above classes:

    // Include necessary files
    // Initialize cost matrix c

    ScoTSPEncoding encoding;
    encoding.setcosts(c);

    ScoTSPTwoOptNH nh;

    ScoDescent descent;
    descent.setinitialsolgen(* new ScoTSPRandomSolution);
    descent.setneighbourhoodop(nh);
    descent.setstopcriterion(* new ScoIterStopCriterion(20));

    ScoGAOptimiser optimiser;
    optimiser.setcrossoverop(* new ScoTSPCrossoverOp);
    optimiser.setmutator(* new ScoOptimiserRunOp(descent));
    optimiser.setpopgenerator(* new ScoTSPRandomSolution);
    optimiser.setpopulationsize(200);
    optimiser.setstopcriterion(* new ScoIterStopCriterion(500));
    optimiser.setencoding(encoding);
    optimiser.optimise();
    optimiser.getbestsolution().print(cout);

4 Parallelism in SCOOP

In order to preserve the generality and modularity of the SCOOP library, two main goals had to be reached:

- We wanted to define new classes general enough to permit parallel (each process uses the same optimiser on a different part of the search space of the problem), concurrent (each process uses a different optimiser to solve the whole problem) or cooperative (each process uses a different optimiser on a different part of the problem) optimisation.
- We wanted to re-use all the Sco[...]Optimiser::optimise() functions already defined, rather than re-writing special parallel optimisation code for each way of optimising. Specifically, we intended to be able to generate a ScoParOptimiser object by coupling a Sco[...]Optimiser object and a ScoHandleMsg object (which would contain all that relates to parallelism).

In the following, we explain how we achieved this.

4.1 The ScoParOptimiser Class

The ScoParOptimiser class had to inherit from the ScoOptimiser class for several reasons. It is conceptually still an optimiser and is used in the same way as a sequential one, through the same methods (optimise(), etc.). It contains two new variables: optimiser, a ScoOptimiser object, and message_handler, a ScoHandleMsg object. As we wanted to be as general as possible, we had to be able to combine parallel, concurrent and cooperative processing. For example, if eight processors are available, it should be possible to run two concurrent optimisers, each using four processors to run a different parallel optimisation. Thus, a ScoParOptimiser also had to be able to handle a ScoParOptimiser object stored in optimiser, which was another reason to make it inherit from the ScoOptimiser class. Here is what the definition of ScoParOptimiser looks like:

    class ScoParOptimiser : public ScoOptimiser {
        ScoOptimiser* optimiser;
        ScoHandleMsg message_handler;
    };

and the optimise() method:

    void ScoParOptimiser::optimise()
    {
        [initialisations]
        optimiser->optimise();
        [terminations]
    }

The ScoHandleMsg class will be described in section 4.3.

4.2 From Sequential to Parallel Optimisation

Whatever kind of optimiser is used (systematic search such as tree search algorithms, or local search such as simulated annealing, genetic algorithms, tabu search, etc.), the algorithm always consists of a loop containing the following steps:

- select a state from the previously generated ones;
- generate one or several new states from the selected one;
- if none of the newly generated states is the best (or good enough), then loop.

The parallel version of an optimiser is very similar. The difference is that each process running its optimiser also has to communicate occasionally with the other processes. The loop becomes:

- select a state from the previously generated ones;
- generate one or several new states from the selected one;
- handle messages: check if messages have arrived, execute the corresponding procedures, and send messages if necessary;
- if none of the newly generated states is the best (or good enough) and no "stop" message has arrived, then loop.

To turn a sequential optimiser into a parallel one, a sequence of instructions managing the interprocess communication has to be inserted at the end of the optimisation loop. Nothing else has to be changed, apart from some initialisations and terminations before and after the loop.

4.3 The ScoHandleMsg Class

We now explain how it was possible to insert message passing into an optimiser without modifying the already existing classes. Here is what a Sco[...]Optimiser::optimise() function looks like:

    Sco[...]Optimiser::optimise()
    {
        [initialisations]
        while (! stop_criterion->issatisfied()) {
            [generation of solution(s)]
        }
        [terminations]
    }

The main idea is to put a message handler in place of the stop criterion whose issatisfied() function is called here, leaving the optimise() function unchanged. Here is what the ScoHandleMsg class looks like:

    class ScoHandleMsg : public ScoStopCriterion {
        [...]
        ScoStopCriterion* stop_criterion;
        Boolean handle_messages();
        [...]
    };

and the issatisfied() method:

    Boolean ScoHandleMsg::issatisfied()
    {
        return handle_messages() || stop_criterion->issatisfied();
    }

handle_messages() is a boolean function which receives new messages, calls the corresponding procedures and sends resulting messages. It returns True only when the process receives a "stop" message.

When a parallel optimiser paropt is built from a sequential optimiser opt, a ScoHandleMsg object hm is created. The stop criterion of hm is set to that of opt, and then the stop criterion of opt is set to hm. So when the issatisfied() method is called in the optimisation loop, the handle_messages() method is called before the original issatisfied() method of the optimiser.

4.4 Building a ScoParOptimiser Object

From a user's point of view, creating a parallel version of an optimiser is very simple. The user just has to define an optimiser opt and a message handler hm and then declare in the source files:

    ScoParOptimiser paropt(opt, hm);

and perform the optimisation by:

    paropt.optimise();
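The constructor call performs exactly the stop-criterion rewiring described in section 4.3. As a hypothetical sketch (the setter and getter names here are assumptions, not the actual SCOOP API):

    ScoParOptimiser::ScoParOptimiser(ScoOptimiser& opt, ScoHandleMsg& hm)
        : optimiser(&opt), message_handler(hm)
    {
        // hm takes over opt's original stop criterion ...
        message_handler.setstopcriterion(opt.getstopcriterion());
        // ... and is itself installed as opt's stop criterion, so every
        // issatisfied() check in the optimisation loop first runs
        // handle_messages() and then the original criterion.
        opt.setstopcriterion(message_handler);
    }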

opt and hm are gathered and embedded in paropt during the call of the ScoParOptimiser constructor. It sets the optimiser and message_handler variables to opt and hm, and then changes their contents and redirects pointers as described in section 4.3.

To recapitulate what happens during the call of paropt.optimise(): after some initialisations (e.g. initial work sharing between the processes), the optimise() method of optimiser is called. The loop of opt then runs and, when the issatisfied() method is called, handle_messages() is executed before the original issatisfied() method of opt.

The ScoParOptimiser class provides a boolean function hasbestsolutionglobal() that returns true if the optimiser's solution is better than the solutions of the other optimisers. This function should be called when all processes have finished optimising, to identify which process contains the best solution.

The parallel scheme presented here maintains the high degree of modularity which is one of the most important aspects of SCOOP. By clearly separating parallel management from sequential optimisation, it offers two complementary advantages: firstly, the same parallel scheme can be applied to different optimisers; secondly, different parallel schemes can be applied to the same optimiser, without rewriting any of them.

4.5 SCOOP Message Handlers

Some basic but general message handlers (inheriting from ScoHandleMsg) are already defined in SCOOP:

- The ScoHandleMsgFirst class: every process independently tries to find the best (or a good enough) solution, and as soon as one has found it, it tells the others to stop. The stop criterion to be used with this parallel scheme is the quality/value of the objective function.
- The ScoHandleMsgBest class: every process looks for its own local optimum, and when every process has found one, the results are gathered and the best of the local optima is returned. The stop criterion to be used is the number of steps of the optimisation loop.

These two message handler classes may be used with any optimiser, but they are not likely to be efficient for a broad range of problems. They neither keep sharing the work during the resolution in order to maintain a good load balance, nor exchange useful information that could accelerate the search (e.g. a new upper bound in a branch and bound solver). To perform efficient parallel optimisation, other more specialised ScoHandleMsg classes have to be defined to fit each solving technique.
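To make the message-handling idea concrete, here is a sketch of the kind of handle_messages() a ScoHandleMsgFirst-style handler might implement on top of MPI (which, as section 4.6 notes, is what SCOOP's parallelism is built on). This is not the actual SCOOP code: the tag TAG_STOP, the good_enough() test, and the my_id/num_processes variables are illustrative assumptions; Boolean/True/False are the paper's own types.

    Boolean ScoHandleMsgFirst::handle_messages()
    {
        int flag = 0;
        MPI_Status status;
        // Non-blocking check: has another process announced success?
        MPI_Iprobe(MPI_ANY_SOURCE, TAG_STOP, MPI_COMM_WORLD, &flag, &status);
        if (flag) {
            int dummy;
            MPI_Recv(&dummy, 1, MPI_INT, status.MPI_SOURCE, TAG_STOP,
                     MPI_COMM_WORLD, &status);
            return True;   // someone else finished: stop this process too
        }
        // If our own solution is good enough, tell everyone else to stop.
        if (good_enough()) {
            int token = 1;
            for (int p = 0; p < num_processes; ++p)
                if (p != my_id)
                    MPI_Send(&token, 1, MPI_INT, p, TAG_STOP, MPI_COMM_WORLD);
            return True;
        }
        return False;      // keep optimising
    }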

[Fig. 5. Classes added in SCOOP for parallelism: ScoParOptimiser derives from ScoOptimiser; ScoHandleMsg derives from ScoStopCriterion, and ScoHandleMsgFirst and ScoHandleMsgBest derive from ScoHandleMsg.]

4.6 An Example of Parallel Optimisation

Let us consider again the TSP example presented in the previous sections. This problem was solved using a genetic algorithm optimiser, but we may suppose that other solving techniques would fit and might even be more efficient. If we cannot obtain any good estimate of which optimisation technique is best, we can try them all in parallel and take the result of the one that has produced the best solution after a given elapsed time. Here is an example of a main program for concurrently solving the TSP example using a genetic algorithm and simulated annealing [6]:

    // Include necessary files
    // Initialize cost matrix c

    ScoTSPEncoding encoding;
    encoding.setcosts(c);

    // Define ScoGAOptimiser optimiser1 and make the
    // appropriate settings as in the sequential example

    // Define ScoSAOptimiser optimiser2 and make its settings

    ScoOptimiser* optimiser;
    if (my_id == FIRST_PROCESS)
        optimiser = &optimiser1;
    else /* my_id == SECOND_PROCESS */
        optimiser = &optimiser2;

    ScoParOptimiser par_opt(*optimiser, * new ScoHandleMsgBest);
    par_opt.optimise();
    if (par_opt.hasbestsolutionglobal())
        par_opt.getbestsolution().print(cout);

Parallelism in SCOOP has been implemented with MPI (Message Passing Interface) [1], so this main program will be run in parallel on n different processes, where the number n is determined by the user.
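How the program is launched depends on the MPI implementation; typically the user chooses n on the command line, e.g. with something like "mpirun -np 2 tsp_example" for the two-optimiser program above (the program name here is, of course, only illustrative).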

5 Conclusion

Thanks to the modularity of SCOOP, we have been able to integrate parallelism by adding a few general classes, without changing anything that was already defined in the library. These new classes are independent of the optimisation techniques because they only deal with message handling. Thus, it is possible to parallelise any optimiser with them, and parallelism can be extended by defining subclasses of them. However, to be really efficient, the existing classes may have to be customised with the specific aspects of a given optimisation technique. In the near future we intend to extend SCOOP with parallel classes implementing general or specific load balancing techniques.

References

1. MPI Forum. MPI: A message-passing interface standard. International Journal of Supercomputer Applications, 8(3/4):165-416, 1994.
2. SCOOP 2.0 Reference Manual. SINTEF report no. STF 42A98001, ISBN 82-14-00047-5, 1998.
3. M. Benaichouche, V. D. Cung, S. Dowaji, B. Le Cun, T. Mautor and C. Roucairol. Building a Parallel Branch and Bound Library. In Solving Combinatorial Optimization Problems in Parallel, LNCS 1054, Springer, pages 201-231, 1996.
4. R. Finkel and U. Manber. DIB: A Distributed Implementation of Backtracking. ACM Transactions on Programming Languages and Systems, 9(2):235-256, 1987.
5. J. H. Holland. Adaptation in Natural and Artificial Systems. The University of Michigan Press, 1975.
6. S. Kirkpatrick, C. D. Gelatt and M. P. Vecchi. Optimization by simulated annealing. Science, 220:671-680, 1983.
7. G. Misund, G. Hasle and B. S. Johansen. Solving the clear-cut scheduling problem with geographical information technology and constraint reasoning. In ScanGIS'95 Proceedings, pages 42-56, 1995. Also published as SINTEF report no. STF 33S95027.