
Local Probing for Resource Constrained Scheduling

Olli Kamarainen, Hani El Sakkout, Jonathan Lever
IC-Parc, Imperial College of Science, Technology and Medicine, London SW7 2AZ, UK

Abstract. This paper investigates a form of algorithm hybridization that combines constraint satisfaction algorithms with local search algorithms. On one hand, constraint satisfaction algorithms are effective at finding feasible solutions for tightly constrained problems with complex constraints. On the other hand, local search algorithms are usually better at finding solutions that are good with respect to an optimization function when the problem is loosely constrained, or has simple constraints that can be embedded into neighbourhood operators. The tight hybridization of these two algorithm classes may lead to more powerful algorithms for certain classes of large scale combinatorial optimization problems (LSCOs). A new hybrid algorithm, local probing, is introduced for resource constrained scheduling, where the aim is to optimize an objective function while satisfying both resource and temporal constraints. The problem is solved in a backtracking branch-and-bound tree. At each node of the tree, a relaxed temporal sub-problem is created by removing resource constraints, and a prober (which, in this paper, consists of a local search algorithm with a dynamic neighbourhood operator) generates a partial solution that satisfies the remaining constraints and gives a good value to the objective function. Possible resource violations are resolved in the backtracking search tree by posting (or removing when backtracking) temporal precedence constraints, which are guaranteed to be satisfied by the neighbourhood operator. For the local search prober, we introduce dynamic neighbourhood operators that satisfy the constraints of the temporal sub-problem using techniques based on limited discrepancy search and linear programming.
Experiments are carried out on a class of resource constrained scheduling problems extended with a generic objective function.

1 Introduction

Generally, the constraint satisfaction techniques of constraint programming (CP), such as backtrack search and constraint propagation, are good at dealing with constraints and creating feasible solutions. However, they are less suitable for optimization. In this paper, another algorithm class, local search, is selected for hybridization with CP backtrack search. In local search, a partial or sub-optimal solution is improved at each search step. Local search behaves differently from constraint satisfaction: it performs well on optimization, especially with complex objective functions involving many inter-related variables, but does not guarantee to satisfy complex and easily violated constraints. Thus, constraint satisfaction and local search may complement each other well.

A typical large scale combinatorial optimization problem, the kernel resource feasibility problem extended with a piecewise linear objective function (PLKRFP [1]), is selected here as a testbed for the technique that is developed. Its core is a general scheduling problem with many important application areas such as job shop scheduling, ship loading and bridge building. Furthermore, it can be clearly divided into different sub-problems, and therefore it is a good vehicle for comparing different hybrid algorithms.

There are many ways to hybridize CP and local search. This paper investigates the use of probe backtrack search [3, 4]. It is a general hybridization form and has been shown to yield efficient hybrid algorithms on certain problem classes. In probe backtracking, the resource constrained scheduling problem is solved in a backtracking branch-and-bound tree. At each node of the tree, a relaxed temporal sub-problem is created by removing resource constraints, and a prober algorithm generates a partial solution that satisfies the remaining constraints and gives a good value to the objective function. Possible resource violations are resolved in the backtracking search tree by posting (or removing when backtracking) temporal precedence constraints to the temporal sub-problem. Different versions of probe backtrack search have been applied successfully to commercial dynamic scheduling problems and to the PLKRFP [4, 1]. In [1], the prober algorithm solves the sub-problem using linear programming (LP) and mixed-integer programming (MIP) methods, whereas in local probing, local search is used instead.
LP and MIP can be used when the problem can be modelled such that the objective function is linear or piecewise linear and the constraints are linear inequalities, while local search can deal with arbitrary objective functions and constraints. Local search methods are usually incomplete and cannot guarantee an optimal solution like LP and MIP do. However, for some hard large scale combinatorial optimization problems, local search may find feasible solutions with good objective values more quickly. In local probing, the local search algorithm is equipped with a neighbourhood operator that is generic and capable of satisfying different classes of constraints, particularly those of the temporal sub-problem. The neighbourhood operator is dynamic, because it satisfies a set of constraints that will change as the search progresses. Three different neighbourhood operators are described, namely limited variable search (LVS) and limited shift search (LSS) [10], which are based on limited discrepancy search (LDS) [7], as well as minimal perturbation search (MPS) [3, 4].

The paper is structured as follows. In Section 2, some background on probe backtrack search, local search and the selected application domain is given. Section 3 introduces local probing. Experimental results are described in Section 4, and Section 5 concludes the paper.

2 Background

2.1 Probe Backtracking

The probe backtrack algorithm presented in [4] is an extension of conventional backtrack search. At each node of the search tree, the algorithm extracts a probing sub-problem by relaxing some constraints. The sub-problem is delegated to a solving algorithm (called a prober). It returns a probe, i.e. a suggested variable assignment, which is a partial solution to the original problem and is expected to be good with respect to the objective function. Since the probe may violate relaxed constraints, backtrack search focuses on repairing the violations by posting (or removing when backtracking) additional constraints, which lead to feasible search regions. Thus, probe backtracking interleaves traditional constraint satisfaction methods with the prober to incrementally satisfy the remaining constraints of the problem. At a search node where an additional constraint is posted, a backtrackable choice point is created. It contains the posted constraint, which divides the search space into two sub-spaces, and its negation. Figure 1 summarizes the probe backtracking procedure, and Fig. 2 illustrates the behaviour of probe backtrack search when the prober returns a globally infeasible probe. A more detailed pseudo-code can be found in [4].

1. Extract probing sub-problem P at the current node.
2. Run the prober on P. If a probe that solves P is not found, then backtrack.
3. If the probe is globally consistent, then a solution is found.
4. Otherwise:
   (i) Select conflict C.
   (ii) Select constraint TC that would lead to repair of C.
   (iii) Create a choice point corresponding to a disjunction of TC and its negation. Go to 1.

Figure 1. A brief overview of probe backtrack search
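The loop of Figure 1 can be sketched as a recursive procedure. This is an illustrative skeleton only, not the paper's implementation: the helper callbacks `extract_subproblem`, `prober`, `find_conflict` and `repair_constraint` are assumed names, and a real implementation would interleave constraint propagation and a branch-and-bound cost bound.

```python
# Illustrative sketch of probe backtrack search (Figure 1). All helper
# callbacks are hypothetical; `posted` accumulates the repair constraints
# posted on the current branch, and backtracking removes them implicitly
# as the recursion unwinds.

def probe_backtrack(problem, extract_subproblem, prober, find_conflict,
                    repair_constraint, posted=()):
    """Return a globally feasible probe, or None if this subtree is infeasible."""
    sub = extract_subproblem(problem, posted)   # relax e.g. resource constraints
    probe = prober(sub)                         # good w.r.t. the objective
    if probe is None:                           # sub-problem infeasible: backtrack
        return None
    conflict = find_conflict(problem, probe)    # violated relaxed constraint
    if conflict is None:                        # probe is globally consistent
        return probe
    tc = repair_constraint(conflict, probe)     # constraint repairing the conflict
    # Choice point: try TC first, then its negation on backtracking.
    for branch in (tc, ('not', tc)):
        result = probe_backtrack(problem, extract_subproblem, prober,
                                 find_conflict, repair_constraint,
                                 posted + (branch,))
        if result is not None:
            return result
    return None
```

With toy callbacks (e.g. a prober over two variables whose only global constraint is x ≠ y), the first posted repair constraint already yields a consistent probe.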

Figure 2. An illustration of probe backtrack search: the prober solves sub-problem P and returns a probe a; when a violates some relaxed constraints, a choice point is created over a repairing constraint TC and its negation ¬TC, and probing continues on P ∧ TC and P ∧ ¬TC.

2.2 Local Search

In this paper, local search is applied in the probing algorithm of probe backtrack search. In local search algorithms, a partial or sub-optimal solution is improved by slightly changing it at each search step. The search starts from an initial solution, and at each step a neighbourhood is explored and a neighbourhood operator selects a promising solution from the neighbourhood as the next search node. Search continues until a termination condition is satisfied, e.g. when the search has visited a maximum number of search nodes.

A local search algorithm for optimization can be created in several ways. A basic method is called hill climbing. In hill climbing, a neighbour improving the value of the objective function is selected at each search step. The objective function may have several local optima, where no neighbour gives a better solution. In such cases, the algorithm can be restarted from a different initial solution. However, there are more advanced local search methods for avoiding local optima (although not applied in this paper), such as simulated annealing [9], tabu search [5] and genetic algorithms [6, 8].

2.3 Application Domain

The problem investigated in this paper is a scheduling problem that is modelled as a kernel resource feasibility problem (KRFP) [2, 4]. This generalizes most scheduling benchmarks, including job-shop and resource constrained project scheduling.
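The hill climbing scheme described above can be sketched in a few lines. This is a generic skeleton, not the paper's prober: the neighbourhood operator and objective are passed in as assumed callbacks.

```python
# Minimal hill-climbing skeleton: accept a neighbour only if it improves the
# objective (here maximized). The `neighbour` callback stands in for a
# constraint-respecting neighbourhood operator.

def hill_climb(initial, neighbour, objective, max_steps=50):
    """Greedy ascent from `initial`; returns the best basis and its value."""
    basis = initial
    best = objective(basis)
    for _ in range(max_steps):
        cand = neighbour(basis)          # operator proposes a candidate
        if cand is None:                 # no feasible neighbour found
            break
        value = objective(cand)
        if value > best:                 # reject non-improving moves
            basis, best = cand, value
    return basis, best
```

For instance, climbing the concave objective −(x − 3)² with the step operator x → x + 1 from x = 0 stops at the optimum x = 3.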

In the KRFP, we have several types of resources and a given quantity of each. We also have a fixed number of non-preemptive activities. Each activity requires a quantity of a specific resource type during its execution. The aim is to schedule the activities so that all temporal constraints are satisfied and the demand on any resource does not exceed the available resource quantity at any time on the scheduling horizon.

While the KRFP itself is only a constraint satisfaction problem, an objective function is often needed. Many real world optimization criteria, such as revenue and cost optimization in transportation, can be approximated by a non-convex piecewise linear function. In the problem we investigate, a piecewise linear objective function is attached to each of the activities of the KRFP. The total objective function of the problem is the sum of these functions. The kernel resource feasibility problem with piecewise linear objective function is abbreviated PLKRFP [1]. In this paper, the objective function of the PLKRFP represents revenue to be maximized.

3 Hybridization Approach: Local Probing

3.1 Overview

This section describes how local probing is carried out on the PLKRFP. In probe backtracking, different types of constraint propagation and search ordering heuristics can be applied. When deciding which constraint to post to the sub-problem in order to repair the violations, we first select the resource constraint that is violated the most, then we choose a temporal constraint that would reduce the violation (by reducing resource overlap) but would cause minimal change to the proposed schedule. On backtracking, the negation of this constraint is posted instead. Details of the selected heuristic are presented in [4].

3.2 Scope of the Local Search Sub-Problem

In probe backtracking, it is assumed that LSCOs are easier to solve by identifying an easy relaxed sub-problem that will be probed to guide the search.
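As a concrete illustration of the KRFP's resource condition from Section 2.3, the check that demand never exceeds capacity can be sketched with a sweep over start and end events. The `(start, duration, demand)` encoding of an activity is an assumption made for this sketch.

```python
# Sketch of the KRFP resource feasibility test for one resource type:
# the summed demand of the running activities must never exceed capacity.

def resource_feasible(activities, capacity):
    """activities: iterable of (start, duration, demand) on one resource."""
    events = []                                     # sweep line over the horizon
    for start, duration, demand in activities:
        events.append((start, demand))              # demand rises at the start
        events.append((start + duration, -demand))  # and falls at the end
    load = 0
    # At equal times, process ends (negative deltas) before starts, so that
    # back-to-back activities do not overlap at a single point.
    for _, delta in sorted(events, key=lambda e: (e[0], e[1])):
        load += delta
        if load > capacity:
            return False
    return True
```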
The constraints of the PLKRFP fall into three classes: resource constraints, temporal constraints and objective function constraints. Any combination of these constraint classes could be included in the local search sub-problem. The neighbourhood operator of the local search algorithm could be designed so that it satisfies certain constraint classes, thereby guaranteeing that the probes never violate these classes. It is possible to build any subset of the problem constraints into the neighbourhood operator and thus ensure probe feasibility with respect to that subset. The question is what kind of sub-problem is easy for the neighbourhood operator to satisfy while also making the overall algorithm efficient. This is important since the class of constraints satisfied by the neighbourhood operator includes the ones that will be posted by probe backtracking as it drives the search towards globally feasible solutions.

In this paper, the sub-problem chosen for local search contains the temporal constraints and the piecewise linear objective function constraints, but no resource constraints. In [1], to prune the search, the higher level probe backtrack procedure also uses cost bounds to restrict new objective values to be better than previous ones, which may also translate into reductions of the domains of the temporal variables. Local probing uses cost bounds in this way as well.¹ The temporal and cost bound constraints are handled by the neighbourhood operator, while the local search objective function aims to maximize revenue. The results presented suggest that this sub-problem is easy for a prober to solve. Probe backtrack search reduces resource violation by posting (or removing when backtracking) temporal precedence constraints, which are guaranteed to be satisfied by the neighbourhood operator.

3.3 Local Search Algorithm as Prober

Local Search Method. At each local search step, the neighbourhood operator suggests a candidate solution that satisfies all the temporal sub-problem constraints. A hill climbing strategy is applied: if the total revenue at the new step (i.e. the sum of the revenues given by the piecewise linear objective function constraints on the sub-problem solution) is higher than the total revenue at the last accepted solution, the new neighbour is taken as the basis for the forthcoming steps. Otherwise, the neighbour is rejected and the last accepted solution is kept as the basis. The neighbourhood operator selects a neighbour solution from a neighbourhood, which is a set of solutions close to the basis. For any given problem, there are many ways to define a neighbourhood. Usually, it is a very small subset of the search space that can be explored quickly, and it contains solutions that differ from the basis solution by a small change of variable-value assignments. An efficient neighbourhood operator which finds feasible neighbour solutions is needed.
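Evaluating the revenue of a candidate requires evaluating each activity's continuous piecewise linear function at a time point. A minimal sketch follows; the breakpoint encoding `[(t0, r0), (t1, r1), ...]` is an assumed representation, not the paper's.

```python
# Sketch: evaluate a continuous piecewise linear revenue function given as
# breakpoints [(t0, r0), (t1, r1), ...] sorted by time; values outside the
# breakpoint range are clamped to the end values.
import bisect

def pl_revenue(breakpoints, t):
    times = [p[0] for p in breakpoints]
    if t <= times[0]:
        return breakpoints[0][1]
    if t >= times[-1]:
        return breakpoints[-1][1]
    i = bisect.bisect_right(times, t)           # fragment containing t
    (t0, r0), (t1, r1) = breakpoints[i - 1], breakpoints[i]
    return r0 + (r1 - r0) * (t - t0) / (t1 - t0)   # linear interpolation
```

The total revenue of a probe is then the sum of `pl_revenue` over all activities, which is exactly the quantity the hill climbing acceptance test compares.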
It should be generic and capable of satisfying different classes of constraints, especially the temporal constraints in our scope. As noted earlier, constraint satisfaction methods are well suited to obtaining feasible solutions. However, we want to search for a neighbour solution systematically in the neighbourhood of the basis solution. Next, three different neighbourhood operators are described, namely limited variable search (LVS) and limited shift search (LSS), which are based on limited discrepancy search (LDS) [7], as well as minimal perturbation search (MPS).

Neighbourhood Operators with Limited Variable and Limited Shift Search. Limited variable search and limited shift search are generic and move away slowly from the basis solution.

¹ The term cost bound is typically used in the cost minimization context. Here we are maximizing revenue, but we continue to use the term to refer to the bound on revenue.

This means that in the search for

a feasible neighbour, the search space is explored gradually by increasing the discrepancy from the reference solution. In limited variable search, which has some features in common with large neighbourhood search [11], the search decisions are variable assignments, and a neighbour solution is first sought by letting only one variable change. If a feasible solution is not found after exploring all the possibilities, the discrepancy limit is set to two and the neighbour is sought in the set of assignments where exactly two variables are changed from their original values. This continues until a feasible solution is found or the whole search space has been explored. In limited shift search (LSS) [10], instead of counting the number of variable re-assignments, the measure of discrepancy is the absolute distance from the basis solution (the variables must, of course, be numeric with arithmetic differences). For example, when we are looking for a neighbour in a search subspace where the variables are integers and the LSS discrepancy limit is two, we are actually searching for solutions where exactly two assigned integer values are incremented and/or decremented, or the value of exactly one variable is changed by 2. After setting the discrepancy limit, variables are selected in random order for re-assignment. The new value for the selected variable is chosen heuristically. The value selection heuristic simply selects the value which maps to the highest revenue.

Neighbourhood Operator with Minimal Perturbation Search. In order to exploit the efficiency of Simplex within the neighbourhood operator of local probing, an operator based on minimal perturbation search [4, 3] is also introduced. First, it assigns a value to only one variable, and then uses linear programming to assign the remaining variables as a minimal perturbation problem, where the aim is to find the temporally feasible solution which minimizes the changes to the previous assignment.
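The LSS neighbourhood described above can be made concrete: at discrepancy limit d, the candidates are exactly the integer assignments whose total absolute distance from the basis equals d. This brute-force enumeration is a sketch for small problems only; a practical operator would interleave the enumeration with temporal constraint checking.

```python
# Sketch of the LSS neighbourhood at a fixed discrepancy limit: all integer
# assignments whose L1 distance from the basis equals `limit`.
from itertools import product

def lss_neighbourhood(basis, limit):
    n = len(basis)
    for deltas in product(range(-limit, limit + 1), repeat=n):
        if sum(abs(d) for d in deltas) == limit:   # total shift == discrepancy
            yield tuple(b + d for b, d in zip(basis, deltas))
```

For basis (0, 0) and limit 1 this yields the four unit shifts; raising the limit grows the ring of candidates outwards, matching the "explored gradually" behaviour.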
The objective function and the minimal perturbation constraints are modelled as follows:

    min  Σ_{i=1..N} dx_i                         (1)
    s.t. dx_i ≥ x_i − c_i,   i = 1, 2, …, N      (2)
         dx_i ≥ c_i − x_i,   i = 1, 2, …, N      (3)

The absolute change between a variable x and its initial value c is a non-linear expression. Thus, it is represented by a new variable dx = |x − c|, and the total change over all the N non-instantiated variables can be minimized by adding the linear constraints (2) and (3). The objective function (1) of the minimal perturbation problem is linear, and the solution can be produced by Simplex extremely quickly. The cost bound constraint cannot be posted to the minimal perturbation problem.
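Assembling (1)-(3) in the standard inequality form A z ≤ b over z = [x, dx] makes the model directly consumable by any LP solver. The following is a sketch of the matrix construction only (the precedence encoding x_i ≤ x_j is an illustrative assumption); solving is left to an external Simplex implementation.

```python
# Sketch: build the minimal perturbation LP (1)-(3) as cost, A_ub, b_ub over
# z = [x_1..x_N, dx_1..dx_N]. Row pairs encode dx_i >= |x_i - c_i| linearly.

def build_min_perturbation_lp(c_init, precedences=()):
    n = len(c_init)
    cost = [0.0] * n + [1.0] * n          # objective (1): minimize sum dx_i
    A, b = [], []
    for i in range(n):
        row = [0.0] * (2 * n)             # (2): x_i - dx_i <= c_i
        row[i], row[n + i] = 1.0, -1.0
        A.append(row); b.append(float(c_init[i]))
        row = [0.0] * (2 * n)             # (3): -x_i - dx_i <= -c_i
        row[i], row[n + i] = -1.0, -1.0
        A.append(row); b.append(-float(c_init[i]))
    for i, j in precedences:              # illustrative temporal row: x_i <= x_j
        row = [0.0] * (2 * n)
        row[i], row[j] = 1.0, -1.0
        A.append(row); b.append(0.0)
    return cost, A, b
```

At any feasible optimum the solver drives each dx_i down onto max(x_i − c_i, c_i − x_i) = |x_i − c_i|, which is why the two rows per variable suffice.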

4 Experimental Results

4.1 Test Problems

Resource Feasibility Benchmarks. The results presented here were obtained with test examples based on IC-Parc's resource feasibility problem benchmarks.² Each problem contains an initial schedule which needs to be changed because it has been made infeasible by a reduction in the number of resources. The initial schedules are sets of activities which all use the same type of resource. The duration of each activity can be reduced within a certain limit or increased indefinitely. In addition, the schedule files contain randomly generated temporal constraints between arbitrary start and end points of different activities. The temporal constraints are of the form u R v + c, where R ∈ {=, <, >, ≤, ≥}, u and v are time points, and c is a constant time period. The parameter temporal constraint density is proportional to the probability of a constraint existing between any pair of time points. The temporal constraints are satisfied by the initial schedule, but must continue to be enforced as the schedule changes.

Objective Functions. The original objective in these resource feasibility problem benchmarks was to create a feasible schedule while keeping the number of changes minimal (a minimal perturbation problem). In order to apply the benchmarks to the PLKRFP, new piecewise linear objective functions for maximization are added for each activity in the resource feasibility benchmark problems. The functions are created linear fragment by linear fragment, from the start point of the time horizon of the benchmarks to its end point. The temporal lengths of the linear fragments are generated randomly from a uniform distribution between the minimum and maximum fragment lengths, which are both adjustable parameters. Here, the piecewise linear functions are continuous, and the slope of each linear fragment is selected randomly from a uniform distribution between sprev − decmax and sprev + incmax, where sprev is the slope of the previous linear fragment and decmax and incmax are the maximum relative decrease and increase of the slope. All the start and end points of the linear fragments are divisible by the time granularity of the resource feasibility problem benchmarks, and the objective values are scaled between 0 and A, where, for each objective function, A is generated from a uniform distribution between 500 and

Test Instances. The tests in this report are run on 100 different problem instances of 4 different types, namely resource feasibility benchmarks where the required changes to the number of available resources are 0 and −3, and the objective function is hard or easy, giving a total of 400 variations. The hard objective functions are generated such that the minimum and maximum temporal lengths of the linear fragments are set to 5 and 100, and the maximum relative decrease and increase of the slopes are both set to

They are likely to be non-concave and to have several local maxima. The easy objective functions are similar, but the minimum and maximum temporal lengths of the linear fragments are set to 500 and 800. Because the time horizon in the RFP benchmarks is 985 time units, the easy objective functions contain only two linear fragments, and thus they are either concave or convex. In all these test problems, the number of activities is 30, the maximal activity shrinkage is zero (i.e. the durations of the activities are fixed), the temporal density is 0.5 and the maximum shift of any time point is greater than the time horizon.

All 400 problem instances are run with different setups of the local search prober. The basic cases are algorithms with LVS, LSS and MPS neighbourhood operators with random variable selection and heuristic value selection. Since the cost bound constraint is excluded from MPS, we exclude it from LVS and LSS as well to get a fair comparison. Within MPS, heuristic value selection tends to lead hill climbing to local minima where it gets stuck. This is why the tests are also run with the MPS neighbourhood operator with semi-random value selection. Here, the variable domain is divided into two sets by randomly selecting a value x and placing values better than x in the better set and worse (or equal) values in the worse set. The better set is explored first, using random value selection, before exploring the worse set. This increase in the stochastic aspect is expected to give better coverage of the search space. In order to investigate the benefits of LSS, LVS and MPS, we also run the tests with neighbourhood operators using depth-first search (DFS), with and without cost bound propagation. The starting point of the local search is the initial schedule at the very beginning, and thereafter the previous sub-problem solution. The general termination condition for the prober algorithm is to stop local search after 50 iterations.
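The semi-random value selection described above can be sketched as a domain ordering: pick a random pivot, then visit the better values in random order before the worse ones. Function and parameter names are assumptions for illustration.

```python
# Sketch of semi-random value selection: split the domain around a random
# pivot into "better" and "worse" sets (by revenue), shuffle each, and
# explore the better set first.
import random

def semi_random_order(domain, revenue, rng=None):
    rng = rng or random.Random(0)        # seeded here only for reproducibility
    pivot = rng.choice(list(domain))
    better = [v for v in domain if revenue(v) > revenue(pivot)]
    worse = [v for v in domain if revenue(v) <= revenue(pivot)]
    rng.shuffle(better)
    rng.shuffle(worse)
    return better + worse                # visit order for value selection
```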
The whole local probing search is terminated if it has not completed within 1000 seconds of CPU time. The tests were run on PCs with Pentium II 450 MHz processors.

4.2 Overview of Results

Table 1 presents, for each algorithm setup, the percentage of solved problems over all 400 test instances, the improvement to the revenue of the initial schedule, and the CPU time per local search step. The setups are ordered by the percentage of solutions found, i.e. the proportion of local probing runs which found a feasible solution. In addition, restricted averages are given for the revenue improvement and the CPU time per local search step; restricted averages are based only on the instances where all the different local probing algorithm setups were able to find a solution. This ensures a fair comparison by comparing them only on the same instances. Therefore, e.g. in Table 1, the results of only 34 per cent of the instances were used to compute these two average values. The most effective strategy for finding feasible solutions appears to be local probing with the MPS neighbourhood operator, especially with heuristic value selection. The neighbourhood operator equipped with DFS works almost as well, with and without cost bound propagation. In terms of improving the objective value,

both MPS and DFS setups perform similarly. MPS with semi-random value selection is slightly inferior to MPS with heuristic value selection, in terms not only of solutions found but also of improvement and CPU time per local search step. Possibly, different local search termination conditions and larger problem instances would lead to different conclusions. LSS and especially LVS appear to be clearly worse neighbourhood strategies for finding a feasible solution (LVS, however, performs significantly better in optimization). This could be explained by the speed of the MPS neighbourhood operator (with restricted averages of CPU time per local search step of only 0.13 and 0.15 seconds). Unlike MPS and DFS, LVS and LSS are restricted by the discrepancy limit, which allows them to explore the search space only systematically outwards from the basis solution. Thus, it seems that the systematic approach of LVS and LSS does not help here; the case could be different with larger problem instances.

Table 1. All test instances

Neighb.   Variable    Value         Cost bound    Solutions   Restr. av.   Restr. av. CPU
operator  selection   selection     propagation   found (%)   impr. (%)    time / LS step (s)
MPS       random      heuristic     off
MPS       random      semi-random   off
DFS       random      heuristic     off
DFS       random      heuristic     on
LSS       random      heuristic     off
LSS       random      heuristic     on
LVS       random      heuristic     off
LVS       random      heuristic     on

LVS versus LSS. Compared with LSS, LVS provides better solutions. In LVS, the discrepancy limit does not constrain domains, and with heuristic value selection, good assignments may be found relatively quickly. In LSS, by contrast, domains are very restricted in the beginning: it may take much longer to reach the best regions of the search space with respect to the objective function. On the other hand, LSS seems to be much more effective at obtaining feasible solutions. More than 63 per cent of the test runs with LSS find a solution, compared to only 35 per cent with LVS. Indeed, LVS tends to find a neighbour further away from the basis solution, and thus it may easily lead to solutions inconsistent with the resource constraints. Furthermore, as seen from the restricted averages of CPU times per local search step, LSS is much quicker than LVS, which spends more time exploring whole domains. In other words,

LVS invests much computational effort in obtaining probes that are good with respect to the objective function but are not easily repaired to feasibility given the limited resources.

Cost bound. When cost bound propagation is in use, assignments infeasible with respect to the cost bound on the total revenue are pruned as soon as inconsistencies are detected. On one hand, cost bound propagation directs the local search to feasible areas with revenues better than the cost bound. On the other hand, the propagation between the domains of the revenue and decision variables requires more computational effort. The DFS, LSS and LVS based neighbourhood operators are tested with and without cost bound propagation. When cost bound propagation is not in use, the decision variables are not linked to the corresponding revenue variables within the sub-problem. Instead, the total revenue of a solution is computed and compared to the cost bound separately, after a neighbour candidate has been found. In terms of solutions found and the restricted average of improvement, including cost bound propagation in the neighbourhood operator has little effect. However, removing cost bound propagation significantly decreases the restricted average of CPU time per local search step, especially for DFS: from 12.16 to 0.44 seconds.

4.3 Results for Resource Changes

Tables 2 and 3 give the results on the test instances where the number of available resources is kept at the original level, and reduced by 3 units, respectively. Naturally, if no resource reductions are made, a solution is found much more often, and MPS, DFS and LSS all seem to be very effective at finding a solution. In terms of revenue optimization, MPS and DFS perform equally, both being superior to LSS and LVS. When the resource reduction is 3 units, the efficiency ordering between the different neighbourhood strategies can be seen more clearly. MPS, especially with heuristic value selection, clearly outperforms not only LSS and LVS, but also DFS.
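The "no propagation" variant of the cost bound check described above amounts to a simple post-hoc filter: compute the candidate's total revenue after a neighbour has been found and compare it to the bound. A minimal sketch, with assumed data layouts:

```python
# Sketch of the post-hoc cost bound check (no propagation): the candidate's
# total revenue is computed once a neighbour is found, and the candidate is
# kept only if it beats the current bound on revenue.

def passes_cost_bound(candidate, revenue_fns, bound):
    """candidate: {activity: time}; revenue_fns: {activity: time -> revenue}."""
    total = sum(revenue_fns[a](t) for a, t in candidate.items())
    return total > bound
```

With propagation, by contrast, the revenue variables would be linked to the decision variables inside the sub-problem so that partial assignments violating the bound are pruned early, at the cost of extra propagation work per step.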
Meanwhile, the slowness of LVS significantly prevents the probe backtracking algorithm from reaching feasibility. In both cases, the previous conclusions on cost bound and value selection still hold.

4.4 Results for "Hard" and "Easy" Objective Functions

Next, the set of problem instances is divided into two sets depending on the type of the piecewise linear objective function, namely easy, with two linear fragments only, and hard, with more inflection points. The results are shown in Tables 4 and 5. No further differences in the behaviour of the algorithms appear beyond the previous results. The local search algorithms used in the ways presented perform similarly regardless of the difficulty of the objective function.
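The restricted averages used throughout Tables 1-5 can be computed generically: average a metric only over the instances that every setup solved. A sketch (data layout assumed):

```python
# Sketch of the "restricted average": metrics are averaged only over the
# instances solved by *all* setups, so every setup is compared on the same
# instances. Unsolved instances are recorded as None.

def restricted_average(results):
    """results: {setup: {instance: metric or None}} -> (averages, common set)."""
    solved_by_all = set.intersection(
        *({i for i, m in runs.items() if m is not None}
          for runs in results.values()))
    averages = {setup: sum(runs[i] for i in solved_by_all) / len(solved_by_all)
                for setup, runs in results.items()}
    return averages, solved_by_all
```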

Table 2. No resource reduction

Neighb.   Variable    Value         Cost bound    Solutions   Restr. av.   Restr. av. CPU
operator  selection   selection     propagation   found (%)   impr. (%)    time / LS step (s)
MPS       random      heuristic     off
MPS       random      semi-random   off
DFS       random      heuristic     off
DFS       random      heuristic     on
LSS       random      heuristic     off
LSS       random      heuristic     on
LVS       random      heuristic     off
LVS       random      heuristic     on

Table 3. Resource reduction of 3 units

Neighb.   Variable    Value         Cost bound    Solutions   Restr. av.   Restr. av. CPU
operator  selection   selection     propagation   found (%)   impr. (%)    time / LS step (s)
MPS       random      heuristic     off
MPS       random      semi-random   off
DFS       random      heuristic     off
DFS       random      heuristic     on
LSS       random      heuristic     off
LSS       random      heuristic     on
LVS       random      heuristic     off
LVS       random      heuristic     on

Table 4. Easy objective function

Neighb.   Variable    Value         Cost bound    Solutions   Restr. av.   Restr. av. CPU
operator  selection   selection     propagation   found (%)   impr. (%)    time / LS step (s)
MPS       random      heuristic     off
MPS       random      semi-random   off
DFS       random      heuristic     off
DFS       random      heuristic     on
LSS       random      heuristic     on
LSS       random      heuristic     off
LVS       random      heuristic     off
LVS       random      heuristic     on

Table 5. Hard objective function

Neighb.   Variable    Value         Cost bound    Solutions   Restr. av.   Restr. av. CPU
operator  selection   selection     propagation   found (%)   impr. (%)    time / LS step (s)
MPS       random      heuristic     off
MPS       random      semi-random   off
DFS       random      heuristic     off
DFS       random      heuristic     on
LSS       random      heuristic     off
LSS       random      heuristic     on
LVS       random      heuristic     off
LVS       random      heuristic     on

5 Conclusion

A new hybridization of constraint programming and local search, local probing, has been introduced for resource constrained scheduling. The algorithm structure is based on probe backtrack search, which solves the problem by constructing a branch-and-bound tree in which a relaxed sub-problem is created and solved at each node. The sub-problem does not contain all the constraints. Possible violations of the remaining constraints are resolved by posting (or removing when backtracking) additional constraints to the sub-problem. In local probing, the algorithm for solving the sub-problem is local search, equipped with a dynamic neighbourhood operator which finds a neighbour solution satisfying all the constraints posted to the sub-problem. In the class of resource constrained scheduling problems investigated, the sub-problem includes the temporal constraints between the activities, while the resource constraints are relaxed. Within this structure, four major types of neighbourhood operators have been applied: limited variable search, limited shift search, minimal perturbation search and basic depth-first search. Experiments were carried out on a class of resource constrained scheduling problems extended with a piecewise linear objective function. According to the tests, local probing with a neighbourhood operator based on minimal perturbation search is the best choice for quickly finding a feasible solution with a good value of the objective function.
Depth-first search performs almost as well, while limited variable search and limited shift search performed worse in these tests; the former is better in terms of optimization but clearly slower, which explains its low percentage of feasible solutions found. Removing cost bound propagation did not decrease the percentage of solutions found, nor did it degrade optimization performance, and it reduced the CPU time per local search step significantly. In the future, the local probing framework will be tested with different local search approaches, such as extending the objective function with the resource view, and applying simulated annealing and tabu search strategies. The performance of local probing will be compared to the LP/MIP probe backtracking hybrids presented in [1], as well as with pure constraint programming and local search methods.

Acknowledgements

The first author is partially supported by the Emil Aaltonen Foundation, the Finnish Cultural Foundation, the Magnus Ehrnrooth Foundation and the Foundation of Technology (Tekniikan edistämissäätiö). The authors would like to thank Farid Ajili, Vassilis Liatsos and Neil Yorke-Smith for their valuable comments on this work.

References

1. F. Ajili and H. El Sakkout. LP probing for piecewise linear optimization in scheduling. In Proc. of the Third International Workshop on Integration of AI and OR Techniques in Constraint Programming for Combinatorial Optimization Problems (CP-AI-OR'01), 2001.
2. A. El-Kholy and B. Richards. Temporal and resource reasoning in planning: The parcPLAN approach. In W. Wahlster, editor, Proc. of the European Conference on AI (ECAI 96), Budapest, Hungary, 1996.
3. H. El Sakkout. Improving Backtrack Search: Three Case Studies of Localized Dynamic Hybridization. PhD Thesis, Imperial College, London.
4. H. El Sakkout and M. Wallace. Probe backtrack search for minimal perturbation in dynamic scheduling. Constraints, Special Issue on Industrial Constraint-Directed Scheduling, 5(4):359-388, 2000.
5. F. Glover. Future paths for integer programming and links to artificial intelligence. Computers & Operations Research, 13(5):533-549, 1986.
6. D.E. Goldberg. Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley, Reading, Mass., 1989.
7. W.D. Harvey and M.L. Ginsberg. Limited discrepancy search. In Proc. of the Fourteenth International Joint Conference on Artificial Intelligence (IJCAI-95), Morgan Kaufman, Los Angeles, 1995.
8. J.H. Holland. Adaptation in Natural and Artificial Systems. University of Michigan Press, Ann Arbor, 1975.
9. S. Kirkpatrick, C. Gelatt Jr., and M. Vecchi. Optimization by simulated annealing. Science, 220:671-680, 1983.
10. N. Prcovic. When tree search meets local search: an introduction to search methods featuring both systematicity and neighbourhood.
11. P. Shaw. Using constraint programming and local search methods to solve vehicle routing problems. In Proc. of the Fourth International Conference on Principles and Practice of Constraint Programming (CP'98), Lecture Notes in Computer Science, Vol. 1520, Springer Verlag, 1998.


SPATIAL OPTIMIZATION METHODS DELMELLE E. (2010). SPATIAL OPTIMIZATION METHODS. IN: B. WHARF (ED). ENCYCLOPEDIA OF HUMAN GEOGRAPHY: 2657-2659. SPATIAL OPTIMIZATION METHODS Spatial optimization is concerned with maximizing or minimizing

More information

A generic framework for solving CSPs integrating decomposition methods

A generic framework for solving CSPs integrating decomposition methods A generic framework for solving CSPs integrating decomposition methods L. Blet 1,3, S. N. Ndiaye 1,2, and C. Solnon 1,3 1 Université de Lyon - LIRIS 2 Université Lyon 1, LIRIS, UMR5205, F-69622 France

More information

Unconstrained Optimization

Unconstrained Optimization Unconstrained Optimization Joshua Wilde, revised by Isabel Tecu, Takeshi Suzuki and María José Boccardi August 13, 2013 1 Denitions Economics is a science of optima We maximize utility functions, minimize

More information

HEURISTIC OPTIMIZATION USING COMPUTER SIMULATION: A STUDY OF STAFFING LEVELS IN A PHARMACEUTICAL MANUFACTURING LABORATORY

HEURISTIC OPTIMIZATION USING COMPUTER SIMULATION: A STUDY OF STAFFING LEVELS IN A PHARMACEUTICAL MANUFACTURING LABORATORY Proceedings of the 1998 Winter Simulation Conference D.J. Medeiros, E.F. Watson, J.S. Carson and M.S. Manivannan, eds. HEURISTIC OPTIMIZATION USING COMPUTER SIMULATION: A STUDY OF STAFFING LEVELS IN A

More information

A Hybrid Recursive Multi-Way Number Partitioning Algorithm

A Hybrid Recursive Multi-Way Number Partitioning Algorithm Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence A Hybrid Recursive Multi-Way Number Partitioning Algorithm Richard E. Korf Computer Science Department University

More information

Fundamentals of Integer Programming

Fundamentals of Integer Programming Fundamentals of Integer Programming Di Yuan Department of Information Technology, Uppsala University January 2018 Outline Definition of integer programming Formulating some classical problems with integer

More information

3 INTEGER LINEAR PROGRAMMING

3 INTEGER LINEAR PROGRAMMING 3 INTEGER LINEAR PROGRAMMING PROBLEM DEFINITION Integer linear programming problem (ILP) of the decision variables x 1,..,x n : (ILP) subject to minimize c x j j n j= 1 a ij x j x j 0 x j integer n j=

More information

Rina Dechter. Irvine, California, USA A constraint satisfaction problem (csp) dened over a constraint network

Rina Dechter. Irvine, California, USA A constraint satisfaction problem (csp) dened over a constraint network Constraint Satisfaction Rina Dechter Department of Computer and Information Science University of California, Irvine Irvine, California, USA 92717 dechter@@ics.uci.edu A constraint satisfaction problem

More information

A tabu search based memetic algorithm for the max-mean dispersion problem

A tabu search based memetic algorithm for the max-mean dispersion problem A tabu search based memetic algorithm for the max-mean dispersion problem Xiangjing Lai a and Jin-Kao Hao a,b, a LERIA, Université d'angers, 2 Bd Lavoisier, 49045 Angers, France b Institut Universitaire

More information

Constraint Satisfaction Problems

Constraint Satisfaction Problems Constraint Satisfaction Problems Berlin Chen Department of Computer Science & Information Engineering National Taiwan Normal University References: 1. S. Russell and P. Norvig. Artificial Intelligence:

More information

Four Methods for Maintenance Scheduling

Four Methods for Maintenance Scheduling Four Methods for Maintenance Scheduling Edmund K. Burke, University of Nottingham, ekb@cs.nott.ac.uk John A. Clark, University of York, jac@minster.york.ac.uk Alistair J. Smith, University of Nottingham,

More information

Lecture 6: Constraint Satisfaction Problems (CSPs)

Lecture 6: Constraint Satisfaction Problems (CSPs) Lecture 6: Constraint Satisfaction Problems (CSPs) CS 580 (001) - Spring 2018 Amarda Shehu Department of Computer Science George Mason University, Fairfax, VA, USA February 28, 2018 Amarda Shehu (580)

More information

Ecient Implementation of Sorting Algorithms on Asynchronous Distributed-Memory Machines

Ecient Implementation of Sorting Algorithms on Asynchronous Distributed-Memory Machines Ecient Implementation of Sorting Algorithms on Asynchronous Distributed-Memory Machines B. B. Zhou, R. P. Brent and A. Tridgell Computer Sciences Laboratory The Australian National University Canberra,

More information

15.083J Integer Programming and Combinatorial Optimization Fall Enumerative Methods

15.083J Integer Programming and Combinatorial Optimization Fall Enumerative Methods 5.8J Integer Programming and Combinatorial Optimization Fall 9 A knapsack problem Enumerative Methods Let s focus on maximization integer linear programs with only binary variables For example: a knapsack

More information

Outline of the module

Outline of the module Evolutionary and Heuristic Optimisation (ITNPD8) Lecture 2: Heuristics and Metaheuristics Gabriela Ochoa http://www.cs.stir.ac.uk/~goc/ Computing Science and Mathematics, School of Natural Sciences University

More information

CP-based Local Branching

CP-based Local Branching CP-based Local Branching Zeynep Kiziltan 1, Andrea Lodi 2, Michela Milano 2, and Fabio Parisini 2 1 Department of Computer Science, University of Bologna, Italy. zeynep@cs.unibo.it 2 D.E.I.S., University

More information

Map Colouring. Constraint Satisfaction. Map Colouring. Constraint Satisfaction

Map Colouring. Constraint Satisfaction. Map Colouring. Constraint Satisfaction Constraint Satisfaction Jacky Baltes Department of Computer Science University of Manitoba Email: jacky@cs.umanitoba.ca WWW: http://www4.cs.umanitoba.ca/~jacky/teaching/cour ses/comp_4190- ArtificialIntelligence/current/index.php

More information

Two Problems - Two Solutions: One System - ECLiPSe. Mark Wallace and Andre Veron. April 1993

Two Problems - Two Solutions: One System - ECLiPSe. Mark Wallace and Andre Veron. April 1993 Two Problems - Two Solutions: One System - ECLiPSe Mark Wallace and Andre Veron April 1993 1 Introduction The constraint logic programming system ECL i PS e [4] is the successor to the CHIP system [1].

More information

The only known methods for solving this problem optimally are enumerative in nature, with branch-and-bound being the most ecient. However, such algori

The only known methods for solving this problem optimally are enumerative in nature, with branch-and-bound being the most ecient. However, such algori Use of K-Near Optimal Solutions to Improve Data Association in Multi-frame Processing Aubrey B. Poore a and in Yan a a Department of Mathematics, Colorado State University, Fort Collins, CO, USA ABSTRACT

More information

Interleaved Depth-First Search *

Interleaved Depth-First Search * Interleaved Depth-First Search * Pedro Meseguer Institut d'investigacio en Intel.ligencia Artificial Consejo Superior de Investigaciones Cientificas Campus UAB, 08193 Bellaterra, Spain Abstract In tree

More information

Constraint Satisfaction Problems. slides from: Padhraic Smyth, Bryan Low, S. Russell and P. Norvig, Jean-Claude Latombe

Constraint Satisfaction Problems. slides from: Padhraic Smyth, Bryan Low, S. Russell and P. Norvig, Jean-Claude Latombe Constraint Satisfaction Problems slides from: Padhraic Smyth, Bryan Low, S. Russell and P. Norvig, Jean-Claude Latombe Standard search problems: State is a black box : arbitrary data structure Goal test

More information

Introduction to Computer Science and Programming for Astronomers

Introduction to Computer Science and Programming for Astronomers Introduction to Computer Science and Programming for Astronomers Lecture 9. István Szapudi Institute for Astronomy University of Hawaii March 21, 2018 Outline Reminder 1 Reminder 2 3 Reminder We have demonstrated

More information